Invisibly

Lack of AI Security in Big Tech: a Data Security Crisis

AI Security Jeopardizing Big Tech Data image
The financial, legal, and moral risks Big Tech is facing in the age of AI, why everyone is concerned, and solutions to the problem.

Over the last ten years, artificial intelligence (AI) has grown exponentially, resulting in ground-breaking developments across a range of industries, including entertainment and healthcare. But this quick spread of AI has also brought to light serious problems with AI security related to the data of Big Tech corporations. According to the Electronic Privacy Information Center (EPIC), 

“Ten states included AI regulations as part of larger consumer privacy laws that were passed or are going into effect in 2023, and even more states have proposed similar bills. Several states proposed task forces to investigate AI, and others expressed concern about AI’s impact on services like healthcare, insurance, and employment.”

This growing concern is rooted in the numerous AI security flaws that have surfaced, such as the ability to manipulate machine learning models, be vulnerable to data breaches, and misuse of AI capabilities, all of which put mass amounts of sensitive data, that are shared by tech companies, at risk. 

However, these are not just theoretical fears of what could happen. Many Big Tech companies are experiencing these challenges in real-time.

Unapproved access and data breaches

Samsung reported that employees accidentally leaked confidential internal source code and meeting recordings while using ChatGPT. The leak prompted Samsung to ban the use of ChatGPT among its employees. 

This is just one example amongst many but it goes to show that there is an inherent risk in utilizing 3rd party AI tools. When users release data to these tools they are also releasing primary control of the information. The AI platforms have their own internal policies and practices for how they store and utilize data and those policies may not always align with its corporate partners. There could be anything from financial catastrophes to invasions of privacy as a result of this circumstance.

Poisoning and model tampering

A new poisoning technology named Nightshade is compromising training data from platforms like DALL-E, Midjourney, and Stable Fusion to create inaccurate responses and image outputs. However, this poisoning strategy is not exclusive to image generation.

AI tools are open platforms, which means anyone who knows how to set up a simple API can utilize and manipulate their data. Even though this creates diversification and customized use cases it also leads to malicious actors. This creates an increased risk of erroneous predictions and data misuse. As more companies rely on AI skills for critical decision-making processes, this risk is more concerning.

Criminal use of AI models

Viral “deepfake” content has been making its way across the internet causing blurred lines between virtual and reality. From a Drake and Kanye collab to election misinformation, deepfakes have been utilized to influence unsuspecting and vulnerable groups and are wreaking havoc on a large scale. 

“The AI Race” is becoming highly competitive and is moving faster than companies can keep up. Anytime there’s an increase in competition, a requirement to be efficient, and profit to be won it can create unfair and unregulated play resulting in increased risk and significant losses. 

Additionally, these issues were occurring years before the utilization of AI became a mainstream race.

History of Big Tech data breaches

It’s hard to forget the infamous Cambridge Analytica Facebook breach during the 2016 United States election. The data analytics firm harvested various data points from millions of Facebook (Meta) profiles of users who were likely to vote in the 2016 election. Meta (Facebook’s parent company) recently agreed to settle the lawsuit as a result of the scandal and took ownership for their role in allowing their users’ data to be exposed to “serious risk of harm”.

More recently, Google’s AI was breached in 2022, resulting in the theft of sensitive information about their training data and AI models. 

These hacks highlight the reality that, in our increasingly digitized world, AI security is not just a nice-to-have but a requirement. They act as harsh reminders that Big Tech companies must prioritize bolstering their AI security operations. Without regulation in place, it not only poses risks to Big Tech but also to other businesses and consumers.

Privacy violations and identity theft

Identity theft and major privacy violations could result from weak AI security. Contact information, social security numbers, and even financial information are examples of personal information that can be stored, sold, and utilized illegally. 

This privacy threat poses a serious threat to the trust that businesses have established with their users and can pose significant financial and legal issues for the businesses affected by the breach. Unfortunately, a company may be doing everything right internally, but still face repercussions from a breach of their 3rd party tools and software.

Financial losses and disruption of operations

The financial risk of monetary loss cannot be ignored when discussing AI security. For example, when the Cambridge Analytica scandal occurred, Facebook’s market value decreased by more than $119B and its shares plunged by 19%. A significant loss not only for the company but also for its investors.

The more Big Tech becomes reliant on AI the greater their risk of disruptions to business operations and loss of business-critical data from AI security breaches. This may result in businesses being unable to deliver goods or services, which in turn would cause significant financial loss of trust with the general public.

Loss of trust in big tech companies

Big Tech is already under intense scrutiny from citizens and lawmakers for its role in declining mental health amongst youth, data misuse, and the spreading of misinformation. Needless to say, their business practices are under a microscope. 

Moreover, restoring trust can be extremely difficult once it has been lost and may have an impact on a company’s long-term growth and competitive position. It’s critical that these businesses step up their AI security in a society that places a high value on digital trust.

Considering the above risks and examples of AI misuse, there is a light at the end of the tunnel. Not only are laws being implemented on the federal and state levels, but Big Tech and AI platforms are partnering together to implement safer practices.

Federal Regulations on AI

On October 30, 2023, President Biden issued an executive order to create oversight of AI and its threat to both privacy and national security. The primary goals of this executive order include new standards for AI safety and security, protection of Americans’ privacy, advancing equity and civil rights, standing up for consumers, patients, and students, and much more. The order follows many other federal and state initiatives but is still only the beginning of creating secure Artificial Intelligence.

Actions Being Taken By Big Tech and AI Companies

1. Security by design

Adding security features from the beginning, all the way through the development and training phases, and ending with the deployment of AI models is the proactive approach. When strong security features are integrated into AI models early in the development process, the likelihood of AI security vulnerabilities increasing can be greatly decreased.

2. Threat modeling and vulnerability scanning

Security operations should include regular threat modeling exercises to determine and evaluate the security risks related to AI systems. Similar to this, vulnerability scanning is essential for identifying weak points so that they can be fixed before hackers take advantage of them. Vulnerability scanning, when combined with threat modeling, offers a thorough understanding of the security state of AI systems and makes prompt detection and response possible.

3. Security testing and AI security awareness training

Before AI models are used, regular security testing and penetration testing can help find and fix bugs within models. Conversely, AI security awareness training can turn staff members into an invaluable line of defense by giving them the knowledge to identify and steer clear of security hazards. A knowledgeable workforce can be vital in reducing the risks related to AI security flaws.

AI has proven to be a valuable tool in accelerating growth and innovation globally, but without proper regulation, the very thing helping us will be the thing that hurts us. Big Tech companies need to be encouraged to work together in order to exchange best practices concerning AI security. Coming together to fight a common foe will accelerate the development of AI security and create a group defense system to counter threats in the landscape of artificial intelligence.

A combination of proactive threat mitigation, transparency, data security measures, and government regulations can help build consumer confidence and trust while still enabling AI to be a revolutionary powerhouse. 

See your data work for you. 

Invisibly black logo

Invisibly

See your data work for you.

Subscriptions — without the strings attached.

Invisibly News Feed & Article UIs
Invisibly News Feed & Article UIs