7 AI Privacy Violations (+ What Your Business Can Learn)
These days it’s hard to spend any time online without hearing about artificial intelligence (AI). From services like ChatGPT to AI-generated photography, there are countless examples of it infiltrating daily life. Sometimes this can be a good thing — such as when search results are aggregated to make finding information easier.
But just as often, AI has gotten people into trouble. From systems that reinforce gender stereotypes to outright intellectual property theft, navigating this new AI-infused world can be difficult. More importantly, with this new technology, it’s very easy to run afoul of privacy regulations. Here are 7 real-world examples of technologies and companies that didn’t respect privacy and data collection laws when implementing AI, and what you can learn from their mistakes.
7 Real-World Examples of AI Privacy Violations
AI can make processing volumes of data significantly easier for businesses, but it’s not without drawbacks. Left unchecked, it can pose serious privacy risks, exposing unwitting consumers’ information with few safeguards against improper collection or use.
1. Facebook and Cambridge Analytica (2016)
One of the most widely recognized examples of the dangers of giving AI technology unfettered access to data goes back to the months leading up to the 2016 presidential election. Facebook (now Meta) was, and still is, one of the primary ways that U.S. citizens consume political content and engage with news.
Cambridge Analytica, a political consulting firm, created what seemed like a harmless personality quiz and deployed it on Facebook. However, the data mined from that quiz was used to build highly specific psychological profiles of respondents, which then allowed the consulting firm to deploy incredibly targeted ads that played into (or against) political sentiments. The fallout is believed to have contributed to the 2016 presidential election results. Ultimately, Facebook was fined $5 billion by the Federal Trade Commission (FTC) and also found itself under fire from international privacy regulators.
2. Strava’s Heatmap Fiasco (2018)
Strava is a popular fitness-tracking app that lets people record workouts and share them with friends. In 2018, however, it rolled out a new Heatmap feature that also revealed its users’ workout locations. In theory, this seemed like an innocuous thing.
Unfortunately, beyond showing the streets, gyms, and parks where users broke a sweat, the map also exposed people’s home addresses and sensitive locations such as military bases and troop patrol routes. This happened because the app turned data sharing on by default instead of asking users to opt in, and many users were unaware that their locations were being shared.
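The broader lesson for any product that handles location or activity data is to make sharing opt-in rather than opt-out. Here is a minimal sketch of privacy-by-default settings in Python (the field names are hypothetical, not Strava’s actual code):

```python
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    """Privacy by default: every sharing option starts disabled (opt-in)."""
    share_activity_map: bool = False         # route sharing off until the user opts in
    share_home_location: bool = False        # never expose a user's home area by default
    include_in_global_heatmap: bool = False  # excluded from aggregate maps until consented

def can_publish_route(settings: PrivacySettings) -> bool:
    # A route only appears on a public map if the user explicitly enabled both options.
    return settings.share_activity_map and settings.include_in_global_heatmap

# A brand-new user shares nothing until they change these settings themselves.
new_user = PrivacySettings()
print(can_publish_route(new_user))  # False
```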
Although Strava didn’t face legal action in the U.S., its lax data management and murky privacy settings put it at odds with the General Data Protection Regulation (GDPR), which governs privacy and data practices in the European Union.
3. Dinerstein v. Google (2019)
Don’t be fooled by the fact that Google is the lead defendant in this case. In truth, Google, various healthcare systems, and the University of Chicago were all under fire. The suit alleged that the Health Insurance Portability and Accountability Act (HIPAA) was violated when the University of Chicago and multiple healthcare organizations granted a company called DeepMind access to countless patient medical records for AI data mining.
The data was then used by Google to train its AI through machine-learning diagnostics and search algorithms, in support of a proposed patent that would have allowed the search behemoth to create a subscription or pay-per-use service. Although the suit was eventually dismissed, it highlights how easily consumer data can be accessed without consent.
4. Janecyk v. International Business Machines (2020)
It’s no secret that facial recognition technology is often biased. While rarely intentional, a lack of darker-skinned people in training data often results in technology that works well for fairer-skinned individuals but poorly for anyone with a deeper skin tone. To combat this, IBM developed a dataset called “Diversity in Faces.” The intent was to use real-world photos to help train AI models to better recognize deeper skin tones.
Unfortunately, rather than explicitly soliciting participants willing to have their photos used, IBM opted to pull photos from the popular photo-sharing site Flickr. Specifically, the computer firm relied extensively on photos from photographer Tim Janecyk, among others, to populate the dataset, which was then shared with other researchers. Those photos were coded based on physical descriptions.
However well intentioned the project was, IBM was sued by Janecyk in January 2020 and faced damages under Illinois’ Biometric Information Privacy Act (BIPA) of $1,000 to $5,000 per violation for every Illinois resident who “had their biometric identifiers, including scans of face geometry, collected, captured, received, or otherwise obtained by IBM from photographs in its ‘Diversity in Faces’ Dataset.”
5. Clearview AI’s Facial Recognition Database (2020)
Around the same time that IBM was sued over its “Diversity in Faces” dataset, Clearview AI faced similar pushback for its facial recognition database. Clearview used data scraping to pull millions of photos from social media and even peer-to-peer payment platforms like Venmo to train its facial recognition software and build a comprehensive database. More importantly, that database was then sold to over 600 law enforcement agencies and various private entities.
Two separate cases were brought forward, in Illinois and California. In Mutnick v. Clearview AI, et al., the plaintiff, David Mutnick, not only sought monetary damages for BIPA violations but also an injunction to stop further release of the database to any future parties. The Burke v. Clearview AI, Inc. class action made the same charges as Mutnick, but it also sought relief under the California Consumer Privacy Act (CCPA), alleging that Clearview failed to inform consumers “at or before the point of collection” what biometric data would be collected and how it would be used.
In May 2020, the American Civil Liberties Union (ACLU) of Illinois brought a similar case against Clearview AI, with a settlement agreement reached in May 2022. The settlement restricts Clearview from selling or giving access to its facial recognition database to most private entities across the U.S. Additionally, the corporation is barred from selling database access to any Illinois entity, including state and local police, for five years, or until May 2027. Clearview AI must also allow Illinois residents to block their facial data from the database.
6. AI Bias in Aon’s Hiring Software (2023)
For human resources departments, Aon Consulting is probably a familiar name. The firm creates hiring assessments that many businesses rely on as part of their candidate vetting process. However, the ACLU filed a complaint with the FTC charging that many of the assessments the firm produces incorporate biases that discriminate on the basis of race and disability. The organization also brought class charges before the Equal Employment Opportunity Commission (EEOC) against Aon in 2023.
Specifically, the algorithmically driven Adaptive Employee Personality Test (ADEPT-15) disadvantages neurodivergent individuals as well as those with mental health conditions like depression or anxiety. The ACLU alleged in its complaint that many of the personality traits and behaviors Aon’s tests screen for would adversely impact these two groups. Aon’s assessment suite also includes a video screening component, which can heighten the risk of discrimination on the basis of race and disability. Similarly, Aon’s “gridChallenge,” a gamified cognitive assessment tool, shows racial disparities in results and can unfairly screen out disabled individuals.
The ACLU alleged that while Aon marketed and sold its tests as being completely free from bias, the firm knew the opposite was true and failed to create safeguards to prevent discrimination. As a result, workplaces run the risk of becoming less diverse, while employers potentially increase their exposure to discrimination lawsuits over unfair hiring practices.
7. Italy Fines the City of Trento for AI Violations (2024)
While we usually think of AI violations as being perpetrated by individuals and businesses, cities and municipalities can also cross the line. In early 2024, the city of Trento in northern Italy became the first in the country to get slapped with a fine for violating privacy protections through AI misuse.
Specifically, regulators charged that the administration used AI improperly in street surveillance projects. The GPDP, Italy’s national data protection authority, found that Trento officials did not properly anonymize the data that was collected, nor did they meet the requirements GDPR imposes for sharing that data with third-party organizations.
Even though the GPDP recognized that Trento officials acted in good faith, the lackluster oversight and “multiple violations of privacy regulations” that occurred resulted in a 50,000 euro fine (approximately $54,225).
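For any organization that shares collected data with outside parties, the practical takeaway is to strip or mask identifying details before the data ever leaves your systems. The sketch below shows one simple, hypothetical approach in Python (keyed hashing of direct identifiers plus coarsening of location and time); note that GDPR treats true anonymization as a higher bar than pseudonymization like this:

```python
import hashlib
import hmac

# Hypothetical secret key; in practice it would be stored and rotated outside the dataset.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed hash so records stay linkable
    without revealing who they belong to."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

def prepare_for_third_party(record: dict) -> dict:
    """Mask identifiers and coarsen detail before a record leaves the organization."""
    return {
        "subject_id": pseudonymize(record["name"]),    # masked identity
        "street": record["street"].rsplit(" ", 1)[0],  # drop the house number
        "timestamp_hour": record["timestamp"][:13],    # truncate to the hour
        # free-text fields such as transcripts are omitted entirely
    }

raw = {"name": "Mario Rossi", "street": "Via Belenzani 20", "timestamp": "2023-06-01T14:32:05"}
print(prepare_for_third_party(raw))
```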
What Businesses Can Learn from AI Violations
If there’s a common thread in the seven cases listed above, it’s that consent matters. In nearly every case we highlighted, a company or municipality found itself in violation of the law because it chose to access consumer data without receiving express permission from the owners. Often enough, the violator wasn’t being intentionally malicious, but good intentions don’t determine whether courts or regulators will absolve a business of wrongdoing.
Some cases are more nuanced, like Dinerstein v. Google, where the more stringent HIPAA regulations were allegedly violated. Similarly, the ACLU’s charge against Aon highlights how easily AI can be used to entrench systemic discrimination. And the Facebook and Cambridge Analytica scenario demonstrates how easily data can be mined and used to influence large-scale events with far-reaching implications.
Be Smart About Data Collection and Use
Allowing consumers to give informed consent over what data is shared and how it will be used is one of the biggest safeguards a company can implement to avoid getting hit with a lawsuit. Enzuzo knows that data privacy is critical for businesses of every size.
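In practice, informed consent means recording which purposes a user has agreed to and refusing to process their data for anything else. Here is a minimal sketch of that idea in Python (a hypothetical structure, not Enzuzo’s actual product):

```python
from datetime import datetime, timezone

class ConsentRegistry:
    """Tracks which processing purposes each user has explicitly agreed to."""

    def __init__(self) -> None:
        self._records: dict[tuple[str, str], datetime] = {}

    def grant(self, user_id: str, purpose: str) -> None:
        # Record the purpose and the time consent was given, for auditability.
        self._records[(user_id, purpose)] = datetime.now(timezone.utc)

    def has_consent(self, user_id: str, purpose: str) -> bool:
        return (user_id, purpose) in self._records

def collect_analytics(user_id: str, event: dict, consents: ConsentRegistry) -> None:
    # Data is only processed for purposes the user explicitly approved.
    if not consents.has_consent(user_id, "analytics"):
        return  # no consent, no collection
    print(f"stored {event} for {user_id}")  # stand-in for real storage

consents = ConsentRegistry()
collect_analytics("u123", {"page": "/pricing"}, consents)  # dropped: no consent yet
consents.grant("u123", "analytics")
collect_analytics("u123", {"page": "/pricing"}, consents)  # now stored
```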
Whether you’re focused on compliance with GDPR, CCPA, or other local, federal, or international privacy regulations, our suite of services gives you the tools to easily integrate the right safeguards. More importantly, these solutions don’t require an extensive background in coding and take minimal effort to customize to your business’s data usage needs. Whether your business relies on AI or not, data compliance isn’t a game. Let us help ensure that your website has the proper data usage notices to protect your business.
Schedule a free consultation to learn how Enzuzo can help with AI governance, consent, and incident management reporting.
Osman Husain
Osman is the content lead at Enzuzo. He has a background in data privacy management via a two-year role at ExpressVPN and extensive freelance work with cybersecurity and blockchain companies. Osman also holds an MBA from Toronto Metropolitan University.