Italy’s data protection authority, the Garante, has imposed a €15 million fine on OpenAI, citing GDPR violations in ChatGPT’s handling of personal data. According to the Garante, OpenAI lacked a legal basis for processing user data, failed to ensure transparency, and did not implement effective age verification measures to protect children under 13. “This decision underscores the importance of accountability in AI,” the Garante said, emphasizing that companies must adhere to privacy laws while innovating. The fine follows a temporary ban on ChatGPT in 2023, which was lifted after OpenAI introduced features such as allowing users to opt out of having their data used for algorithm training. However, the watchdog remained unconvinced by the company’s transparency efforts. This landmark case, the first significant GDPR penalty against a major AI player in Europe, raises critical questions about balancing innovation and compliance in the era of generative AI. Read the complete article to know more. Link in the comments 🔗
AIM’s Post
More Relevant Posts
-
The Italian Autorità Garante per la protezione dei dati personali has imposed a notable sanction on OpenAI, underlining the growing scrutiny of AI technologies and their compliance with data protection regulations. The decision mandates that OpenAI:
1. Pay a €15 million fine.
2. Launch a six-month public awareness campaign across major Italian media platforms (radio, TV, newspapers, and the internet). This campaign must educate the public about:
a) How ChatGPT operates.
b) The implications of its data processing on personal privacy, particularly regarding the collection of data for AI training purposes.
c) Individuals’ rights under the GDPR, such as the right to object and the right to data deletion.
OpenAI has 60 days to submit its campaign plan for approval by the Authority, with implementation starting 45 days post-approval.
3. Demonstrate compliance by providing detailed reports to the Authority within 60 days of the campaign's conclusion.
This ruling sets a significant precedent for the enforcement of privacy laws in the AI domain, and its implications will likely resonate globally.
-
First GDPR Fine for Generative AI: €15 Million. Italy's privacy watchdog fined ChatGPT maker OpenAI €15 million after closing an investigation into the use of personal data by the generative artificial intelligence application. The authority said it found OpenAI processed users' personal data "to train ChatGPT without having an adequate legal basis and violated the principle of transparency and the related information obligations towards users". OpenAI had no immediate comment. It has previously said it believes its practices are aligned with the European Union's privacy laws. Last year the Italian watchdog briefly banned the use of ChatGPT in Italy over alleged breaches of EU privacy rules. Text of Decision: https://lnkd.in/eFJyu2-m
-
🔎 December 20, 2024: The Italian DPA has imposed a €15 million fine on OpenAI. Why? You guessed it: GDPR. The investigation revealed that OpenAI used personal data to train ChatGPT without a proper legal basis, violating GDPR's principles of transparency and its user information obligations. But that’s not all! The DPA also raised concerns about OpenAI’s age verification system, which it says failed to protect children under the age of 13 from potentially harmful AI-generated content. OpenAI disagrees with the fine, calling it disproportionate, and plans to appeal. It has pointed out that the fine actually exceeds its revenue in Italy during the non-compliance period! (For context: under GDPR, fines can reach up to 4% of global annual turnover.) 📢 In addition to the fine, the Italian authority has ordered OpenAI to run a 6-month awareness campaign across Italian media to help the public better understand how ChatGPT works, specifically around data collection and AI training. 🚨 This case is a reminder for every AI, governance, and privacy professional to:
- Reevaluate your own AI initiatives: are they compliant with relevant privacy regulations?
- Embed privacy: is privacy considered at every stage of your product and process development?
- Use this case as a conversation starter: how are you managing transparency and user data in your AI projects?
#GDPR #AI #Privacy #DataPrivacy #OpenAI #ChatGPT #Compliance #AIRegulation #Regulation #Innovation
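The "4% of global annual turnover" figure mentioned above comes from GDPR Article 83(5), which caps the most serious administrative fines at the higher of €20 million or 4% of total worldwide annual turnover. A minimal sketch of that calculation (the turnover figure below is purely hypothetical, for illustration only):

```python
def gdpr_max_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound for a GDPR Article 83(5) fine:
    the higher of EUR 20 million or 4% of worldwide annual turnover."""
    return max(20_000_000.0, 0.04 * global_annual_turnover_eur)

# Hypothetical EUR 2 billion turnover: the 4% prong (EUR 80M) exceeds
# the flat EUR 20M floor, so it becomes the applicable ceiling.
print(gdpr_max_fine(2_000_000_000))  # 80000000.0
```

For smaller companies, the flat €20 million prong dominates instead, which is exactly why a fine can exceed a company's revenue in a single member state, as OpenAI argues happened here.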
-
What do you do when AI hallucinates your personal information...trippy question, hey? 😵 It's important because many people already rely on AI as though its outputs are fact even though it may be synthesising inaccurate inputs (and the companies themselves often have disclaimers about the limits of the results). It's possible then that AI can spread incorrect information about us and we have no way of stopping it. There was a case last year, for example, when ChatGPT insisted on telling the tech journalist Tony Polanco that he was dead, despite him being very much alive to clap back. It's now become a legal case too as the privacy rights group noyb has filed a complaint against OpenAI. In EU GDPR law, consumers have the right to access the data companies hold about us, to assess its veracity, and even to have it erased. noyb's allegation is that AI hallucinations lack an audit trail and so any legal way of removing these "facts" is impossible. https://lnkd.in/et4RTHDK #GDPR #AI
-
🛡️ The ChatGPT Taskforce Report published by the European Data Protection Board (EDPB) sheds light on the challenges and opportunities in the dynamic AI landscape. It’s a must-read for anyone curious about Large Language Models (LLMs) such as OpenAI's ChatGPT. 🔍 Key Highlights: 🔹 GDPR Compliance: LLMs, which process vast amounts of data, including personal data, must adhere to GDPR standards. 🔹 Investigations Underway: Supervisory Authorities (SAs) are actively investigating OpenAI's data practices for ChatGPT versions 3.5 to 4.0. 🔹 EU Establishment and OSS: With OpenAI's establishment in the EU as of February 2024, the One-Stop-Shop (OSS) mechanism for cross-border data processing is now applicable. 🔹 EDPB Taskforce: A dedicated taskforce has been formed to coordinate enforcement actions and information exchange among SAs. 🔹 Legal and Ethical Obligations: OpenAI relies on legitimate interest under Article 6(1)(f) GDPR for data processing, with stringent safeguards to balance interests. 🔹 Transparency and Fairness: Users must be clearly informed about data usage, ensuring fairness and avoiding undue risks. 🔹 User Rights: OpenAI must facilitate the easy exercise of rights like access, deletion, and rectification for users. 📚👉 Read the Report #AI #Privacy #EDPB #DataProtection
-
Italy’s Garante Fine on OpenAI: A Wake-Up Call for AI and Privacy Compliance 🚨 €15 million fine! Italy’s privacy watchdog, Garante, has made headlines by penalizing OpenAI for breaching the GDPR. The issue? OpenAI reportedly processed personal data to train ChatGPT without a valid legal basis, falling short on transparency and user rights. This isn’t just a fine; it’s a lesson for all AI innovators: 🔍 Transparency is non-negotiable—users must know how their data is being used. 📜 Consent matters—or, at the very least, a solid legal basis for processing data. 🤖 Privacy by design should be part of every AI system's DNA. This case highlights the growing tension between AI innovation and data privacy laws. As AI continues to shape our world, responsible innovation must go hand-in-hand with ethical data practices. Garante’s decision is a reminder that trust and compliance aren’t just regulatory hurdles—they’re key to sustainable growth in AI. For OpenAI, this is an opportunity to lead by example. By addressing these gaps, they can demonstrate that privacy-first AI is the future. 🚀 💡 Takeaway for businesses and developers: 1️⃣ Embed privacy by design in your processes. 2️⃣ Prioritize clear communication with users. 3️⃣ Stay ahead of evolving regulations—compliance isn’t optional! The future of AI depends on balancing innovation with respect for individual rights. Let’s build tech that empowers without compromising trust. 💼🌍 #DataPrivacy #AICompliance #GDPR #AIInnovation #EthicalAI #ResponsibleTech #PrivacyMatters #TrustInTech #AI
-
If you all recall the European Union’s recent #AIAct, which aims to regulate and protect against the abuse of #AI technology, then you have probably also wondered how exactly it will be enforced on a regular basis. The answer? Today I learned from TechCrunch that a dedicated taskforce exists specifically to enforce the data protection rules the EU is mandating for platforms such as OpenAI’s ChatGPT. The problem is that there is still a lack of #clarity on how current data protection laws apply to various platforms, so there is a lot of disagreement and vagueness about how enforcers should restrict AI platforms and manage the volume of complaint tickets the taskforce receives. While I don’t think a monitoring taskforce is a bad idea, the way it has been implemented in the EU is a good example of how difficult it is to enforce regulations without a clear vision of how to do it efficiently and effectively, especially for ever-evolving technology like AI. Do any of you have suggestions or ideas? Is it even truly possible on an everyday scale? Should we grant AI platforms certain freedoms, as we do with the #Internet? What do you all think?
-
The European Data Protection Board recently issued the preliminary conclusions of its "GPT taskforce" related to an analysis of how OpenAI's ChatGPT AI model complies with European legislation, and the results are, well, not so good. Firstly, it is noted that there are concerns regarding privacy, transparency, and users' data protection. Secondly, it raises questions about data accuracy. Here are a few excerpts from the report: "(...) due to the probabilistic nature of the system, the current training approach leads to a model which may also produce biased or made-up outputs. In addition, the outputs provided by ChatGPT are likely to be taken as factually accurate by end users, including information relating to individuals, regardless of their actual accuracy." "In line with the principle of transparency pursuant to Article 5(1)(a) GDPR, it is important that proper information on the probabilistic output creation mechanisms and their limited level of reliability is provided by the controller, including explicit reference to the fact that the generated text, although syntactically correct, may be biased or made up." One could wonder what the conclusions would have been if Google Gemini had been under scrutiny... While the final report is still a few months away, these preliminary conclusions strongly suggest that, in the end, ChatGPT and other LLMs operating in the EU will need to have very strong disclaimers on the accuracy of their generated content, at a minimum. How this will affect their usage in business and personal life remains to be seen.
-
A number of people have asked me ‘why are you posting all of these things on regulation as it relates to #AI? This is why… “When OpenAI was asked to correct or remove this misinformation, they said it was ‘technically impossible’ and also failed, when asked, to disclose information about how the data was processed, instead offering to simply filter or block data on certain prompts, related to the complainant.” While still a champion of the transformation AI will have, regulation and monitoring is critical to the health of global society. https://lnkd.in/epFZd7kM
ChatGPT in trouble with the EU (again)
aitoolreport.com
-
ChatGPT4 Tip of the Day - GDPR & Gen AI: Generative AI tools, such as ChatGPT, require vast amounts of data for training, some of which may include personal information. The processing of such data must comply with GDPR's strict requirements for lawful processing, transparency, and user consent. In 2023, Italian authorities temporarily banned ChatGPT, citing non-compliance with GDPR principles, highlighting how AI providers must ensure rigorous data protection practices to avoid penalties.
https://analyticsindiamag.com/ai-news-updates/italy-fines-openai-15-million-euros-over-chatgpt-privacy-violations/