
The Italian data protection authority has concluded its investigation into OpenAI's ChatGPT, which began in March 2023. The investigation found multiple privacy violations, resulting in a €15 million fine and mandatory public education requirements. According to regulators, OpenAI failed on several fronts: the company didn't report a data breach, lacked a legal basis for processing personal data, violated transparency principles, and didn't implement proper age verification measures for the AI chatbot. As part of the ruling, OpenAI must run a six-month information campaign to educate the public about ChatGPT. The campaign will explain how the AI system works, including details about data collection for model training and users' privacy rights. During the investigation, OpenAI moved its European headquarters to Ireland, shifting the responsibility for ongoing privacy investigations to Irish regulators.


O2? Can't do! OpenAI has confirmed it will skip the name "o2" for its next reasoning model to avoid a trademark conflict with the British telecommunications giant O2, jumping straight to "o3" instead. The company is investing heavily in its o-series, its first "reasoning" models. To support this work, OpenAI is developing a new large language model called "Orion" that will generate synthetic training data, according to The Information. Microsoft's Phi-4 model recently demonstrated the effectiveness of synthetic data in AI training, at least when it comes to achieving impressive benchmark results. Notably, Sébastien Bubeck, who helped create the Phi series at Microsoft, has since joined OpenAI.
