What happens when AI claims don't match reality? The FTC is watching and making sure there are consequences.

Under the FTC's proposed consent order, IntelliVision Technologies, a vendor of AI-powered facial recognition software, would be prohibited from misrepresenting the performance of its AI capabilities.

What were the FTC's complaints?
- IntelliVision claimed its AI was completely free of racial or gender bias, but lacked the evidence to support this.
- It claimed its software was trained on a data sample of millions of faces, but the true number of unique faces was closer to 100,000.
- Lastly, it lacked sufficient evidence to claim that its AI was robust and could not be "spoofed" with photos or videos.

What are the implications? As with the FTC's previous enforcement actions against five AI developers in September, this is a sign the agency is watching and taking action. Companies should take steps to ensure that their claims are substantiated by evidence, and avoid exaggerations that stretch the truth.

Read the FTC's press release, linked in the comments.

#FTC #AIgovernance #responsibleAI
FairNow’s Post
More Relevant Posts
-
California's SB 1047 (2024 | CA), which we discussed last week, has been amended to address opponents' concerns about stifling innovation. Removed from the bill is a provision that would have allowed California's attorney general to sue AI companies for negligent safety practices before a catastrophic event occurred. Also removed are the creation of a new state agency, the Frontier Model Division (FMD), and criminal penalties for AI companies. The bill still requires AI companies to submit public "statements" concerning their safety practices. #infohive #AI #tech
-
🚓 Artificial Intelligence (AI) is making waves across industries, and law enforcement is no exception. While AI is often discussed in the context of crime prediction or facial recognition, its potential to revolutionize administrative operations within police departments is equally powerful. Here's how AI-driven administrative tools are reshaping law enforcement:

1️⃣ Streamlined Workflows – From automating time-consuming tasks like report generation to managing complex schedules, AI can drastically reduce the administrative burden on officers. Less time on paperwork means more time serving the community.

2️⃣ Predictive Budgeting – AI can analyze historical data to predict future needs, whether it's staffing during high-crime periods or forecasting equipment maintenance. This allows departments to proactively plan and make smarter budget requests.

3️⃣ Data-Driven Decision Making – By integrating AI with administrative tools, departments can get deeper insights into everything from resource allocation to performance evaluations. This data empowers leadership to make informed decisions that improve operational efficiency and community outcomes.

🔍 The future of law enforcement is data-driven and AI-powered. As departments adopt these advanced tools, they can work more efficiently, justify budget requests with solid evidence, and ultimately provide better service to the communities they protect.

#AIInLawEnforcement #AdminTech #ArtificialIntelligence #LawEnforcementInnovation #PublicSafety #DataDrivenDecisions #OperationalEfficiency #LawEnforcement
-
AI technology has taken the world by storm in the last few years, but few regulations have been imposed regarding its use. 📌 In a landmark ruling, the FCC has banned robocalls using AI-generated voices to combat scams and misinformation. 📌 This applies to robocalls made with AI voice-cloning tools under the Telephone Consumer Protection Act. 📌 Companies using AI voices in their calls now risk fines or potential service blocks from the FCC. In this digital age, it's important to be cautious and informed. Always be wary of unsolicited calls, and remember, if something sounds too good to be true, it probably is! 📞🚫 #AI #TechNews #FCC #Robocalls #DigitalSafety Source: https://lnkd.in/erGFuQ3c
-
The appointment of an AI system as a director poses many interesting questions, foremost of which is whether or not AI systems have legal personalities. Other considerations include: can the same system be held liable and accountable in the event of a breach of fiduciary duty? Can an AI system be described as a person in this regard? In this article, our Founding Partner, Mobolaji Oriola, and Associate Halleluyah Edem delve into the role of technology and AI in corporate governance. Visit https://lnkd.in/dux84GEK to read the full piece. Do have an enjoyable read. #artificialintelligence #technology #corporategovernance #lawyers
-
In our latest AI newsletter, Akin Intelligence, we highlight: ◾ The EU AI Act, which was approved by the European Parliament, moving one step closer to becoming the first major AI law. ◾ The U.S. DOJ bringing criminal charges over trade secret theft related to AI, and lawmakers continuing to propose new legislation. ◾ Industry news: NVIDIA’s annual conference, which emphasized the expanding role of AI across every sector and their new AI-focused hardware to power the growing demand. ◾ And more. Subscribe to Akin Intelligence today: https://lnkd.in/gSg4rKDT By: Shiva Aminian, Nathan Brown, Desiree Busching, Cono Carrano, Davina Garrod, Jingli Jiang, Jaelyn Edwards Judelson, Michael Kahn, Natasha Kohne, Lauren Leyden, Ed Pagano, Michelle Reed, Hans Rickhoff, Corey Roush, David Vondle, Ryan Thompson, Lamar Smith, Reggie Babin, Jenny Arlington, Ryan Dowell, Marshall Baker, Jan Walter, Megan Mahoney, Jaylia Yan, Omar Farid, Taylor Daly, Joseph Hold #AI #ArtificialIntelligence #TradeSecrets #AIRegulation
-
Can AI stop 100% of consumer fraud? Our platform records five data points, including facial recognition, to ensure the signer is the buyer, and our AI proctors ensure the customer understands and approves your key disclosures. Consumers lost over $10 billion to fraud in 2023. Do you think it's possible to get that number to $0 with AI?
-
Make sure you have policies in place banning the use of non-corporate AI tools. Corporate plans provide controls that prevent the AI from putting your data into the public sphere and training on it. "Employees are sharing company legal documents, source code, and employee information with unlicensed, non-corporate versions of AIs, including ChatGPT and Google Gemini, potentially leading to major headaches for CIOs and other IT leaders, according to research from Cyberhaven Labs... unauthorized AI creates risk, regardless of whether the AI model trains on that data or shares it with other users, because that information now exists outside company walls, adds Pranava Adduri, CEO of Bedrock Security." https://lnkd.in/g2x4QPfy
Unauthorized AI is eating your company data, thanks to your employees
csoonline.com
-
Complimentary CLE session for my legal network! ✴️Register Now! ✴️ https://okt.to/ubeHEU Artificial intelligence will replace legal operations tasks, NOT roles. That's the belief of our speakers and contributors to the insightful new book, "Legal Operations in the Age of AI and Data." So, before you imagine an AI future like Skynet from the Terminator movies, join Memme Onwudiwe (Evisort), Ashley Miller (Capgemini), Brooke Van de Kamp (Johnson Controls), and Maryam Salehijam (Axiom) for this webinar and see how you can harness the power of AI to optimize your role, enhance efficiency, and drive strategic decision-making. #Axiom #LegalOperations #LegalCommunity #LegalInnovation #LegalTech #LegalAI
-
AI regulation is at its best when it focuses on the potential for harm (discrimination in hiring, bias in policing, misinformation...), rather than on technical differences between models. The key to regulating AI without stopping positive technological change (and to avoiding the trope of the law that can't catch up) is to set accountability mechanisms that mitigate real risks by holding entities responsible for the things they profit from.
Government should regulate AI misuse and harm, not the basic technology - Capitol Weekly
https://capitolweekly.net
-
🚀 Dive into the latest tech law with Jonathan Armstrong and Eric Sinrod! They unpack personal liability in tech, AI's influence, lessons from actual cases, LinkedIn profile risks, the Horizon investigation, and the NY State Bar Task Force on AI, and share insights on data breaches and France's AI adoption guidance. Tune in now: https://bit.ly/49TNDNO #TechLaw #AI #DataSecurity
Here is the FTC's press release: https://www.ftc.gov/news-events/news/press-releases/2024/12/ftc-takes-action-against-intellivision-technologies-deceptive-claims-about-its-facial-recognition