FairNow’s Post

What happens when AI claims don't match reality? The FTC is watching and making sure there are consequences.

Under the FTC's proposed consent order, IntelliVision Technologies, a vendor of AI-powered facial recognition software, would be prohibited from misrepresenting the performance of its AI capabilities.

What were the FTC's complaints?
- IntelliVision claimed its AI was completely free of racial or gender bias, but lacked the evidence to support this.
- It claimed its software was trained on a data sample of millions of faces, when the true number of unique faces was closer to 100,000.
- It lacked sufficient evidence to claim that its AI was robust and could not be "spoofed" with photos or videos.

What are the implications? As with the FTC's enforcement efforts against five AI developers in September, this is a sign that the agency is watching and taking action. Companies should take steps to ensure their claims are substantiated by evidence and avoid exaggerations that stretch the truth.

Read the FTC's press release, linked in the comments.

#FTC #AIgovernance #responsibleAI
