Disinformation News Digest for 01/11/2020
News on Disinformation & Deepfakes from the past week: California considers criminalizing nonconsensual deepfake pornography; TikTok struggles with celebrity impersonators, per a report; AI-generated false text poses growing dangers, from frauds like the fake "Iran draft" texts to its impact on public discourse of every kind, per Bruce Schneier in The Atlantic; AI companies are selling realistic computer-generated faces for dating apps and advertising, raising risks of fraud and undermining actual (human) talent; BuzzFeed News investigates disinformation-for-hire; Facebook announces a new deepfake policy.
Author's Note: This is a summary of recent news developments on disinformation and deepfakes, with a special emphasis on developments related to business and the private sector. It does not constitute legal advice in any way and reflects only my personal selection of interesting items and analysis. Enjoy!
1) California Legislators Consider New Law Criminalizing Nonconsensual Deepfake Pornography – California Legislature – Jan. 8, 2020
California Assemblyman Timothy Grayson introduced a new bill on Jan. 8, 2020 to make it a crime to create and distribute deepfake pornography without the consent of the depicted person. If passed, California would become only the second state (after Virginia) to criminalize nonconsensual deepfake pornography. The bill (AB 1903) would increase the monetary penalties if the deepfake depicts a minor. It exempts deepfakes that have political or newsworthy value, are satire, or are clearly fictional. It also appropriates $25 million to the University of California system to fund research into identifying and combating deepfakes. The bill follows two California laws that went into effect on January 1, 2020, which made it a civil violation to create and distribute deepfake pornography and to distribute certain election-related deepfakes. (Asm. Grayson introduced a similar bill to criminalize deepfake pornography (AB 1280) in February 2019.) I have analyzed this bill on Twitter. The Sacramento CBS affiliate's story on the bill is here.
State legislators are proceeding apace to legislate on deepfakes. Just last week (on Jan. 2, 2020), a bill was introduced in the Maine legislature to prohibit certain deepfakes related to elections.
2) Celebrity Impersonators Run Rampant on TikTok As Platform Has No Verification System: TikTok has a problem with impersonator accounts, including Donald Trump, Bernie Sanders, and Boris Johnson. Is it time for a TikTok blue check mark? – Screen Rant – Jan. 9, 2020
This article discusses impersonators on TikTok, which does not employ the “blue check mark” system of Instagram and Twitter. Notable here is the observation that deepfakes could spread more widely when the propagator can claim convincingly to be a celebrity or person of public renown. “TikTok has a problem with fake celebrity accounts and it remains to be seen if the problem will be fixed anytime soon. While there are plenty of social media platforms now available, TikTok is one of the newest and fastest growing. However, its lack of mature protections make it easy pickings for impersonators. … If a service like TikTok has no real way to identify a real account, and many assume that account to be authentic, then posting videos using deepfake technology on these accounts runs the risk of greatly increasing the spread of misinformation. Something that many of TikTok’s main competitors are actively trying harder to combat.”
3) You’re Not Being Drafted: Army Says Scary Texts Are ‘Completely Fake’ – NY Times - Jan. 8, 2020
Tensions with Iran have led to a raft of fake text messages telling people they are being drafted into the military. The story is notable because it points to the danger of such frauds, which can be perpetrated at scale, especially when machine-generated text can trick unsuspecting victims. Also notable: the Army responded by issuing a “fraud alert” warning that the messages were fake and saying it was investigating who was behind them.
“Text” is often a forgotten medium of AI technology. I note that most legislation on deepfakes, including the 2020 National Defense Authorization Act (see WilmerHale Alert) and the Deepfake Report Act pending in Congress (see another WilmerHale alert), includes synthetic, “machine-generated text” in its definition of deepfakes.
4) Bots Are Destroying Political Discourse As We Know It: They’re mouthpieces for foreign actors, domestic political groups, even the candidates themselves. And soon you won’t be able to tell they’re bots – The Atlantic, Jan. 7, 2020
Further to the above, cybersecurity expert Bruce Schneier discusses the dangers of machine-generated text and “chat bots” on political discourse writ large. He points out that administrative public comment processes have been deluged by fake, manufactured comments. Such manipulation can affect any venue that involves public commentary. It should be obvious that such commentary could include false, manufactured comments made at scale about brands and businesses.
“One of the biggest threats on the horizon: Artificial personas are coming, and they’re poised to take over political debate. The risk arises from two separate threads coming together: artificial-intelligence-driven text generation and social-media chatbots. These computer-generated “people” will drown out actual human discussions on the internet. Text-generation software is already good enough to fool most people most of the time. … Chatbots have been skewing social-media discussions for years. About a fifth of all tweets about the 2016 presidential election were published by bots, according to one estimate, as were about a third of all tweets about that year’s Brexit vote. … Combine these two trends and you have the recipe for nonhuman chatter to overwhelm actual political speech.”
“Soon, AI-driven personas will be able to write personalized letters to newspapers and elected officials, submit individual comments to public rule-making processes, and intelligently debate political issues on social media. They will be able to comment on social-media posts, news sites, and elsewhere, creating persistent personas that seem real even to someone scrutinizing them. They will be able to pose as individuals on social media and send personalized texts. They will be replicated in the millions and engage on the issues around the clock, sending billions of messages, long and short. Putting all this together, they’ll be able to drown out any actual debate on the internet. Not just on social media, but everywhere there’s commentary.”
5) Dating apps need women. Advertisers need diversity. AI companies offer a solution: Fake people – Washington Post, Jan. 7, 2020
This piece explores the use of AI-created faces for apps and advertising. Beyond the fraud risks, the technology also poses particular threats to (human) creative talent and their livelihoods.
“Artificial intelligence start-ups are selling images of computer-generated faces that look like the real thing, offering companies a chance to create imaginary models and “increase diversity” in their ads without needing human beings. One firm is offering to sell diverse photos for marketing brochures and has already signed up clients, including a dating app that intends to use the images in a chatbot. Another company says it’s moving past AI-generated headshots and into the generation of full, fake human bodies as early as this month. … AI experts worry that the fakes will empower a new generation of scammers, bots and spies, who could use the photos to build imaginary online personas, mask bias in hiring and damage efforts to bring diversity to industries. The fact that such software now has a business model could also fuel a greater erosion of trust across an Internet already under assault by disinformation campaigns, “deepfake” videos and other deceptive techniques.”
6) Disinformation For Hire: How A New Breed Of PR Firms Is Selling Lies Online: One firm promised to “use every tool and take every advantage available in order to change reality according to our client's wishes.” – BuzzFeed News, Jan. 6, 2020
This is a detailed investigation of so-called “black PR” firms that use fake social media accounts, websites, and coordinated harassment campaigns to push their clients’ messages or trash their adversaries. It notes that these tactics are particularly pronounced in the Philippines.
“If disinformation in 2016 was characterized by Macedonian spammers pushing pro-Trump fake news and Russian trolls running rampant on platforms, 2020 is shaping up to be the year communications pros for hire provide sophisticated online propaganda operations to anyone willing to pay. Around the globe, politicians, parties, governments, and other clients hire what is known in the industry as “black PR” firms to spread lies and manipulate online discourse. A BuzzFeed News review — which looked at account takedowns by platforms and investigations by security and research firms — found that since 2011, at least 27 online information operations have been partially or wholly attributed to PR or marketing firms. Of those, 19 occurred in 2019 alone.”
7) Facebook Announces New Rules Around Manipulated Media – Facebook, Jan. 6, 2020
Here is the crux of the company’s new policy, “Enforcing Against Manipulated Media”:
“Going forward, we will remove misleading manipulated media if it meets the following criteria:
-- It has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say. And:
-- It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.
This policy does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words.”
----------