China is not generally the country that Western democracies look to when crafting new laws. But starting this week, it will serve as a case study on how to address a problem that's perplexing legislators across the globe: people appearing to say or do things they never did, thanks to deepfakes made with artificial-intelligence tools.
On Tuesday, new Chinese regulations came into effect that prohibit the use of deepfakes deemed harmful to national security or the economy. The rules also state that deepfakes must be prominently labeled as synthetically generated (or edited) if they might be misconstrued as real.
China, of course, has different priorities than democracies around the world. State-controlled media outlets are often mouthpieces for the ruling party, which directly controls major media groups. Reporters Without Borders calls China "the world's largest prison for journalists" and ranks it near the bottom in terms of media freedom.
Still, other nations can learn from China's attempt to regulate deepfakes, argues Graham Webster, who tracks the nation's digital-policy developments while running the DigiChina Project at Stanford University.
"Although China's political system and its government's goals differ significantly from many other countries', I argue the world can learn from China's early and proactive attempt to navigate challenges faced around the world," he tweeted this week.
Speaking to the Wall Street Journal, he described the new rules as one of the world's first large-scale efforts to try to address one of the biggest challenges confronting society.
Observers outside China, he noted, will see how such rules play out in the real world and how businesses are affected.
There's little doubt other governments will soon have to address deepfakes, which are rapidly reaching new levels of sophistication in both video and audio form. Wedbush tech analyst Dan Ives described an "A.I. arms race" taking place globally in a Wednesday note to clients, pointing to Microsoft's investment in ChatGPT maker OpenAI.
Elections might be influenced by candidates appearing to say things they did not. Digital simulations of celebrities, including Tom Cruise, have appeared in ads without their permission. In war, national leaders might appear to advise citizens to surrender, as happened to Ukrainian President Volodymyr Zelensky in the early days of Russia's invasion.
More recently, Microsoft showed off a text-to-speech A.I. model, called VALL-E, that needs only a three-second sample of someone's voice to simulate it, as Ars Technica reported Monday. It can then create audio of that person seeming to say anything, complete with timbre, emotional tone, and even room acoustics.
Microsoft researchers did not provide the code for others to experiment with. VALL-E, they noted on an example website, "may carry potential risks in misuse, such as spoofing voice identification or impersonating a specific speaker."
They also suggested some guidelines, writing: "If the model is generalized to unseen speakers in the real world, it should include a protocol to ensure that the speaker approves the use of their voice and a synthesized speech detection model."