Microsoft report highlights AI efforts around election misinformation and harmful deepfakes

By Marty Swant  •  July 31, 2024

A new report from Microsoft highlights the key challenges, opportunities and urgent questions that come with protecting people from the dangers of AI-generated content.

On Tuesday, the company released new research about efforts to prevent harmful generative AI content such as election-related misinformation and deepfakes. The 50-page white paper also sheds more light on people’s exposure to various types of AI misuse, their ability to identify synthetic content, and growing concerns about issues like financial scams and explicit content. It also offers suggestions for policymakers to consider as lawmakers look to craft new regulations around AI-generated text, images, video and audio.

The report was published amid growing concerns about AI-generated content this election season. It also came the same day the U.S. Senate approved the Kids Online Safety Act (KOSA), which, if it becomes law, could create new regulations for social networks, gaming platforms and streaming platforms, including new content rules related to minors.

In a blog post introducing the report, Microsoft vice chair and president Brad Smith said he hopes lawmakers will expand the industry’s collective abilities to promote content authenticity, detect and respond to abusive deepfakes, and provide the public with tools to learn about synthetic AI harms.

“We need new laws to help stop bad actors from using deepfakes to defraud seniors or abuse children,” Smith wrote. “While we and others have rightfully been focused on deepfakes used in election interference, the broad role they play in these other types of crime and abuse needs equal attention.”

In just the past week, AI-generated videos of President Joe Biden and Vice President Kamala Harris have highlighted concerns about the role of AI misinformation during the 2024 elections. One of the most recent examples is X owner Elon Musk sharing a deepfake of Harris, which some say might have violated his own platform’s policies.

Creating corporate and government rules for AI-generated content also requires setting thresholds for what should be allowed, according to Derek Leben, an associate professor of business ethics at Carnegie Mellon University. He said that raises questions about how to determine thresholds based on content, intention, creator and who a video depicts. What’s created as parody might also become misinformation depending on how content is shared and who shares it.

Microsoft is correct to push for regulation and more public awareness while also building better tools for AI detection, said Leben, who has researched and written about AI and ethics. He also noted that putting the focus on government and users could make it less about corporate responsibility. If the goal is to prevent people from being tricked by AI misinformation in real time, he said, labels should be prominent and require less effort from users to determine authenticity.

“So much of parody has to do with the intentions of the person who created it, but then it can become spread as misinformation where it wasn’t intended,” Leben said. “It’s very difficult for a company like Microsoft to say they’ll put in place preventions that are against abusive videos, but not parodies.”

Experts say watermarking AI content isn’t enough to fully prevent AI-generated misinformation. The Harris deepfake is an example of a “partial deepfake” that has both synthetic audio and seconds of real audio, according to Rahul Sood, chief product officer at Pindrop, an AI security firm. He said those are becoming much more common — and a lot harder for users and the press to detect.

While watermarking can help, its reach is limited. Pindrop tracks more than 350 voice AI generation systems, and Sood said the majority are open-source tools that don’t use watermarking; only about a dozen are commercially available.

“The technology exists to do real-time detection [of content] uploaded onto these platforms,” Sood said. “The question is there seems to be no real mandate forcing them to do it.”

Other companies are also looking for more ways to help people detect deepfakes. One of those is Trend Micro, which just released a new tool to help detect synthetic videos on conference calls. According to a new Trend Micro study, 36% of people surveyed said they have already experienced scams, while around 60% said they’re able to identify them.

“The biggest challenge we’re going to see with AI in the coming years is misinformation,” said Jon Clay, vp of threat intelligence at Trend Micro. “Whether that is the use of deepfakes, whether it’s video or audio, I think that is going to be one of the toughest parts for people to ascertain what is real and what isn’t real.”
