By Marty Swant • January 29, 2024 • 5 min read
Regulators looking into the dangers of AI are turning up the heat. Last week, the Federal Trade Commission launched a new inquiry into five major AI players to investigate how their investments and partnerships impact competition.
Along with new inquiries for Alphabet, Amazon and Microsoft, the FTC also is investigating Anthropic and OpenAI. Both startups have received multi-billion-dollar investments from the tech giants. As part of the investigation, the FTC is looking for information about investment and partnership agreements, their rationale and the implications for competition, sales, resources and product rollouts.
During a virtual “Tech Summit” last Thursday, FTC chair Lina Khan said the agency will look at whether tech giants are using their power to trick the public, and whether the AI investments allow giants to “exert undue influence or gain privileged access” to secure an advantage across the AI sector. Khan also said the FTC will use findings from previous investigations to guide its strategy, including looking “upstream” at other companies that might be violating U.S. laws. (In 2020, the agency launched a similar inquiry to see if acquisitions by six tech giants harmed competition.)
“Just as we’ve seen behavioral advertising fuel the endless collection of user data, model-training is emerging as another feature that could further incentivize surveillance,” Khan said. “The FTC’s work has made clear that these business incentives cannot justify violations of the law. The drive to refine your algorithm cannot come at the expense of people’s privacy or security, and privileged access to customers’ data cannot be used to undermine competition.”
The inquiries could also benefit from the FTC’s recently approved use of a subpoena-like legal tool that aims to make it easier and faster for the agency to obtain documents and testimony during AI investigations.
Another big tech hearing is happening this week in Congress. On Wednesday, the Senate Judiciary Committee will hold a hearing with the CEOs of five tech giants — Meta, Snap, TikTok, Discord and X — to discuss online child safety on the platforms. While the hearing isn’t focused on AI per se, growing concerns about AI-generated misinformation will likely lead lawmakers to question the executives about what they’re doing to mitigate harmful GenAI ahead of the 2024 elections. And in Europe, last week’s leak of the final text of the European Union’s 800-page AI Act offered a glimpse into how it could shape the EU’s approach to AI regulation.
Worries about AI-generated election misinformation are already becoming reality. Last week, New Hampshire residents received robocalls featuring deepfake audio impersonating President Joe Biden urging people not to vote in the state’s primary. Meanwhile, new research from the Brookings Institution shows how AI on social platforms could affect the 2024 elections, and a separate Mozilla report claims ChatGPT flouts its own election policies related to AI-generated content.
Lawmakers also have condemned the explicit AI-generated images of Taylor Swift that spread rapidly across social media platforms including X and Telegram. In a post on X last week, U.S. Sen. Mark Warner, D-Virginia, wrote about wanting “to pass Section 230 reform so we can hold tech firms accountable for allowing this disgusting content to proliferate.”
“Sadly, current law may insulate platforms and websites from exactly this sort of accountability,” Warner wrote.
Publicis Groupe announces major AI updates
Last week, Publicis Groupe announced major updates to its AI strategy, including a new AI platform called CoreAI and plans to invest €300 million over the next three years. Ahead of the announcement, Digiday interviewed several Publicis Groupe execs about the process of developing CoreAI and the numerous partners and data sets involved. Beyond using consumer intelligence and performance data, Publicis Groupe trained CoreAI on behavioral science data to teach it attitudinal perspective, online and offline behavioral trends, and data sets from internal sources and walled garden partners.
“When you extend that out further, we certainly have quite a robust strategy,” said Sam Levine Archer, chief solutions architect for Publicis North America. “Experience design, measurement and analytics [and] sets of practices throughout the group to observe behaviors and exposure to marketing and behaviors thereafter to see impact. All of that are both structured and unstructured datasets that we have as part of our practice today and that is being included in what’s training our models.”
Other AI news from across Digiday
- The Recording Academy and IBM spoke with Digiday for an exclusive on a new generative AI tool made for the 2024 Grammy Awards.
- Google debuted new GenAI tools for Chrome users and expanded its rollout of GenAI tools for advertisers now powered by its Gemini model.
- CourtAvenue’s new generative AI platform, Genjo, is set to launch with Kia and other e-commerce clients.
- Podcast networks are testing AI tools for faster translation and ad sales.
- Publishers are hesitant to add their chatbots to OpenAI’s GPT Store.
Prompted Products: AI related announcements and reports
- Diageo launched a new “Breakthrough Innovation Team” that will explore ways for the alcohol giant to use AI and other tech.
- Dentsu and Amazon Web Services announced a partnership to scale adoption of GenAI platforms like Bedrock and SageMaker.
- Etsy launched a new AI-powered “Gift Mode” feature to help consumers find the right gift for people based on their interests.
- The Marketing AI Institute announced new AI courses related to piloting, scaling and mastering AI along with a new subscription-based membership program.
- A new report shows the most popular AI tools in 2023 and a forecast for 2024.
- Google and Hugging Face announced a new partnership to boost open collaboration for AI.
- A new report from the University of Oxford and Reuters Institute dives into how media execs think AI could impact the industry.