European Parliament passes landmark AI legislation with AI Act vote
By Marty Swant • March 14, 2024 • 4 min read
The European Parliament on Wednesday voted to approve the highly anticipated AI Act, comprehensive legislation to govern artificial intelligence across the European Union. First introduced in 2021, the AI Act aims to provide a risk-based approach to regulating AI without stifling innovation across the 27-country bloc.
After three years and 800 amendments, the landmark legislation creates new guardrails for developing and deploying AI systems and various AI tools. In addition to new transparency requirements, the rules cover a range of concerns related to copyright, intellectual property, data privacy, health and safety and other ethical issues. The AI Act also addresses AI-generated deepfakes and election-related content, requiring clear disclosures that label images, video and audio as AI-generated.
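The Act does not prescribe a technical format for those disclosures. As a purely illustrative sketch, here is one way a generator could attach a machine-readable label to its output via image metadata; the "ai_generated" field name and the metadata-based approach are assumptions for illustration, not anything the law mandates.

```python
# Illustrative only: one hypothetical way to embed an AI-generated
# disclosure in image metadata. Requires Pillow (pip install pillow).
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_disclosure(img: Image.Image, path: str) -> None:
    meta = PngInfo()
    # Hypothetical field names; the AI Act does not specify a format.
    meta.add_text("ai_generated", "true")  # machine-readable flag
    meta.add_text("disclosure", "This image was generated by an AI system.")
    img.save(path, pnginfo=meta)

# Stand-in for a model's output image.
img = Image.new("RGB", (64, 64), color="gray")
save_with_ai_disclosure(img, "labeled_output.png")
```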
Lawmakers sought to “create enablers” for European businesses while also enhancing protections for citizens, according to Dragos Tudorache, a Romanian member of the European Parliament. At a press briefing before the vote, Tudorache, who was co-rapporteur for the AI Act alongside Italian Member of the European Parliament Brando Benifei, noted that lawmakers faced heavy lobbying against transparency measures for rules around AI and copyrighted materials. While companies pushed to keep “black box” AI models intact, he said lawmakers knew transparency rules around data and content would be important.
“It is the only way to give effect to the rights of authors out there or whatever they are — scientists or physicians,” Tudorache said. “How else would they know whether their work was used in a training algorithm that is then capable to reproduce or to emulate the kind of creation?”
The AI Act was crafted using a risk-based approach, which applies increasingly strict standards at higher levels of risk. “High-risk” uses include AI systems that pose health and safety hazards, such as AI in medical devices, vehicles, emotion-recognition systems and law enforcement. AI systems unlikely to harm EU citizens’ rights or safety will be categorized as “low risk.” While high-risk uses face higher standards for data quality, AI transparency, human oversight and documentation, low-risk uses will require companies to inform users they’re interacting with an AI system. Companies with low-risk uses will also have the option to voluntarily commit to codes of conduct.
The EU has also outlined uses where AI systems pose an “unacceptable risk” and will be banned outright by the AI Act: using AI for social credit scoring, behavioral manipulation, untargeted scraping of images for facial recognition, and exploiting citizens’ vulnerabilities such as age and disability.
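To make the tiered structure concrete, here is a minimal Python sketch mapping the use cases named above to the obligations the article describes. The tier assignments and the `obligations` helper are illustrative simplifications of a long legal text, not legal guidance.

```python
# A rough sketch of the Act's risk tiers as described in the article,
# pairing each tier with the obligations the article lists.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "data quality, transparency, human oversight, documentation"
    LOW = "inform users they are interacting with AI; optional codes of conduct"

# Illustrative mapping of use cases named in the article to tiers;
# real classification depends on the full legal text.
USE_CASE_TIERS = {
    "social credit scoring": RiskTier.UNACCEPTABLE,
    "behavioral manipulation": RiskTier.UNACCEPTABLE,
    "untargeted facial-recognition scraping": RiskTier.UNACCEPTABLE,
    "AI in medical devices": RiskTier.HIGH,
    "emotion recognition": RiskTier.HIGH,
    "law enforcement": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LOW,
}

def obligations(use_case: str) -> str:
    tier = USE_CASE_TIERS.get(use_case)
    if tier is None:
        return f"{use_case}: unclassified"
    return f"{use_case}: {tier.name} risk -> {tier.value}"

print(obligations("AI in medical devices"))
# AI in medical devices: HIGH risk -> data quality, transparency, ...
```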
According to Benifei, many Europeans are still skeptical about AI, which could be a “competitive disadvantage” that stifles innovation.
“We want our citizens to know that thanks to our rules, we can protect them and they can trust the businesses that will develop AI in Europe and that this is a way to support innovation,” Benifei said. “Having in mind our fundamental values, protection of consumers or workers of citizens, transparency for businesses for downstream operators.”
The AI Act comes eight years after European lawmakers passed landmark legislation on another key topic: data privacy. While the General Data Protection Regulation (GDPR) sought to retrofit rules onto the already entrenched ecosystem of digital advertising, the new rules for AI arrive while the industry is still in its early days.
Privacy experts say the AI Act could help raise standards globally if AI companies treat the EU as a benchmark for how they apply AI worldwide.
“What’s different here is we’re talking about the regulation of new technological systems,” said Joe Jones, head of research at the International Association of Privacy Professionals. “And it invokes debate and commentary on whether you’re going too fast or too slow when it comes to developing technology and the harms of technology.”
Although Wednesday’s vote was a major milestone, it’s part of a longer multi-year rollout across the 27-country bloc. After the AI Act becomes law, likely by late spring, countries will have six months to outlaw AI systems banned by the EU. Rules for chatbots and other AI tools will take effect a year later and become fully enforceable by 2026. Violations could lead to fines of up to 7% of a company’s global revenue or 35 million euros.
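As a back-of-the-envelope sketch of that penalty ceiling: the article gives two figures, and EU legislation such as GDPR typically applies whichever is higher. Treat that "whichever is higher" reading as our assumption, not something the article states.

```python
# Hypothetical penalty ceiling using the two figures from the article.
# Assumption (ours): the higher of the two applies, as under GDPR.
def max_fine_eur(global_revenue_eur: float) -> float:
    return max(0.07 * global_revenue_eur, 35_000_000)

# e.g., a company with 10 billion euros in global revenue:
print(f"{max_fine_eur(10e9):,.0f}")  # 700,000,000
```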
During an online panel Wednesday afternoon, top privacy executives from OpenAI and IBM said it’s important for companies to “go back to basics” and map out their data and content strategies before the AI Act takes effect.
“I often use an analogy, a notion where you almost have to be a master of microscope and telescope,” said Emma Redmond, assistant general counsel at OpenAI. “By microscope, I mean really trying to assess and see what is it in a particular organization… How is the AI Act applying based on what you’re doing right now? You also have to look telescopically in terms of what are the plans going forward and in the future.”