As companies gear up to meet the requirements of the new legislation, TDAi Next explores the changes coming over the next months and years.
The EU AI Act, coming into effect on Thursday, will apply to both existing and in-development artificial intelligence (AI) systems.
This legislation is considered the first in the world to regulate AI based on the risks it poses.
While lawmakers passed the Act in March, its official publication in the Official Journal of the European Union in July set its entry into force in motion.
The August 1 date kicks off a timeline for companies using AI in any form to familiarize themselves with the new law and ensure compliance.
AI Act Evaluates Companies Based on Risk
The EU AI Act sorts AI systems into four risk levels, each with its own compliance timeline: no risk, minimal risk, high risk, and prohibited AI systems.
Certain AI practices will be banned entirely starting February 2025. These include systems that manipulate user decisions or expand facial recognition databases by scraping images from the internet.
High-risk AI systems, such as those that collect biometric data or are used in critical infrastructure and employment decisions, will face the strictest regulations. Companies deploying these systems must disclose their AI training datasets and provide proof of human oversight.
According to Thomas Regnier, a spokesperson for the European Commission, about 85% of AI companies fall under the “minimal risk” category with minimal regulatory requirements.
Timeline for Compliance
Heather Dawe, head of responsible AI at consulting firm UST, is assisting international clients in aligning their AI use with the new Act.
Dawe notes that most of her clients recognize the necessity of AI regulation and are “okay” with the new requirements. Depending on the size of the company and the role AI plays in its workflow, compliance could take between three and six months to achieve.
“There are clear guidelines about what you need to do,” Dawe said. “Not starting the process early enough can lead to complications.”
Dawe suggests that companies consider setting up internal AI governance boards, comprising legal, tech, and security experts, to audit their technologies and ensure adherence to the new law.
If companies fail to comply with the AI Act by the set deadlines, they could face fines of up to seven percent of their global annual turnover, Regnier added.
Preparing for Enforcement
The Commission’s AI Office will oversee compliance with the rules for general-purpose AI models.
The office will initially include 60 internal Commission staff members, with 80 more external hires planned for the next year, according to Regnier.
An AI Board, consisting of high-level delegates from all 27 EU member states, laid the groundwork for the Act’s implementation in their first meeting in June. This board will collaborate with the AI Office to ensure harmonized application of the Act across the EU.
Over 700 companies have committed to the AI Pact, agreeing to comply with the law ahead of the deadlines.
Financial and Technological Support
Additionally, the Commission plans to increase its investment in AI, with a one-billion-euro injection in 2024 and up to twenty billion euros by 2030.
“What you hear everywhere is that the rules of the European Union will block innovation; this is not true,” Regnier said. “The legislation is designed to allow companies to operate in the EU while protecting citizens and businesses.”
One of the Commission’s main challenges is regulating future AI technologies, but Regnier believes the risk-based system will enable quick regulation of new systems.
Need for Further Clarification
Risto Uuk, EU Research Lead at the Future of Life Institute, believes the European Commission needs to provide more detailed guidance on the risk levels of specific technologies.
For example, using a drone to photograph a water supply that needs repairs doesn’t seem very risky, yet it falls into the high-risk category under the legislation.
“When you read it now, it’s quite general,” Uuk stated. “We have general guidance, which is helpful, but companies need specific answers on whether a system is high risk.”
Areas for Improvement
According to Uuk, the Act could impose more restrictions and larger fines on Big Tech companies operating generative AI (GenAI) in the EU.
Major AI companies like OpenAI and DeepMind, whose models are categorized as “general-purpose AI,” fall under the minimal risk category. These companies must prove compliance with copyright law, publish a summary of their training data, and demonstrate cybersecurity protections.
European Digital Rights, a collective of NGOs, believes the Act contains loopholes for biometrics, policing, and national security that need to be addressed. “We regret that the final Act includes many important flaws for biometrics, policing and national security, and we call on lawmakers to close these gaps,” a spokesperson said.