
EU launches AI office to enforce AI Act and supervise development

Ollie Chang, Taipei; Jerry Chen, DIGITIMES Asia

Credit: AFP

The EU announced on May 29 that it has established an AI Office under the Directorate-General for Communications Networks, Content and Technology (DG CNECT).

This follows a month-long process in which EU member states signed and approved the legislation. The office will oversee the regulation and supervision of AI development, with the organizational changes taking effect on June 16 to align with the AI Act's implementation date.

New leadership

According to Euronews, Lucilla Sioli, the European Commission's director for Artificial Intelligence and Digital Industry, will head the new office.

The AI Board, made up of regulators from each of the 27 member states, will help the AI Office standardize regulations and is scheduled to convene its inaugural meeting in June. Member states have a year to designate their official regulatory bodies, so national delegates will represent them at these initial meetings in the interim.

EU AI office's five departments

According to TechCrunch, the office will consist of five departments and plans to hire over 140 professionals, including technical experts, lawyers, political scientists, and economists. The office may expand further in the coming years as needed.

The regulation and compliance unit will liaise with member states, coordinate regulatory enforcement, and ensure consistent implementation and application of the AI Act within the EU. It will also handle investigations, violations, and penalties.

The AI Safety unit will identify potential systemic risks in general-purpose AI models, develop mitigation measures, and establish evaluation methods. The AI for Societal Good unit will plan and execute international projects in areas such as weather simulation, cancer diagnosis, and digital twins for urban redevelopment.

The Excellence in AI and Robotics unit will support and fund AI research and coordinate the GenAI4EU project. The AI Innovation and Policy Coordination unit will oversee the implementation of EU AI policies, monitor technological trends and investments, and promote AI applications through the European Digital Innovation Hub network.

Compared to the US government's more laissez-faire approach to AI regulation, the EU is taking a more proactive stance.

Phases of implementation

According to Reuters, CNBC, and EU announcements, the AI Act will be implemented in phases. Prohibitions on specific uses will take effect six months after the Act enters into force.

Obligations for general-purpose AI models will begin 12 months after entry into force, and obligations for high-risk AI systems will apply after 36 months. This phased approach gives companies a buffer period to comply with the legislative requirements.

Under the AI Act, AI systems will be classified based on the risk they pose: unacceptable risk, high risk, limited risk, and minimal risk, with different obligations or prohibitions applied accordingly.

AI systems deemed to pose "unacceptable risks" under the Act, such as government social scoring systems, predictive policing, and emotion recognition in workplaces and schools, will be subject to bans.

"High-risk" AI systems include systems that significantly impact citizens' health, safety, and fundamental rights, such as self-driving cars and medical devices. Financial services and education sectors are also included due to potential algorithmic biases.

Violations will incur fines ranging from EUR 7.5 million (US$8.2 million) or 1.5% of global turnover, up to EUR 35 million or 7% of global turnover. The amount depends on the type of violation, with the higher of the fixed sum and the turnover-based figure applying in each case.
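As an illustration, a company with EUR 1 billion in annual global turnover that commits the most serious category of violation could be fined 7% of turnover, or EUR 70 million, since that exceeds the EUR 35 million figure; a smaller firm whose 7% share falls below EUR 35 million would instead face the fixed amount.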