From Brussels to the world: EU AI Act advisor on shaping the future of AI regulation

Jerry Chen, DIGITIMES Asia, London

On March 13, 2024, the European Parliament approved the "AI Act," a landmark piece of legislation set to be implemented in phases starting in mid-2025.

The act represents the EU's effort to establish comprehensive rules for AI technologies. Yet despite its ambitious goals and its status as the world's first concrete piece of AI legislation, it has sparked considerable debate over compliance costs and the fundamental definitions of AI systems.

Kai Zenner (KZ) is the Head of Office and Digital Policy Adviser for MEP Axel Voss of the EPP Group in the European Parliament. Over the past seven years, he has been deeply involved in the preparatory work and initial discussions surrounding AI regulation.

He has been instrumental throughout the Act's development, sitting on several committees and co-leading technical negotiations. In an interview with DIGITIMES Asia (DT), Zenner discusses the background of the EU legislation, the Act's impact on AI regulation within and beyond the region, international outreach, and the law's potential to shape the technology across borders.

DT: What prompted the EU to start regulating AI?

KZ: There were several key factors.

Firstly, there was a growing concern that AI was increasingly replacing human decision-making, raising significant ethical and democratic issues. For instance, under our democratic system and ethical principles, it is unacceptable for a machine to decide critical questions such as whether a terminally ill patient should undergo surgery.

Secondly, there were fears about AI exacerbating existing biases and discrimination, making them harder to detect and address due to AI's "black box" nature. This could lead to more severe violations of human and fundamental rights.

Thirdly, there was anxiety about market concentration, where a few powerful companies could dominate the AI sector, mirroring issues in the digital economy. The EU aimed to create a competitive advantage for European companies by providing a clear and manageable legal framework, hoping to trigger a "Brussels effect" similar to GDPR.

This would ensure that European companies can develop and sell AI products globally with legal certainty. Finally, the potential of AI to address global challenges like climate change, supply chain issues, hunger, and resource scarcity was a significant driver. AI could detect patterns and improve procedures, helping to solve long-term issues facing humanity.

DT: Some critics argue that the newly passed EU AI Act may impose stringent regulations on tech enterprises and AI developers, leading to hefty compliance costs. How do you perceive the law's impact on AI development within the EU? Are the concerns about overregulation justified?

KZ: There are valid concerns about overregulation, especially given the more hands-off approaches in the UK and US, which can foster innovation in such a new and rapidly evolving field. The AI Act is quite complex, and it might not always be clear to companies what they need to do to comply, potentially increasing costs and making Europe less attractive for AI development.

However, if we manage to clarify the requirements and develop clear standards and best practices through bodies like the European standardization organization CENELEC, compliance could become straightforward and cost-effective. CENELEC is crucial in this process, as it will develop harmonized technical standards for AI systems.

If companies have clear guidelines and know precisely what is required to comply, the regulatory burden could be significantly reduced. This would minimize compliance costs and ensure that AI technologies developed in Europe align with our values of safety and trust. In this scenario, the AI Act could become a competitive advantage, providing a clear and consistent framework that helps European companies innovate confidently and responsibly.

DT: Has there been significant coordination between the EU and the UK-led AI safety summits and the pledges made on AI regulation and innovation? How do you view these collaborative efforts, and what impact do you think they will have on the future of AI governance?

KZ: International organizations and forums like the OECD, Council of Europe, United Nations, and UNESCO have played a significant role in shaping AI regulations. Many AI laws and policies across different countries are based on the same principles, originally developed by the OECD in 2019 and adopted by the G20.

This common foundation will simplify future harmonization efforts. However, the EU has struggled to participate actively in these international forums due to internal structural issues.

The negotiators of the AI Act were not involved in international discussions such as the Trade and Technology Council with the United States or the G7 and G20 negotiations. This lack of coordination resulted from the Commission in Brussels becoming too large and compartmentalized, with various entities working in silos. As the political negotiations for the AI Act conclude and it becomes law, I hope the EU will allocate more resources to engage more effectively on the international stage.

DT: Do you foresee the establishment of a global regulatory framework for AI by the end of the year or in the near future? If so, what form might this regulation take, and what key challenges do you anticipate in achieving a consensus among international stakeholders?

KZ: The ethical standards for AI established by the OECD and G20 in the 2010s provide an excellent foundation for an international framework. We are likely to see countries selecting specific areas for cooperation, such as incident notification standards being developed by the OECD. This will establish a harmonized framework where countries and companies know the procedures to follow if a significant AI-related incident occurs, including who needs to be informed.

Beyond these technical areas, there is potential for broader international cooperation. If major players like China and the US can agree, we might see treaties or other cooperative efforts in areas such as autonomous weapons, AI in healthcare, and advanced robotics.

For instance, international agreements could emerge to regulate the use of AI in military applications, ensure safety and ethical standards in AI-driven healthcare, or facilitate research on advanced AI models to prevent scenarios where AI becomes too powerful. Cooperation on these big-picture issues could lead to significant advancements in AI governance, aligning with international principles, creating common procedures, and addressing global challenges collaboratively.

DT: Speaking of the big picture, with issues such as AGI on the horizon, do you think technology should come first, with regulation following from it, or should governments get ahead of it?

KZ: I believe regulation will always follow the development of technology. At the policy level, we can facilitate big-picture discussions beforehand. Personally, I am skeptical about AGI at this point.

For those concerned about it, the best approach is to invest heavily in research. This includes international research projects, similar to those in virology or space exploration, where experts from Asia, Africa, Europe, and other regions collaborate.

Governments can support these research efforts without prematurely imposing regulations. By bringing together top experts and listening to their insights, we can develop a well-informed approach to regulation.

This means that regulation should come early enough to address emerging issues but only after we have a clearer understanding of the direction the technology is taking. This way, we can ensure that regulation is informed and effective without stifling innovation.