New EU regulatory framework for artificial intelligence

On February 2, the Council of the European Union adopted the final version of the Artificial Intelligence (AI) Act. The regulation responds to some of the ethical challenges posed by the use of AI, promoting its responsible and safe development. This matters because AI is developing rapidly and its impact on the global economy is already significant.

On Friday, February 2, 2024, after three years of work, the Council of the European Union formally adopted the final version of the AI Act. The European Parliament is expected to approve it soon as well, without making any significant changes to the text. The regulation will come into effect 20 days after its publication. This will start a series of transition periods, ranging from 6 months to 3 years, for issuing secondary legislation and for public institutions and businesses to adapt to the new rules.

The AI Act is a Regulation of the European Parliament and of the Council of the EU, meaning it will apply directly in member states. Technical and executive matters will be addressed by the European Commission through delegated and implementing acts. In certain areas, member states will need to adopt their own legal acts, for example to establish supervisory bodies.

Regulations comparable to the AI Act are also being worked on by the U.S. administration, the Chinese authorities, and the G7, but these efforts are at a much earlier stage.

Existing estimates indicate that the impact of artificial intelligence on the global economy will be significant, although forecasts vary greatly. Chris Hyzy, Chief Investment Officer at Merrill and Bank of America Private Bank, has stated that AI will transform the global economy much as inventions like electricity and the steam engine did.

The AI Act is primarily a response to the need to steer the responsible and safe development of AI. Following a risk-based approach, AI systems will be classified into four groups, depending on the level of risk they pose to health, safety, and fundamental rights: unacceptable-risk systems (prohibited outright), high-risk systems (subject to strict conformity obligations), limited-risk systems (subject to transparency duties), and minimal-risk systems (largely unregulated).
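To make the tiered structure concrete, the sketch below models the four tiers as a simple data structure, roughly as a compliance team might when triaging an AI system inventory. The tier names follow the Act's commonly cited categories, but the `RiskTier` enum, the `OBLIGATIONS` mapping, and the obligation summaries are illustrative paraphrases for this article, not text of the regulation.

```python
from enum import Enum


class RiskTier(Enum):
    """The AI Act's four risk tiers, from most to least regulated."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict conformity obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated


# Illustrative mapping from tier to the broad obligation it triggers;
# the wording paraphrases the Act and is not legal text.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "ban on placing the system on the EU market",
    RiskTier.HIGH: "conformity assessment, risk management, human oversight",
    RiskTier.LIMITED: "transparency duties (e.g. disclosing AI interaction)",
    RiskTier.MINIMAL: "no specific AI Act obligations",
}


def obligation_for(tier: RiskTier) -> str:
    """Return the broad obligation associated with a risk tier."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value}: {obligation_for(tier)}")
```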

Attention should also be drawn to the publication in December 2023 of the ISO/IEC 42001 standard, which establishes a framework for AI management systems. The standard is largely consistent with the requirements of the AI Act; the European Union was actively involved in its development, coordinating it with work on the AI Act's text.

New regulations focused on the cybersecurity of IT systems (some of them sector-specific), such as NIS2, DORA, and CER, as well as the DSA and DMA governing digital services and digital markets, will also affect how AI Act requirements apply.

From a legal perspective, the AI Act's fairly general and undefined concepts will make it more resilient to technological change over time. However, they will also often cause uncertainty about the scope and manner of its application, requiring interpretation by lawyers, ultimately under the supervision of the Court of Justice of the European Union.

The AI Act is thus only the beginning of the path toward meeting the challenges that AI poses to states, businesses, and citizens alike. Its rules should be supplemented by self-regulation adopted by businesses and by sensible state policy that harnesses the opportunities AI creates for the economy and society while mitigating the emerging risks.

Authors:

Dr. Iwona Karasek-Wojciechowicz, Jagiellonian University, of counsel Lawspective, Member of the Polish Economic Society

Jacek Wojciechowicz, Collegium Civitas, Member of the Polish Economic Society.
