The AI Act, the EU's regulation on artificial intelligence, is set to reshape Europe's digital landscape by categorizing AI tools according to their risk levels and imposing new duties on governments and companies. However, experts point out that the act focuses only on specific, critical areas of AI use, such as deepfakes, biometrics, and social scoring. The regulations are primarily a response to the emergence of increasingly advanced generative AI models, a recent example being OpenAI's Sora, a text-to-video generation tool. It is unlikely that the EU legislation will serve as inspiration for lawmakers in the USA or Asia.
At the start of February, after three years of work, the EU Council adopted the final version of the act on artificial intelligence. It will now be discussed by parliamentary committees, with a European Parliament vote expected in early April. The regulation is set to come into force 20 days after its publication, and its individual provisions will start to apply in Member States after periods ranging from six months to three years. During this time, countries, companies, and institutions will have to adapt to the new rules.
Decisions of this nature are often driven by politics, as was reportedly the case with the GDPR, which was approved in the wake of the Snowden affair. Now, companies such as OpenAI, NVIDIA, Google, and Microsoft, which continue to roll out new AI products and ever more capable language models, are spurring the legislature to act.
In mid-February, OpenAI unveiled Sora, an AI-based video generator that creates minute-long clips from text prompts. Sora can also generate video from a still image or modify existing footage. Following the launch of this tool, Worldcoin, a cryptocurrency project co-founded by OpenAI CEO Sam Altman, saw its price surge by about 200% within a week.
Currently, AI presents numerous challenges related to privacy protection, copyright, ethics, and the job market. According to the Polish Economists Society, the EU AI Act is a response to the need for responsible and safe AI development. AI systems will be classified into four groups (unacceptable, high, limited, and minimal risk) according to the risk they pose to health, safety, and fundamental rights. For example, placing on the market, putting into service, or using systems that pose an unacceptable risk will be banned, in particular those employing subliminal, manipulative, or deceptive techniques.
Interviewed experts note that the AI Act is a so-called 'island regulation', covering only selected areas where AI can be applied, such as social scoring, biometrics, and deepfakes. These areas will be regulated separately, for instance by requiring disclosure that content is a deepfake, or by allowing people to withhold consent to the use of their data for social scoring purposes. This fragmentary nature sets the regulation apart from others, such as the GDPR on the protection of personal data.
In the view of some experts, although artificial intelligence regulation is also being discussed in Asia and the United States, European efforts are unlikely to inspire those legal frameworks. According to Goldman Sachs Research, generative artificial intelligence could raise global GDP by 7 percent, or roughly $7 trillion, over a 10-year horizon.