The EU AI Act: A Game Changer for Responsible AI Development

On March 13, 2024, the European Parliament adopted the Artificial Intelligence Act (AI Act). The new regulations create a legal environment that, on the one hand, enables the development of artificial intelligence (AI) and, on the other, guarantees respect for human rights and minimizes the risk of discrimination. AI is a driving force of change in the business world. Companies face the challenge of adapting to and harnessing the potential of this revolutionary technology. It is essential to keep pace with competitors in digitization and automation while ensuring safety.

The European Commission submitted the legislative proposal for the regulation in April 2021. In December 2023, the European Parliament, the Council, and the European Commission reached a compromise on its main provisions, and on March 13, 2024, the European Parliament adopted the AI Act. The new regulations introduce a category of prohibited AI practices, including the use of subliminal techniques or techniques that discriminate against certain groups. It will also be forbidden to use AI systems to track citizens’ lifestyles.

The AI Act is the world’s first comprehensive legal act regulating artificial intelligence. It aims to create a solid legal framework that ensures the safe and ethical use of AI systems and protects Europeans from the negative effects of their development. The ultimate goal is to build universal trust in a technology that holds enormous potential for both individuals and entire organizations.

“The development of generative artificial intelligence is associated with a wide range of threats – from classic information security to decisions about finances, careers, and even health and life. In the face of this revolution, cybersecurity is being redefined. The traditional approach to information system security is insufficient. AI implementation requires securing the integrity and confidentiality of the data used to train AI models. AI solutions obtained from external suppliers require mechanisms that ensure an appropriate level of trust and protect against supply-chain attacks. Given this series of new challenges, the AI Act is a much-needed legislative initiative that will make cybersecurity assurance mandatory in the design and implementation of AI solutions,” says Michał Kurek, Partner in Consulting, Head of the Cybersecurity Team at KPMG in Poland and Central and Eastern Europe.

Here are the key things you should know about the AI Act:

The first phase of the AI Act will take effect 20 days after its publication in the Official Journal of the European Union, and the act will apply in full 36 months after publication, i.e., in the first half of 2027. The definition of AI in the EU legal act covers a wide range of technologies and systems, which will significantly affect how companies operate. Notably, most of the obligations imposed by the regulation will already need to be fulfilled in the first half of 2026. Prohibitions on unacceptable-risk AI practices will apply six months after the regulation enters into force, and the rules on general-purpose AI systems after 12 months.
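As a rough illustration, this staggered timeline can be expressed as simple date arithmetic. Below is a minimal sketch using the python-dateutil package; the publication date is a placeholder, since the real deadlines run from the actual Official Journal publication.

```python
from datetime import date
from dateutil.relativedelta import relativedelta

# Placeholder publication date for illustration only; the real deadlines
# run from the actual publication in the Official Journal of the EU.
publication = date(2024, 7, 1)

entry_into_force = publication + relativedelta(days=20)
milestones = {
    "prohibitions on unacceptable-risk practices apply": entry_into_force + relativedelta(months=6),
    "general-purpose AI rules apply": entry_into_force + relativedelta(months=12),
    "full application of the AI Act": publication + relativedelta(months=36),
}

# Print the milestones in chronological order.
for label, deadline in sorted(milestones.items(), key=lambda item: item[1]):
    print(f"{deadline.isoformat()}: {label}")
```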

The regulation classifies AI-related risk into four levels: unacceptable, high, limited, and minimal. High-risk AI systems will be allowed but will be subject to strict restrictions that apply to both users and suppliers of these systems. The term “supplier” includes entities creating AI systems, including organizations that build them for their own use. An organization can therefore be both a user and a supplier of these systems.
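To illustrate how this classification might surface in an organization’s own AI inventory, here is a minimal sketch; the system names and the inventory schema are hypothetical, not something the AI Act prescribes.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # allowed, but under strict restrictions
    LIMITED = "limited"            # lighter transparency obligations
    MINIMAL = "minimal"            # largely unregulated

@dataclass
class AISystem:
    name: str
    purpose: str
    tier: RiskTier
    built_in_house: bool  # building a system, even for your own use, makes you a supplier

inventory = [
    AISystem("cv-screener", "automated employee recruitment", RiskTier.HIGH, built_in_house=True),
    AISystem("faq-chatbot", "customer self-service", RiskTier.LIMITED, built_in_house=False),
]

# High-risk systems trigger obligations for users and suppliers alike.
for system in inventory:
    if system.tier is RiskTier.HIGH:
        role = "supplier and user" if system.built_in_house else "user"
        print(f"{system.name}: high-risk obligations apply (role: {role})")
```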

“Implementing artificial intelligence in businesses brings tangible benefits, but it is also a challenge, especially in the context of legal regulations. The EU’s AI regulations oblige companies to meet strict standards of ethics and data protection. Compliance requires a thorough analysis of existing AI systems and, potentially, modifications to business processes to meet the new legal and ethical requirements. Businesses need to be aware of the consequences of violating the regulations and act proactively to prevent potential irregularities and loss of customer trust,” says Radosław Kowalski, Partner in Consulting, Head of the Data Intelligence Solutions team at KPMG in Poland.

How can businesses prepare for the changes?

In the short term, organizations should focus on proper governance, which means shaping corporate policy around the AI risk categories. They should keep in mind that the list of AI systems classified as unacceptable or high risk may grow over time. Continuous improvement of AI governance frameworks, in step with the legal regulations, is key.

Another necessary action is to recognize the risks associated with the use of AI systems. Organizations should consider both internal and external risks. The key steps are classifying threats, identifying gaps in digital security, and taking preventative measures in the form of AI system testing.
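A minimal sketch of what such a threat classification could look like in practice, assuming a simple in-memory risk register (the schema and the entries are illustrative, not mandated by the regulation):

```python
# Illustrative risk-register entries distinguishing internal and external
# sources of risk; the schema is a hypothetical example.
risk_register = [
    {"system": "cv-screener", "source": "internal",
     "threat": "training-data bias against protected groups",
     "mitigation": "bias testing before each model release"},
    {"system": "faq-chatbot", "source": "external",
     "threat": "supply-chain compromise of a third-party model",
     "mitigation": None},
]

# Surface unmitigated threats so preventative testing can be prioritized.
for entry in risk_register:
    if entry["mitigation"] is None:
        print(f"GAP: {entry['system']} ({entry['source']}): {entry['threat']}")
```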

Another action should be implementing good practices and measures that support the AI system adoption process. Desired actions include automating and optimizing the management of AI technology, maintaining reliable documentation and records of actions, training employees, and communicating transparently with clients.
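As an illustration of what reliable documentation and action records could look like, here is a minimal sketch assuming a simple append-only JSON-lines log; the format and field names are illustrative assumptions, not something the regulation prescribes.

```python
import json
from datetime import datetime, timezone

def record_ai_action(log_path: str, system: str, action: str, actor: str) -> None:
    """Append one timestamped, auditable entry to a JSON-lines action log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "action": action,
        "actor": actor,
    }
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")

# Example: record a retraining event for a high-risk system.
record_ai_action("ai_actions.jsonl", "cv-screener",
                 "model retrained on Q2 recruitment data", "ml-team")
```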

“The AI Act is an important step on the road to regulating the introduction and operation of artificial intelligence systems. Although the obligations resulting from the regulation are, by assumption, to be fulfilled from the first half of 2026, preparations for its entry into force should start much earlier. This particularly applies to companies using AI solutions classified as high-risk systems, such as the automated employee recruitment tools already used by many organizations. In such cases, it will be necessary to implement a series of procedures and mechanisms, from risk assessments covering the personal data processed to proper documentation,” says Magdalena Bęza, Associate Director, legal advisor at KPMG Law.

Key medium- and long-term actions for organizations include:

  • Forecasting the impact of the new regulations on the organization’s activity. Adapting the company’s strategy to the new legal requirements helps anticipate changes, while transparency and customer trust can be maintained through cooperation with the wider environment, including open dialogue with other entities in the industry.
  • Broadening knowledge about the ethical use of artificial intelligence. Planning AI-related training and creating a dedicated team responsible for ensuring compliance with ethical standards and legislative requirements are recommended.
  • Using trusted AI systems from the early stages of project implementation. Regular audits and technology updates are important, and adherence to ethical principles should remain a priority even when pursuing innovative AI-based solutions.

About the report:

The KPMG report “Decoding the EU AI Act: Understanding the AI Act’s impact and how you can respond” provides an overview of the regulation’s provisions, which set a new global standard for the field of artificial intelligence.
