Saturday, February 1, 2025

AI Chamber Criticizes EU’s Draft AI Code for Excessive Complexity and Stringent Regulations

AI Chamber, on behalf of dozens of startups, companies, and AI organizations in Poland, has submitted an official position to the European Union's AI Office concerning the second draft of the General Purpose AI Code of Conduct. The Chamber highlights critical flaws in the European Commission's draft: complex and stringent regulations, impractical rules, controversial copyright principles, excessive disclosure requirements that threaten public security and trade secrets, and new obligations for AI model providers that go beyond what the AI Act mandates. The Chamber views the current form of the Code as overly detailed, unclear, and impractical. These shortcomings could slow down innovation, especially for small and medium-sized enterprises (SMEs), further weakening the EU's competitiveness in the global AI arena.

AI Chamber, an institution encompassing numerous startups, firms, organizations, and NGOs focused on AI in Poland, represented these entities in providing feedback on the Second Draft of the General Purpose AI Code of Conduct. The Code serves as a guiding document for general-purpose AI model providers on demonstrating compliance with the AI Act throughout the lifecycle of their models. Little time remains for amendments: the final Code should be ready by April this year, as the AI Act regulations take effect in August 2025.

EU Bureaucracy Versus Global Competition

AI Chamber points out that the biggest issue with the second draft of the Code is its excessive detail and its creation of new obligations that often exceed the requirements of the AI Act. The complexity and severity of the rules may deter companies from developing their own AI models, particularly SMEs, which might find the cost and effort of compliance too high. According to the Chamber, the AI Act and the burdens it places on AI development already position the EU unfavorably in the "AI race," especially against the USA and China. Codes of conduct, like the GPAI Code of Conduct, should facilitate compliance with AI Act regulations so that Europe remains an attractive hub for developing AI models and solutions based on this technology.

“Instead of supporting a dynamic and competitive environment for AI development, the EU risks creating a ‘bureaucratic monster’ with the Code. Although some improvements have been made compared to the earlier version of the Code, the document in its current form still fails to fulfill its fundamental purpose. From the perspective of European entrepreneurs, the added value of the Code is negligible, while the burden of its implementation—both financial and administrative—may effectively stifle innovative spirit,” notes Tomasz Snażyk, CEO of AI Chamber.

According to AI Chamber, the current version of the Code may slow down innovation and consequently further weaken the EU’s competitiveness in the field of artificial intelligence on the international stage. The Code in its current form is the opposite of what the EU needs now to strengthen its economy and increase productivity.

Excessive Regulation Stifles Innovation

The Code includes overly detailed Key Performance Indicators (KPIs) that are difficult to implement and ambiguous. For instance, the documentation requirements are extensive but lack clear justification for their necessity. Demanding detailed information about the parameters of AI models and the data used for their training could infringe on trade secrets and intellectual property rights, as well as threaten public security and discourage innovation. Although the Code aims for transparency, concerns remain about the level of detail required in publishing model parameters, such as data used for training, testing, validation, computational resources, and energy consumption reporting.

As AI Chamber highlights, the draft Code introduces obligations not stipulated in the AI Act. For example, it imposes overly stringent requirements for compliance with copyright laws of third-party data sets. The requirement that internal copyright policies be “consistent with the Code obligations” is problematic because the Code should not restrict providers’ freedom to independently assess EU copyright law. The mandate for AI model creators to take “reasonable actions” to assess compliance with the copyrights of third-party data sets exceeds the scope of EU law.

Moreover, as the Chamber emphasizes, the obligation to apply policies at every stage of model development is too broad and difficult to implement, and the mandate to monitor the rights of other entities is impractical and interferes with commercial contracts. Furthermore, the introduction of additional extraterritorial regulations in the Code raises serious legal concerns.

AI Model Size Does Not Define Risk

According to the Chamber, the size of an AI model or the amount of computing power needed to train it is not a good indicator of the actual risk posed by new technologies. While the Chamber generally supports exempting SMEs from mandatory regulations, given their lack of resources to fully comply with the Code, it warns that such an approach could create a legal loophole in which AI models offered by small and medium-sized companies have lower safety standards and pose significant risks of the abuses specified in the AI Act. Since the AI Act aims to ensure the safety of all AI models regardless of the size of the provider, more emphasis should be placed on the actual risk of breaching safety standards and on the measures the model provider will take.

Code Built on Non-Existent Standards

The Code also relies on future standards that have yet to be defined by the European AI Office, creating uncertainty about the specific compliance requirements with the AI Act, which significantly complicates planning and investment in the development of AI models. From the outset, the Code should provide real and concrete guidelines for compliance with the AI Act.

“A large portion of the guidelines remains overly burdensome and prescriptive, imposing additional duties on providers of artificial intelligence systems. This makes the document unworkable. Emerging AI companies facing the decision to develop their own model—in the face of such complicated Code rules—may simply decide it’s not worth the effort, losing out on the benefits of AI,” emphasizes Tomasz Snażyk.

Source: Manager Plus
