Specialists appeal for increased control over artificial intelligence

The number of solutions that automatically monitor the use and development of artificial intelligence (AI) is growing rapidly. This is a response to problems such as disinformation spread online in the form of fake news, images, and videos, as well as the rising number of cyberattacks on businesses and institutions, many of which involve some form of AI. Legislation is evolving to control the use of artificial intelligence, as exemplified by the AI Act adopted by the European Parliament. But interestingly, should the use of artificial intelligence perhaps be overseen by… artificial intelligence itself, wielding a range of penalties? wonders Przemysław Wójcik, cybersecurity expert and President of AMP SA.

AI – Its Pros and Cons in Brief

Indeed, there are already applications and tools overseeing the operation of artificial intelligence, and their number is growing in tandem with the technology itself and the increasing need for oversight. Such applications monitor and evaluate AI algorithms to ensure that they operate according to their design principles, comply with ethical standards and enacted regulations, and do not exhibit undesirable deviations or errors. They include platforms that automatically analyze AI behavior and are used in the design of new AI features.
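
To make this concrete, below is a minimal sketch in Python of what such an oversight layer might look like: every model output is run through a set of compliance rules, and violations are recorded for review. The generate() stub, the rule names, and the thresholds are hypothetical illustrations, not the API of any real monitoring platform.

```python
# A minimal sketch of an automated oversight layer for a generative model.
# The generate() stub, rule names, and thresholds are hypothetical
# illustrations, not taken from any real monitoring platform.

import re
from dataclasses import dataclass, field


@dataclass
class MonitorReport:
    output: str
    violations: list = field(default_factory=list)

    @property
    def compliant(self) -> bool:
        return not self.violations


# Illustrative rules: each label maps to a predicate the output must satisfy.
RULES = {
    "no_personal_id": lambda text: not re.search(r"\b\d{3}-\d{2}-\d{4}\b", text),
    "length_cap": lambda text: len(text) <= 2000,
}


def generate(prompt: str) -> str:
    """Stand-in for a real model call; returns a canned reply."""
    return f"Echo: {prompt}"


def monitored_generate(prompt: str) -> MonitorReport:
    """Run the model, then evaluate every rule against its output."""
    report = MonitorReport(output=generate(prompt))
    for label, predicate in RULES.items():
        if not predicate(report.output):
            report.violations.append(label)
    return report


if __name__ == "__main__":
    report = monitored_generate("Hello")
    print(report.compliant, report.violations)  # True []
```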

“Stories about a vice-presidential candidate’s supposed involvement in child exploitation, widely circulated during the recent US elections, the famous pictures of the Pope in a white puffer coat, or of Presidents Trump and Biden grilling together are just a few examples of disinformation created and spread using AI. But artificial intelligence is also employed by cybercriminals to attack companies and institutions, which can lead to financial losses and even threats to human life,” comments Przemysław Wójcik, cybersecurity expert and President of AMP SA.

AI development is unstoppable, and not only because of its potential threats; its benefits matter just as much. AI allows repetitive tasks to be automated, which boosts efficiency and saves time. It can process huge data sets, providing valuable insights that support fact-based decision-making. Thanks to AI, offers and communication can be tailored to individual users’ needs, improving the customer experience. AI contributes to the development of new technologies and products, for example in medicine (diagnostics), transport (autonomous vehicles), and education (personalized learning). The use of AI in monitoring, facial recognition, and threat-analysis systems can improve safety in many areas. These are just a few examples showing that AI, while bringing many benefits, also poses risks.

Human Attempts to Control AI

Monitoring its use is therefore critical. The Artificial Intelligence Act (AI Act) is a pioneering European Union regulation that provides a comprehensive legal framework governing the development and use of AI systems. Its main objectives are to protect citizens’ rights, ensure safety, and promote the ethical and responsible use of AI. At EU level, a European Artificial Intelligence Board is being established to coordinate actions and ensure regulatory consistency. Violations of the AI Act carry severe sanctions, which for the most serious infringements can reach €35 million or 7% of a company’s global annual turnover, whichever is higher, making it one of the stricter regulations in the world.
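
As a toy illustration of how that upper bracket scales, the sketch below computes the fine cap as the higher of the fixed amount and the turnover share; the company figures are invented.

```python
# Toy illustration: the AI Act's top fine bracket is the higher of a fixed
# amount and a share of turnover. The company figures below are invented.

def max_fine(global_annual_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

print(max_fine(200_000_000))    # 35000000 -- the fixed floor dominates
print(max_fine(2_000_000_000))  # 140000000.0 -- 7% of turnover dominates
```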

“But the question arises whether humans can keep up with the development of artificial intelligence. Would it not be better to entrust AI supervision to artificial intelligence itself, which could not only enforce the rules but also penalize AI and its users? Imposing sanctions on such systems is a complex and ambiguous issue, mainly because AI has neither consciousness nor a sense of morality. Traditional forms of punishment, such as imprisonment or fines, are therefore pointless for AI. Instead, punishing AI should mean limiting its operation or imposing sanctions on the people or organizations responsible for it,” comments Przemysław Wójcik, cybersecurity expert and President of AMP SA.

How to Punish AI?

When an AI system makes errors or acts unethically, the simplest form of “punishment” is to suspend or switch it off, removing it from service; if a chatbot spreads disinformation, for instance, its creators can block or reprogram it. AI can also be “punished” through updates that change how it operates, including additional restrictions such as limiting its access to certain data or reducing its autonomy. Such a change allows the AI to remain in use while eliminating potentially harmful behaviors.
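
A minimal sketch of that escalation ladder, assuming a hypothetical AISystem wrapper: the first recorded violation narrows the system’s permissions, and a repeat takes it out of service. The action names and the one-strike threshold are illustrative choices, not an established standard.

```python
# A minimal sketch of an escalation ladder for a misbehaving AI system.
# The AISystem wrapper, action names, and one-strike threshold are
# illustrative, not an established standard.

class AISystem:
    def __init__(self, name: str):
        self.name = name
        self.suspended = False
        self.violation_count = 0
        self.allowed_actions = {"answer", "summarize", "browse"}

    def record_violation(self) -> None:
        """First violation narrows permissions; a repeat suspends the system."""
        self.violation_count += 1
        if self.violation_count == 1:
            self.allowed_actions.discard("browse")  # reduce data access/autonomy
        else:
            self.suspended = True                   # take it out of service

    def act(self, action: str) -> str:
        if self.suspended:
            raise RuntimeError(f"{self.name} is suspended pending review")
        if action not in self.allowed_actions:
            raise PermissionError(f"action '{action}' has been restricted")
        return f"{self.name} performed '{action}'"


bot = AISystem("chatbot")
bot.record_violation()    # e.g. flagged for spreading disinformation
print(bot.act("answer"))  # still permitted
# bot.act("browse")       # would now raise PermissionError
```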

“Since AI is a tool created by humans, legal and financial responsibility falls on the creators or owners of the system. Companies can be fined or have their operations restricted if their AI system breaks the law, for example by using prohibited surveillance algorithms. In some situations, AI creators can be compelled to remedy the damage caused by the system; when an AI misleads customers, for instance, the organization might be obligated to compensate the injured parties for their losses. Where an AI system is potentially dangerous, its use might be confined to a ‘sandbox’, a controlled environment in which it can operate without access to real data. There the system can be tested and improved before being redeployed at scale,” comments Przemysław Wójcik, cybersecurity expert and President of AMP SA.
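
The sandbox idea can be sketched as a hard gate in the data-access layer: while the flag is set, the system only ever sees synthetic records. The flag and function below are hypothetical; a real deployment would enforce the boundary in infrastructure rather than application code.

```python
# A minimal sketch of a sandbox gate in the data-access layer: while the
# flag is set, the system only ever sees synthetic records. The flag and
# function are hypothetical; real deployments would enforce this boundary
# in infrastructure, not application code.

import random

SANDBOX_MODE = True

def fetch_records(n: int) -> list[dict]:
    """Return synthetic records in sandbox mode; real data stays unreachable."""
    if SANDBOX_MODE:
        return [{"id": i, "value": round(random.random(), 3)} for i in range(n)]
    raise PermissionError("access to real data requires leaving the sandbox")

print(fetch_records(3))
```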

Companies might also be obligated to increase transparency about how an AI system operates, which means regular reporting on the decisions the AI makes, the errors it produces, and how they are corrected. This measure is somewhat akin to a “punishment” in the form of additional responsibilities and administrative work for the company. For particularly harmful or non-compliant AI systems, a company or team could be barred from developing that specific technology further, which can be particularly effective against AI that generates disinformation or violates privacy.
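
As a rough sketch of such a reporting duty, the snippet below logs each AI decision and emits a summary an auditor could request. The field names and the two logged decisions are invented for illustration.

```python
# A minimal sketch of a transparency-reporting duty: log each AI decision,
# then emit a periodic summary. The field names and the two logged
# decisions are invented for illustration.

import json
from collections import Counter
from datetime import datetime, timezone

decision_log: list[dict] = []

def log_decision(system: str, outcome: str, error: bool = False) -> None:
    decision_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "outcome": outcome,
        "error": error,
    })

def compliance_report() -> str:
    """Summarize decision volume, errors, and outcomes for auditors."""
    summary = {
        "decisions": len(decision_log),
        "errors": sum(1 for d in decision_log if d["error"]),
        "outcomes": Counter(d["outcome"] for d in decision_log),
    }
    return json.dumps(summary, indent=2)

log_decision("credit-scoring", "approved")
log_decision("credit-scoring", "declined", error=True)
print(compliance_report())
```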

These are just a few examples of how the unwanted effects of AI use can be limited, and more and more countries are writing regulations governing these issues into their legislation. But can humans keep pace with AI development? Should artificial intelligence not be controlled by… artificial intelligence, under human jurisdiction, of course? This could speed up AI oversight, automate it, and reduce the number of dangerous incidents involving such systems.

Source: https://managerplus.pl/specjalisci-apeluja-o-zwiekszenie-kontroli-nad-sztuczna-inteligencja-66865
