Closer to the Emergence of Conscious Artificial Intelligence: Preventive Measures Needed to Avoid Misuse

Artificial intelligence is increasingly discussed in the context of the opportunities and threats posed by the emergence of general artificial intelligence. Such a model would be capable of human-like thinking, but its computational capabilities would surpass those of the human brain several million times over. This raises justified concerns about safety and has led to the proposal that market participants take an oath similar to the Hippocratic Oath, committing them to ensuring that applications of the technology remain ethical. Experts also believe that more detailed regulations are necessary.

“There is more and more talk today about so-called conscious AI. There is a duality of views: on one hand, it is said that artificial intelligence is not and never will be conscious; on the other, we are moving towards building general artificial intelligence, or superintelligent AI, that will surpass human intelligence. It is hard for me to say whether artificial intelligence is or can be conscious, because the only consciousness we know, and even that probably not entirely, is our own human consciousness,” says Natalia Hatalska, CEO and founder of infuture.institute, in an interview with Newseria Innowacje.

Today’s artificial intelligence can perform specific tasks but cannot think and act like a human, who is capable of associating theoretically unrelated elements of reality. General artificial intelligence remains a hypothetical model, but it could be realized in the near future, possibly within a few years. Such a model would be able to think like humans but at least several million times faster, drawing on a data repository equivalent to all the knowledge accumulated in the world and updated in real time. The challenge for individuals and organizations involved in the AI market is to create conditions for the development of this technology so that it benefits humanity rather than serving as a tool for a narrow group of people to gain more power.

“The concept of a so-called technocratic oath, analogous to the Hippocratic Oath, is increasingly resonating. The idea is that anyone working with technologies related to artificial intelligence, neurotechnologies, or biotechnologies would take such an oath, which would ensure ethical work and use of these technologies. Regarding securing the development of artificial intelligence, I see two areas here: the first is regulatory, and the second is public education. Regulations alone are never sufficient. People must also be aware of what this technology is, how to use it, what the risks are, its limitations, and its potential,” lists Natalia Hatalska.

The first step in legal regulation at the European level is the AI Act. The aim of this regulation is to build a legal environment that allows the technology to develop while respecting fundamental rights and protecting democracy and the rule of law from high-risk systems. However, it will take at least two more years before the regulation is fully applicable.

“AI-based technologies influence democratic processes in various ways, such as through deepfakes and fake news that appear online and disinformation that affects our decisions. As AI becomes more widespread, we see a trend among people that we call functional illiteracy. People are losing the ability to think critically, formulate thoughts clearly, and read with understanding. This inability to think critically and distinguish true information from false affects democratic processes, participation in elections, referendums, and decision-making,” the expert points out.

The development of artificial intelligence also raises other concerns in society. The first is job automation and the prospect of robots taking jobs from humans. Labor market experts counter that, with an aging society, demand for working-age employees will increase, and that AI may do more to reshape employment than to eliminate jobs. The second concern is a futuristic vision of a robot uprising that could lead to the extinction of humanity.

“For now, these technologies are governed by certain laws that dictate how robots should be built. Even if they thought about eliminating humans, they would be programmed to shut down before acting on such thoughts. Theoretically, we are protected from this. The biggest threat, in my opinion, is how we use technology daily and how it affects us as people,” concludes the CEO and founder of infuture.institute.

According to Cognitive Market Research, the global artificial intelligence market will exceed $400 billion by 2030, growing at an average annual rate of over 31% between 2024 and 2030.
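
For readers who want to see what those two figures imply together, the short Python sketch below treats the quoted 31% as a compound annual growth rate over six years (2024 to 2030); the implied 2024 market size it derives is an illustrative assumption, not a figure published by Cognitive Market Research.

```python
# Back-of-the-envelope check of the forecast cited above (illustrative only).
# Assumptions (not from Cognitive Market Research): "over 31%" is read as a
# compound annual growth rate, and 2024-2030 is treated as six growth periods.

def implied_base(future_value: float, cagr: float, years: int) -> float:
    """Starting market size implied by a future value and a compound growth rate."""
    return future_value / (1 + cagr) ** years

target_2030 = 400e9  # "> $400 billion by 2030" (figure quoted in the article)
cagr = 0.31          # "over 31%" average annual growth (figure quoted in the article)
years = 6            # 2024 -> 2030

base_2024 = implied_base(target_2030, cagr, years)
print(f"Implied 2024 market size: ~${base_2024 / 1e9:.0f} billion")
print(f"Sanity check for 2030: ~${base_2024 * (1 + cagr) ** years / 1e9:.0f} billion")
```

Running the sketch suggests a starting market of roughly $79 billion in 2024 would be consistent with the quoted growth rate and 2030 target.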
