Current Artificial Intelligence Technology Does Not Pose a Threat to Human Existence. It Is Not Capable of Independent Thinking and Planning

Large language models such as ChatGPT are currently incapable of independently acquiring new skills, reasoning, or planning, and therefore do not pose an existential threat to humanity, according to scientists who analyzed the capabilities of these models from that perspective. That does not mean the tools pose no danger at all: misused, they already contribute to the spread of misinformation.

“Language models pose no threat because they cannot autonomously create plans or strategies that could be directed against us. When we use a large language model like ChatGPT, it operates by solving problems based on examples. This means our instructions must be clear, highly transparent, and simple, and the model cannot do anything that deviates completely from them. If we don’t issue a command, the model won’t be able to act,” says Dr. Harish Tayyar Madabushi, a lecturer in artificial intelligence at the University of Bath, in an interview with the Newseria Innovation agency.
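“Solving problems based on examples” refers to what practitioners call few-shot or in-context prompting: the model only completes the pattern laid out for it in the prompt. A minimal sketch of what that looks like in practice, assuming the OpenAI Python client (openai>=1.0) and an illustrative model name, might be:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Everything the model acts on is contained in this prompt: an explicit
# instruction plus worked examples for it to imitate.
few_shot_prompt = (
    "Classify the sentiment of each review as positive or negative.\n"
    "Review: 'Great battery life.' -> positive\n"
    "Review: 'Broke after two days.' -> negative\n"
    "Review: 'Exactly what I needed.' ->"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice of model, an assumption
    messages=[{"role": "user", "content": few_shot_prompt}],
)

# The model completes the pattern set by the examples; it does not act
# beyond the instruction it was given.
print(response.choices[0].message.content)
```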

Researchers from the University of Bath, in collaboration with Darmstadt University of Technology in Germany, conducted experiments to test the ability of large language models to perform tasks they had never encountered before. The aim was to verify the models’ so-called emergent abilities. The tests clearly showed a lack of complex reasoning skills. The present and most likely future models will therefore not gain the ability to reason and plan, despite their growing operational capabilities.

“However, we should consider that these models can be used in ways we wouldn’t wish for. For instance, people may use language models to create very well-constructed fake news or well-written spam messages. Typical spam messages aren’t very well crafted, so they’re easy to recognize and delete. Now we may receive such messages written flawlessly. The same applies to fake news, and generating such content will become easier. Our study aims to distinguish between those issues. We don’t have to worry about language models starting to act independently and posing a threat to us. Let’s focus on what constitutes a real risk instead,” advises Dr. Harish Tayyar Madabushi.

Underestimating the harm that language models can do is one problem. At the other end of the spectrum, overestimating the capabilities of tools such as ChatGPT is another.

“When we see a text generated by a language model, we attribute meaning to it, which in this case isn’t necessarily the correct approach. Just because a model can generate a statement that has some meaning doesn’t mean it has analyzed and assimilated it. It usually doesn’t have that kind of information, as our observations confirm. I’m not saying that we shouldn’t be excited about AI. On the contrary, language models represent a breakthrough and are capable of incredible feats that will boost productivity in many fields. However, overestimating these models or having baseless fears is not the right approach. I think it’s worth focusing on the positive aspects and maintaining a realistic outlook regarding their capabilities, so they’re perceived not as a source of fear but as a positive development and an exciting tool that we can use,” says the researcher from the University of Bath.

According to Polaris Market Research, the global market for large language models will reach annual revenues of almost $62 billion by 2032, growing at an average annual rate of 32%. In 2023, the market was worth about $5 billion. Analysts highlight that a key growth driver in the coming years will be the introduction of zero-human-intervention features, which will increase the models’ efficiency and their capacity for autonomous learning without manual human interference. Growth will also be supported by the steadily increasing amount of data on which the models can be trained.
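Those figures are consistent with simple compound growth. A quick arithmetic check, assuming annual compounding from the 2023 base through 2032 (nine periods):

```python
# Sanity-checking the cited market figures, assuming annual compounding
# from the 2023 base over nine years (2023 -> 2032).
base_2023 = 5e9        # reported 2023 market size, USD
cagr = 0.32            # reported average annual growth rate
years = 2032 - 2023    # nine compounding periods

projected_2032 = base_2023 * (1 + cagr) ** years
print(f"Projected 2032 market size: ${projected_2032 / 1e9:.1f} billion")
# prints roughly $61 billion, in line with the "almost $62 billion" figure
```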
