The Achilles’ heel of artificial intelligence: AI is not as omnipotent as people think


Deep learning is an artificial intelligence technique for pattern recognition that has been making its way into scientific computing. It frequently appears in headlines: systems that diagnose diseases more accurately than doctors, or autonomous driving that could prevent traffic accidents. Yet the fact is that many deep learning systems are unreliable and easily fooled.

This makes AI systems resemble overconfident people: their confidence often far exceeds their actual ability. While humans are relatively good at spotting their own mistakes, many AIs have no way of knowing when they have made one. Sometimes it is even harder for an AI system to recognize that it has made a mistake than to produce the correct result in the first place.


Researchers from the University of Cambridge and the University of Oslo say that this instability is the Achilles’ heel of modern AI, and that a mathematical paradox reveals its limitations. The paradox can be traced back to two great mathematicians of the 20th century, Alan Turing and Kurt Gödel. In the early 20th century, mathematicians were trying to prove that mathematics was the ultimate language for unifying science. But Turing and Gödel discovered a paradox at the heart of mathematics: the truth of certain mathematical propositions cannot be proved, and some computational problems cannot be solved by any algorithm.

At the end of the 20th century, the mathematician Steve Smale drew up a list of 18 unsolved problems, the last of which concerned the limits of human and machine intelligence. That problem remains unsolved, but it brings into the world of artificial intelligence the paradox first posed by Turing and Gödel: mathematics has inherent fundamental limitations, and, similarly, there are problems that no artificial intelligence algorithm can solve.


A new study suggests that this centuries-old mathematical paradox may explain the inherent limitations of today’s ubiquitous artificial intelligence. By extending the approach of Gödel and Turing, the researchers have shown that the same limits on algorithms apply to the computation of neural networks. They propose a classification theory describing the conditions under which a neural network can be trained to provide a reliable AI system.

Neural networks are the most advanced tool in the field of artificial intelligence: rough simulations of the connections between neurons in the brain. In the new study, the researchers say that while good neural networks may exist for some problems, this paradox means we cannot always construct one that is inherently reliable. In other words, no matter how accurate the data used to build a neural network, we can never obtain the perfect information needed to build a network that is 100% reliable.
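To make the "rough simulation of connections between neurons" concrete, here is a minimal illustrative sketch (not the researchers' model): each artificial neuron computes a weighted sum of its inputs plus a bias, then applies a squashing function. All weights below are arbitrary made-up values, not trained ones.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum of inputs plus a bias,
    passed through a sigmoid activation that maps any value into (0, 1),
    a rough analogue of a biological neuron's firing rate."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def tiny_network(inputs):
    """A toy two-layer network: two hidden neurons feed one output neuron.
    The weights are illustrative placeholders; a real network would
    learn them from data."""
    h1 = neuron(inputs, [0.5, -0.4], 0.1)
    h2 = neuron(inputs, [-0.3, 0.8], -0.2)
    return neuron([h1, h2], [1.0, -1.0], 0.0)

print(tiny_network([1.0, 0.0]))  # some value strictly between 0 and 1
```

Training such a network means adjusting the weights so its outputs match known examples; the study's point is that no amount of data guarantees this process yields a network that is reliable on all inputs.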

At the same time, it is not always possible to compute how good an existing neural network is, no matter how much data it was trained on. And no matter how much data an algorithm has access to, it may never produce the network it needs. This echoes Turing’s point: some computational problems cannot be solved, regardless of computing power and runtime.
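Turing's classic example of such a problem is the halting problem: no algorithm can decide, for every program and input, whether that program eventually stops. The following sketch illustrates the diagonal argument. The `halts` oracle is hypothetical by construction; Turing proved it cannot be implemented for all inputs, which is why it only raises an error here.

```python
def halts(program, argument):
    """Hypothetical oracle claimed to decide whether program(argument)
    eventually halts. Turing proved no total algorithm can do this,
    so this placeholder can only refuse to answer."""
    raise NotImplementedError("no algorithm can decide halting for all inputs")

def paradox(program):
    """Diagonal construction: if halts() existed, then paradox(paradox)
    would halt exactly when halts() says it does not -- a contradiction.
    This is why the oracle above cannot exist."""
    if halts(program, program):
        while True:  # claimed to halt, so loop forever
            pass
    return "halted"  # claimed to loop forever, so halt
```

The same style of self-referential argument is what the researchers extend to neural networks: an algorithm that could always certify its own network's reliability would run into an analogous contradiction.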

Not all artificial intelligence is fatally flawed, the researchers say: in some cases it is reliable, but only in specific areas and with specific methods. However, that is not what we see in many deployed systems.


Dr. Matthew Colbrook from the Department of Applied Mathematics and Theoretical Physics said: “In some situations it’s perfectly fine for AI to make mistakes, but it needs to be honest about that. And that’s not what we’re seeing in many systems – there’s no way to tell when they’re more or less confident about a decision.”

Professor Anders Hansen from Cambridge’s Department of Applied Mathematics and Theoretical Physics said: “At the moment, building AI systems sometimes involves guesswork. At some point you get tired of not getting what you want, and you try a different approach, without real understanding. A programme on understanding the foundations of AI computing is needed to bridge this gap.”

“When 20th-century mathematicians identified various paradoxes, they didn’t stop studying mathematics. They tried to find new paths, because they understood the limitations,” Colbrook said. “For AI, it may be a case of changing paths or developing new avenues for building systems that can solve problems reliably and transparently, while understanding their limitations.”

The researchers’ next step is to combine approximation theory, numerical analysis, and the foundations of computation to determine which neural networks can be computed by algorithms, and which of those are stable and reliable. Just as the paradoxes about the limits of mathematics and computation identified by Gödel and Turing led to rich foundational theories, a similar theory may grow up around AI, describing both its limits and its possibilities.



https://genk.vn/got-chan-achilles-cua-tri-tue-nhan-tao-ai-khong-he-toan-nang-nhu-moi-nguoi-van-nghi-20220324121355062.chn



