Rebooting AI
The main ideas of “Rebooting AI: Building Artificial Intelligence We Can Trust” by Gary Marcus and Ernest Davis, scientists specializing in artificial intelligence.
After reading this review, you will know why, despite loud predictions and media hype, we are still a long way from creating human-like AI.

Since the mid-20th century, optimistic predictions have been made about the creation of human-like artificial intelligence (AI). The authors of the book, looking at the real state of the field, consider such optimism unfounded.
In their book, Marcus and Davis explain why artificial intelligence cannot become human-like if it keeps developing along its current path, and in what direction work is needed to create truly thinking machines.
Let us consider the book’s most important ideas.
In the field of artificial intelligence, there is a gap between what the media report and how things really are.
According to the authors, this detachment from reality is characteristic of both optimistic and pessimistic forecasts. The media contribute to it enormously by presenting even modest results as incredible breakthroughs, attracting and frightening readers. But one only has to find out what actually stands behind a headline or press release for it to become clear that reality is different.
That’s not to say there are no real advances in AI research. Impressive results have been achieved in areas such as image and speech recognition, and drones now handle surveillance and cargo delivery. The success of modern AI systems rests on two factors: the growing computing power of devices and vastly larger libraries of data.
The main problem with artificial intelligence is its brittleness and extreme narrowness: each system is fit only for solving very specific tasks. Even a giant like Google has managed to create only the highly specialized Google Duplex for phone calls.
There is a huge gap between optimistic predictions about AI and reality, and it shows up in three unresolved problems.
The first is the gullibility gap, a fundamental error in judging how genuine machine intelligence is. People endow artificial intelligence with human qualities and misjudge its capabilities.
The second problem is the illusion of rapid progress. Fast progress on easy problems does not equal progress on hard ones. The fact that a computer beats people at intellectual games does not mean that it is smarter than people.
The third problem is the overestimation of reliability. Impressed by AI’s successes in some areas, we extrapolate those successes to everything else. We believe that if a driverless car performs well on the highway, then after some tuning it will also perform well on city streets. In reality, a technological chasm separates these two capabilities.
The modern approach, focused on big data, does not lead to a fundamental breakthrough
The main problem is that the mainstream approach amounts to ever narrower systems fed ever bigger datasets. The result is a mass of solutions to particular problems that may look fantastic but do not add up to a radical breakthrough. Modern AI is, in the authors’ words, a blind slave to data. The more we rely on such systems and consider them intelligent, the more serious the consequences can be.
Right now, the main danger of AI systems is not that they will rise up and enslave us, but that they are unreliable even though we increasingly depend on them.
Furthermore, according to the authors, the threat of machines seizing power is exaggerated, because machines do not have human motivations, goals and desires.
Deep learning is not intelligence, but only a fragment of it
Today, deep learning is the dominant approach to developing artificial intelligence. It is relatively new; the classic approach that preceded it focused on manually coding the knowledge machines would use. The classic approach is still applied in many areas, but it has been almost completely displaced by machine learning, which extracts patterns from large amounts of data and makes predictions based on them.
Deep learning rests on hierarchical pattern recognition and self-training. Hierarchical pattern recognition means processing data in a specific sequence of layers, much like the neurons of the human visual system.
The loose similarity between such artificial networks and human neurons explains the name “neural networks”.
The second foundation of deep learning is training by trial and error, in which the system masters more and more associations and becomes more accurate in its predictions.
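To make these two foundations concrete, here is a minimal sketch (our illustration, not code from the book) of a tiny two-layer network: the layers process the input in sequence, and the weights are adjusted by trial and error until the predictions improve.

```python
import numpy as np

# Toy example: learn XOR with one hidden layer of 8 units.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))  # first layer: detects simple input combinations
W2 = rng.normal(size=(8, 1))  # second layer: combines them hierarchically

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    h = sigmoid(X @ W1)    # hidden-layer activations
    out = sigmoid(h @ W2)  # the network's current predictions
    err = out - y          # trial and error: how wrong were we this time?
    # Nudge both weight matrices slightly in the direction that shrinks the error.
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    W1 -= 0.5 * X.T @ grad_h

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```

The point of the sketch is only the shape of the process: no knowledge is coded in by hand; everything the network “knows” is whatever pattern the weight updates happened to capture.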
However, the authors, while acknowledging the impressive results of neural networks in various areas, point out that deep learning is not artificial intelligence; it is only one part of the larger task of creating thinking machines.
They distinguish three main problems with deep learning.
The first: deep learning needs huge datasets. When the rules of a game never change and a machine can be shown every combination of moves, enough data exists; but in many areas of real life it is impossible to obtain a sufficient amount of relevant data to guarantee the reliability of a deep learning system. This is a hard limit on where it can work.
The second problem is the opacity of neural networks. They make decisions based on large arrays of numbers, and the logic behind those decisions is hidden not only from ordinary users but also from specialists. The more we rely on neural networks, the more important it is to understand the principles by which their decisions are made. Those principles must not remain a mystery when people’s lives depend on them.
The third problem is that deep learning is unstable and unpredictable. The authors give many examples of neural networks misinterpreting perfectly clear images, for instance taking a turtle for a rifle, or a foam-covered baseball for a cup of espresso. Such errors become critical if we intend to hand these systems the job of driving vehicles or protecting people from attack.
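How such fragility can arise is easiest to see in a deliberately simplified sketch (ours, not from the book), in the spirit of the well-known “fast gradient sign” attacks: for a linear classifier, a tiny, targeted nudge to every input value is enough to flip the decision, even though the input barely changes.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=100)  # weights of a hypothetical trained linear classifier
x = rng.normal(size=100)  # an input it currently classifies

score = w @ x
label = int(score > 0)

# Shift every input value a tiny step in the direction that most
# undermines the current decision (the sign of the gradient).
eps = 1.5 * abs(score) / np.abs(w).sum()  # just enough to cross the boundary
x_adv = x - eps * np.sign(w) * np.sign(score)

print("original label:", label, "| perturbed label:", int(w @ x_adv > 0))
print("largest change to any single input value:", round(eps, 3))
```

Deep networks are far more complex than this toy classifier, but the same logic scales up: decisions driven purely by learned correlations can be pushed over a boundary by changes no human would even notice.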
The authors conclude that deep learning, despite its name, is not at all deep. It lacks the depth of the human mind: it recognizes speech and images not through understanding, but only as tiny fragments of patterns.
A very important task in creating artificial intelligence is to teach machines to read and understand what they read
Attempts to teach machines to read have been under way for some time, but the task has proved very hard. Google, for example, created the Talk to Books project, which promised to use natural language queries to provide a whole new way of exploring books.
It was assumed that the system would answer any question by finding the answer in books. In reality, however, the system had no idea what it was reading. It could handle literal questions, but wherever the answer required abstract thought, the results were dire.
Unlike modern neural networks, people understand that the answers found in books are not always stated literally. A reader of Harry Potter, unlike an AI system, can find all seven Horcruxes across the books, even though they are never given as a single list.
Like a person, a truly intelligent machine must not only repeat what it reads, but also be able to accumulate and connect information.
So far, the authors conclude, the way neural networks handle language is primitive compared to humans: AI systems are capable of extracting only very limited information from text.
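Why literal matching is not reading can be shown with a toy sketch (our illustration; Talk to Books itself is of course more sophisticated): a system that answers by keyword overlap copes with verbatim questions but fails as soon as the answer is implied rather than stated.

```python
import re

BOOK = [
    "Harry handed the wand to Ron.",
    "Ron smiled and put it in his pocket.",
]

def tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def keyword_answer(question):
    # Return the sentence that shares the most words with the question.
    return max(BOOK, key=lambda s: len(tokens(question) & tokens(s)))

print(keyword_answer("Who handed the wand to Ron?"))
# -> "Harry handed the wand to Ron."  (literal question: works)

print(keyword_answer("Where is the wand now?"))
# -> still the first sentence, because "wand" appears there; the real
#    answer ("in Ron's pocket") requires tracking the object across
#    sentences, which word overlap cannot do.
```

Accumulating and connecting information across sentences is exactly the step that, according to the authors, current systems are missing.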
It’s still too early to worry about a robot uprising
The authors joke that if you are afraid robots will rise up and attack people, you need only lock yourself in your house: paint the doorknob so that it is invisible against the door, hang a large poster of a school bus or a toaster by the front entrance, and scatter banana peels and nails on the floor; for extra safety, put a table in the robot’s path.
All of this is an insurmountable obstacle for robots. Besides, while negotiating these obstacles, a modern robot would most likely drain its battery.
Doomsday scenarios about a robot invasion are much closer to science fiction than to reality.
The authors identify five key things that any intelligent creature must be able to assess: where it is; what is happening around it; what it is doing now; what its current goals are; and what needs to be done next to achieve them. All these questions must be reconsidered continuously, in constantly repeating cycles.
In AI, progress has been made on only some parts of this loop, while others remain unsolved, as the sketch below illustrates.
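Schematically, that cycle can be sketched as follows (our illustration; the class and its stubbed methods are hypothetical placeholders, not the authors’ design):

```python
class Agent:
    """A schematic perception-action loop over the five questions."""

    def __init__(self, goal):
        self.goal = goal  # what needs to be achieved

    def sense(self):
        # Where am I? What is happening around me?
        return {"position": (0, 0), "obstacles": []}  # stubbed perception

    def plan(self, state):
        # What am I doing now? What should be done next to reach the goal?
        if state["position"] == self.goal:
            return "stop"
        return "move_toward_goal"

    def act(self, action):
        print("executing:", action)  # stubbed actuation

    def run(self, cycles=3):
        # The questions are re-answered in continuously repeating cycles.
        for _ in range(cycles):
            state = self.sense()
            self.act(self.plan(state))

Agent(goal=(5, 5)).run()
```

Perception (the sense step) is where deep learning has made real progress; robust planning and acting in an open world remain largely unsolved.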
Truly intelligent machines must have common sense and be more reliable than today’s systems
The authors believe that if our goal is human-like artificial intelligence, we must move from systems that use statistical correlation as their main learning tool to systems that understand the world at the basic level people do. Machines must have what we call common sense: the elementary knowledge we expect of every person.
This is knowledge of how people and things tend to behave in different situations: how objects are used, what can be done with different things, and what does and does not happen in various circumstances.
To create human-like AI, machines must be given the principles that allow people to learn about and make sense of the world: the ability to operate with compositional, syntax-like representations, and the understanding that objects persist through time.
It is necessary to build into artificial systems a basic understanding of time, space and causality. To do this, when creating artificial intelligence, computer science must be enriched with knowledge from other disciplines, including cognitive science.
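What hand-coding such basic knowledge could look like, in the classic style mentioned earlier, is suggested by this minimal sketch (ours, not the authors’; all facts and names are illustrative):

```python
# Explicit, human-readable facts about categories and causality.
FACTS = {
    ("cup", "is_a"): "container",
    ("glass", "is_a"): "container",
    ("container", "can_hold"): "liquid",
}

RULES = [
    # Causality: an action now has a predictable effect later.
    ("drop", "fragile_object", "breaks"),
    ("heat", "water", "boils"),
]

def can_hold_liquid(thing):
    # Inherit properties up the is_a chain: a cup stays a container over time.
    kind = FACTS.get((thing, "is_a"))
    return kind is not None and FACTS.get((kind, "can_hold")) == "liquid"

def consequence(action, target):
    for act, tgt, effect in RULES:
        if (act, tgt) == (action, target):
            return effect
    return None

print(can_hold_liquid("cup"))        # True: inferred, not memorized from data
print(consequence("heat", "water"))  # "boils": cause precedes effect
```

The authors do not claim that such tables alone would suffice; their point is that explicit representations of this kind must be combined with learning, rather than abandoned in favor of correlations alone.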
According to the authors, the problem of common sense and deep understanding is extremely hard to solve, but only solving it will let us create AI we can truly trust.