Humanity strikes back: An amateur beats AI at its own game, revealing a fundamental weakness in AI systems
Introduction: An Unexpected Victory for a Human Player over a Top AI System in Go
Have you ever felt like you could beat a computer at its own game? Well, that’s exactly what amateur Go player Kellin Pelrine did when he defeated a top-ranked AI system in a surprise reversal of the 2016 computer victory that was seen as a milestone in the rise of artificial intelligence.
The tactics that put a human back on top of the Go board were suggested by a computer program that had probed the AI looking for weaknesses. The winning strategy was not completely trivial, but it is not especially difficult for a human to learn, and an intermediate-level player could use it to beat the machines. The triumph, which had not previously been reported, highlighted a weakness in the best Go-playing programs that is shared by most of today’s widely used AI systems.
The Winning Tactics: Exploiting a Blind Spot in the AI Systems
How did Pelrine manage to defeat the AI system? By taking advantage of a previously unknown flaw that had been identified by another computer. His tactics involved slowly stringing together a large “loop” of stones to encircle one of his opponent’s groups, while distracting the AI with moves in other corners of the board. The Go-playing bot did not notice its vulnerability even when the encirclement was nearly complete.
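To make the idea of encirclement concrete: in Go, a group is captured when its liberties (the empty points adjacent to it) drop to zero, and Pelrine’s loop slowly erased the liberties of one large group while the engine’s attention was elsewhere. The minimal Python sketch below, purely illustrative and not taken from the match, counts a group’s liberties with a flood fill.

```python
from typing import Dict, Tuple

Point = Tuple[int, int]

def group_liberties(board: Dict[Point, str], start: Point, size: int = 19) -> int:
    """Flood-fill the group of stones containing `start` and count its
    distinct liberties. `board` maps (row, col) -> "B" or "W"; empty
    points are simply absent from the dict."""
    color = board[start]
    seen, frontier, liberties = {start}, [start], set()
    while frontier:
        r, c = frontier.pop()
        for nbr in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if not (0 <= nbr[0] < size and 0 <= nbr[1] < size):
                continue  # off the board
            stone = board.get(nbr)
            if stone is None:
                liberties.add(nbr)            # empty neighbor = liberty
            elif stone == color and nbr not in seen:
                seen.add(nbr)                 # same-color stone: part of the group
                frontier.append(nbr)
    return len(liberties)

# A two-stone white group fully hemmed in by black in the corner:
board = {(0, 0): "W", (0, 1): "W", (1, 0): "B", (1, 1): "B", (0, 2): "B"}
print(group_liberties(board, (0, 0)))  # -> 0: the group has been captured
```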
“It was surprisingly easy for us to exploit this system,” said Adam Gleave, chief executive of FAR AI, the Californian research firm that designed the program. The software played more than 1 million games against KataGo, one of the top Go-playing systems, to find a “blind spot” that a human player could take advantage of, he added.
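To give a flavor of how such probing can work, here is a toy sketch in Python. It is entirely illustrative and not FAR AI’s code: the real attack trained a neural-network adversary with reinforcement learning against a frozen KataGo network, whereas here a stand-in “victim” is simply strong in familiar positions and weak in one rarely seen position, and playing many games per candidate strategy surfaces the blind spot as an outlier in win rate. All position names are made up.

```python
import random

def victim_wins(position: str) -> bool:
    """Stand-in for a frozen Go engine: wins 95% of games arising from
    familiar positions, but only 10% from one under-trained position."""
    p = 0.10 if position == "cyclic_encirclement" else 0.95
    return random.random() < p

def adversary_win_rate(position: str, games: int = 10_000) -> float:
    """Fraction of games an adversary wins by steering play toward `position`."""
    return sum(not victim_wins(position) for _ in range(games)) / games

candidates = ["standard_opening", "ladder_attack", "cyclic_encirclement"]
blind_spot = max(candidates, key=adversary_win_rate)
print(blind_spot, adversary_win_rate(blind_spot))  # -> cyclic_encirclement ~0.90
```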
The Rise of AI in Go: From AlphaGo to KataGo and Leela Zero
AI has come a long way in the game of Go, from AlphaGo’s groundbreaking victory over the world Go champion Lee Sedol in 2016 to the rise of other top systems such as KataGo and Leela Zero. However, Pelrine’s victory over these top systems highlights a fundamental weakness in the deep learning methods that underpin today’s most advanced AI.
The Fundamental Weakness in AI: The Limits of Deep Learning and Generalization
According to Stuart Russell, a computer science professor at the University of California, Berkeley, the weakness in some of the most advanced Go-playing machines points to a fundamental flaw in the deep learning systems that underpin today’s most advanced AI. The systems can understand only specific situations they have been exposed to in the past, and are unable to generalize in a way that humans find easy.
“It shows once again we’ve been far too hasty to ascribe superhuman levels of intelligence to machines,” Russell said. The limitations of deep learning and generalization mean that even the most advanced AI systems are vulnerable to exploitation, as shown by Pelrine’s victory over the Go-playing machines.
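A toy illustration of this failure to generalize (my own, not drawn from the Go experiments): a model can fit its training data almost perfectly yet be wildly wrong just outside the range it was trained on.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 1, 200)          # training inputs all lie in [0, 1]
y_train = np.sin(2 * np.pi * x_train)

coeffs = np.polyfit(x_train, y_train, deg=9)   # near-perfect fit inside [0, 1]

print(np.polyval(coeffs, 0.5))            # ~0.0, matching sin(pi) = 0
print(np.polyval(coeffs, 1.5))            # huge error: sin(3*pi) is also 0,
                                          # but the model has never seen x > 1
```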
Conjectures on the Cause of Failure: The Role of Rarely Used Tactics and Adversarial Attacks
It’s not entirely clear why Pelrine was able to beat the AI system at Go. One possibility is that he used a tactic the AI had not encountered before. According to Adam Gleave of FAR AI, whose program helped Pelrine identify the weakness, the tactic is rarely seen in human play. As a result, the AI had not encountered this particular situation before and was unable to respond effectively.
Another possibility is that Pelrine’s strategy amounted to what is known as an adversarial attack: a technique for exploiting weaknesses in AI systems by feeding them inputs deliberately crafted to induce mistakes. While this approach is most commonly associated with computer vision systems, where imperceptible changes to an image can flip a classifier’s output, it is possible that a similar dynamic was at work on the Go board.
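As a concrete example of the computer-vision case, here is a minimal PyTorch sketch of the fast gradient sign method (FGSM), one of the best-known adversarial attacks. It is purely illustrative and unrelated to the Go exploit; `model`, `x`, and `y` stand in for any trained classifier, a batch of images scaled to [0, 1], and their true labels.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Nudge every pixel of `x` a small step in the direction that most
    increases the classifier's loss, yielding an image that looks unchanged
    to a human but can flip the model's prediction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)     # loss against the true labels
    loss.backward()                         # gradient of loss w.r.t. the pixels
    x_adv = x + epsilon * x.grad.sign()     # one signed-gradient step
    return x_adv.clamp(0.0, 1.0).detach()   # stay in the valid pixel range
```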
Implications for the Deployment of Large AI Systems: Verification and Accountability
The fact that an amateur player was able to beat a top-ranked AI system at Go highlights the need for more rigorous verification and testing of AI systems before they are deployed at scale. As Russell has pointed out, we have been too quick to ascribe superhuman levels of intelligence to machines. The reality is that AI systems have their limitations, and it is important to understand those limitations in order to avoid the potential negative consequences of relying too heavily on AI.
Conclusion: Rethinking the Notion of Superhuman Intelligence in AI
This unexpected victory for Kellin Pelrine over a top AI system in Go highlights the limitations of deep learning and its inability to generalize. The tactics Pelrine used were suggested by a computer program that had identified a blind spot in the Go engines, revealing a fundamental flaw in the deep learning systems that underpin today’s most advanced AI.
This discovery underscores the need for further research and development to address the weaknesses and vulnerabilities of these systems. Verification and accountability are necessary before large AI systems can be deployed at scale with acceptable risk.
Ultimately, this experience raises important questions about the notion of superhuman intelligence in AI. It shows that we should not be too hasty to ascribe superhuman levels of intelligence to machines. Instead, we should focus on developing AI systems that complement and enhance human intelligence, rather than replace it. As we continue to advance the capabilities of AI, it is essential that we remain cognizant of the limitations and potential risks of these systems.