Nearly twenty years ago to the day, in May 1997, a machine built by IBM challenged then world chess champion Garry Kasparov to a six-game match, and won.
A year earlier, in 1996, a similar match between the IBM-built Deep Blue and Garry Kasparov ended with Kasparov winning the series comfortably. IBM went back to the drawing board and made significant changes to Deep Blue's architecture to come back even stronger the following year. Think machines don't feel the sting of defeat?
The match
This clash between man and machine had all the hallmarks of a thriller, one best enjoyed by the nerdier demographic. Chess has been the poster child for computer scientists working on advanced algorithms and artificial intelligence techniques for decades, and the 1997 face-off was the culmination of the work of many brilliant minds.
It’s all about search
Deep Blue used a hybrid search approach that combined software and hardware search for maximum effect. The upper levels of the search took place in software and made use of transposition tables to improve search efficiency. Leaf positions were distributed to specialized hardware chess chips, where fixed-depth null-window alpha-beta and quiescence search took place. There were 16 such chips per processor in the 30-node RS/6000 SP system; the host processor distributed work to the chess chips and used MPI (Message Passing Interface) to communicate with the other nodes over a high-speed switch, running a distributed, parallel search.
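To give a flavor of the software side of that search, here is a minimal sketch of fixed-depth alpha-beta search with a transposition table. The toy game tree below is invented for illustration, and this is plain alpha-beta rather than the null-window variant the chess chips ran; a real transposition table would also record bound types and search depth rather than caching raw scores.

```python
def alpha_beta(node, depth, alpha, beta, table):
    """Negamax-style alpha-beta over a toy tree.

    A node is either a leaf score (int, from the side-to-move's
    perspective) or a tuple (key, children). The transposition table
    caches results by (key, depth) -- simplified compared to a real
    engine's table.
    """
    if isinstance(node, int):
        return node
    key, children = node
    if (key, depth) in table:
        return table[(key, depth)]
    best = float("-inf")
    for child in children:
        # Negate: the child's score is from the opponent's perspective.
        score = -alpha_beta(child, depth - 1, -beta, -alpha, table)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:  # beta cutoff: the opponent avoids this line
            break
    table[(key, depth)] = best
    return best

# A tiny two-ply tree: two root moves, each with two replies.
tree = ("root", [("a", [3, 5]), ("b", [-2, 8])])
table = {}
best = alpha_beta(tree, 2, float("-inf"), float("inf"), table)
# best == 3: move "a" guarantees at least 3, while after move "b" the
# opponent can reach a cutoff immediately.
```

The cutoff in branch "b" is the whole point of alpha-beta: once a refutation is found, the remaining replies are never searched, which is what lets a fixed hardware budget reach useful depths.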
After the defeat in 1996, one of the key changes the IBM team made was an improved chess chip that ran a redesigned evaluation function considering over 8,000 features, compared to 6,400 the previous year. The chip also included efficiency improvements that boosted its search speed to 2 to 2.5 million positions per second per chip. The distributed architecture of 480 such chips across the 30 nodes of the IBM RS/6000 SP achieved a maximum sustained speed of 330 million positions searched per second in the 1997 match against Garry Kasparov. While further tuning and pruning could have extended the search depth, this was sufficient to gain an edge over Kasparov.
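At its core, a feature-based evaluation function like Deep Blue's is a weighted sum over position features. The sketch below shows the idea with three made-up features and weights (Deep Blue's hardware evaluator weighed over 8,000, with weights tuned by its developers):

```python
# Illustrative feature weights only -- not Deep Blue's actual values.
WEIGHTS = {
    "material": 1.0,      # material balance, in pawn units
    "mobility": 0.1,      # difference in legal-move counts
    "king_safety": 0.5,   # difference in pawn-shield quality
}

def evaluate(features):
    """Score a position as a weighted sum of feature values.

    Positive means better for the side to move; this is the number
    the leaves of an alpha-beta search get compared on.
    """
    return sum(WEIGHTS[name] * value for name, value in features.items())

score = evaluate({"material": 2, "mobility": 5, "king_safety": -1})
# 1.0*2 + 0.1*5 + 0.5*(-1) = 2.0
```

Doing this in software per position is expensive; baking it into silicon is what let each chip evaluate millions of positions per second.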
Deep Blue also featured an opening book of 4,000 positions hand-curated by chess Grandmasters. The system could customize this repertoire through an "override" book to suit match situations and last-minute corrections. It could also tap into an extended book, a database of 700,000 Grandmaster games, in case the opening book proved insufficient. In addition, an endgame database of all positions with five or fewer pieces was stored on 20 GB RAID disk arrays for online lookup by Deep Blue.
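The lookup order implied by that design can be sketched in a few lines. The move-sequence keys and moves below are illustrative (a real engine keys books by a position hash, not a move string), but the precedence — override book first, then the main book, then fall through to the extended book or search — is the point:

```python
# Hand-curated main book and a small "override" book of last-minute
# match preparation. Entries here are invented for illustration.
main_book = {"e4 e5": "Nf3", "d4 d5": "c4"}
override_book = {"e4 e5": "f4"}  # prepared specifically for this match

def book_move(line):
    """Return a book move for the current line, or None.

    The override book takes precedence over the main book; a None
    result means: consult the extended game database, or search.
    """
    if line in override_book:
        return override_book[line]
    return main_book.get(line)

move = book_move("e4 e5")   # "f4" -- the override wins
fallback = book_move("c4")  # None -- no book entry for this line
```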
Neural networks and reinforcement learning
To defeat Kasparov, it took a hybrid software-and-hardware architecture providing a massively parallel, distributed search, an effective evaluation function weighing over 8,000 features, 4,000 hand-curated opening positions, and a database of 700,000 games.
While Deep Blue zipped through 200 million positions every second to figure out the best move, Kasparov relied on a tiny fraction of that speed to stay competitive.
In recent years, new approaches to chess that use deep neural networks instead of brute force have yielded positive results. Trained on annotated data sets, such a system first learns to play chess the way a beginner would. It can then be bootstrapped to play against itself through many iterations of deep reinforcement learning, yielding better and better "skill", with game-play abilities built organically rather than hand-coded. Such a system does not rely on seeing many moves ahead; instead it relies on a kind of positional intuition, the same kind humans depend on, to determine the best move. Is this the future of chess AI? Let's wait and see.
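The self-play loop at the heart of such systems can be illustrated at toy scale. The sketch below is a deliberately tiny stand-in: tabular values with a Monte-Carlo-style update on one-pile Nim (take 1 or 2 stones; whoever takes the last stone wins), rather than a deep network on chess. The structure is the same, though: the current policy plays itself, and the game's outcome is propagated back to improve the policy.

```python
import random

random.seed(0)
Q = {}  # (stones_remaining, action) -> learned value for the mover

def q(s, a):
    return Q.get((s, a), 0.0)

def choose(s, eps=0.2):
    """Epsilon-greedy move selection from the current value table."""
    actions = [a for a in (1, 2) if a <= s]
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: q(s, a))

def train(episodes=5000, alpha=0.5):
    """Self-play: both sides share and improve the same value table."""
    for _ in range(episodes):
        s = 10
        history = []  # (state, action) per ply, players alternating
        while s > 0:
            a = choose(s)
            history.append((s, a))
            s -= a
        # The player who took the last stone wins. Walk the game
        # backwards, flipping the sign each ply (zero-sum game).
        reward = 1.0
        for state, action in reversed(history):
            Q[(state, action)] = q(state, action) + alpha * (reward - q(state, action))
            reward = -reward

train()
# With 0 mod 3 stones being a lost position, the learned policy from
# 4 stones tends toward taking 1 (leaving the opponent 3).
best_from_4 = max((1, 2), key=lambda a: q(4, a))
```

No position is ever searched more than one ply deep here; the table simply comes to encode which states are good, which is the toy analogue of the learned "positional intuition" described above.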