GPT-2, OpenAI’s giant text-generating language model, can play chess – despite having no prior knowledge of the game’s rules.
That may seem pretty odd at first. After all, the system is best known for spitting out passages of text after being given a sentence or two as a prompt.
Trained on around eight million newspaper articles and webpages scraped from Reddit links, GPT-2’s forte lies in learning common patterns in language so that it can generate convincing sentences of its own, ones that are mostly grammatically correct and semi-coherent, even if they’re a little nonsensical.
Chess games, however, can be written down as strings of text that follow specific rules – and strings of text are exactly what GPT-2 digests. A pair of engineers, Shawn Presser and Gwern Branwen, realized the model could be tweaked to play chess if it was fed enough training data in the right format.
“We trained it on 2.4 million games scraped from Kingbase, a dataset that represents the positions of chess pieces in Portable Game Notation (PGN),” Presser told The Register.
PGN labels the rows, or ranks, of a chess board with the numbers one to eight, and the columns, or files, with the lowercase letters a to h. Kings, queens, rooks, bishops, and knights are each assigned a capital letter, while pawn moves carry no letter at all. For example, ‘Nf3’ signals that a knight has moved to the square f3 on the board.
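To see how little machinery this notation needs, here is an illustrative sketch of decoding a basic PGN-style move string – the function name and piece table are our own, not anything from Presser and Branwen's code, and it ignores captures, checks, castling, and disambiguation:

```python
# Decode a simple SAN/PGN move like "Nf3" into (piece, destination square).
# Square names combine a file letter (a-h) and a rank digit (1-8).

PIECE_LETTERS = {"K": "king", "Q": "queen", "R": "rook", "B": "bishop", "N": "knight"}

def decode_san(move):
    """Split a plain SAN move into (piece, destination square).

    Handles only the simplest forms, e.g. "Nf3" or the pawn move "e4".
    """
    if move[0] in PIECE_LETTERS:
        return PIECE_LETTERS[move[0]], move[1:]
    return "pawn", move  # pawn moves are written without a piece letter

print(decode_san("Nf3"))  # ('knight', 'f3')
print(decode_san("e4"))   # ('pawn', 'e4')
```

To GPT-2, of course, even this structure is implicit: it only ever sees the move strings as tokens of text.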
After GPT-2 was trained on 2.4 million of these sequences using 140 of Google’s Cloud TPU chips over 24 hours, it learned to copy and replay the moves it had previously seen without having to understand the rules of chess or see a chessboard.
GPT-2 is only really hard to play against at the beginning of each game
It may sound impressive at first, but like all neural networks, GPT-2 overfits its training data to some degree. After ten to thirteen moves or so, it begins making invalid moves.
“It’ll do things like trying to move a rook to a particular place when there’s clearly a pawn in the way,” said Presser. As a result, GPT-2 struggles with playing long games and is mostly effective during the start of the game.
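The blunder Presser describes – sliding a rook through an occupied square – is trivial to catch if you track the board explicitly, which GPT-2 never does. A minimal sketch of that check, using hypothetical function names and a plain dictionary as the board (this is not the pair's code):

```python
# Does a rook's path cross an occupied square? The board is a dict
# mapping "e2"-style square names to piece descriptions.

def squares_between(src, dst):
    """Squares strictly between two squares on the same rank or file."""
    f1, r1 = ord(src[0]), int(src[1])
    f2, r2 = ord(dst[0]), int(dst[1])
    if f1 == f2:  # same file: walk the ranks
        step = 1 if r2 > r1 else -1
        return [chr(f1) + str(r) for r in range(r1 + step, r2, step)]
    if r1 == r2:  # same rank: walk the files
        step = 1 if f2 > f1 else -1
        return [chr(f) + str(r1) for f in range(f1 + step, f2, step)]
    raise ValueError("not a rook move")

def rook_path_clear(board, src, dst):
    return all(sq not in board for sq in squares_between(src, dst))

board = {"a1": "white rook", "a2": "white pawn"}
print(rook_path_clear(board, "a1", "a4"))  # False: the pawn on a2 is in the way
```

Lacking any such internal board, the model can only guess at moves that look plausible as text.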
“When people play against it, it’s an expert at playing opening moves but its skill level falls off rapidly. It’s possible that it’s memorising opening moves; pro players do that too since the sequence of moves you can make at the beginning is limited,” he added.
But there are some tantalizing signs that the machine isn’t just regurgitating what it has seen before. “It responds dynamically: if you start with a different opening variation, it’ll respond differently too. And when it is faced with very strange moves, for example if you offer up your queen immediately, then it’ll take it most of the time.”
Presser and Branwen believe that GPT-2’s ability nosedives because, as a game progresses, it becomes increasingly difficult for the model to keep track of all the different pieces on the board.
“For example, if you say knight to f3, it has no reference for where that knight has moved from,” Presser told El Reg.
“You, as a human, might know because there are a limited number of places where that knight must have been. We trained it on PGN notation, but we think it’ll help to train it on long algebraic notation because the AI system will know where each piece moved from, not just where each piece is moving to.”
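The difference between the two notations is easy to see side by side. A small sketch – the helper name is ours, purely for illustration:

```python
# SAN omits the origin square; long algebraic notation (LAN) spells it out,
# so a model reading LAN always knows where each piece moved from.

def to_lan(piece_letter, src, dst):
    """Format a move in long algebraic notation, e.g. Ng1-f3."""
    return f"{piece_letter}{src}-{dst}"

san = "Nf3"                    # a knight lands on f3 -- but from g1 or e5?
lan = to_lan("N", "g1", "f3")  # no ambiguity: the knight came from g1
print(san, "->", lan)          # Nf3 -> Ng1-f3
```

Retraining on the more verbose notation would hand the model exactly the information it currently has to infer.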
The pair also hope to incorporate self-play in the training process, a technique that pits the machine against itself. GPT-2 can then be rigged to play games at a level that is neither too easy nor too difficult for it, and if it seems to improve over time then it’s a sign that it really can learn the rules of the game.
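The overall shape of such a loop is simple, even if the real thing would sample moves from GPT-2 and retrain on the resulting games. A toy sketch with a stand-in "model" – everything here, from the stub to the move list, is hypothetical:

```python
import random

# Toy self-play loop. The "model" is a stub picking from a fixed move list;
# a real setup would sample from GPT-2 and feed the games back into training.

OPENING_MOVES = ["e4", "d4", "Nf3", "c4"]  # illustrative only

def stub_model(history):
    """Stand-in for the language model's next-move prediction."""
    return random.choice(OPENING_MOVES)

def self_play_game(max_moves=10):
    """The model plays both sides in turn, producing one game record."""
    history = []
    for _ in range(max_moves):
        history.append(stub_model(history))
    return history

games = [self_play_game() for _ in range(3)]  # games to retrain on
print(len(games), "games of", len(games[0]), "moves each")
```

The interesting question is whether the rate of illegal moves drops as the generated games are folded back into training.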
Eventually, they’d like to challenge Stockfish, a popular chess engine, so that they can measure how good a chess player GPT-2 really is by calculating its Elo rating. In the meantime, however, you can play against GPT-2 by following the instructions here. ®
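For reference, the Elo system they'd use to score such a match rests on a simple expected-score formula; this sketch shows the standard calculation and is not specific to the project:

```python
# Standard Elo expected score: a player rated ra scores, on average,
# this fraction of a point per game against a player rated rb.

def expected_score(ra, rb):
    return 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))

print(round(expected_score(1500, 1500), 2))  # 0.5 -- equal players split the points
print(round(expected_score(1500, 1700), 2))  # 0.24 -- the weaker player wins about a quarter
```

Losing badly to Stockfish, in other words, would still yield a usable rating, since Elo is computed from results against opponents of known strength.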