Pretty shit for a computer. He says his 50m model reached 1800 Elo (by the way, it's Elo and not ELO as the article incorrectly has it; the rating is named after Arpad Elo, a Hungarian). From the bar graph it seems to be a bit better than Stockfish level 1 and a bit worse than Stockfish level 2.
Based on what we know, I think it's not surprising these models can learn to play chess, but they get absolutely smoked by a "real" chess bot like Stockfish or Leela.
According to figure 6b [0], removing MCTS reduces Elo by about 40%, so if 1800 Elo is the search-free network at 60% of full strength, scaling by 5/3 gives roughly 3000 Elo. That would be superhuman, but still not as good as e.g. LeelaZero.
[0]: https://gwern.net/doc/reinforcement-learning/model/alphago/2...
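The back-of-the-envelope scaling above can be sketched directly; the 40% figure and the assumption that it applies as a simple multiplicative fraction are taken from the comment, not from a rigorous model:

```python
# Rough sketch of the comment's arithmetic (assumptions, not measured data):
# if dropping MCTS costs ~40% of Elo, the 1800-Elo search-free network
# would be ~60% of the full system's strength.
mcts_fraction_lost = 0.40   # assumed from figure 6b, per the comment
network_elo = 1800          # the 50m model's reported rating

full_system_elo = network_elo / (1 - mcts_fraction_lost)
print(round(full_system_elo))  # 3000
```

Of course, Elo almost certainly does not scale linearly like this; it is a relative rating, so treat the 3000 figure as a loose upper-bound guess rather than a prediction.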
Sometimes it is not a matter of "is it better? is it larger? is it more efficient?", but simply a question worth asking in itself.
Mountains are mountains, men are men.
>> As they say, attention may indeed be all you need.
I don't think drawing general conclusions about intelligence from a board game is warranted. We didn't evolve to play chess or Go.