The Dunning-Kruger effect among chess players

According to Wikipedia, the Dunning-Kruger effect “is a cognitive bias in which people mistakenly assess their cognitive ability as greater than it is.” It suggests that less skilled people overestimate their performance more than skilled people do. An article in the Dutch newspaper NRC on the Dunning-Kruger effect mentioned that the effect had also been found in the context of chess tournaments, which made me curious to find out exactly what had been shown.

The article pointed to a chapter in the book Advances in Experimental Social Psychology (Volume 44, 2011) written by David Dunning himself: Chapter five – The Dunning–Kruger Effect: On Being Ignorant of One’s Own Ignorance. He writes:

We have observed this pattern of dramatic overestimation by bottom performers across a wide range of tasks in the lab […] Similar data have been observed in real world settings on measures other than percentile rankings. […] Of individuals entering chess tournaments, people who possess less skill, as indicated by their Elo rating, mispredicted their tournament performance more than those with greater skill, irrespective of previous experience with tournament chess (Park & Santos-Pinto, 2010).

So this made me look up the paper: Young Joon Park & Luís Santos-Pinto, Overconfidence in tournaments: evidence from the field, Theory and Decision (2010). The abstract reads:

This paper uses a field survey to investigate the quality of individuals’ beliefs of relative performance in tournaments. We consider two field settings, poker and chess, which differ in the degree to which luck is a factor and also in the information that players have about the ability of the competition. We find that poker players’ forecasts of relative performance are random guesses with an overestimation bias. Chess players also overestimate their relative performance but make informed guesses. We find support for the “unskilled and unaware hypothesis” in chess: high-skilled chess players make better forecasts than low-skilled chess players. Finally, we find that chess players’ forecasts of relative performance are not efficient.

While reading the article, the first thing I noticed was that it was based on just one rather small chess tournament, held on 17 July 2005 in Sintra (a town near Lisbon). According to the authors, it was an open Swiss-system tournament over 8 rounds in which each round took 20 minutes. This seems a bit odd to me, because Swiss tournaments usually have an odd number of rounds. Also, just 8 rounds at this tempo (somewhere in between blitz and rapid) would make for a rather short half-day tournament. But maybe it wasn’t 20 minutes per game, as the article states, but 20 minutes per game per player, which would make more sense to me. Unfortunately, I wasn’t able to find the tournament announcement or results online.

Among other things, the authors found that the less skilled players were worse at predicting where they would end up in the final ranking than the best players were. So this would seem to be a typical example of the Dunning-Kruger effect in a real-world setting. The authors write:

Table 5 shows that coefficient for Elo rating is negative and significant at 5% level. This means that, controlling for experience effects, the higher the Elo rating of a player, the smaller his absolute forecast error. Thus, we find support for H3, that is, we find evidence that the forecast errors of high-skilled chess players are smaller than those of low-skilled chess players. This finding is consistent with Kruger and Dunning (1999) “unskilled–unaware hypothesis” and can not be explained by the fact that high-skilled chess players are more experienced in chess tournaments than low-skilled chess players.
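To make the quoted result a bit more tangible, here is a hedged sketch of the kind of regression the quote seems to describe: absolute forecast error regressed on Elo rating while controlling for tournament experience. The data below are invented and the exact specification is my guess, not the paper’s; it only shows what a negative Elo coefficient in a table like their “Table 5” would correspond to.

```python
# Illustrative only: synthetic data standing in for the survey responses.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 93                                    # field size reported in the paper
elo = rng.uniform(1090, 2441, n)          # rating range reported in the paper
experience = rng.poisson(10, n)           # past tournaments played (made up)
# Toy data generated so that stronger players make smaller absolute forecast errors.
abs_forecast_error = np.clip(30 - 0.01 * elo + rng.normal(0, 5, n), 0, None)

X = sm.add_constant(np.column_stack([elo, experience]))
fit = sm.OLS(abs_forecast_error, X).fit()
print(fit.params)  # a negative Elo coefficient = smaller errors for stronger players
```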

Reading the parts of the article that deal with the chess tournament, I got the feeling the authors missed some aspects of the specific format of this tournament, which made it quite different from the poker tournaments they also looked at. In my opinion, the authors underestimate the effect of the Swiss pairing system on the final ranking. It is far easier for strong participants to make an accurate guess about their final ranking than it is for participants of average strength. Not because of a difference in self-assessment skill, but because stronger players usually share their score group with far fewer equals.

I think the tournament examined in this study is quite typical of a small one-day chess tournament. The article states that 93 players took part, with an average rating of 1865 and ratings ranging from 1090 to 2441. I couldn’t find the results of this particular tournament, but I think it is fair to assume that there was a large group of players of average club strength (1600-2100), a couple of weaker players or (youthful) beginners with low ratings (perhaps competing only because the tournament was organised by their own chess club), and a handful (probably fewer than 10) of very strong 2200+ players who play this sort of tournament on a regular basis.

As the authors describe, the final ranking of a Swiss tournament is determined first by the points scored in the games; within each group of participants with the same score, the order is determined by a tiebreaker (usually the Sonneborn-Berger score or the cumulative opponents’ score). These tiebreakers don’t really mean much; they are mainly used between rounds for the pairing of the next round. When judging the final ranking, participants look at their actual score (say 4 points out of 8 rounds) and don’t bother too much about their exact position within the group on the same score. (Of course, there are exceptions, e.g. when the final ranking within a group determines prize money or qualification for a higher-ranked tournament.) And as the authors rightly observe, the order within a group on the same score is largely random.
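To make this concrete, here is a minimal sketch of how such a final ranking is put together (my own illustration; the names, scores and tiebreak numbers are invented):

```python
# Players are ranked by game points first; inside a group on the same score,
# a tiebreak number decides the order - a number the players barely control.
from dataclasses import dataclass

@dataclass
class Player:
    name: str
    points: float     # game points after the last round
    tiebreak: float   # e.g. Sonneborn-Berger or sum of the opponents' scores

def final_ranking(players: list[Player]) -> list[Player]:
    # Primary key: points (descending); secondary key: tiebreak (descending).
    return sorted(players, key=lambda p: (-p.points, -p.tiebreak))

field = [
    Player("A", 7.0, 38.5),   # alone at the top: rank is fixed by the score
    Player("B", 6.5, 35.0),
    Player("C", 4.0, 27.5),   # three players tied on 4/8: their mutual order
    Player("D", 4.0, 26.0),   # hinges entirely on the tiebreak, which feels
    Player("E", 4.0, 24.5),   # close to random from the players' point of view
]

for rank, p in enumerate(final_ranking(field), start=1):
    print(rank, p.name, p.points, p.tiebreak)
```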

For the few really strong players this does not play such a big role. The number of rounds is usually chosen so that at the top of the final ranking the players are separated by their plain score, or at most 2 or 3 participants share a score group; that is the whole purpose of the Swiss tournament design. For these players, misjudging their final score by, say, half a point will probably shift their final ranking by only a couple of places. For an average player, getting their final score wrong by half a point might drop them dozens of places in the final ranking. But is that an indication that they were less skilled at estimating their actual strength than the stronger player? I really doubt it.
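A rough simulation makes the asymmetry visible. The sketch below is my own construction, not anything from the paper: it assumes 93 players, 8 rounds and a roughly bell-shaped distribution of final scores, and then asks how many ranking places half a point is worth near the top of the field compared to the middle.

```python
# Crude model: final scores drawn from a clipped normal distribution, rounded
# to half points - not a real Swiss pairing simulation, just an illustration.
import random

def simulate_scores(n_players=93, rounds=8):
    scores = []
    for _ in range(n_players):
        s = random.gauss(rounds / 2, 1.5)
        s = max(0.0, min(float(rounds), round(s * 2) / 2))
        scores.append(s)
    return sorted(scores, reverse=True)

def best_rank(score, scores):
    # Best attainable rank (1-based) for a player with this score.
    return 1 + sum(1 for s in scores if s > score)

random.seed(1)
cost_top, cost_mid = [], []
for _ in range(1000):
    scores = simulate_scores()
    top, mid = scores[1], scores[len(scores) // 2]   # a near-top and a mid-field score
    cost_top.append(best_rank(top - 0.5, scores) - best_rank(top, scores))
    cost_mid.append(best_rank(mid - 0.5, scores) - best_rank(mid, scores))

print("half a point near the top costs about", sum(cost_top) / len(cost_top), "places")
print("half a point in the middle costs about", sum(cost_mid) / len(cost_mid), "places")
```

With these assumptions, the same half-point misjudgement typically costs a couple of places near the top of the ranking and well over ten places in the middle of the field.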

Focusing on the final ranking of a typical one-day open Swiss tournament is therefore probably not the most convincing way to examine whether the Dunning-Kruger effect exists among chess players.

Did you enjoy this article? Then please consider supporting my blog with a donation.

3 thoughts to “The Dunning-Kruger effect among chess players”

  1. The Dunning-Kruger effect is evident in sub-1400 players who ask “How long will it take me to become a grandmaster?”

    1. If you have to ask how long it will take you to be a grandmaster, the answer is probably “No!”
