Including Piece Values on Rules Pages [Subject Thread]
H. G. Muller wrote on Sun, Mar 10 09:34 PM UTC in reply to Kevin Pacey from 06:49 PM:

Well, I looked up the exact numbers, and indeed his threshold for including games in the Kaufman study was FM level (2300) for both players. That left him with 300,000 games out of an original 925,000. So what? Are FIDE masters in your eyes such poor players that the games they produce don't even vaguely resemble a serious chess game? And do you understand the consequences of such a claim being true? If B=N is only true for FIDE masters, and not for 2700+ super-GMs, then there is no such thing as THE piece values; apparently they would depend on the level of play. There would not be any 'absolute truth'. So which value in that case would you think is more relevant for the readers of this website? The values that correctly predict who is closer to winning in games of players around 1900 Elo, or those for super-GMs?
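To make concrete what such a rating filter amounts to, here is a rough Python sketch; the field names and sample records are purely illustrative, not taken from Kaufman's actual database.

```python
# Hypothetical sketch of the rating filter used in a Kaufman-style study:
# keep only games in which both players are at or above a rating floor,
# then tally the results. Field names and sample data are illustrative only.

def filter_games(games, floor=2300):
    """Keep games in which both players meet the rating floor."""
    return [g for g in games if g["white_elo"] >= floor and g["black_elo"] >= floor]

def score_for_white(games):
    """Average score for White (1 = win, 0.5 = draw, 0 = loss)."""
    points = {"1-0": 1.0, "1/2-1/2": 0.5, "0-1": 0.0}
    return sum(points[g["result"]] for g in games) / len(games)

games = [
    {"white_elo": 2450, "black_elo": 2380, "result": "1-0"},
    {"white_elo": 2100, "black_elo": 2500, "result": "0-1"},  # dropped: below the floor
    {"white_elo": 2310, "black_elo": 2305, "result": "1/2-1/2"},
]
masters_only = filter_games(games)
print(len(masters_only), score_for_white(masters_only))  # 2 0.75
```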

Your method of casting doubt on the Kaufman study is tantamount to denying that pieces have a well-defined value in the first place. You don't seem to have much support in that area, though. Virtually all chess courses for beginners teach values that are very similar to the Kaufman values. I have never seen a book that says "Just start assuming all pieces are equally valuable for now, and when you have learned to win games that way, you will be ready to value the Queen a bit more". If players of around 1000 Elo were not taught the 1:3:3:5:9 rule, they would probably never be able to acquire a higher rating.
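For the record, applying that rule is nothing more than a weighted piece count; a minimal sketch, with helper names that are only illustrative:

```python
# Minimal application of the 1:3:3:5:9 rule: a weighted piece count.
# The helper names are illustrative, not from any chess library.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_balance(white_pieces, black_pieces):
    """Positive means White is ahead in material by these values."""
    return (sum(PIECE_VALUES[p] for p in white_pieces)
            - sum(PIECE_VALUES[p] for p in black_pieces))

# After trading a Rook for Bishop and Pawn, White is nominally one point down:
print(material_balance(["B", "P"], ["R"]))  # -1
```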

The nice thing about computer studies is that you can actually test such issues. You can make the look-ahead so shallow and so full of oversights that it plays at the level of a beginner, and still measure how much better or worse it does with a Knight instead of a Bishop. And how much its rating would suffer from using a certain set of erroneous piece values to guide its tactical decisions. And whether that loss is larger or smaller once you improve the reliability of the search to make it a 1500-Elo player.
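The skeleton of such an experiment is trivial; the sketch below only shows the measurement loop, with play_game() standing in for whatever deliberately weakened engine you use (here it merely returns random results so the script runs, it is not an existing engine API):

```python
# Skeleton of the measurement loop; play_game() is a placeholder for a
# deliberately weakened engine playing a fixed opening in which one side's
# Knight has been replaced by a Bishop. Here it just returns random results
# so the script runs; it is not an existing engine interface.

import random

def play_game(depth):
    """Return the game result from the Bishop side's point of view."""
    return random.choice([1.0, 0.5, 0.0])  # placeholder for an actual engine game

def measure_imbalance(games=10_000, depth=2):
    """Average score of the Bishop side over many games at a shallow depth."""
    return sum(play_game(depth) for _ in range(games)) / games

print(measure_imbalance())  # > 0.5 would mean the Bishop side scores better at this level
```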

It would also be no problem at all to generate 300,000 games between 3000+ engines. It doesn't require slow time control to play at that level, as engines lose only a little Elo when you make them move faster. (About 30 Elo per halving of the thinking time, so giving them 4 seconds instead of an hour per game only takes some 300 points off their rating.) So you can generate thousands of games per hour, and then just let the computer run for a week. This is how engines like Leela Chess Zero train themselves. A recent Stockfish patch was accepted after 161,000 self-play games showed that it led to an improvement of 1 Elo...
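To see why it takes that many games: near a 50% score the Elo curve D = 400*log10(s/(1-s)) is almost linear, so 1 Elo corresponds to a score margin of only about 0.14%, while the noise on the measured score shrinks only with the square root of the number of games. A back-of-the-envelope check (the per-game standard deviation of 0.3 is an assumed, typical value for draw-heavy self-play, not a quoted figure):

```python
# Back-of-the-envelope check: how visible is a 1-Elo improvement after N games?
# The per-game standard deviation of 0.3 is an assumed, typical value for
# draw-heavy self-play, not a measured figure.

import math

def score_from_elo(d):
    """Expected score of the stronger side at an Elo difference d."""
    return 1 / (1 + 10 ** (-d / 400))

sigma_per_game = 0.3                        # assumed spread of a single game result
signal = score_from_elo(1) - 0.5            # score margin equivalent to 1 Elo, ~0.0014

for n in (10_000, 100_000, 300_000):
    noise = sigma_per_game / math.sqrt(n)   # standard error of the measured score
    print(n, round(signal / noise, 2))      # the 1-Elo signal in standard errors
```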

And in contrast to what you believe, solving chess would not tell you anything about piece values. Solved chess positions (like we have in end-game tables when there are only a few pieces on the board) are evaluated by their distance to mate, irrespective of what material there is on the board. Piece values are a concept for estimating win probability in games of fallible players. With perfect play there is no probability, but a 100% certainty that the game-theoretical result will be reached. In perfect play there is no difference between drawn positions that are 'nearly won' or 'nearly lost'. Both are draws, and a perfect player cannot distinguish them without assuming there is some chance that the opponent will make an error. Then it becomes important whether only a tiny error would bring him into a lost position, or whether it would take a gross blunder or twenty small errors. And again, in perfect play there is no such thing as a small or a large error; all errors are equal, as they all cost 0.5 point, or they would not be errors at all.

So you don't seem to realize the importance of errors. The whole Elo model is constructed on the assumption that players make (small) errors that weaken their position compared to the optimal move with an appreciable probability, and only seldom play the very best move. So the advantage performs a random walk along the score scale. Statistical theory teaches us that the sum total of all these 'micro-errors' during the game has a Gaussian probability distribution by the time you reach the end, and that the difference in average error rate per move implied by the players' ratings determines how much luck the weaker player needs to overcome the systematic drift in favor of the stronger player, and consequently how often he will still manage to draw or win. Nearly equivalent pieces can only be assigned different values because it requires a smaller error to blunder the draw away for the side with the weaker piece than it does for the side with the stronger piece. So when the players tend to make equally large errors on average (i.e. are equally strong), it becomes less likely for the player with the strong piece to lose than for the player with the weak piece. Without the players making any errors, the game would always stay a draw, and there would be no way to determine which piece was stronger.
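A toy simulation makes this random-walk picture tangible: give one side a tiny systematic per-move drift (the slightly stronger piece) on top of random per-move errors, and see how the final score responds. All numbers below (drift, noise, move count, win margin) are made up purely to illustrate the mechanism, not measured quantities:

```python
# Toy random-walk model of the argument above: each move adds a small random
# error, plus a tiny systematic drift for the side with the stronger piece.
# All parameters are invented for illustration only.

import random

def simulate(games=20_000, moves=60, drift=0.02, noise=0.5, win_margin=3.0):
    wins = draws = losses = 0
    for _ in range(games):
        advantage = sum(drift + random.gauss(0, noise) for _ in range(moves))
        if advantage > win_margin:
            wins += 1
        elif advantage < -win_margin:
            losses += 1
        else:
            draws += 1
    return wins / games, draws / games, losses / games

print(simulate())            # with a small per-move drift, that side scores over 50%
print(simulate(drift=0.0))   # with no systematic edge the outcome is symmetric
```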