


Ratings & Comments

The nightrider in Grand Apothecary Chess Alert, Classic and Modern [Subject Thread]
Aurelian Florea wrote on Tue, Jun 21, 2022 04:50 AM UTC:

Thanks for the comments guys! Hopefully there are more to come.

@HG,

I expect the awkwardness of the nightrider to go away with sufficient play. In any case, computers don't "feel" it, though if the game is badly designed, computer play will yield very biased results. At a practical level you need not have all the heavier pieces two ranks behind a pawn, and GAC A, C & M prove that; just most of them. I guess that is a reason for experimenting: this way you don't need to go all in with the theoretical approach.


Aurelian Florea wrote on Tue, Jun 21, 2022 05:09 AM UTC:

@HG, By the way: the Interactive Diagram rates the Vulture lower than 12-direction leapers such as the Champion. Isn't that weird?


H. G. Muller wrote on Tue, Jun 21, 2022 08:53 AM UTC in reply to Aurelian Florea from 05:09 AM:

This is probably because Vultures can easily be blocked. A Falcon, which also has 16 targets, empirically tested as about equal to a Rook. The Vulture has 4 paths rather than 3, but each path can be blocked in 3 places rather than 2. The 4 non-capturing leaps add comparatively little.

The value estimate of the Diagram takes this into account by measuring the average number of moves in randomly generated positions where 25% of the squares are occupied.
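
The estimate H. G. describes can be sketched in code. Below is a hypothetical Python illustration (not the Interactive Diagram's actual implementation): each move is given as a path of squares, so a piece whose paths have intermediate squares, like the Vulture, loses mobility to blockers, while a pure leaper does not.

```python
import random

# Hedged sketch: average a piece's move count over random positions where
# ~25% of the squares are occupied.  A path lists the squares the piece
# visits; all but the last must be empty for the move to be available.
def average_mobility(paths, size=8, density=0.25, samples=20000):
    total = 0
    for _ in range(samples):
        px, py = random.randrange(size), random.randrange(size)
        occ = {(x, y) for x in range(size) for y in range(size)
               if (x, y) != (px, py) and random.random() < density}
        for path in paths:
            squares = [(px + dx, py + dy) for dx, dy in path]
            if not all(0 <= x < size and 0 <= y < size for x, y in squares):
                continue  # move runs off the board
            if any(sq in occ for sq in squares[:-1]):
                continue  # an intermediate square blocks the path
            total += 1    # destination may be empty (move) or occupied (capture)
    return total / samples

# A leaper's paths have no intermediate squares, so it is never blocked:
knight = [[(1, 2)], [(2, 1)], [(2, -1)], [(1, -2)],
          [(-1, -2)], [(-2, -1)], [(-2, 1)], [(-1, 2)]]
print(round(average_mobility(knight), 2))  # close to the knight's true average of 5.25
```

A multi-path piece would list longer paths here, and its average would drop as density rises, which is exactly the effect penalizing the Vulture.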


Jean-Louis Cazaux wrote on Wed, Jun 22, 2022 06:01 AM UTC in reply to Aurelian Florea from Mon Jun 20 07:45 PM:

Hi Aurelian. I am among those who are not at ease with the NN. The same goes for the DD or AA and all their compounds. I once tested an FDD, only to finally give it up. But I agree that they can be used if the game is designed for them, like yours. It also requires a certain anticipation from players who are not used to them, and that is precisely what I am missing on my side.


Aurelian Florea wrote on Wed, Jun 22, 2022 10:18 AM UTC in reply to Jean-Louis Cazaux from 06:01 AM:

Thanks for the reply Jean-Louis!


Aurelian Florea wrote on Wed, Jun 22, 2022 10:44 AM UTC in reply to Jean-Louis Cazaux from 06:01 AM:

@Jean-Louis

I have not designed these games around the nightriders, but they seem to work OK, probably because of the high board density.


Balancing Random Armies [Subject Thread]
Daniel Zacharias wrote on Thu, Jun 30, 2022 12:01 AM UTC:

I'm wondering if there's an easy way to balance a variant where each side would have a random selection of pieces (excluding pawns and Kings). The best I can think of is making ALL pieces contagious, like the werewolf in Werewolf Chess. Maybe pawns shouldn't be contagious. Does this seem like it might work?


The power of the imitator [Subject Thread]
Aurelian Florea wrote on Sat, Jul 16, 2022 04:45 PM UTC:

While playing my 12x12 variants I have noticed that, besides the piece types owned by the opposing side, the imitator's power is also greatly affected by the number of different types the opponent has. What is your take on this, guys?


H. G. Muller wrote on Sat, Jul 16, 2022 04:51 PM UTC in reply to Aurelian Florea from 04:45 PM:

I would expect it to be weaker against more different pieces of roughly the same value, because that gives the opponent more opportunities to make a move that renders the Imitator harmless in its current location.


Aurelian Florea wrote on Sun, Jul 17, 2022 07:01 AM UTC in reply to H. G. Muller from Sat Jul 16 04:51 PM:

Comparing the joker in the 10x10 games with the joker in the 12x12 games, I've noticed that having more "options" is good. Imagine KJBB vs. KJBN: the KJBB side has an advantage because its joker sees more potential moves.


Avatar Chess [Subject Thread]
Gerd Degens wrote on Mon, Aug 22, 2022 04:38 PM UTC:

My variant Avatar Chess has not been discussed here yet. I would be interested to know what the experts of CVP think about the variant or the concept. Does the variant have potential or is it just a nice gimmick?


Chess programs move making [Subject Thread]
Aurelian Florea wrote on Thu, Sep 8, 2022 12:31 PM UTC:

@HG or @Greg

I'm writing my own chess variant program, AlphaZero-style (I'm finally making progress). When generating moves I can't help wondering about the time lost regenerating the same moves after a move was made.

For example, in orthodox chess after 1. e4 a few more queen moves appear, but you shouldn't check for rook moves again. As of now my algorithm works this way, and it is probably slow.


H. G. Muller wrote on Thu, Sep 8, 2022 04:04 PM UTC in reply to Aurelian Florea from 12:31 PM:

Virtually all chess programs generate moves completely from scratch in every new position they reach. So after 1. e4 (and the opponent's reply) they would generate all Knight moves again (and now Ng1-e2 would be amongst them), and would try to generate new Rook moves (again without getting any). What you mention would only be relevant if you wanted to generate moves incrementally: deriving the new list of moves from one you already had by adding some new moves and deleting some old ones, but keeping most of them. This can be done, but for games of the complexity of orthodox chess it is not very much faster than generating the moves from scratch. That is, it could be much faster, but it would be very complicated to achieve that. So no one does it. (I wrote a kind of blog about it on talkchess.com, though, under the title "The Mailbox Trials".)
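
The "from scratch" approach is simple enough to sketch: every position rebuilds its full move list, with no reuse from the parent node. The board representation and piece encoding below are made up for illustration.

```python
def generate_moves(board, side, size=8):
    """Rebuild the full pseudo-legal move list for `side` from scratch.
    `board` maps square -> (owner, step list, is_slider); hypothetical encoding."""
    moves = []
    for (x, y), (owner, steps, slides) in board.items():
        if owner != side:
            continue
        for dx, dy in steps:
            nx, ny = x + dx, y + dy
            while 0 <= nx < size and 0 <= ny < size:
                target = board.get((nx, ny))
                if target is None:
                    moves.append(((x, y), (nx, ny)))      # quiet move
                else:
                    if target[0] != side:
                        moves.append(((x, y), (nx, ny)))  # capture
                    break                                 # path blocked either way
                if not slides:
                    break                                 # leapers take one step only
                nx, ny = nx + dx, ny + dy
    return moves

rook_steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
print(len(generate_moves({(0, 0): ('w', rook_steps, True)}, 'w')))  # 14
```

An engine calls something like this at every node; the point above is that it is usually cheap enough that patching the parent's list is not worth the complexity.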

For larger variants (larger board, more pieces) the incremental method could become competitive, however. I used it in my Tenjiku Shogi engine ('Inferno'), and could reach a speed of 500k positions/sec, which is similar to what most engines for orthodox chess can do, despite the 4 times larger board and 5 times as many pieces. One problem is that it would be quite hard to make it general enough to cover many variants. The incremental method must figure out which slider moves are blocked and which are discovered by the move that is searched. If all sliders move along orthogonals and diagonals (as they tend to do in Shogi variants, and certainly do in orthodox chess) it is still manageable, but if the variant also has bent sliders (like the Griffon) or hook movers (like in Maka Dai Dai Shogi) it becomes far more complex to figure that out. The area move of the Fire Demon and Vice General in Tenjiku Shogi is almost impossible to do incrementally, so Inferno generates those moves from scratch in every position.

AlphaZero relies on an enormously large neural network for evaluating positions and suggesting moves to search. This takes so much time to compute that the time to generate moves is completely negligible in comparison, no matter how inefficiently you do it.


Aurelian Florea wrote on Thu, Sep 8, 2022 04:17 PM UTC in reply to H. G. Muller from 04:04 PM:

Thanks a lot, HG!


Greg Strong wrote on Thu, Sep 8, 2022 05:06 PM UTC:

A mistake most beginning chess programmers make is trying to optimize these kinds of things. It is not a good use of your effort: even if you double the computational speed of your program, that is only good for about 70-100 Elo. The important thing is being bug-free, and optimizations make things more complicated and introduce more bugs.
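
Greg's rule of thumb can be put into numbers. Assuming, as is commonly done, that each doubling of search speed is worth a roughly fixed Elo increment (the 85 below is just the midpoint of his 70-100 range):

```python
import math

# Back-of-the-envelope Elo gain from a speedup, under the fixed-increment-
# per-doubling assumption; the constant is illustrative, not measured.
def elo_gain(speed_ratio, elo_per_doubling=85):
    return elo_per_doubling * math.log2(speed_ratio)

print(round(elo_gain(2)))    # 85: one full doubling
print(round(elo_gain(1.1)))  # 12: a 10% speedup buys almost nothing
```

The logarithm is the point: modest micro-optimizations buy single-digit Elo, while a single search bug can cost far more.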


Aurelian Florea wrote on Thu, Sep 8, 2022 05:19 PM UTC in reply to Greg Strong from 05:06 PM:

Yes, Greg, I'm sure you are correct. Moreover, in this context it matters less: this is a training program. It can take more time to learn the game, but as HG has said, that is of little consequence.


H. G. Muller wrote on Thu, Sep 8, 2022 07:09 PM UTC in reply to Aurelian Florea from 05:19 PM:

Also note that Leela Chess Zero, the imitation of AlphaZero that can run on a PC, took about 6 months of computer time, donated by a large collaboration of people with powerful computers, to learn how to play chess. For AlphaZero it took only 4 hours, but only by virtue of the fact that they used 5000 of their servers, equipped with special hardware that calculated hundreds of times faster than an ordinary PC. On a single PC you would be lucky to achieve that in 6 years of non-stop computing.


Aurelian Florea wrote on Thu, Sep 8, 2022 07:17 PM UTC in reply to H. G. Muller from 07:09 PM:

I'm aware of this fact, but I'm not sure about six years. Isn't it closer to a few months?


Greg Strong wrote on Thu, Sep 8, 2022 09:23 PM UTC:

Pretty sure H. G. is correct. There is a reason I spend no effort on the neural network approach.


H. G. Muller wrote on Thu, Sep 8, 2022 09:53 PM UTC in reply to Aurelian Florea from 07:17 PM:

It would be a few months if you had a few dozen computers with very powerful GPU boards as graphics cards. For orthodox chess.

But I should point out there is another approach, called NNUE, which uses a far simpler neural net just for evaluation in a conventional alpha-beta search. This is far easier to train; for orthodox chess a few hundred thousand positions from games with known outcome would be enough.


Aurelian Florea wrote on Thu, Sep 8, 2022 09:58 PM UTC:

Well, hundreds of times slower is not that bad: 1000 times slower would be 4000 hours, which is 167 days. That is doable. I have one computer per game (2 in total, for the remakes of Apothecary Chess). But my games are 10x10 and have bent riders and an imitator, which could itself make things much worse. There could be artifices in the beginning, such as inserting fake endgame conditions to train things like "Do not exchange your queen for a knight!".
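
Spelling out that arithmetic (taking AlphaZero's quoted 4 hours and a hypothetical 1000x single-PC slowdown at face value):

```python
# Aurelian's estimate: AlphaZero's 4 hours scaled by an assumed 1000x slowdown.
hours = 4 * 1000
days = hours / 24
print(round(days))  # 167
```

Note that H. G.'s figures above suggest the true slowdown factor for a single PC is far larger than 1000, which is where the "6 years" comes from.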

Anyway, I'm doing this because it is the only thing that motivates me so far. With the new treatment (3 years now, actually), I am able to pay attention to stuff, and I really enjoy doing this. The programming is not that hard, actually, although there are also NN things I do not understand.

@HG and @Greg: Thanks for your advice!


Aurelian Florea wrote on Fri, Sep 9, 2022 06:42 AM UTC in reply to H. G. Muller from Thu Sep 8 09:53 PM:

@HG,

From what I understand, the alpha-beta search replaces the feature-extraction part of the NN. That is the part I did not understand (especially what filters the CNN uses). Anyway, even if it takes a huge amount of time, the algorithm will still improve, so I could present a user with monthly updates. But I'm afraid it will take time until the program outputs a challenging AI.

Once again thanks a lot for your help!


H. G. Muller wrote on Fri, Sep 9, 2022 08:07 AM UTC in reply to Aurelian Florea from 06:42 AM:

You will already be 5000 times slower by the fact alone that Google used 5000 servers (if I recall correctly), and you only have a single PC. And each Google server contained 4 TPU boards, each capable of performing 256 (or was it 1024?) multiplications per clock cycle, while a single core in a PC can do only 8. So I think you are way too optimistic.

I am not sure what your latest comment refers to. If it is the difference between AlphaZero and NNUE: the main point there is that the NNUE net uses 5 layers of 32 cells, plus an initial layer that can be calculated incrementally (so that its size matters little), while the AlphaZero net typically has 32 layers of 8 x 8 x 256 cells.
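
The "initial layer that can be calculated incrementally" is the key NNUE trick. Here is a toy Python sketch of the idea, with made-up sizes and random weights: the big first layer is kept as an accumulator that is patched when a feature (a piece on a square) appears or disappears, and only refreshed from scratch occasionally.

```python
import random

# Toy sketch of NNUE's incremental first layer.  All sizes and weights are
# invented for illustration; real nets are larger and use trained weights.
N_FEATURES, ACC = 768, 32        # e.g. 12 piece types x 64 squares -> 32 cells
random.seed(1)
W1 = [[random.uniform(-0.1, 0.1) for _ in range(ACC)] for _ in range(N_FEATURES)]

def refresh(features):
    """Full recomputation of the accumulator (the expensive, rare path)."""
    acc = [0.0] * ACC
    for f in features:
        for i in range(ACC):
            acc[i] += W1[f][i]
    return acc

def update(acc, removed, added):
    """Incremental update after a move: O(ACC) work instead of O(features x ACC)."""
    for i in range(ACC):
        acc[i] += W1[added][i] - W1[removed][i]
    return acc

# A move turns one feature off and another on; the patched accumulator
# matches a full refresh of the new feature set.
acc = update(refresh({5, 70, 300}), removed=70, added=71)
full = refresh({5, 71, 300})
print(all(abs(a - b) < 1e-9 for a, b in zip(acc, full)))  # True
```

In a real engine the refresh happens only on events such as king moves, so almost every node pays only the cheap update before the small 32-cell layers are evaluated.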

I don't want to discourage you, but the task you want to tackle is gigantic, and not very suitable as an introductory exercise in chess programming. A faster approach would be to start with something simple (at the level of Fairy-Max or King Slayer: simple alpha-beta searchers with a hand-crafted evaluation) to get some experience with programming search, then replace its evaluation by NNUE to get some experience programming neural nets. The experience you gain in the simpler preliminary tasks will speed up your progress on the more complex final goal by more than the time the simple tasks take. Also note that, compared to people who do this for orthodox chess, you have the disadvantage of not having high-quality games available to train the networks.


Aurelian Florea wrote on Fri, Sep 9, 2022 12:33 PM UTC in reply to H. G. Muller from 08:07 AM:

@HG,

Yes, in my previous comment I was referring to the difference between the full neural-net and the NNUE approaches. AlphaZero's neural net has 2 parts: a CNN feature-extraction part and a fully connected classification part. In NNUE, from what I understand, you keep only the second part. I said that I did not fully understand how the CNN part works; I'm not sure what filters it uses. But that is not a question for here anyway.


H. G. Muller wrote on Fri, Sep 9, 2022 01:58 PM UTC in reply to Aurelian Florea from 12:33 PM:

Indeed such questions are more suitable for talkchess.com.

But the fact that the AlphaZero NN also has a 'policy head' next to the evaluation is not the largest difference. IIRC this is only a matter of the final layer, which calculates a preference for each possible move in the same way as the score of the position is calculated. (I.e. each move output is fully connected to the previous layer.)

The main difference is the size. The AlphaZero net is so large that even using a Google TPU or a fast GPU to calculate it slows down the engine by a factor 100-1000, in terms of positions/sec searched. The NNUE only slows the engine by a modest factor, even on a CPU, because it is so small.


25 comments displayed

