Dev Builds » 20210724-1604

NCM plays 20,000 games between each Stockfish dev build and Stockfish 15. This yields an approximate Elo difference and a measure of confidence in the dev build's strength.
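The "Standard Elo" figures below follow from the overall score fraction via the usual logistic Elo model. As a sketch (this is the standard formula, not NCM's published code), using the aggregate W/L/D from the summary table:

```python
import math

def elo_from_wld(wins, losses, draws):
    """Logistic Elo estimate from a win/loss/draw record."""
    games = wins + losses + draws
    score = (wins + 0.5 * draws) / games  # score fraction in (0, 1)
    return 400 * math.log10(score / (1 - score))

# Aggregate result from the summary table: 2543 W, 7688 L, 9769 D over 20000 games
print(round(elo_from_wld(2543, 7688, 9769), 2))  # -91.43, matching "Standard Elo"
```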

Summary

Host | Duration | Avg Base NPS | Games | W / L / D | Standard Elo | Ptnml(0-2) | Gamepair Elo
ncm-dbt-01 | 06:54:40 | 584648 | 4000 | 521 / 1542 / 1937 | -90.69 ± 5.09 | 32, 1063, 801, 102, 2 | -188.77 ± 12.02
ncm-dbt-02 | 06:57:27 | 586585 | 4000 | 524 / 1541 / 1935 | -90.32 ± 5.08 | 28, 1071, 793, 106, 2 | -188.77 ± 12.08
ncm-dbt-03 | 06:56:08 | 587060 | 4008 | 485 / 1555 / 1968 | -95.06 ± 5.1 | 35, 1104, 761, 104, 0 | -198.56 ± 12.34
ncm-dbt-04 | 06:56:50 | 569972 | 4000 | 493 / 1520 / 1987 | -91.25 ± 5.04 | 29, 1071, 799, 100, 1 | -190.62 ± 12.03
ncm-dbt-05 | 06:56:53 | 585136 | 3992 | 520 / 1530 / 1942 | -89.85 ± 5.15 | 30, 1068, 781, 116, 1 | -186.93 ± 12.19
Total | — | — | 20000 | 2543 / 7688 / 9769 | -91.43 ± 2.28 | 154, 5377, 3935, 528, 6 | -190.71 ± 5.42
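The "Gamepair Elo" column can be reproduced from the pentanomial totals, assuming (this scoring rule is an inference from the numbers, not NCM's documented method) that each game pair is treated as a single trial: a pair counts as a win above 1 point, a draw at exactly 1 point, and a loss below:

```python
import math

def gamepair_elo(ptnml):
    """Elo from pentanomial counts [0, 0.5, 1, 1.5, 2 points per pair],
    treating each pair as one win/draw/loss trial."""
    pairs = sum(ptnml)
    wins = ptnml[3] + ptnml[4]   # pair scored 1.5 or 2
    draws = ptnml[2]             # pair scored exactly 1
    score = (wins + 0.5 * draws) / pairs
    return 400 * math.log10(score / (1 - score))

# Aggregate Ptnml(0-2) from the totals row: 154, 5377, 3935, 528, 6
print(round(gamepair_elo([154, 5377, 3935, 528, 6]), 2))  # -190.71
```

Pairing each game with a reversed-color replay and scoring at the pair level cancels opening-book bias, which is why the gamepair figure differs from the per-game Elo.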

Test Detail

ID | Host | Base NPS | Games | W / L / D | Standard Elo | Ptnml(0-2) | Gamepair Elo
400597 | ncm-dbt-03 | 585548 | 8 | 0 / 4 / 4 | -190.56 ± 13.88 | 0, 4, 0, 0, 0 | -1199.83 ± 250.99
400596 | ncm-dbt-05 | 584117 | 492 | 62 / 191 / 239 | -93.27 ± 14.68 | 6, 129, 99, 12, 0 | -190.85 ± 34.32
400595 | ncm-dbt-02 | 587197 | 500 | 82 / 195 / 223 | -79.9 ± 14.78 | 3, 125, 105, 16, 1 | -165.8 ± 33.35
400594 | ncm-dbt-01 | 586943 | 500 | 74 / 196 / 230 | -86.52 ± 15.37 | 6, 128, 99, 16, 1 | -176.33 ± 34.4
400593 | ncm-dbt-03 | 587622 | 500 | 55 / 190 / 255 | -96.19 ± 12.99 | 2, 137, 105, 6, 0 | -206.01 ± 32.92
400592 | ncm-dbt-04 | 568315 | 500 | 63 / 188 / 249 | -88.74 ± 13.36 | 3, 127, 112, 8, 0 | -185.33 ± 31.82
400591 | ncm-dbt-05 | 585548 | 500 | 58 / 196 / 246 | -98.44 ± 14.14 | 4, 141, 94, 11, 0 | -207.95 ± 35.29
400590 | ncm-dbt-02 | 586054 | 500 | 60 / 196 / 244 | -96.94 ± 14.57 | 1, 150, 84, 14, 1 | -211.87 ± 37.38
400589 | ncm-dbt-01 | 588004 | 500 | 57 / 190 / 253 | -94.69 ± 14.09 | 5, 133, 102, 10, 0 | -196.45 ± 33.71
400588 | ncm-dbt-03 | 587792 | 500 | 61 / 195 / 244 | -95.44 ± 14.55 | 4, 140, 92, 14, 0 | -200.24 ± 35.72
400587 | ncm-dbt-04 | 571873 | 500 | 70 / 179 / 251 | -76.97 ± 14.83 | 5, 117, 110, 18, 0 | -153.86 ± 32.54
400586 | ncm-dbt-05 | 582987 | 500 | 64 / 192 / 244 | -90.96 ± 15.05 | 2, 144, 84, 20, 0 | -192.71 ± 37.22
400585 | ncm-dbt-02 | 588686 | 500 | 69 / 205 / 226 | -96.94 ± 14.27 | 8, 128, 106, 8, 0 | -196.45 ± 32.87
400584 | ncm-dbt-03 | 586097 | 500 | 63 / 205 / 232 | -101.46 ± 15.65 | 8, 143, 82, 17, 0 | -207.95 ± 37.76
400583 | ncm-dbt-01 | 584453 | 500 | 67 / 197 / 236 | -92.46 ± 14.2 | 1, 142, 94, 12, 1 | -200.24 ± 35.32
400582 | ncm-dbt-04 | 572356 | 500 | 57 / 182 / 261 | -88.74 ± 13.68 | 2, 132, 105, 11, 0 | -187.16 ± 33.21
400581 | ncm-dbt-05 | 581361 | 500 | 73 / 190 / 237 | -82.83 ± 15.0 | 3, 131, 96, 20, 0 | -171.02 ± 34.92
400580 | ncm-dbt-03 | 590353 | 500 | 54 / 184 / 262 | -92.45 ± 14.35 | 3, 138, 95, 14, 0 | -194.57 ± 35.13
400579 | ncm-dbt-02 | 586858 | 500 | 66 / 191 / 243 | -88.74 ± 14.57 | 5, 129, 102, 14, 0 | -181.7 ± 33.83
400578 | ncm-dbt-01 | 584034 | 500 | 60 / 183 / 257 | -87.26 ± 13.64 | 3, 127, 110, 10, 0 | -181.7 ± 32.28
400577 | ncm-dbt-04 | 569071 | 500 | 64 / 196 / 240 | -93.95 ± 15.11 | 6, 135, 95, 13, 1 | -194.57 ± 35.13
400576 | ncm-dbt-03 | 586604 | 500 | 73 / 200 / 227 | -90.22 ± 13.55 | 3, 130, 108, 9, 0 | -189.0 ± 32.58
400575 | ncm-dbt-05 | 582444 | 500 | 64 / 196 / 240 | -93.95 ± 14.96 | 3, 144, 85, 18, 0 | -198.34 ± 37.08
400574 | ncm-dbt-01 | 582110 | 500 | 63 / 184 / 253 | -85.78 ± 14.36 | 2, 133, 99, 16, 0 | -179.9 ± 34.4
400573 | ncm-dbt-02 | 585674 | 500 | 60 / 186 / 254 | -89.48 ± 14.88 | 5, 132, 97, 16, 0 | -183.51 ± 34.76
400572 | ncm-dbt-04 | 568434 | 500 | 70 / 196 / 234 | -89.48 ± 14.0 | 2, 135, 100, 13, 0 | -189.0 ± 34.18
400571 | ncm-dbt-03 | 588472 | 500 | 61 / 184 / 255 | -87.26 ± 14.4 | 2, 135, 97, 16, 0 | -183.51 ± 34.76
400570 | ncm-dbt-05 | 587749 | 500 | 72 / 189 / 239 | -82.83 ± 14.72 | 4, 125, 106, 14, 1 | -171.02 ± 33.15
400569 | ncm-dbt-01 | 584832 | 500 | 70 / 192 / 238 | -86.52 ± 14.23 | 5, 124, 109, 12, 0 | -176.33 ± 32.55
400568 | ncm-dbt-02 | 584579 | 500 | 66 / 191 / 243 | -88.74 ± 13.83 | 2, 133, 103, 12, 0 | -187.16 ± 33.6
400567 | ncm-dbt-04 | 570068 | 500 | 62 / 191 / 247 | -91.71 ± 14.78 | 5, 134, 96, 15, 0 | -189.0 ± 34.95
400566 | ncm-dbt-03 | 584748 | 500 | 64 / 195 / 241 | -93.2 ± 14.66 | 6, 132, 99, 13, 0 | -190.85 ± 34.36
400565 | ncm-dbt-05 | 582694 | 500 | 63 / 186 / 251 | -87.26 ± 13.49 | 2, 129, 109, 10, 0 | -183.51 ± 32.46
400564 | ncm-dbt-01 | 583447 | 500 | 68 / 197 / 235 | -91.71 ± 14.92 | 6, 132, 97, 15, 0 | -187.16 ± 34.76
400563 | ncm-dbt-02 | 587494 | 500 | 64 / 191 / 245 | -90.22 ± 13.86 | 3, 132, 104, 11, 0 | -189.0 ± 33.39
400562 | ncm-dbt-04 | 568832 | 500 | 50 / 190 / 260 | -99.95 ± 13.84 | 4, 141, 96, 9, 0 | -211.87 ± 34.83
400561 | ncm-dbt-01 | 583363 | 500 | 62 / 203 / 235 | -100.7 ± 14.16 | 4, 144, 91, 11, 0 | -213.85 ± 35.9
400560 | ncm-dbt-03 | 586308 | 500 | 54 / 198 / 248 | -102.97 ± 15.23 | 7, 145, 83, 15, 0 | -213.85 ± 37.6
400559 | ncm-dbt-05 | 594193 | 500 | 64 / 190 / 246 | -89.48 ± 14.3 | 6, 125, 108, 11, 0 | -181.7 ± 32.68
400558 | ncm-dbt-02 | 586139 | 500 | 57 / 186 / 257 | -91.71 ± 14.19 | 1, 142, 92, 15, 0 | -196.45 ± 35.72
400557 | ncm-dbt-04 | 570829 | 500 | 57 / 198 / 245 | -100.7 ± 14.16 | 2, 150, 85, 13, 0 | -217.85 ± 37.2

Commit

Commit ID b939c805139e4b37f04fbf177f580c35ebe9f130
Author MichaelB7
Date 2021-07-24 16:04:59 UTC
Update the default net to nn-76a8a7ffb820.nnue.

Combined work by Sergio Vieri, Michael Byrne, and Jonathan D (aka SFisGod), built on top of previous developments by restarts from good nets.

Sergio generated the net https://tests.stockfishchess.org/api/nn/nn-d8609abe8caf.nnue: the initial net nn-d8609abe8caf.nnue was trained by generating around 16B of training data from the last master net nn-9e3c6298299a.nnue, then trained, continuing from the master net, with lambda=0.2 and a sampling ratio of 1. Starting with LR=2e-3, LR was dropped by a factor of 0.5 until it reached LR=5e-4. in_scaling was set to 361. No other significant changes were made to the pytorch trainer.

Training data generation command (generates in chunks of 200k positions):

generate_training_data min_depth 9 max_depth 11 count 200000 random_move_count 10 random_move_max_ply 80 random_multi_pv 12 random_multi_pv_diff 100 random_multi_pv_depth 8 write_min_ply 10 eval_limit 1500 book noob_3moves.epd output_file_name gendata/$(date +"%Y%m%d-%H%M")_${HOSTNAME}.binpack

PyTorch trainer command (note that this only trains for 20 epochs; train repeatedly until convergence):

python train.py --features "HalfKAv2^" --max_epochs 20 --smart-fen-skipping --random-fen-skipping 500 --batch-size 8192 --default_root_dir $dir --seed $RANDOM --threads 4 --num-workers 32 --gpus $gpuids --track_grad_norm 2 --gradient_clip_val 0.05 --lambda 0.2 --log_every_n_steps 50 $resumeopt $data $val

See https://github.com/sergiovieri/Stockfish/tree/tools_mod/rl for the scripts used to generate data.
Based on that, Michael generated nn-76a8a7ffb820.nnue in the following way. The submitted net was trained with the pytorch trainer (https://github.com/glinscott/nnue-pytorch):

python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 30 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --auto_lr_find True --lambda=1.0 --max_epochs=240 --seed %random%%random% --default_root_dir exp/run_109 --resume-from-model ./pt/nn-d8609abe8caf.pt

This run was thus started from Sergio Vieri's net nn-d8609abe8caf.nnue. all.binpack consisted of four parts of Wrong_NNUE_2.binpack (https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing) plus two parts of Training_Data.binpack (https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing). Each set was concatenated together, making one large Wrong_NNUE_2 binpack and one large Training_Data binpack of approximately equal size. They were then interleaved together; the idea was to give Wrong_NNUE_2.binpack closer to equal weighting with the Training_Data binpack.

model.py modifications:

loss = torch.pow(torch.abs(p - q), 2.6).mean()
LR = 8.0e-5, calculated as 1.5e-3*(.992^360); the idea was to take a highly trained net and use all.binpack as a finishing micro-refinement touch for the last 2 Elo or so. This net was discovered on the 59th epoch.
optimizer = ranger.Ranger(train_params, betas=(.90, 0.999), eps=1.0e-7, gc_loc=False, use_gc=False)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.992)

For this micro-optimization, the checkpoint period was set to "5" in train.py, so that every 5th checkpoint file is created. The final touches were to adjust the NNUE scale, as was done by Jonathan in tests running at the same time.
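The quoted LR can be checked directly. As the commit states, it comes from decaying 1.5e-3 by gamma=0.992 over 360 scheduler steps; the exact product is about 8.3e-5, which the commit rounds to 8.0e-5:

```python
# Reproduce the commit's LR arithmetic: 1.5e-3 * (0.992 ** 360).
# The exact value is ~8.3e-5; the commit quotes this as LR = 8.0e-5.
lr = 1.5e-3 * (0.992 ** 360)
print(lr)  # ~8.3e-5
```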
Passed LTC: https://tests.stockfishchess.org/tests/view/60fa45aed8a6b65b2f3a77a4
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 53040 W: 1732 L: 1575 D: 49733
Ptnml(0-2): 14, 1432, 23474, 1583, 17

Passed STC: https://tests.stockfishchess.org/tests/view/60f9fee2d8a6b65b2f3a7775
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 37928 W: 3178 L: 3001 D: 31749
Ptnml(0-2): 100, 2446, 13695, 2623, 100

Closes https://github.com/official-stockfish/Stockfish/pull/3626

Bench: 5169957
Copyright 2011–2025 Next Chess Move LLC