Dev Builds » 20230531-0651


NCM plays 20,000 games between each Stockfish dev build and Stockfish 14. The results yield an approximate Elo difference and establish confidence in the strength of the dev build.
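The "Standard Elo" figures below follow from the usual logistic Elo model: the score fraction s = (W + D/2) / N maps to an Elo difference of -400·log10(1/s − 1). A minimal sketch of that conversion (the function name is mine; NCM's error-bar computation is not reproduced here):

```python
import math

def elo_from_wld(wins: int, losses: int, draws: int) -> float:
    """Elo difference implied by a W/L/D record under the logistic model."""
    games = wins + losses + draws
    score = (wins + 0.5 * draws) / games  # score fraction in (0, 1)
    return -400 * math.log10(1 / score - 1)

# Overall totals from the summary table: 8322 W, 1980 L, 9698 D in 20000 games
print(round(elo_from_wld(8322, 1980, 9698), 1))  # 114.1, matching the table
```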

Summary

| Host | Duration | Avg Base NPS | Games | W | L | D | Standard Elo | Ptnml(0-2) | Gamepair Elo |
|------|----------|--------------|-------|---|---|---|--------------|------------|--------------|
| ncm-dbt-01 | 09:56:55 | 1103743 | 3348 | 1395 | 323 | 1630 | +115.3 +/- 5.57 | 1 59 528 1039 47 | +247.92 +/- 14.84 |
| ncm-dbt-02 | 09:54:11 | 1234948 | 3320 | 1398 | 336 | 1586 | +115.18 +/- 5.19 | 0 42 539 1054 25 | +254.56 +/- 14.65 |
| ncm-dbt-03 | 09:57:47 | 1236056 | 3360 | 1378 | 341 | 1641 | +110.84 +/- 5.65 | 0 77 532 1028 43 | +236.33 +/- 14.78 |
| ncm-dbt-04 | 09:58:17 | 1235577 | 3328 | 1383 | 323 | 1622 | +114.65 +/- 5.52 | 1 57 528 1037 41 | +247.96 +/- 14.83 |
| ncm-dbt-05 | 09:53:27 | 1224604 | 3312 | 1389 | 304 | 1619 | +118.17 +/- 5.43 | 0 48 518 1047 43 | +257.15 +/- 14.97 |
| ncm-dbt-06 | 09:58:20 | 1229900 | 3332 | 1379 | 353 | 1600 | +110.57 +/- 5.46 | 1 59 552 1021 33 | +239.0 +/- 14.5 |
| Total | | | 20000 | 8322 | 1980 | 9698 | +114.1 +/- 2.24 | 3 342 3197 6226 232 | +247.02 +/- 6.02 |

Test Detail

| ID | Host | Base NPS | Games | W | L | D | Standard Elo | Ptnml(0-2) | Gamepair Elo |
|----|------|----------|-------|---|---|---|--------------|------------|--------------|
| 190154 | ncm-dbt-02 | 1230977 | 320 | 141 | 38 | 141 | +115.95 +/- 17.58 | 0 5 51 100 4 | +251.19 +/- 48.31 |
| 190153 | ncm-dbt-05 | 1219179 | 312 | 125 | 30 | 157 | +109.25 +/- 18.21 | 0 7 50 96 3 | +235.31 +/- 48.79 |
| 190152 | ncm-dbt-04 | 1225240 | 328 | 127 | 32 | 169 | +103.59 +/- 16.35 | 0 6 57 101 0 | +229.78 +/- 45.57 |
| 190151 | ncm-dbt-06 | 1240709 | 332 | 145 | 33 | 154 | +121.98 +/- 15.44 | 0 0 58 104 4 | +269.72 +/- 44.23 |
| 190150 | ncm-dbt-01 | 1102497 | 348 | 154 | 36 | 158 | +122.66 +/- 16.56 | 0 4 53 112 5 | +269.02 +/- 47.36 |
| 190149 | ncm-dbt-03 | 1245962 | 360 | 142 | 44 | 174 | +97.02 +/- 18.37 | 0 13 61 101 5 | +198.66 +/- 43.89 |
| 190148 | ncm-dbt-02 | 1227991 | 500 | 204 | 40 | 256 | +118.33 +/- 13.17 | 0 4 83 158 5 | +261.07 +/- 37.44 |
| 190147 | ncm-dbt-05 | 1233215 | 500 | 216 | 44 | 240 | +124.6 +/- 13.45 | 0 4 77 162 7 | +275.45 +/- 39.01 |
| 190146 | ncm-dbt-04 | 1246074 | 500 | 216 | 47 | 237 | +122.24 +/- 14.82 | 0 10 70 161 9 | +263.42 +/- 41.06 |
| 190145 | ncm-dbt-06 | 1230000 | 500 | 203 | 69 | 228 | +95.44 +/- 14.99 | 0 15 92 137 6 | +196.45 +/- 35.72 |
| 190144 | ncm-dbt-01 | 1108816 | 500 | 209 | 39 | 252 | +123.02 +/- 13.11 | 0 5 74 167 4 | +277.93 +/- 39.92 |
| 190143 | ncm-dbt-03 | 1238630 | 500 | 196 | 52 | 252 | +102.97 +/- 14.02 | 0 7 99 137 7 | +213.85 +/- 34.13 |
| 190136 | ncm-dbt-02 | 1254557 | 500 | 211 | 51 | 238 | +115.22 +/- 13.2 | 0 7 78 163 2 | +258.75 +/- 38.88 |
| 190135 | ncm-dbt-05 | 1215345 | 500 | 216 | 49 | 235 | +120.67 +/- 14.18 | 0 9 71 164 6 | +265.78 +/- 40.81 |
| 190134 | ncm-dbt-04 | 1237332 | 500 | 203 | 48 | 249 | +111.37 +/- 14.38 | 0 8 87 147 8 | +234.38 +/- 36.71 |
| 190133 | ncm-dbt-06 | 1252799 | 500 | 208 | 49 | 243 | +114.45 +/- 14.21 | 0 11 73 162 4 | +251.89 +/- 40.18 |
| 190132 | ncm-dbt-01 | 1095084 | 500 | 210 | 51 | 239 | +114.45 +/- 14.85 | 1 9 77 156 7 | +247.41 +/- 39.15 |
| 190131 | ncm-dbt-03 | 1255977 | 500 | 210 | 50 | 240 | +115.22 +/- 15.0 | 0 14 68 162 6 | +249.64 +/- 41.39 |
| 190124 | ncm-dbt-02 | 1231302 | 500 | 217 | 54 | 229 | +117.55 +/- 13.87 | 0 9 73 164 4 | +261.07 +/- 40.24 |
| 190123 | ncm-dbt-05 | 1239269 | 500 | 201 | 50 | 249 | +108.3 +/- 14.21 | 0 11 81 154 4 | +234.38 +/- 38.14 |
| 190122 | ncm-dbt-04 | 1217525 | 500 | 202 | 50 | 248 | +109.07 +/- 14.21 | 0 9 86 149 6 | +232.26 +/- 36.96 |
| 190121 | ncm-dbt-01 | 1099445 | 500 | 212 | 47 | 241 | +119.11 +/- 13.52 | 0 6 78 161 5 | +263.42 +/- 38.85 |
| 190120 | ncm-dbt-06 | 1236176 | 500 | 207 | 55 | 238 | +109.07 +/- 14.37 | 0 10 84 150 6 | +232.26 +/- 37.43 |
| 190119 | ncm-dbt-03 | 1214652 | 500 | 214 | 48 | 238 | +119.89 +/- 14.68 | 0 10 72 160 8 | +258.75 +/- 40.49 |
| 190112 | ncm-dbt-05 | 1230149 | 500 | 215 | 43 | 242 | +124.6 +/- 12.71 | 0 3 76 167 4 | +282.94 +/- 39.21 |
| 190111 | ncm-dbt-02 | 1213430 | 500 | 205 | 51 | 244 | +110.6 +/- 13.56 | 0 9 80 159 2 | +245.2 +/- 38.39 |
| 190110 | ncm-dbt-04 | 1234809 | 500 | 203 | 61 | 236 | +101.46 +/- 14.01 | 0 8 98 138 6 | +211.87 +/- 34.38 |
| 190109 | ncm-dbt-01 | 1098947 | 500 | 201 | 41 | 258 | +115.22 +/- 14.69 | 0 10 78 154 8 | +245.2 +/- 38.89 |
| 190108 | ncm-dbt-06 | 1233983 | 500 | 205 | 50 | 245 | +111.37 +/- 14.22 | 1 9 77 160 3 | +247.41 +/- 39.15 |
| 190107 | ncm-dbt-03 | 1218265 | 500 | 192 | 37 | 271 | +111.37 +/- 14.38 | 0 11 78 156 5 | +240.82 +/- 38.87 |
| 190100 | ncm-dbt-02 | 1239283 | 500 | 217 | 53 | 230 | +118.33 +/- 12.81 | 0 5 78 165 2 | +268.17 +/- 38.8 |
| 190099 | ncm-dbt-05 | 1220294 | 500 | 210 | 41 | 249 | +122.24 +/- 14.33 | 0 6 79 155 10 | +261.07 +/- 38.58 |
| 190098 | ncm-dbt-01 | 1106385 | 500 | 203 | 52 | 245 | +108.3 +/- 14.21 | 0 9 87 148 6 | +230.16 +/- 36.73 |
| 190097 | ncm-dbt-04 | 1247746 | 500 | 217 | 37 | 246 | +130.94 +/- 14.22 | 0 10 56 178 6 | +298.62 +/- 45.75 |
| 190096 | ncm-dbt-03 | 1244160 | 500 | 213 | 52 | 235 | +116.0 +/- 14.69 | 0 9 80 152 9 | +245.2 +/- 38.39 |
| 190095 | ncm-dbt-06 | 1212959 | 500 | 208 | 52 | 240 | +112.14 +/- 13.89 | 0 9 80 157 4 | +245.2 +/- 38.39 |
| 190088 | ncm-dbt-05 | 1214780 | 500 | 206 | 47 | 247 | +114.45 +/- 14.53 | 0 8 84 149 9 | +240.82 +/- 37.4 |
| 190087 | ncm-dbt-01 | 1115029 | 500 | 206 | 57 | 237 | +106.77 +/- 16.13 | 0 16 81 141 12 | +213.85 +/- 38.01 |
| 190086 | ncm-dbt-02 | 1247100 | 500 | 203 | 49 | 248 | +110.6 +/- 13.22 | 0 3 96 145 6 | +236.51 +/- 34.4 |
| 190085 | ncm-dbt-03 | 1234750 | 500 | 211 | 58 | 231 | +109.83 +/- 14.37 | 0 13 74 160 3 | +240.82 +/- 39.83 |
| 190084 | ncm-dbt-04 | 1240319 | 500 | 215 | 48 | 237 | +120.67 +/- 14.18 | 1 6 74 163 6 | +268.17 +/- 39.97 |
| 190083 | ncm-dbt-06 | 1202676 | 500 | 203 | 45 | 252 | +113.68 +/- 13.55 | 0 5 88 151 6 | +245.2 +/- 36.32 |

Commit

Commit ID: c1fff71650e2f8bf5a2d63bdc043161cdfe8e460
Author: Linmiao Xu
Date: 2023-05-31 06:51:22 UTC

Update NNUE architecture to SFNNv6 with larger L1 size of 1536

Created by training a new net from scratch with L1 size increased from 1024 to 1536. Thanks to Vizvezdenec for the idea of exploring larger net sizes after recent training data improvements.

A new net was first trained with lambda 1.0 and constant LR 8.75e-4. Then a strong net from a later epoch in the training run was chosen for retraining with start-lambda 1.0 and initial LR 4.375e-4 decaying with gamma 0.995. Retraining was performed a total of 3 times, for this 4-step process:

1. 400 epochs, lambda 1.0 on filtered T77+T79 v6 deduplicated data
2. 800 epochs, end-lambda 0.75 on T60T70wIsRightFarseerT60T74T75T76.binpack
3. 800 epochs, end-lambda 0.75 and early-fen-skipping 28 on the master dataset
4. 800 epochs, end-lambda 0.7 and early-fen-skipping 28 on the master dataset

In the training sequence that reached the new nn-8d69132723e2.nnue net, the epochs used for the 3x retraining runs were:

1. epoch 379 trained on T77T79-filter-v6-dd.min.binpack
2. epoch 679 trained on T60T70wIsRightFarseerT60T74T75T76.binpack
3. epoch 799 trained on the master dataset

For training from scratch:

    python3 easy_train.py \
      --experiment-name new-L1-1536-T77T79-filter-v6dd \
      --training-dataset /data/T77T79-filter-v6-dd.min.binpack \
      --max_epoch 400 \
      --lambda 1.0 \
      --start-from-engine-test-net False \
      --engine-test-branch linrock/Stockfish/L1-1536 \
      --nnue-pytorch-branch linrock/Stockfish/misc-fixes-L1-1536 \
      --tui False \
      --gpus "0," \
      --seed $RANDOM

Retraining commands were similar to each other. For the 3rd retraining run:

    python3 easy_train.py \
      --experiment-name L1-1536-T77T79-v6dd-Re1-LeelaFarseer-Re2-masterDataset-Re3-sameData \
      --training-dataset /data/leela96-dfrc99-v2-T60novdecT80juntonovjanfebT79aprmayT78jantosepT77dec-v6dd.binpack \
      --early-fen-skipping 28 \
      --max_epoch 800 \
      --start-lambda 1.0 \
      --end-lambda 0.7 \
      --lr 4.375e-4 \
      --gamma 0.995 \
      --start-from-engine-test-net False \
      --start-from-model /data/L1-1536-T77T79-v6dd-Re1-LeelaFarseer-Re2-masterDataset-nn-epoch799.nnue \
      --engine-test-branch linrock/Stockfish/L1-1536 \
      --nnue-pytorch-branch linrock/nnue-pytorch/misc-fixes-L1-1536 \
      --tui False \
      --gpus "0," \
      --seed $RANDOM

The T77+T79 data used is a subset of the master dataset available at:
https://robotmoon.com/nnue-training-data/

T60T70wIsRightFarseerT60T74T75T76.binpack is available at:
https://drive.google.com/drive/folders/1S9-ZiQa_3ApmjBtl2e8SyHxj4zG4V8gG

Local elo at 25k nodes per move vs. nn-e1fb1ade4432.nnue (L1 size 1024):
nn-epoch759.nnue : 26.9 +/- 1.6

Failed STC
https://tests.stockfishchess.org/tests/view/64742485d29264e4cfa75f97
LLR: -2.94 (-2.94,2.94) <0.00,2.00>
Total: 13728 W: 3588 L: 3829 D: 6311
Ptnml(0-2): 71, 1661, 3610, 1482, 40

Failing LTC
https://tests.stockfishchess.org/tests/view/64752d7c4a36543c4c9f3618
LLR: -1.91 (-2.94,2.94) <0.50,2.50>
Total: 35424 W: 9522 L: 9603 D: 16299
Ptnml(0-2): 24, 3579, 10585, 3502, 22

Passed VLTC 180+1.8
https://tests.stockfishchess.org/tests/view/64752df04a36543c4c9f3638
LLR: 2.95 (-2.94,2.94) <0.50,2.50>
Total: 47616 W: 13174 L: 12863 D: 21579
Ptnml(0-2): 13, 4261, 14952, 4566, 16

Passed VLTC SMP 60+0.6 th 8
https://tests.stockfishchess.org/tests/view/647446ced29264e4cfa761e5
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 19942 W: 5694 L: 5451 D: 8797
Ptnml(0-2): 6, 1504, 6707, 1749, 5

closes https://github.com/official-stockfish/Stockfish/pull/4593

bench 2222567
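The fishtest results quoted in the commit are SPRT runs: a log-likelihood ratio (LLR) comparing H1 (elo equals the upper bound) against H0 (elo equals the lower bound) is accumulated until it crosses roughly ±2.94. Below is a simplified trinomial GSPRT approximation of that statistic; fishtest itself uses the pentanomial distribution and normalized Elo, so the LLR values it reports will not match this sketch exactly, but the sign and pass/fail direction agree:

```python
import math

def logistic_score(elo: float) -> float:
    """Expected score for a given Elo advantage under the logistic model."""
    return 1 / (1 + 10 ** (-elo / 400))

def sprt_llr(wins, losses, draws, elo0, elo1):
    """Approximate GSPRT log-likelihood ratio of H1 (elo1) vs H0 (elo0),
    based on the per-game trinomial score distribution."""
    n = wins + losses + draws
    mean = (wins + 0.5 * draws) / n                # observed score fraction
    var = (wins + 0.25 * draws) / n - mean ** 2    # per-game score variance
    s0, s1 = logistic_score(elo0), logistic_score(elo1)
    return n * (s1 - s0) * (mean - (s0 + s1) / 2) / var

# "Passed VLTC" run from the commit message, Elo bounds <0.50, 2.50>
llr = sprt_llr(13174, 12863, 21579, 0.50, 2.50)
print(llr > 0)  # True: the data favor H1, consistent with the test passing
```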
Copyright 2011–2024 Next Chess Move LLC