Training set size?
What was the training set size? How would I go about training my own model?
The paper states:
> We use the 3.3 hour dataset (190k frames) of high-skill human gameplay, captured on the ‘dust_2’ map, which contains observations and actions (mouse and keyboard) captured at 16Hz. We use 2.6 hours (150k frames) for training and 0.7 hours (40k frames) for evaluation.
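As a quick sanity check on those numbers, the frame counts follow directly from the durations at the stated 16 Hz capture rate (this is just arithmetic, not anything from the repo):

```python
# Frame counts implied by the quoted durations at a 16 Hz capture rate.
FPS = 16  # capture rate stated in the paper

def frames(hours: float) -> int:
    """Number of frames in `hours` of gameplay captured at FPS."""
    return round(hours * 3600 * FPS)

print(frames(3.3))  # 190080 -> ~190k (full dataset)
print(frames(2.6))  # 149760 -> ~150k (training split)
print(frames(0.7))  # 40320  -> ~40k  (evaluation split)
```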
I believe they used the datasets listed here: https://github.com/TeaPearce/Counter-Strike_Behavioural_Cloning?tab=readme-ov-file#datasets
The version trained on the recent demos used the larger `dataset_dm_scraped_dust2` dataset, described here: https://github.com/eloialonso/diamond/tree/csgo?tab=readme-ov-file#data
We just updated the README with this information :)