DeepHoldem
Info
@happypepper
Hello, in the description you wrote that you made it play against Slumbot. How did you do it?
Thanks
Hi. I'm also curious: what are the final/current stats? The 2,616 hands from the readme aren't much.
@LorenzoPanico If you look into the network traffic, you can easily communicate with the Slumbot server using AJAX requests. If I recall correctly, happypepper used Selenium scripts.
P.S. Is anyone interested in collaborating on data generation?
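For anyone who wants to skip the browser and talk to the Slumbot server directly, here is a minimal sketch of posting one action over HTTP. The base path, endpoint name, and JSON field names are assumptions about what the site's traffic looks like; inspect the actual requests in your browser's dev tools before relying on them.

```python
import json
import urllib.request

API_BASE = "https://slumbot.com/api"  # assumed base path; confirm in dev tools


def build_act_payload(token, action):
    """JSON body for one incremental action, e.g. "c" (call), "k" (check),
    "f" (fold), "b200" (bet 200). Field names are assumptions taken from
    observed traffic, not a documented contract."""
    return {"token": token, "incr": action}


def act(token, action):
    """POST one action to the (assumed) /act endpoint and return the reply.
    Requires a live session token obtained when a new hand is dealt."""
    body = json.dumps(build_act_payload(token, action)).encode()
    req = urllib.request.Request(
        f"{API_BASE}/act",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

A Selenium script that clicks the site's buttons works too, but driving the HTTP layer directly is faster and far less brittle than screen-scraping.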
@lh14576 Sure, I planned to run it on a Google Cloud machine.
Have you already tested it?
Hello, I've been following this repo for quite a while for a research paper of mine on poker. I appreciate the work happypepper has put into this, but I have to say data generation is sadly too slow. I ran it on Amazon AWS K80s and then even on 4 of the very best Google Cloud V100s, but data generation after the river was still much too slow. I could not even finish the turn by the time costs had exceeded 2,000 USD, and I'd had enough. By my estimation, the total cost to finish would have been over 10,000 USD. This is still a good start, but a poker bot will need a more efficient approach than this to be feasible for the average person.
@LorenzoPanico I wrote a Selenium script for it; I'll upload it when I have time.
@snarb I paused the script. Playing is so slow that it eats up too much money, since it requires a Tesla P100 to achieve the posted speeds.
@PhDtimothy If I recall correctly, the river generation should require no more than $300. Did you set params.gpu = true? Also, GCP is a lot cheaper than AWS, especially the preemptible instances.
I was able to generate all the data for less than $3,000 USD. That was before GCP's new discount for preemptible GPUs; it should be achievable for less than $1,500 USD now, after the new discounts.
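For anyone budgeting a run, the back-of-envelope arithmetic is simple. The hours, hourly rate, and discount fraction below are hypothetical placeholders, not current GCP prices:

```python
def gpu_cost(hours, hourly_rate, preemptible_discount=0.0):
    """Total cost of a generation run at a given hourly GPU rate.

    `preemptible_discount` is the fraction knocked off the on-demand
    price for preemptible instances (e.g. 0.5 for half price).
    All concrete numbers here are placeholders, not real prices.
    """
    return hours * hourly_rate * (1.0 - preemptible_discount)


# e.g. 2000 GPU-hours at a hypothetical $1.50/h:
on_demand = gpu_cost(2000, 1.50)         # 3000.0
preemptible = gpu_cost(2000, 1.50, 0.5)  # 1500.0
```

Plug in the current per-hour price for whichever GPU you pick; preemptible instances work well here because each solved situation is independent, so a preempted machine just resumes generating where it left off.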
@PhDtimothy I generated river data and trained models in less than 2 days on my own 1070.
@LorenzoPanico I only managed to get my GCP instances running today, so not much is done yet. However, I noticed that the K80 generates as fast as the P100 (at least for river situations), so there's no need to waste money on the P100.
Should the generated data be equal for every machine?
If so, we could generate it just once (for everyone) and upload the files to the cloud, so people don't need to generate the data every time.
This seems to give 10 TB of free cloud storage: http://www.1mtb.com/how-to-get-10-tb-free-online-cloud-storage-from-tencent-weiyun/
Yes, that's what I intended to do. The generated data are just more or less randomly created river, turn, and flop situations. In theory, the more solved situations you have, the better the NN should work.
But can the data be shared between different PCs?
I have deleted everything from my PC, but if someone has already built a model and could share it, that would be very nice! My connection is ADSL with a max upload of 1 MB/s; it would take me far too much time to share gigabytes.
Next week, as soon as I can, I'll try to build the data with the $300 Google credit and a V100 (certainly not all the data).
Obviously it would be better if happypepper did it, but I'm not asking him to, since he has already done a lot.
Hi guys, a small update from my side. Yesterday my request for a week of access to our university computing lab was accepted. In terms of teraflops, this is roughly equivalent to 40 P100s, so during that week I will be generating around 10 times the original number of samples (1,500,000 instead of 150,000, if I recall correctly) in order to produce the best models. As a test, I generated models yesterday using only 75,000 samples, just to try out the computing lab. So if you are interested, I will upload the best models in maybe two weeks' time. In the meantime, I don't think uploading my 75,000-sample models would be worth it, since lh14576 will be uploading the 150,000-sample models soon.

@LorenzoPanico I would not bother with the V100, as I have found some sort of compatibility issue between CUDA and torch/cutorch on the V100. I would not waste the 300 dollars either, because at most you will be able to generate river data and a river model, which is useless on its own.
Yeah, it's a lot of data. I would guess 1 million river samples would be around 100 GB uncompressed and possibly ~60 GB compressed. I am currently trying to generate the one million river samples, which will unfortunately take around a month. @PhDtimothy that would be really awesome if you could do that!
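The storage estimate is easy to sanity-check. If we assume, purely as a hypothetical per-sample size, that each solved situation is stored as roughly 25,000 float32 values (input ranges plus target counterfactual values), a million river samples lands at the 100 GB ballpark:

```python
def dataset_gb(num_samples, floats_per_sample, bytes_per_float=4):
    """Uncompressed size in GB (10^9 bytes) of a sample set stored as
    raw float32 tensors. `floats_per_sample` is a guess; inspect one
    of your own sample files to get the real number."""
    return num_samples * floats_per_sample * bytes_per_float / 1e9


size = dataset_gb(1_000_000, 25_000)  # 100.0 GB
```

Measuring one actual sample file and multiplying is the reliable way to size the upload before committing to a cloud storage plan.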
Hi, I would like to join the party :) Is there a Discord or Skype group for this? Please PM me!
"Is there a Discord or Skype group for this? Please PM me!"
Nothing that I know of.
I would also like to join a Discord or Skype group if somebody makes one!
Feel free to join https://discord.gg/zcxjWw3 :)
Good morning everybody. I am writing a paper on poker and ran a week of data generation with DeepHoldem on our computing lab. I managed to create around 12 million samples after I optimized the batch size for our cluster. I am very happy that I managed to create even more samples than DeepStack. When I first ran data generation on AWS and GCP, it was rather slow, and the cost was much too high to have time to optimize for the P100s and then generate the data.

I have now started training the model using different neural networks, and I am eager to see whether I beat DeepStack's loss on both the turn and the flop. I expect to finish my paper in the coming weeks. Alongside the paper I will also release my code and my models, because a lot of people have asked me for them in private messages. It does not really make sense for everyone to generate data and train models themselves, because the cost is too high for most people anyway. I can understand that, and that is why I will release them once I have found the best models.

Greetings, Thimothy
I wonder if your models will load on an average GPU ;)
Is your code written in Torch and Lua as well?
@PhDtimothy were you able to run it with the data you generated? On which GPU? Sounds super cool!
Does anyone have the generated training data, or the trained model?
Dear @PhDtimothy, could you make the data accessible? I think it would help the progress of this project tremendously. Or, if that's not possible, could you send them via DM?