Shamiul Hasan
https://github.com/byt3bl33d3r/DHCPShock/blob/b2bb59dd525b146d693571daec037922e34a1f65/dhcpshock.py#L60

The error goes like this:

```
[*] Got dhcp REQUEST from: a4:50:46:7c:12:91 xid: 0x84d40287
[*] Sending ACK...
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/scapy/compat.py", line 117, in raw
    return...
```
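The traceback is cut off at scapy's `raw()`, which is where the packet gets serialized to bytes, so the exact cause isn't visible here. One frequent Python 3 pitfall at that stage is passing a `str` where scapy expects `bytes` (for example in `chaddr` or a raw option payload), though the truncated error doesn't confirm that. Below is a minimal, hypothetical sketch of building and sending a DHCP ACK with scapy, not the DHCPShock code itself; the interface name and IP addresses are placeholders, and the MAC/xid are taken from the log above.

```python
# Hypothetical sketch of a DHCP ACK with scapy (not the DHCPShock source).
# Interface name and IP addresses are made up; MAC and xid come from the log.
from scapy.all import Ether, IP, UDP, BOOTP, DHCP, sendp

client_mac = "a4:50:46:7c:12:91"

ack = (
    Ether(dst=client_mac)
    / IP(src="192.168.1.1", dst="255.255.255.255")
    / UDP(sport=67, dport=68)
    / BOOTP(
        op=2,                                               # BOOTREPLY
        yiaddr="192.168.1.100",                             # example offered address
        siaddr="192.168.1.1",
        chaddr=bytes.fromhex(client_mac.replace(":", "")),  # bytes, not str, under Python 3
        xid=0x84D40287,
    )
    / DHCP(options=[
        ("message-type", "ack"),
        ("server_id", "192.168.1.1"),
        ("lease_time", 3600),
        "end",
    ])
)

sendp(ack, iface="eth0", verbose=False)                     # iface is an assumption
```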
### **What am I doing wrong? Is it impossible to train the 345M model on multiple GPUs? Or are my GPUs not enough? If that's the case, what GPU...
1. I am using an `ml.p3.2xlarge` instance on AWS with a single 16 GB V100 GPU, and training the 345M model with `batch_size` 2 gets an OOM error (see the sketch below). It works...
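If the immediate goal is just to fit the 345M model on a 16 GB V100, the usual memory levers are a smaller micro-batch, fp16, and gradient accumulation. The sketch below is not the asker's training script (which isn't shown); it uses Hugging Face's `gpt2-medium` purely as a stand-in for a 345M-parameter GPT-2, with placeholder data, to illustrate fp16 autocast plus gradient accumulation in PyTorch.

```python
# Hypothetical sketch: fp16 + gradient accumulation to fit a ~345M-parameter
# GPT-2 on a single 16 GB V100. "gpt2-medium" and the dummy texts are stand-ins,
# not the asker's actual checkpoint or dataset.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

device = torch.device("cuda")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium").to(device)
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-medium")

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
scaler = torch.cuda.amp.GradScaler()
accum_steps = 8                                      # effective batch = 1 * 8

texts = ["placeholder training text"] * 32           # dummy corpus
model.train()
optimizer.zero_grad()
for step, text in enumerate(texts):                  # micro-batch of 1
    enc = tokenizer(text, return_tensors="pt").to(device)
    with torch.cuda.amp.autocast():                  # fp16 forward pass
        out = model(**enc, labels=enc["input_ids"])
        loss = out.loss / accum_steps                # scale loss for accumulation
    scaler.scale(loss).backward()
    if (step + 1) % accum_steps == 0:                # weight update every 8 micro-steps
        scaler.step(optimizer)
        scaler.update()
        optimizer.zero_grad()
```

For multiple GPUs, `torch.nn.parallel.DistributedDataParallel` can be layered on top, but plain data parallelism replicates the whole model on each card and does not shrink per-GPU memory for a fixed micro-batch, so fp16 and accumulation are still what decide whether a single 16 GB card is enough.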