pip install adam_atan2 succeeds, but running the code gives "No module named 'adam_atan2_backend'"
Exception occurred: ModuleNotFoundError (note: the full exception trace is shown, but execution is paused at: _run_module_as_main)
No module named 'adam_atan2_backend'
  File "/usr/local/lib/python3.12/site-packages/adam_atan2/adam_atan2.py", line 4, in <module>
Make sure PyTorch and CUDA are present before pip install --no-build-isolation adam-atan2
I have already tried several times and always get the same error. Changing to AdamW allows the program to run. Running the smallest Sudoku task on a 48GB RTX 4090 takes 5 hours, which is 30 times the 10 minutes stated officially. That time difference is huge. Is this reasonable? What kind of hardware do you have?
"name": "Debug: Single GPU",
"type": "debugpy",
"request": "launch",
"program": "pretrain.py",
"args": [
"data_path=data/sudoku-extreme-1k-aug-1000",
"epochs=20000",
"eval_interval=2000",
"lr=1e-4",
"puzzle_emb_lr=1e-4",
"weight_decay=1.0",
"puzzle_emb_weight_decay=1.0",
],
"env": {
"OMP_NUM_THREADS": "1",
"DISABLE_COMPILE": "true"
}
},
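For reference, the debugger configuration above is equivalent to the following command line (this mirrors the `args` and `env` entries; adjust the data path to your setup):

```shell
OMP_NUM_THREADS=1 DISABLE_COMPILE=true python pretrain.py \
    data_path=data/sudoku-extreme-1k-aug-1000 \
    epochs=20000 \
    eval_interval=2000 \
    lr=1e-4 \
    puzzle_emb_lr=1e-4 \
    weight_decay=1.0 \
    puzzle_emb_weight_decay=1.0
```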
Same problem here... I installed it after PyTorch and CUDA, and still get:
ModuleNotFoundError: No module named 'adam_atan2_backend'
Encountered the same problem; solved it using the following method:
pip install adam-atan2-pytorch
Then modify the import:
from adam_atan2_pytorch import AdamATan2
Correspondingly, changing AdamAtan2 to AdamATan2 in pretrain.py will solve the problem.
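For context on what the swapped-in package computes, here is a scalar sketch of the Adam-atan2 update rule, written from my understanding of the proposal (the implementation's extra scale factors are omitted). The key point: `atan2(m_hat, sqrt(v_hat))` replaces Adam's `m_hat / (sqrt(v_hat) + eps)`, so no epsilon hyperparameter is needed:

```python
import math

def adam_atan2_step(p, g, m, v, lr, beta1=0.9, beta2=0.99, t=1):
    """One scalar Adam-atan2 update (illustrative sketch, not the library code)."""
    m = beta1 * m + (1 - beta1) * g          # first-moment EMA
    v = beta2 * v + (1 - beta2) * g * g      # second-moment EMA
    m_hat = m / (1 - beta1 ** t)             # bias correction
    v_hat = v / (1 - beta2 ** t)
    # atan2 is bounded and well-defined even when v_hat == 0, so no eps is needed.
    p = p - lr * math.atan2(m_hat, math.sqrt(v_hat))
    return p, m, v

p, m, v = adam_atan2_step(1.0, 1.0, 0.0, 0.0, lr=0.1)
print(p)  # parameter moves against the positive gradient, i.e. p < 1.0
```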
This works. However, the learning-rate part of the code still needs to be modified manually; otherwise it reports an error when the default value is 0, even though a learning rate was clearly passed in. With AdamW there was no error, yet I checked the code and the optimizer input really is 0. It's very strange. The actual running time was nearly 11 hours on a 48GB 4090, 65 times the running time on the official hardware.
Try reinstalling with:
pip uninstall adam-atan2 && pip install --verbose --no-cache-dir adam-atan2
This worked for me.
@dywsy21 It is the opposite: replace AdamATan2 with AdamAtan2
Via terminal:
pip install adam-atan2-pytorch
sed -i 's/adam_atan2/adam_atan2_pytorch/g' pretrain.py
sed -i 's/AdamATan2/AdamAtan2/g' pretrain.py
I also had to do this:
sed -i 's/lr=0,/lr=0.0001,/g' pretrain.py
The learning rates are set to 0 in the optimizer initialization, but they are dynamically updated during training through the scheduler.
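That pattern can be sketched in a few lines of plain Python (the optimizer stub and warmup schedule here are hypothetical, stand-ins for the real torch optimizer and pretrain.py's scheduler): the optimizer is constructed with lr=0, and the scheduler overwrites each param group's lr before every step, so training never actually runs at lr 0.

```python
# Minimal sketch (no real training): an optimizer initialized with lr=0 whose
# param-group lr is overwritten by a scheduler before every step.

class DummyOptimizer:
    def __init__(self, lr=0.0):
        self.param_groups = [{"lr": lr}]  # same shape as torch.optim param_groups

def lr_for_step(step, base_lr=1e-4, warmup=10):
    # Hypothetical linear-warmup schedule, for illustration only.
    return base_lr * min(1.0, (step + 1) / warmup)

opt = DummyOptimizer(lr=0.0)          # lr=0 at construction is fine...
for step in range(20):
    for group in opt.param_groups:    # ...because the scheduler sets it here
        group["lr"] = lr_for_step(step)
    # opt.step() would run here in real training

print(opt.param_groups[0]["lr"])  # → 0.0001
```

This also explains why adam-atan2-pytorch errors out where AdamW does not: it rejects lr=0 at construction time, before the scheduler ever gets a chance to raise it.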
Note: for Mac, use '' after the -i:
sed -i '' 's/adam_atan2/adam_atan2_pytorch/g' pretrain.py
sed -i '' 's/AdamATan2/AdamAtan2/g' pretrain.py
sed -i '' 's/lr=0,/lr=0.0001,/g' pretrain.py
(hrm-env) # OMP_NUM_THREADS=8 python pretrain.py \
    data_path=data/sudoku-extreme-1k-aug-1000 \
    epochs=20000 \
    eval_interval=2000 \
    global_batch_size=384 \
    lr=7e-5 \
    puzzle_emb_lr=7e-5 \
    weight_decay=1.0 \
    puzzle_emb_weight_decay=1.0
Traceback (most recent call last):
  File "/tf/HRM/pretrain.py", line 19, in <module>
HELP! I have tried all the methods you commented above, but none of them worked :(
This is a problem with build tools or the environment; follow the tips below.
a) These issues can be resolved and don't require workarounds:
   i) pip uninstall adam-atan2 && pip install --verbose --no-cache-dir adam-atan2
   ii) Review the build output and ensure there are no errors.
   iii) If you see an error, resolve the issue: missing package, missing build tool, etc.
[Watch for]
W0919 11:39:11.378000 66629 torch/utils/cpp_extension.py:507] The detected CUDA version (12.0) has a minor version mismatch with the version that was used to compile PyTorch (12.8). Most likely this shouldn't be a problem.
W0919 11:39:11.382000 66629 torch/utils/cpp_extension.py:517] There are no x86_64-linux-gnu-g++ version bounds defined for CUDA version 12.0
building 'adam_atan2_backend' extension
[Not an error]
[09/19/25 11:43:13] ERROR listing git files failed - pretending git.py:26 there aren't any
I had that issue as well; I managed to fix it with help from here, but I've forgotten exactly how.
Try using the --no-build-isolation option:
pip install --verbose --no-cache-dir --no-build-isolation adam-atan2