
pip install adam_atan2 succeeds, but running the code gives No module named 'adam_atan2_backend'

Open xiezhipeng-git opened this issue 5 months ago • 11 comments

Exception occurred: ModuleNotFoundError (note: the full exception trace is shown, but execution is paused at: _run_module_as_main)

    Traceback (most recent call last):
      File "/usr/local/lib/python3.12/runpy.py", line 198, in _run_module_as_main (Current frame)
        return _run_code(code, main_globals, None,
      File "/usr/local/lib/python3.12/runpy.py", line 88, in _run_code
        exec(code, run_globals)
      File "/mnt/d/my/work/study/ai/kaggle_code/arc/HRM/pretrain.py", line 19, in <module>
        from adam_atan2 import AdamATan2
      File "/usr/local/lib/python3.12/site-packages/adam_atan2/__init__.py", line 1, in <module>
        from .adam_atan2 import AdamATan2
      File "/usr/local/lib/python3.12/site-packages/adam_atan2/adam_atan2.py", line 4, in <module>
        import adam_atan2_backend
    ModuleNotFoundError: No module named 'adam_atan2_backend'

Why? What can I do? I already tried uninstalling and then pip install --no-build-isolation adam-atan2, but it had no effect. @imoneoi

xiezhipeng-git avatar Aug 04 '25 12:08 xiezhipeng-git

Make sure PyTorch and CUDA are present before pip install --no-build-isolation adam-atan2
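Since adam-atan2 compiles a CUDA extension (adam_atan2_backend) at install time, both PyTorch and the CUDA toolkit must be visible when pip runs. A quick way to check both prerequisites before attempting the build (a minimal sketch; the helper name and exact checks are my own illustration, not part of either package):

```python
# Illustrative prerequisite check, not part of HRM or adam-atan2:
# the build needs torch importable and nvcc on PATH.
import importlib.util
import shutil

def build_prereqs_missing():
    """Return a list of prerequisites missing for building the CUDA extension."""
    missing = []
    if importlib.util.find_spec("torch") is None:
        missing.append("torch (install PyTorch first)")
    if shutil.which("nvcc") is None:
        missing.append("nvcc (CUDA toolkit not on PATH)")
    return missing

print(build_prereqs_missing())  # empty list means both prerequisites are present
```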

imoneoi avatar Aug 04 '25 13:08 imoneoi

Make sure PyTorch and CUDA are present before pip install --no-build-isolation adam-atan2

I already tried several times and always get the same error. Switching to AdamW allows the program to run. Running the smallest Sudoku config on a 48GB RTX 4090 takes 5 hours. That is 30 times the 10 minutes stated officially. The time difference is huge. Is this reasonable? What hardware do you have?

"name": "Debug: Single GPU",
            "type": "debugpy",
            "request": "launch",
            "program": "pretrain.py",
            "args": [
                "data_path=data/sudoku-extreme-1k-aug-1000",
                "epochs=20000",
                "eval_interval=2000",
                "lr=1e-4",
                "puzzle_emb_lr=1e-4",
                "weight_decay=1.0",
                "puzzle_emb_weight_decay=1.0",
],
            "env": {
                "OMP_NUM_THREADS": "1",
                "DISABLE_COMPILE": "true"
            }
        },

xiezhipeng-git avatar Aug 04 '25 13:08 xiezhipeng-git

Same problem here... I installed it after PyTorch and CUDA, and still get

ModuleNotFoundError: No module named 'adam_atan2_backend'

ltenny avatar Aug 04 '25 13:08 ltenny

Encountered the same problem; I solved it using the following method:

pip install adam-atan2-pytorch

Then modify the import:

from adam_atan2_pytorch import AdamATan2

Correspondingly, changing AdamAtan2 to AdamATan2 in pretrain.py will solve the problem.
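The workaround above can also be written as a fallback, so the code still uses the compiled package when it works. This is a hypothetical compatibility shim, not part of the HRM repo; note that the class name in adam-atan2-pytorch is corrected later in the thread:

```python
# Hypothetical shim (my own, not HRM's code): prefer the CUDA-extension
# package, fall back to the pure-PyTorch reimplementation when the compiled
# backend is unavailable.
try:
    from adam_atan2 import AdamATan2            # CUDA-extension package
    BACKEND = "cuda-extension"
except ImportError:                             # includes ModuleNotFoundError
    try:
        from adam_atan2_pytorch import AdamAtan2 as AdamATan2  # pure PyTorch
        BACKEND = "pure-pytorch"
    except ImportError:
        AdamATan2 = None                        # neither package installed
        BACKEND = "none"

print(BACKEND)
```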

dywsy21 avatar Aug 04 '25 14:08 dywsy21

Encountered the same problem, solved using following method:

pip install adam-atan2-pytorch

Then modify the import:

from adam_atan2_pytorch import AdamATan2

Correspondingly change AdamAtan2 to AdamATan2 in pretrain.py will solve the problem.

This works. However, the learning-rate part of the code still needs to be manually modified; otherwise it reports an error because the default value is 0, even though a learning rate was clearly passed. When I used AdamW, there was no error. I checked the code and the input is indeed 0, which is very strange. The actual running time was nearly 11 hours, 65 times the official running time, on a 48GB 4090.

xiezhipeng-git avatar Aug 05 '25 03:08 xiezhipeng-git

Try reinstall with

pip uninstall adam-atan2 && pip install --verbose --no-cache-dir adam-atan2

This worked for me.

ronanhansel avatar Aug 07 '25 02:08 ronanhansel

@dywsy21 It is the opposite: replace AdamATan2 with AdamAtan2

Via terminal:

pip install adam-atan2-pytorch
sed -i 's/adam_atan2/adam_atan2_pytorch/g' pretrain.py
sed -i 's/AdamATan2/AdamAtan2/g' pretrain.py

I also had to do this:

sed -i 's/lr=0,/lr=0.0001,/g' pretrain.py

The learning rates are set to 0 in the optimizer initialization, but they are dynamically updated during training through the scheduler.
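A dependency-free sketch of that pattern (MockOptimizer, warmup_lr, and the constants are my own illustration, not HRM's actual code): the optimizer starts at lr=0 and the schedule overwrites param_group["lr"] each step, which is why an optimizer that validates lr > 0 at construction time rejects it even though a real rate is applied later.

```python
# Illustrative only: mimics, in plain Python, how a warmup schedule raises the
# learning rate from the 0 passed at optimizer construction. base_lr and
# warmup_steps are assumptions, not HRM's actual hyperparameters.
class MockOptimizer:
    def __init__(self, lr):
        self.param_groups = [{"lr": lr}]

def warmup_lr(step, base_lr=1e-4, warmup_steps=2000):
    """Linear warmup from 0 to base_lr over warmup_steps."""
    return base_lr * min(1.0, step / warmup_steps)

opt = MockOptimizer(lr=0.0)                      # lr=0 at init, as in pretrain.py
for step in range(1, 101):
    opt.param_groups[0]["lr"] = warmup_lr(step)  # scheduler overwrites it each step
```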


Note: for Mac, use '' after the -i:

sed -i '' 's/adam_atan2/adam_atan2_pytorch/g' pretrain.py
sed -i '' 's/AdamATan2/AdamAtan2/g' pretrain.py
sed -i '' 's/lr=0,/lr=0.0001,/g' pretrain.py

kroggen avatar Aug 07 '25 05:08 kroggen

(hrm-env) # OMP_NUM_THREADS=8 python pretrain.py \
    data_path=data/sudoku-extreme-1k-aug-1000 \
    epochs=20000 \
    eval_interval=2000 \
    global_batch_size=384 \
    lr=7e-5 \
    puzzle_emb_lr=7e-5 \
    weight_decay=1.0 \
    puzzle_emb_weight_decay=1.0

Traceback (most recent call last):
  File "/tf/HRM/pretrain.py", line 19, in <module>
    from adam_atan2 import AdamATan2
  File "/tf/hrm-env/lib/python3.11/site-packages/adam_atan2/__init__.py", line 1, in <module>
    from .adam_atan2 import AdamATan2
  File "/tf/hrm-env/lib/python3.11/site-packages/adam_atan2/adam_atan2.py", line 4, in <module>
    import adam_atan2_backend
ModuleNotFoundError: No module named 'adam_atan2_backend'

HELP! I have tried all the methods you commented above, but none of them worked :(

TuananhCR avatar Aug 07 '25 09:08 TuananhCR

This is a problem with build tools or the environment; follow the tips below.

a) These issues can be resolved and don't require workarounds:
  i) pip uninstall adam-atan2 && pip install --verbose --no-cache-dir adam-atan2
  ii) Review the build output and ensure there are no errors.
  iii) If you see an error, resolve the issue: missing package, missing build tool, etc.

[Watch For]
W0919 11:39:11.378000 66629 torch/utils/cpp_extension.py:507] The detected CUDA version (12.0) has a minor version mismatch with the version that was used to compile PyTorch (12.8). Most likely this shouldn't be a problem.
W0919 11:39:11.382000 66629 torch/utils/cpp_extension.py:517] There are no x86_64-linux-gnu-g++ version bounds defined for CUDA version 12.0
building 'adam_atan2_backend' extension

[Not an Error]
[09/19/25 11:43:13] ERROR listing git files failed - pretending git.py:26 there aren't any
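The CUDA-version mismatch in the warning above can be checked directly. A minimal sketch (the helper name is my own, not from any package) parses the toolkit version that the build will use, for comparison against torch.version.cuda:

```python
# Illustrative helper: parse the CUDA toolkit version that `nvcc --version`
# reports. A minor-version mismatch against torch.version.cuda (e.g. 12.0 vs
# 12.8) usually still builds; a major mismatch usually does not.
import re
import shutil
import subprocess

def nvcc_cuda_version():
    """Return the toolkit version string (e.g. '12.0'), or None if nvcc is absent."""
    if shutil.which("nvcc") is None:
        return None
    out = subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout
    match = re.search(r"release (\d+\.\d+)", out)
    return match.group(1) if match else None

print(nvcc_cuda_version())
```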

wuttechadmin avatar Sep 19 '25 15:09 wuttechadmin

Had that issue as well; I managed to fix it with help from here, but I forgot how I fixed it myself.


storm-frostwing avatar Sep 19 '25 17:09 storm-frostwing

Try using the --no-build-isolation option: pip install --verbose --no-cache-dir --no-build-isolation adam-atan2
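After a successful rebuild, a quick sanity check (a sketch; the helper name is my own, not part of adam-atan2) is whether the compiled backend module is now discoverable, since that is exactly what pretrain.py's import chain needs:

```python
# Illustrative check: if the compiled extension is discoverable,
# `from adam_atan2 import AdamATan2` in pretrain.py should work.
import importlib.util

def backend_built() -> bool:
    return importlib.util.find_spec("adam_atan2_backend") is not None

print(backend_built())
```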

alexander-rakhlin avatar Nov 01 '25 14:11 alexander-rakhlin