
Why do I get endless OpenBLAS warnings?

Open · tzjtatata opened this issue 1 year ago · 0 comments

Thank you for your exciting work.

However, when I run your code, it emits endless OpenBLAS warnings, like the following (screenshot): image

The warnings appear after "find candidates" and before the "Number of samples that are misclassified..." message.
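A workaround sketch I can try, assuming the warnings come from OpenBLAS spawning too many worker threads (`OPENBLAS_NUM_THREADS` and `OMP_NUM_THREADS` are standard OpenBLAS/OpenMP environment variables, not options of this repo):

```python
# Hedged workaround sketch: cap BLAS/OpenMP thread pools *before* numpy/torch
# are imported. Oversubscribed OpenBLAS thread pools are a common source of
# repeated runtime warnings on many-core machines.
import os

os.environ["OPENBLAS_NUM_THREADS"] = "1"  # limit OpenBLAS worker threads
os.environ["OMP_NUM_THREADS"] = "1"       # limit OpenMP threads used by BLAS

import numpy as np

# A BLAS-backed matrix multiply now runs single-threaded.
a = np.random.rand(64, 64)
b = a @ a.T
print(b.shape)
```

The `threadpoolctl` package (already in the environment below, `threadpoolctl==3.1.0`) can apply the same cap at runtime via `threadpool_limits` if setting environment variables up front is not an option.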

I use a brand-new environment created with conda and pip. Details:

```yaml
name: ALFA_Mix
channels:
  - defaults
dependencies:
  - _libgcc_mutex=0.1=main
  - _openmp_mutex=5.1=1_gnu
  - ca-certificates=2023.01.10=h06a4308_0
  - certifi=2022.12.7=py38h06a4308_0
  - ld_impl_linux-64=2.38=h1181459_1
  - libffi=3.3=he6710b0_2
  - libgcc-ng=11.2.0=h1234567_1
  - libgomp=11.2.0=h1234567_1
  - libstdcxx-ng=11.2.0=h1234567_1
  - ncurses=6.4=h6a678d5_0
  - openssl=1.1.1t=h7f8727e_0
  - pip=23.0.1=py38h06a4308_0
  - python=3.8.3=hcff3b4d_2
  - readline=8.2=h5eee18b_0
  - setuptools=65.6.3=py38h06a4308_0
  - sqlite=3.41.1=h5eee18b_0
  - tk=8.6.12=h1ccaba5_0
  - wheel=0.38.4=py38h06a4308_0
  - xz=5.2.10=h5eee18b_1
  - zlib=1.2.13=h5eee18b_0
  - pip:
    - absl-py==1.4.0
    - asttokens==2.2.1
    - backcall==0.2.0
    - cachetools==4.2.4
    - charset-normalizer==3.1.0
    - click==8.1.3
    - cycler==0.11.0
    - decorator==5.1.1
    - executing==1.2.0
    - fonttools==4.39.3
    - future==0.18.3
    - google-auth==1.35.0
    - google-auth-oauthlib==0.4.6
    - grpcio==1.53.0
    - idna==3.4
    - importlib-metadata==6.3.0
    - ipdb==0.13.3
    - ipython==8.12.0
    - jedi==0.18.2
    - joblib==1.2.0
    - kiwisolver==1.4.4
    - liac-arff==2.5.0
    - markdown==3.4.3
    - markupsafe==2.1.2
    - matplotlib==3.5.3
    - matplotlib-inline==0.1.6
    - nltk==3.5
    - numpy==1.18.5
    - oauthlib==3.2.2
    - openml==0.11.0
    - packaging==23.0
    - pandas==1.4.4
    - parso==0.8.3
    - pexpect==4.8.0
    - pickleshare==0.7.5
    - pillow==9.5.0
    - prompt-toolkit==3.0.38
    - protobuf==3.20.0
    - ptyprocess==0.7.0
    - pure-eval==0.2.2
    - pyasn1==0.4.8
    - pyasn1-modules==0.2.8
    - pygments==2.14.0
    - pyparsing==3.0.9
    - python-dateutil==2.8.2
    - pytz==2023.3
    - regex==2023.3.23
    - requests==2.28.2
    - requests-oauthlib==1.3.1
    - rsa==4.9
    - scikit-learn==0.23.1
    - scipy==1.9.3
    - seaborn==0.10.1
    - six==1.16.0
    - sklearn==0.0
    - stack-data==0.6.2
    - tensorboard==2.2.2
    - tensorboard-plugin-wit==1.8.1
    - threadpoolctl==3.1.0
    - torch==1.13.1+cu117
    - torchattacks==2.13.2
    - torchaudio==0.13.1+cu117
    - torchvision==0.14.1+cu117
    - tqdm==4.48.0
    - traitlets==5.9.0
    - typing-extensions==4.5.0
    - urllib3==1.26.15
    - wcwidth==0.2.6
    - werkzeug==2.2.3
    - xmltodict==0.13.0
    - zipp==3.15.0
```

I run the code as in README.md:

```shell
python main.py \
    --data_name CIFAR100 --data_dir /data2/dataset --log_dir ./logs \
    --n_init_lb 100 --n_query 100 --n_round 10 --learning_rate 0.001 --n_epoch 200 --model resnet18 \
    --strategy AlphaMixSampling --alpha_opt --choose_best_val_model
```
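One thing I can try is pinning the BLAS thread count before launching the run; `OPENBLAS_NUM_THREADS` and `OMP_NUM_THREADS` are standard environment variables (not flags of `main.py`), so this is only a workaround sketch:

```shell
# Hedged workaround sketch: export thread caps before starting training so
# OpenBLAS stops oversubscribing threads (and, typically, stops warning).
export OPENBLAS_NUM_THREADS=1
export OMP_NUM_THREADS=1
echo "BLAS thread cap: $OPENBLAS_NUM_THREADS"
```

After exporting these, the same `python main.py ...` command above can be run unchanged in the same shell.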

I also suspect this problem may affect accuracy, because the results are low: 0.041 with 100 labelled samples and 0.056 with 400.

tzjtatata · Apr 12 '23 08:04