flair
How to use the GPU in model tuning?
I am following the instructions in Tutorial 8: Model Tuning.
However, when I run the code from that page, I find that training is still using the CPU:
2022-07-07 18:26:55,244 Corpus: "Corpus: 339 train + 101 dev + 129 test sentences"
2022-07-07 18:26:55,244 ----------------------------------------------------------------------------------------------------
2022-07-07 18:26:55,244 Parameters:
2022-07-07 18:26:55,244  - learning_rate: "0.200000"
2022-07-07 18:26:55,244  - mini_batch_size: "32"
2022-07-07 18:26:55,244  - patience: "3"
2022-07-07 18:26:55,244  - anneal_factor: "0.5"
2022-07-07 18:26:55,244  - max_epochs: "50"
2022-07-07 18:26:55,244  - shuffle: "True"
2022-07-07 18:26:55,244  - train_with_dev: "False"
2022-07-07 18:26:55,244  - batch_growth_annealing: "False"
2022-07-07 18:26:55,244 ----------------------------------------------------------------------------------------------------
2022-07-07 18:26:55,244 Model training base path: "resources\results"
2022-07-07 18:26:55,244 ----------------------------------------------------------------------------------------------------
2022-07-07 18:26:55,244 Device: cpu
2022-07-07 18:26:55,244 ----------------------------------------------------------------------------------------------------
2022-07-07 18:26:55,244 Embeddings storage mode: cpu
I've tried the embeddings_storage_mode='gpu' setting from Tutorial 7: Training a Model, but I couldn't find where to set it in the tuning code.
I would appreciate it if someone could help.
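For reference, a minimal sketch of the argument shape in question: in flair, embeddings_storage_mode is a keyword argument of ModelTrainer.train, not something set on the corpus or model. The helper below is illustrative only (the actual corpus/tagger setup is elided), and the values mirror the log above.

```python
# Hedged sketch: in flair, the call would look roughly like
#
#   trainer = ModelTrainer(tagger, corpus)
#   trainer.train("resources/results", **train_kwargs())
#
# The helper below only shows the argument shape; values are illustrative.

def train_kwargs():
    """Training arguments mirroring the log in the question."""
    return {
        "learning_rate": 0.2,
        "mini_batch_size": 32,
        "max_epochs": 50,
        # 'gpu' keeps computed embeddings in GPU memory between epochs;
        # note this does NOT by itself move the model to the GPU.
        "embeddings_storage_mode": "gpu",
    }

print(train_kwargs()["embeddings_storage_mode"])
```

Note that embeddings_storage_mode only controls where computed embeddings are cached between epochs; the training device itself is chosen separately (see the answer below about CUDA availability).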
As far as I know, Flair will use a GPU automatically if available, so there is no need to specify anything in the code.
Do you have CUDA installed? If so, can you run torch.cuda.is_available()
and check the result? Better yet, run nvidia-smi
in your terminal; it shows whether the NVIDIA driver is installed and the current usage of your GPU.
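The check suggested above can be run as a short script, assuming PyTorch is installed. Flair selects its device from PyTorch at import time, so if this reports no CUDA device, flair will train on the CPU.

```python
# Ask PyTorch whether a CUDA-capable GPU is visible.
import torch

if torch.cuda.is_available():
    # At least one GPU is usable; print the name of the first device.
    print("CUDA available:", torch.cuda.get_device_name(0))
else:
    # No usable GPU: flair will fall back to the CPU, matching the
    # "Device: cpu" line in the training log above.
    print("CUDA not available; training will run on the CPU")
```

If this prints that CUDA is unavailable even though you have an NVIDIA GPU, the usual cause is a CPU-only PyTorch build or a missing/outdated driver, which nvidia-smi will reveal.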