llama3
Local development on Windows with an Intel integrated graphics card
Describe the bug
Hello, friends: I have just started learning large language model development and am interning at a small company of only 11 people. After downloading the Llama 3 8B files, I ran into the following problem. I am trying to test the Llama 3 model on a tablet, but its graphics card is an Intel integrated one and cannot use Intel Arc acceleration (that requires discrete-GPU support). After fixing the tokenizer_model and checkpoint paths, every run fails saying the CUDA driver is needed, but Intel graphics cards do not support any version of CUDA. The error output is:

(.venv) PS D:\Llama3\llama3-main> python D:\Llama3\llama3-main\example_chat_completion.py --ckpt_dir D:\Llama3\llama3-main\ckpt_dir --tokenizer_path D:\Llama3\llama3-main\TOKENIZER_PATH\tokenizer.model
Traceback (most recent call last):
File "D:\Llama3\llama3-main\example_chat_completion.py", line 89, in
How can I modify the code in the llama3 repository, or make any adjustment on my machine, to get it running? I will be watching for any reply around the clock.
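For anyone hitting the same wall: the stock Meta llama code initializes `torch.distributed` with the `"nccl"` backend and places tensors on CUDA, which cannot work on a machine without an NVIDIA GPU. A minimal sketch of the backend/device selection a CPU-only patch would need is below; the helper name `pick_backend_and_device` is my own invention for illustration, not part of the llama3 repo.

```python
def pick_backend_and_device(cuda_available: bool) -> tuple[str, str]:
    """Choose a torch.distributed backend and a tensor device string.

    Machines with CUDA use the "nccl" backend; CPU-only machines
    (e.g. Intel integrated graphics, which support no CUDA version)
    must fall back to "gloo" and keep tensors on the CPU.
    """
    if cuda_available:
        return "nccl", "cuda"
    return "gloo", "cpu"


# Sketch of how this would feed the llama3 setup code
# (exact call sites in the repo may differ):
#   backend, device = pick_backend_and_device(torch.cuda.is_available())
#   torch.distributed.init_process_group(backend)
#   ...load checkpoint, then move the model with model.to(device)
```

Two caveats: the upstream code also sets the default tensor type to a CUDA half-precision tensor, which would need changing for CPU (e.g. to a float32 or bfloat16 CPU tensor), and CPU inference of an 8B model will be extremely slow even if it runs.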