Matthew Hendricks
Did I miss anything in my submission?
Thank you for your kind words. 😊
I am experiencing long response times as well. How do I look at the prompt being delivered to the Ollama model?
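One way to inspect what the model actually receives (a sketch, assuming Ollama's server-side debug setting — whether the full prompt appears depends on the installed version): relaunch the server with debug logging enabled and watch its log output while a request comes in.

```shell
# Stop any already-running Ollama server first, then relaunch it with
# debug logging. OLLAMA_DEBUG=1 makes the server log request details,
# which may include the prompt being sent to the model.
OLLAMA_DEBUG=1 ollama serve
```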
I was using a derivative of Adrien's [notebook](https://www.kaggle.com/code/matthewhendricks/notebook0cd9dcd006):

```
---------------------------------------------------------------------------
KeyboardInterrupt                         Traceback (most recent call last)
Cell In[8], line 53
     43 llm = Ollama(model=OLLAMA_MODEL)
     44 # response = llm.complete("""Who is Grigori...
```
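One way to make that hang diagnosable instead of interrupting the kernel: call the server with an explicit timeout. A minimal stdlib sketch, assuming Ollama's documented `/api/generate` REST endpoint on the default port; `build_payload` and `generate` are hypothetical helper names, not part of the notebook.

```python
import json
import urllib.request

# Assumption: Ollama's REST API at its default address.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> bytes:
    # stream=False asks the server for a single JSON response
    # rather than a stream of partial chunks.
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(model: str, prompt: str, timeout: float = 300.0) -> str:
    # An explicit timeout turns an apparent hang (e.g. a slow first
    # model load) into a catchable exception instead of a
    # KeyboardInterrupt in the notebook cell.
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())["response"]
```

If staying with the llama-index `Ollama` wrapper, passing a longer request timeout (where the installed version supports one) has the same effect.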
I was having trouble, but:

```
root@62d88bdd9d38:/workspace/axolotl# export BNB_CUDA_VERSION=
root@62d88bdd9d38:/workspace/axolotl# accelerate launch -m axolotl.cli.train examples/openllama-3b/lora.yml
The following values were not passed to `accelerate launch` and had defaults used instead:
        `--num_processes`...
```
How's your code understanding journey going?
Just understand the code, bro. 
The server logs for the problematic system are the ones I posted in my original comment above. Problematic system:

```
flags           : fpu vme de pse tsc msr pae mce...
```
I ran

```
ollama run tinyllama --verbose hello
```

on both machines. (Screenshots: older machine; newer, buggy machine with tinyllama and orca.) I'm curious to know what's going on, though it's...