
bug: imported models fail to start (models start returns status code 500)

cloudherder opened this issue 1 year ago · 4 comments

Cortex version

cortex-1.0.0-rc1-windows-amd64-local-installer

Describe the Bug

Starting any imported model returns "Model failed to load with status code: 500".

Steps to Reproduce

1. cortex-beta models import --model_id gemma-2b-Q8_0.gguf --model_path ./gemma-2b-Q8_0.gguf
   The import succeeds, and the models subcommands (list, get, update, delete) all work on the imported model.
2. cortex-beta models start gemma-2b-Q8_0.gguf
   This returns:
   gguf_init_from_file: failed to open '': 'Invalid argument'
   {"timestamp":1728130117,"level":"ERROR","function":"LoadModel","line":186,"message":"llama.cpp unable to load model","model":""}
   Model failed to load with status code: 500
   Error: ?

Screenshots / Logs

(screenshot attached) cortex.log, cortex-cli.log

What is your OS?

  • [ ] MacOS
  • [X] Windows
  • [ ] Linux

What engine are you running?

  • [X] cortex.llamacpp (default)
  • [ ] cortex.tensorrt-llm (Nvidia GPUs)
  • [ ] cortex.onnx (NPUs, DirectML)

cloudherder avatar Oct 05 '24 12:10 cloudherder

Hi @cloudherder, for models import, an absolute path is currently required for --model_path. We will improve this soon. Apologies for the inconvenience.
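The empty path ('') in the error log is consistent with the relative path not being resolved. For example, an import with a full Windows path would look like this (the C:\models directory is just a placeholder; use wherever your .gguf file actually lives):

cortex-beta models import --model_id gemma-2b-Q8_0.gguf --model_path C:\models\gemma-2b-Q8_0.gguf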

vansangpfiev avatar Oct 05 '24 12:10 vansangpfiev

> Hi @cloudherder, for models import, an absolute path is currently required for --model_path. [...]

Thank you for your reply! You have built something great! I tested it with an absolute path. The results are shown below: (screenshot)

The following errors are recorded in the cortex.log file:

20241005 13:29:56.458000 UTC 10188 ERROR ggml_backend_cuda_buffer_type_alloc_buffer: allocating 2539.93 MiB on device 0: cudaMalloc failed: out of memory - llama_engine.cc:393
20241005 13:29:56.484000 UTC 10188 ERROR llama_model_load: error loading model: unable to allocate backend buffer - llama_engine.cc:393
20241005 13:29:56.484000 UTC 10188 ERROR llama_load_model_from_file: failed to load model

The three models I tested are 2.46 GB, 2.48 GB, and 7.06 GB; my laptop has 16 GB of memory, and llama.cpp's server.exe can load and run all three models normally.
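(One caveat on that comparison, worth noting: llama.cpp's server.exe offloads no layers to the GPU unless the -ngl flag is passed, so by default it loads the model entirely into system RAM. A CPU-only invocation equivalent to that default would be something like the sketch below; the flag name comes from llama.cpp's CLI and has not been re-verified against the exact build used here.)

server.exe -m gemma-2b-Q8_0.gguf --n-gpu-layers 0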

cloudherder avatar Oct 05 '24 14:10 cloudherder

@cloudherder Seems like you don't have enough VRAM. Please try setting the ngl of your model to 0 or 1. For example, with the model gemma-2b-Q8_0.gguf, you can check the model config by running:

cortex-beta models get gemma-2b-Q8_0.gguf

Then set the ngl to 1:

cortex-beta models update --model_id gemma-2b-Q8_0.gguf --ngl 1

Run cortex-beta models get gemma-2b-Q8_0.gguf to check that the config is updated, then try to start the model.
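For context, ngl is the number of model layers offloaded to GPU VRAM; ngl 0 keeps the whole model in system RAM. A sketch of the fields to look for in the models get output — field names assumed from Cortex's model.yml format, values illustrative only:

model: gemma-2b-Q8_0.gguf
engine: cortex.llamacpp
ngl: 33            # layers offloaded to GPU VRAM; lower this when cudaMalloc fails
ctx_len: 4096      # context length; also affects memory use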

Could you also share the output of the nvidia-smi command?

vansangpfiev avatar Oct 05 '24 14:10 vansangpfiev

> @cloudherder Seems like you don't have enough VRAM. Please try setting the ngl of your model to 0 or 1. [...]

Thanks for your help! Here is my test: (screenshot)

(screenshot)

The output of nvidia-smi.exe: (screenshot)

cloudherder avatar Oct 05 '24 15:10 cloudherder

Hi @cloudherder, apologies for the late response. Can you please set ngl = 0 and try again? Would you also mind sharing the logs from the run with ngl = 1?
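Following the update syntax above, that would be:

cortex-beta models update --model_id gemma-2b-Q8_0.gguf --ngl 0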

vansangpfiev avatar Oct 17 '24 02:10 vansangpfiev

Hi @cloudherder, we've released cortex v1.0.1 (release note). We'd love it if you could give cortex another go with the models you've downloaded.

To update to cortex v1.0.1 (or download from https://cortex.so/):

> cortex update
> cortex update --server

gabrielle-ong avatar Oct 25 '24 02:10 gabrielle-ong

@cloudherder - closing this stale issue. We've released Cortex 1.0.3 with bugfixes and a much improved UX. We're also working on recommending models based on your VRAM, planned for release in 2 sprints. #1108

gabrielle-ong avatar Nov 28 '24 07:11 gabrielle-ong