
Is it possible to run 65B with 32 GB of RAM?

Open TerraTR opened this issue 1 year ago • 0 comments

I already quantized my files with this command:

```
./quantize ./ggml-model-f16.bin.X E:\GPThome\LLaMA\llama.cpp-master-31572d9\models\65B\ggml-model-q4_0.bin.X 2
```

The first time, it reduced each file from 15.9 GB to 4.9 GB; when I tried to run it again, nothing changed. I then executed:

```
./main -m ./models/65B/ggml-model-q4_0.bin -n 128 --interactive-first
```

Once everything is loaded and I enter my prompt, my memory usage climbs to 98% (25 GB used by main.exe), and I wait dozens of minutes with nothing appearing. Here's an example:

```
PS E:\GPThome\LLaMA\llama.cpp-master-31572d9> ./main -m ./models/65B/ggml-model-q4_0.bin -n 128 --interactive-first
main: seed = 1679761762
llama_model_load: loading model from './models/65B/ggml-model-q4_0.bin' - please wait ...
llama_model_load: n_vocab = 32000
llama_model_load: n_ctx   = 512
llama_model_load: n_embd  = 8192
llama_model_load: n_mult  = 256
llama_model_load: n_head  = 64
llama_model_load: n_layer = 80
llama_model_load: n_rot   = 128
llama_model_load: f16     = 2
llama_model_load: n_ff    = 22016
llama_model_load: n_parts = 8
llama_model_load: ggml ctx size = 41477.73 MB
llama_model_load: memory_size = 2560.00 MB, n_mem = 40960
llama_model_load: loading model part 1/8 from './models/65B/ggml-model-q4_0.bin'
llama_model_load: .......................................................................................... done
llama_model_load: model size = 4869.09 MB / num tensors = 723
llama_model_load: loading model part 2/8 from './models/65B/ggml-model-q4_0.bin.1'
llama_model_load: .......................................................................................... done
llama_model_load: model size = 4869.09 MB / num tensors = 723
llama_model_load: loading model part 3/8 from './models/65B/ggml-model-q4_0.bin.2'
llama_model_load: .......................................................................................... done
llama_model_load: model size = 4869.09 MB / num tensors = 723
llama_model_load: loading model part 4/8 from './models/65B/ggml-model-q4_0.bin.3'
llama_model_load: .......................................................................................... done
llama_model_load: model size = 4869.09 MB / num tensors = 723
llama_model_load: loading model part 5/8 from './models/65B/ggml-model-q4_0.bin.4'
llama_model_load: .......................................................................................... done
llama_model_load: model size = 4869.09 MB / num tensors = 723
llama_model_load: loading model part 6/8 from './models/65B/ggml-model-q4_0.bin.5'
llama_model_load: .......................................................................................... done
llama_model_load: model size = 4869.09 MB / num tensors = 723
llama_model_load: loading model part 7/8 from './models/65B/ggml-model-q4_0.bin.6'
llama_model_load: .......................................................................................... done
llama_model_load: model size = 4869.09 MB / num tensors = 723
llama_model_load: loading model part 8/8 from './models/65B/ggml-model-q4_0.bin.7'
llama_model_load: .......................................................................................... done
llama_model_load: model size = 4869.09 MB / num tensors = 723

system_info: n_threads = 4 / 12 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 0 | VSX = 0 |

main: prompt: ' '
main: number of tokens in prompt = 2
     1 -> ''
 29871 -> ' '

main: interactive mode on.
sampling parameters: temp = 0.800000, top_k = 40, top_p = 0.950000, repeat_last_n = 64, repeat_penalty = 1.100000

== Running in interactive mode. ==
 - Press Ctrl+C to interject at any time.
 - Press Return to return control to LLaMa.
 - If you want to submit another line, end your input in '\'.

how to become rich
```
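For what it's worth, the figures in the log above already suggest why this hangs: a back-of-the-envelope estimate (a sketch only, using the part sizes and KV-cache size printed by llama_model_load) puts the working set well above 32 GB, so the OS ends up paging to disk:

```python
# Rough memory estimate built from the numbers printed in the log above.
n_parts = 8
part_mb = 4869.09        # "model size = 4869.09 MB" reported for each of the 8 parts
kv_cache_mb = 2560.00    # "memory_size = 2560.00 MB" (KV cache for n_ctx = 512)

weights_mb = n_parts * part_mb            # quantized weights across all parts
total_mb = weights_mb + kv_cache_mb       # roughly the "ggml ctx size = 41477.73 MB" line

print(f"weights: {weights_mb / 1024:.1f} GiB")
print(f"total:   {total_mb / 1024:.1f} GiB")
print(f"exceeds 32 GiB RAM: {total_mb > 32 * 1024}")
```

Since the total is around 40 GiB against 32 GB of physical RAM, main.exe sits at ~98% memory and the machine spends its time swapping rather than generating tokens.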

TerraTR · Mar 25 '23 17:03