I am not even seeing a True or False response; it just drops straight back to the command prompt.
C:\Windows\System32>interpreter --local
Open Interpreter will use Code Llama for local execution. Use your arrow keys to set up the model.
[?] Parameter count (smaller is faster, larger is more capable):
 > 34B
   7B
   13B
[?] Quality (smaller is faster, larger is more capable):
 > Small | Size: 13.2 GB, Estimated RAM usage: 15.7 GB
   Medium | Size: 18.8 GB, Estimated RAM usage: 21.3 GB
   Large | Size: 33.4 GB, Estimated RAM usage: 35.9 GB
   See More
[?] Use GPU? (Large models might crash on GPU, but will run more quickly) (Y/n): y
Model found at C:\Users\Brane\AppData\Local\Open Interpreter\Open Interpreter\models\codellama-34b-instruct.Q2_K.gguf
ggml_init_cublas: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3050 Laptop GPU, compute capability 8.6
C:\Windows\System32>
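Note the numbers above: the 34B Q2_K model is estimated to need about 15.7 GB of RAM, while an RTX 3050 Laptop GPU has only 4 GB of VRAM, so the process may simply be killed during load rather than printing a traceback. To surface the real error, the same GGUF can be loaded directly with llama-cpp-python. This is only a diagnostic sketch under assumptions: that llama-cpp-python is what Open Interpreter uses for local models, and that it is installed (pip install llama-cpp-python); the model path is copied from the output above.

# Sketch: load the same model outside Open Interpreter to see the actual error.
from llama_cpp import Llama

llm = Llama(
    model_path=r"C:\Users\Brane\AppData\Local\Open Interpreter"
               r"\Open Interpreter\models\codellama-34b-instruct.Q2_K.gguf",
    n_gpu_layers=0,  # start fully on CPU; a 4 GB GPU cannot hold a 34B model
    verbose=True,    # print llama.cpp load diagnostics instead of exiting silently
)
out = llm("Q: Is 4 greater than 2, True or False? A:", max_tokens=8)
print(out["choices"][0]["text"])

If this also exits with no traceback, the machine is most likely running out of memory while loading the model; picking the 7B option from the wizard instead of 34B would confirm that.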