PromptEngineer
@Anas-Dew @achillez Can you share your hardware configuration and memory utilization while the code is running?
Can you look at this (update to the readme) and see if that helps with the issue?
@teleprint-me thanks for the updates. I will have a look at it tonight. Just noticed one thing which is probably worth looking at, if someone tries to use another `model_id`...
@teleprint-me running into this while trying to run the localGPT.run, here is the full trace: ` (localgpt-dev) prompt@Prompts-MBP localgpt-dev % python -m localGPT.run --device_type cpu 2023-06-28 17:59:58,520 - INFO -...
@teleprint-me I think we will need to add the `device_type` check and default to `load_huggingface_llama_model` if the `device_type` is `mps` or `cpu`. That seems to work. I haven't tested...
@teleprint-me we probably want to simplify the implementation a bit more. I was using the following parameters: `python -m localGPT.run --device_type mps` and `python -m localGPT.run --device_type cpu` The readme...
@teleprint-me thanks, it works for `cpu`. However, using `load_huggingface_model` for `mps` doesn't work; need to look into why. `MPS` seems to work with `load_huggingface_llama_model`. Can we have a check...
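The check being discussed above could be sketched roughly as follows. This is only an illustration based on the comments in this thread: the loader names `load_huggingface_model` and `load_huggingface_llama_model` come from the discussion, but their signatures, the placeholder bodies, and the exact dispatch rule (`mps` falls back to the llama loader, everything else uses the generic loader) are assumptions, not the actual localGPT implementation.

```python
# Hypothetical sketch of the device_type check discussed in this thread.
# The real loaders would use transformers / llama loading code; these
# placeholders only illustrate the dispatch.

def load_huggingface_model(model_id: str) -> str:
    # Placeholder for the generic HuggingFace loader (works on cpu/cuda).
    return f"huggingface:{model_id}"


def load_huggingface_llama_model(model_id: str) -> str:
    # Placeholder for the llama-specific loader (reported to work on mps).
    return f"llama:{model_id}"


def select_loader(device_type: str):
    # Per the testing reported above: cpu works with the generic loader,
    # while mps needs the llama loader.
    if device_type == "mps":
        return load_huggingface_llama_model
    return load_huggingface_model


if __name__ == "__main__":
    loader = select_loader("mps")
    print(loader("some-model-id"))
```

With this shape, adding another backend later would just mean extending the dispatch in `select_loader` rather than touching each call site.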
@teleprint-me I agree, let me have a look at it. Right now we need the code to work on `cuda`, `cpu` and `mps`, as users are expected to have...
@teleprint-me I appreciate all the efforts you are putting into this. Grateful to you and others for that. I like the idea of plugin-based architecture. That will make things more...
@teleprint-me I was testing it after your recent changes and it seems unable to create the index. I am getting the following error trace: Enter a query:...