Bendr Radrigues
Hello folks, I'm trying to run a private swarm on 7 Volta-generation GPUs. As suggested by the docs, I've set torch_dtype to float16 and NUM_BLOCKS to 10 (these are 32 GB GPUs)...
Hey folks, thank you for developing this amazing project! Can one run Petals with the BLOOMChat model? https://huggingface.co/sambanovasystems/BLOOMChat-176B-v1 Does one need to convert it? Can this be done locally if I...
**What problem or use case are you trying to solve?** When the environment where OpenDevin runs is behind a proxy, the sandbox has trouble accessing the internet. Things like pip install...
### Is there an existing issue for the same bug? - [X] I have checked the troubleshooting document at https://opendevin.github.io/OpenDevin/modules/usage/troubleshooting - [X] I have checked the existing issues. ### Describe...
- Most issues are due to the fact that the 250880x14336 embedding layer has too many elements to fit into a signed 32-bit integer
- The above affects the main, quantize, and also the ggml code -...
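A quick sanity check (a sketch written for this note, not code from the issue) shows why the element count of that embedding layer overflows a signed 32-bit size field, and what a wrapped C `int` would report:

```python
# BLOOM's embedding matrix: vocab_size x hidden_size
vocab_size, hidden_size = 250880, 14336
n_elements = vocab_size * hidden_size  # 3,596,615,680 elements

INT32_MAX = 2**31 - 1  # 2,147,483,647: largest signed 32-bit value

print(n_elements > INT32_MAX)  # True: does not fit in a signed 32-bit int

# Simulate the two's-complement wraparound a 32-bit C int would produce
wrapped = n_elements & 0xFFFFFFFF
if wrapped >= 2**31:
    wrapped -= 2**32
print(wrapped)  # -698351616: the size goes negative after wrapping
```

Any code path that stores this count in `int` (rather than `int64_t`/`size_t`) will see a negative size, which matches the symptoms across the main, quantize, and ggml code.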
Excellent project! I've hit an issue when trying to test gpt-researcher with a locally running llama3. When using Firefox (125.0.2) on Linux, I see that on long research runs the browser closes...
When testing gpt-researcher with a local llama3, I found that extract_headers will sometimes throw an exception here:
```
if line.startswith("
```
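For context, a defensive variant of such a header scan could look like the sketch below. This is hypothetical code, not gpt-researcher's actual implementation: the function name `extract_markdown_headers` and the `#` prefix convention are assumptions; the point is guarding the input before calling `startswith`:

```python
def extract_markdown_headers(text):
    """Collect markdown-style header lines, tolerating None or odd input."""
    headers = []
    for line in (text or "").splitlines():
        if not isinstance(line, str):  # defensive: skip unexpected types
            continue
        if line.lstrip().startswith("#"):  # markdown headers start with '#'
            headers.append(line.strip())
    return headers

print(extract_markdown_headers("# Title\nbody\n## Section"))
# ['# Title', '## Section']
```

The `(text or "")` guard means a `None` response from the model yields an empty list instead of an `AttributeError`.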
Having done some testing, I wonder how one influences the quality of the report via configuration, and what the best practices are, if any. I.e., what is the impact of various...
Testing gpt-researcher with llama3, I found that 3 times out of 4 llama3 will respond to generate_search_queries_prompt with JSON plus some extra verbiage. Not sure it is worth changing...
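One common workaround for this failure mode (a sketch, not gpt-researcher's actual handling; the regex-based approach and function name are assumptions) is to pull the first JSON array or object out of the model's free-form reply before parsing:

```python
import json
import re

def extract_first_json(reply):
    """Find a JSON array/object embedded in free-form model output."""
    match = re.search(r"(\[.*\]|\{.*\})", reply, re.DOTALL)
    if match is None:
        raise ValueError("no JSON found in model reply")
    return json.loads(match.group(1))

reply = 'Sure! Here are the queries:\n["llama3 benchmarks", "llama3 context length"]'
print(extract_first_json(reply))
# ['llama3 benchmarks', 'llama3 context length']
```

The greedy `.*` can over-match if the trailing verbiage itself contains brackets, so a production version would want a proper brace-matching scan, but this is often enough to absorb chat-style preambles from local models.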