unclemusclez

Results: 27 issues by unclemusclez

I would like to stack two separate chat windows when I look behind my controller while live streaming. Is this possible?

Question

### Is there an existing issue for the same bug?

- [X] I have checked the troubleshooting document at https://opendevin.github.io/OpenDevin/modules/usage/troubleshooting
- [X] I have checked the existing issues.

### Describe...

bug

### Is there an existing issue for the same bug?

- [X] I have checked the troubleshooting document at https://opendevin.github.io/OpenDevin/modules/usage/troubleshooting
- [X] I have checked the existing issues.

### Describe...

bug
severity:low
waiting for input

The nodes connect, but crash after roughly 3 seconds. Server:

```
sudo main simple-server --weights-float-type q40 --buffer-float-type q40 --nthreads 4 \
  --model ~/dllama_meta-llama-3-8b_q40.bin \
  --tokenizer ~/dllama-llama3-tokenizer.t \
  --workers 192.168.2.212:9998 192.168.2.213:9998 192.168.2.214:9998 192.168.2.215:9998 192.168.2.216:9998...
```
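For context, each worker node has to be listening before the server command above is run. A sketch of the matching worker invocation, assuming the distributed-llama `main` binary is built in the current directory on each Pi and that port 9998 matches the `--workers` list above:

```shell
# Run on each worker Pi (192.168.2.212 through .216) before starting the server.
# --port must match the port given in the server's --workers list;
# --nthreads is typically set to the number of physical cores.
sudo ./main worker --port 9998 --nthreads 4
```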

```
ubuntu@ubuntu:~/llama3/Meta-Llama-3-8B-Instruct$ python3 ../../distributed-llama/converter/convert-llama.py ./ q40
Model name:
Target float type: q40
Target file: dllama__q40.bin
Traceback (most recent call last):
  File "/home/ubuntu/llama3/Meta-Llama-3-8B-Instruct/../../distributed-llama/converter/convert-llama.py", line 119, in <module>
    convert(modelPath, outputFileName, targetFloatType)
  File "/home/ubuntu/llama3/Meta-Llama-3-8B-Instruct/../../distributed-llama/converter/convert-llama.py",...
```

Is there any way that main and worker could be separated, so I can use a cluster of 8 RPi 3B+ boards for the compute while the scheduling is offloaded to another...

ZLUDA only works with xformers on PyTorch 2.2.1 and above. Is there any way to raise the version of the dependency?

The ipython workspace seems to be generated with `app.py` already present on the filesystem, causing the agent to focus on that as the priority task.

> Forget about creating a list...

bug

WSL2 + ROCm

xformers @ git+https://github.com/facebookresearch/xformers.git@133d7f1cb0a050f9ee8b8cd02fd9c906247e6c4e

```
got prompt
[rgthree] Using rgthree's optimized recursive execution.
[rgthree] First run patching recursive_output_delete_if_changed and recursive_will_execute.
[rgthree] Note: If execution seems broken due to forward...
```

### 🐛 Describe the bug

```bash
In function ‘make_unique’,
    inlined from ‘allocate’ at /home/musclez/ComfyUI/opt/rocm/pytorch/torch/csrc/jit/runtime/static/impl.h:1129:47,
    inlined from ‘__ct ’ at /home/musclez/ComfyUI/opt/rocm/pytorch/torch/csrc/jit/runtime/static/impl.h:1114:41,
    inlined from ‘__ct_base ’ at /home/musclez/ComfyUI/opt/rocm/pytorch/torch/csrc/jit/runtime/static/impl.cpp:2260:7:
/usr/local/include/c++/13.3.1/bits/unique_ptr.h:1085:30: warning: argument 1...
```