Chris Taylor

88 comments by Chris Taylor

No, I think it makes more sense in app.py for use with Gradio containers. I want to be able to just deploy this, not manually run a different app.

I don't see your point; either way you'd want to use `CUDA_VISIBLE_DEVICES=0` to do what you're saying, so you might as well just have one script.
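
For example, a minimal sketch of what I mean (assuming the repo's `app.py` entry point):

```bash
# Restrict the process to GPU 0; the script itself needs no changes
CUDA_VISIBLE_DEVICES=0 python app.py
```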

And for the Docker container you can enforce single-GPU usage like this:

```bash
docker run -it -p 43839:43839 --platform=linux/amd64 --gpus '"device=0"' \
  -v $HOME/models/:/workspace/instantmesh/models instantmesh
```
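
(The `'"device=0"'` quoting follows Docker's docs for device lists; for a single device, plain `--gpus device=0` also works.)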

So there's no reason to have a ton of ugly code duplication for this feature. Even if you wanted to select it via script features, it's better to have an...

Sometimes it can lead to problems, but I tested it and it's working great. Also, I regularly run 12.4.1 on all my Ubuntu GPU servers, so I have some trust...

In my case, using the Prusa V2 enclosure to print ASA, I was seeing frequent Y-axis crashes above a certain Z level. The issue was caused by having the printer...

Better cooling also helped me a lot, though it didn't completely resolve the issue. I added a fan to the enclosure. My final solution will be to just not print at high temperatures...

I was printing the Voron parts out of ASA. On some of the simpler (boxy) parts it didn't have trouble; on other models it seems to happen consistently...

I get this error trying to quantize with the llama_quantize.py script:

```
root@e0e306bfeaaa:~/TensorRT-LLM/examples/model_api# python3 llama_quantize.py --hf_model_dir /models/Meta-Llama-3-8B-Instruct/ --cache_dir cache -c
```

```
[TensorRT-LLM][ERROR] 3: [executionContext.cpp::setInputShape::2309] Error Code 3: API Usage...
```

Also seeing this error: `Error while fetching server API version`. The software seems to be broken at the moment.