paolovic
### System Info

```shell
Name: optimum
Version: 1.18.0.dev0
Name: transformers
Version: 4.36.0
Name: auto-gptq
Version: 0.6.0.dev0+cu118
CUDA Version: 11.8
Python 3.8.17
```

### Who can help?

_No response_

### Information...
Render specific frames, panoramic views from specific poses, or frames uniformly sampled from a camera path
Hi, as described in #2811 I added the following functionality:

* Enter a desired eye position in world coordinates, and optionally a desired target, and get in return the images...
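The core of turning an eye position and an optional target into a renderable pose is building a camera-to-world ("look-at") matrix. Below is a minimal sketch of that step; `look_at` is a hypothetical helper, not nerfstudio's actual API, and it assumes an OpenGL-style convention (camera looks down −Z, +X right, +Y up) with world +Z as the default up vector.

```python
import numpy as np

def look_at(eye, target, up=(0.0, 0.0, 1.0)):
    """Build a 4x4 camera-to-world matrix from an eye position and a target.

    Hypothetical helper for illustration; the convention assumed here is
    OpenGL-style: camera looks down -Z, +X is right, +Y is up.
    """
    eye = np.asarray(eye, dtype=np.float64)
    target = np.asarray(target, dtype=np.float64)
    up = np.asarray(up, dtype=np.float64)

    forward = target - eye
    forward /= np.linalg.norm(forward)      # viewing direction
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)          # camera +X axis
    true_up = np.cross(right, forward)      # re-orthogonalized +Y axis

    c2w = np.eye(4)
    c2w[:3, 0] = right
    c2w[:3, 1] = true_up
    c2w[:3, 2] = -forward                   # camera looks down -Z
    c2w[:3, 3] = eye                        # camera origin in world coords
    return c2w
```

Given such a matrix per requested eye/target pair, rendering the frames reduces to feeding the resulting poses to the existing camera-path renderer.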
### Your current environment ```text Collecting environment information... PyTorch version: 2.4.0+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OS: Red Hat...
Hi, in theory I could get enough compute to host and quantize current models, but it would be provided as multiple VMs, each with 2 GPUs of 48 GB VRAM each. Using...
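Whether such a VM (2 × 48 GB = 96 GB) can host a given model comes down to a back-of-the-envelope weight-memory estimate: parameters × bits per weight / 8, plus extra headroom for the KV cache and framework overhead that this sketch deliberately ignores. The helper below is illustrative only.

```python
def weight_memory_gb(num_params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight memory in GB for a model.

    num_params_billion: parameter count in billions (e.g. 70 for a 70B model).
    bits_per_weight: e.g. 16 for fp16, 4 for 4-bit quantization.
    Ignores KV cache, activations, and framework overhead.
    """
    return num_params_billion * 1e9 * bits_per_weight / 8 / 1e9

# Example: a 70B model needs ~140 GB in fp16 (exceeds 96 GB across 2 GPUs),
# but only ~35 GB at 4 bits, which fits on a single 48 GB GPU.
```

Quantizing, however, often needs more memory than serving, since many methods load the full-precision weights layer by layer during calibration.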
### Reminder

- [x] I have read the above rules and searched the existing issues.

### System Info

```bash
(llamafactory_env) [localhost.com LLaMA-Factory]$ llamafactory-cli env
- `llamafactory` version: 0.9.2.dev0
- Platform:...
```
Hi, unfortunately, I get this error:

```bash
ipdb> n
==((====))==  Unsloth 2024.12.4: Fast Llama patching. Transformers:4.47.0.
   \\   /|    GPU: NVIDIA L40S-48C. Max memory: 47.712 GB. Platform: Linux.
O^O/ \_/ \...
```
Co-authored-by: elementary-particle

Completes the following stale [PR](https://github.com/vllm-project/vllm/pull/11554): "This is the PR for the RFC https://github.com/vllm-project/vllm/issues/11522. Currently we are building a draft of simpler tool parsers using streaming JSON parsing libraries...
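The core idea behind streaming tool parsing is to extract complete JSON values from a token stream as they arrive, while holding back any trailing partial value until more tokens come in. The sketch below shows that idea using only the Python stdlib (`json.JSONDecoder.raw_decode`); the PR itself uses dedicated streaming-JSON libraries, so this is an illustration of the approach, not the PR's implementation.

```python
import json

def extract_complete_objects(buffer: str):
    """Parse complete JSON values from the front of `buffer`.

    Returns (values, remainder): the fully parsed values, plus any trailing
    partial JSON that must wait for more streamed tokens.
    """
    decoder = json.JSONDecoder()
    values = []
    idx = 0
    n = len(buffer)
    while idx < n:
        # Skip whitespace between concatenated values.
        while idx < n and buffer[idx].isspace():
            idx += 1
        if idx >= n:
            break
        try:
            value, end = decoder.raw_decode(buffer, idx)
        except json.JSONDecodeError:
            break  # partial value: keep the tail for the next chunk
        values.append(value)
        idx = end
    return values, buffer[idx:]
```

In a streaming loop, each model-output chunk is appended to the buffer and the function is called again; completed values are emitted immediately while the partial remainder carries over.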