Ilya Chernikov
Hello! Faced the same problem with `box_annotator`; solved it like this:

```
import numpy as np

# Build a mask that is True only where class_id is not None,
# then filter all parallel detection arrays with it.
# (`== None` is intentional: it compares elementwise on the array,
# unlike `is None`.)
non_none_mask = np.where(detections.class_id == None, False, True)
detections.xyxy = detections.xyxy[non_none_mask]
detections.confidence = detections.confidence[non_none_mask]
detections.class_id = detections.class_id[non_none_mask]
```
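(If your `supervision` version supports boolean-mask indexing, `detections = detections[non_none_mask]` should filter all of these fields in one step; I haven't checked every release, so treat that as an assumption.)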
Facing the same issue with Qwen2.5.
Looks like this problem is related to `vllm`. I tried disabling `fast_inference` and the eval loop started working correctly (but very slowly, for sure). @danielhanchen FYI
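For reference, a minimal sketch of the workaround, assuming the usual Unsloth loading call (the model name and lengths below are placeholders, not from the original report):

```
from unsloth import FastLanguageModel

# fast_inference=True turns on the vLLM generation backend.
# Setting it to False falls back to the plain (much slower) path,
# which is what made the eval loop run correctly for me.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen2.5-7B-Instruct",  # placeholder
    max_seq_length=2048,                    # placeholder
    load_in_4bit=True,
    fast_inference=False,  # workaround for the vllm-related failure
)
```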
Hello! I'm facing the same RAM OOM issue with GRPO (Qwen2.5 14B model, A100 GPU). VRAM consumption is stable. My config:

```
Unsloth 2025.3.19: Fast Qwen2 patching.
Transformers: 4.51.2.
vLLM:...
```
> what if i cant downgrade pandas?

For me, only downgrading numpy worked.
Try the following: `RUN CMAKE_ARGS="-DGGML_CUDA=on -DCMAKE_CUDA_ARCHITECTURES=70" pip install llama-cpp-python`
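(`-DCMAKE_CUDA_ARCHITECTURES=70` targets compute capability 7.0, i.e. V100-class GPUs; if you're building for a different card, adjust it accordingly, e.g. `75` for T4 or `80` for A100.)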
hmm, weird. I still get the same error. I tried clearing the model cache and reinstalling unsloth:

```
~/.local/lib/python3.10/site-packages/unsloth/kernels/rope_embedding.py in forward(ctx, Q, cos, sin, position_ids)
    171         half = Q.shape[-1]//2
    172...
```
@macksin Downgrading to `unsloth==2025.1.5` and `transformers==4.47.1` fixed my problem with Mistral3. @danielhanchen FYI
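If anyone else wants to try the same pin: `pip install "unsloth==2025.1.5" "transformers==4.47.1"` (adjust for your environment; I've only verified this combination with Mistral3).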