Igor Lashkov

Showing 8 comments by Igor Lashkov

@JakeWharton What about using these converters?

```kotlin
fun Number.toDp(displayMetrics: DisplayMetrics): Float =
    this.toFloat() / displayMetrics.density

fun Number.toSp(displayMetrics: DisplayMetrics): Float =
    this.toFloat() / displayMetrics.scaledDensity

fun Number.toPx(displayMetrics: DisplayMetrics, fromSp: Boolean = false):...
```
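The snippet above is cut off, so here is a self-contained sketch of what such converters could look like. `Metrics` is a hypothetical stand-in for `android.util.DisplayMetrics` so the code runs on plain Kotlin/JVM, and the multiplier direction in `toPx` is my assumption, since the original body is truncated:

```kotlin
// Hypothetical stand-in for android.util.DisplayMetrics,
// so this sketch runs outside Android.
data class Metrics(val density: Float, val scaledDensity: Float)

// px -> dp: divide by the density scale factor
fun Number.toDp(m: Metrics): Float = this.toFloat() / m.density

// px -> sp: divide by the font scale factor
fun Number.toSp(m: Metrics): Float = this.toFloat() / m.scaledDensity

// dp (or sp) -> px: multiply by the appropriate scale factor
fun Number.toPx(m: Metrics, fromSp: Boolean = false): Float =
    this.toFloat() * (if (fromSp) m.scaledDensity else m.density)

fun main() {
    val m = Metrics(density = 2f, scaledDensity = 2.5f)
    println(48.toDp(m)) // 24.0
    println(30.toSp(m)) // 12.0
    println(16.toPx(m)) // 32.0
}
```

Accepting `Number` as the receiver lets both `Int` and `Float` callers share one extension, which appears to be the point of the proposal.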

@romainguy I refined concrete types for these density functions; could you explain why they are not applicable to negative values? (tested myself)

```kotlin
fun Int.toDp(displayMetrics: DisplayMetrics): Float = this.toFloat() /...
```

@romainguy Possible fix; rounding should be fine:

```kotlin
fun Float.spToPx(displayMetrics: DisplayMetrics): Int =
    (this * displayMetrics.scaledDensity).round()

fun Float.dpToPx(displayMetrics: DisplayMetrics): Int =
    (this * displayMetrics.density).round()

private fun Float.round(): Int = (if(this...
```
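The custom `round()` above is truncated, so the exact logic is unknown. A plausible reading of the negative-value concern is that `Math.round` rounds halves toward positive infinity (`-10.5` becomes `-10`), whereas a symmetric round-half-away-from-zero treats negative offsets the same as positive ones. A self-contained sketch under that assumption (`Metrics` and `symmetricRound` are my own names, not from the original):

```kotlin
// Hypothetical stand-in for android.util.DisplayMetrics.
data class Metrics(val density: Float, val scaledDensity: Float)

fun Float.dpToPx(m: Metrics): Int = (this * m.density).symmetricRound()
fun Float.spToPx(m: Metrics): Int = (this * m.scaledDensity).symmetricRound()

// Round half away from zero, so negative values mirror positive ones:
// 10.5 -> 11 and -10.5 -> -11 (Math.round would give -10).
private fun Float.symmetricRound(): Int =
    if (this < 0f) (this - 0.5f).toInt() else (this + 0.5f).toInt()

fun main() {
    val m = Metrics(density = 2.625f, scaledDensity = 2.625f)
    println(4f.dpToPx(m))    // 4 * 2.625 = 10.5  -> 11
    println((-4f).dpToPx(m)) // -10.5             -> -11
}
```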

> [@iglaweb](https://github.com/iglaweb) we recently fixed one bug when saving Qwen-VL models and it is in the latest patch release. Can you try to update the transformers version? > > Prob...

@zucchini-nlp Thanks a lot for the snippet. Once I changed `model.save_pretrained()` to `trainer.save_model` together with `args.output_dir` for the training, I was able to run inference successfully on a fine-tuned...

@zucchini-nlp Thanks a lot for the quick response. As you mentioned, the problem was in the message template: I used `"path": video_path,` instead of `"video": video_path`. Right now, the training and...
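For context, a sketch of the message entry with the corrected key, assuming the qwen-vl-utils style content list; the video path and prompt text are placeholders:

```json
[
  {
    "role": "user",
    "content": [
      {"type": "video", "video": "/path/to/video.mp4"},
      {"type": "text", "text": "Describe this video."}
    ]
  }
]
```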

@zucchini-nlp > My question is, if we pass the processor as a processing class, does that fail during training (I believe it shouldn't)? And yeah, using the official processor will...

@zucchini-nlp I switched to `attn_implementation='flash_attention_2'` and then I got the following error: `AttributeError: 'Qwen2_5_VLVisionAttention' object has no attribute 'is_causal'` But it looks like it is a known issue in 4.53.0...