Kohya S.
Commenting out these two lines may work: https://github.com/salesforce/BLIP/blob/3a29b7410476bf5f2ba0955827390eb6ea1f4f9d/models/blip.py#L131-L132 EDIT: After commenting, I noticed yenlianglai had already written this. Recent versions of transformers seem to do `repeat_interleave` automatically in `_expand_dict_for_generation`. This fix https://github.com/huggingface/transformers/pull/21624...
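As a rough illustration of what the automatic expansion does (a minimal sketch using numpy's `np.repeat`, not the actual transformers or BLIP code): with beam search, encoder outputs such as image embeddings are repeated once per beam along the batch dimension, so each beam sees the same conditioning.

```python
import numpy as np

# Hypothetical example: 2 samples, embedding dim 2, 3 beams.
# The names and values here are illustrative only.
image_embeds = np.array([[1.0, 2.0],   # sample 0
                         [3.0, 4.0]])  # sample 1
num_beams = 3

# Equivalent in spirit to torch.repeat_interleave(image_embeds, num_beams, dim=0):
# each sample's embedding is repeated num_beams times, in order.
expanded = np.repeat(image_embeds, num_beams, axis=0)

print(expanded.shape)  # (6, 2)
```

If BLIP's own manual repetition (the two lines linked above) runs in addition to this automatic expansion, the batch is expanded twice, which is why commenting them out helps with recent transformers versions.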
Thank you for this PR, and sorry for the delay. I would like to confirm one thing: am I correct in understanding that this PR is to replace #1231 (we...
Thank you for the clarification! I will merge this soon!
Sorry for the delay. I added `--log_config` option to enable this feature. I appreciate your understanding.
Thank you for this! This is a great idea. However, it appears that some users train the model in a non-interactive environment. In those cases, the script...
Thank you for this. In my environment, LLLite training works fine without this fix. However, this should be fixed anyway. > This seems to be very broken, the training script itself...
Thank you for this! However, when the model trains away from the images, I think it will eventually break down, because that is what happens with any broken image...
That makes sense! I will merge this soon :)
Thank you for this PR. This seems very simple. However, I wonder how this works if I don't use wandb? Also, some arguments (the wandb and Hugging Face IDs,...
I've added a temporary workaround for this to the dev branch. Please try it.