amyeroberts
@sgugger @NielsRogge @alaradirik @LysandreJik Adding you all for a first-pass review for the draft ImageProcessor work. This PR is failing because it's not safely importing e.g. `PIL` if it's not...
@alaradirik @sgugger I've now merged in the stacked PRs above this one. This PR has the transforms library and the image processor for GLPN. Thanks for all of your reviews...
Hi @MadElf1337 do you have any updates? Are you still planning on contributing this model?
Great - glad to hear you're still interested :) As @NielsRogge pointed out, data2vec vision is an extension of BEiT. This means the porting should be a lot simpler! In...
@ariG23498 The parsing issue seen in the `check_repository_consistency` tests is arising because of `@keras_serializable` decorators being below `# Copied from` statements. If I check out your branch, I can run...
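To illustrate the ordering problem (a simplified sketch; the real check lives in the repo's consistency scripts and is more involved): the check looks for a `# Copied from ...` comment sitting directly above the `class` or `def` it annotates, so a decorator placed between the two interrupts the pattern it parses.

```python
import re

# Simplified stand-in for the repo-consistency check: it expects a
# `# Copied from ...` comment immediately followed by a class or function
# definition. A decorator in between breaks this pattern.
COPIED_FROM = re.compile(r"#\s*Copied from [\w.]+\n(class|def)\s")

# Decorator below the comment: the check cannot pair the comment
# with the class it annotates.
broken = (
    "# Copied from transformers.models.beit.modeling_tf_beit.TFBeitMainLayer\n"
    "@keras_serializable\n"
    "class TFData2VecVisionMainLayer:\n"
    "    pass\n"
)

# Decorator above the comment: the comment sits directly on the class,
# so the check can parse it.
fixed = (
    "@keras_serializable\n"
    "# Copied from transformers.models.beit.modeling_tf_beit.TFBeitMainLayer\n"
    "class TFData2VecVisionMainLayer:\n"
    "    pass\n"
)

print(COPIED_FROM.search(broken) is not None)  # False: decorator interrupts the pattern
print(COPIED_FROM.search(fixed) is not None)   # True: comment directly above the class
```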
> Looks good to me! If the changes per model are small enough, it would probably be best to change them all in the same PR, rather than doing individual...
Hi @MadElf1337 - thanks for opening a PR and for adding this model! Outline looks good. As a quick overview, I see two main things that you'll want to add...
@MadElf1337 As discussed on issue #18085 [here](https://github.com/huggingface/transformers/issues/18085#issuecomment-1210544100) for this model, we want to copy the relevant code in data2vec to `modeling_tf_beit.py`, then add the necessary `# Copied from` statements in...
If you follow the same structure as the PyTorch data2vec vision and BEiT models, including the `# Copied from` statements, then almost all of the architecture considerations will be taken care of...
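As a rough sketch of how the `# Copied from ... with Old->New` convention works (function and variable names here are hypothetical, not the library's actual helpers): the consistency check applies the textual rename to the source class's code and verifies the copied class matches, so you get BEiT's implementation "for free" under the new model's name.

```python
def apply_copied_from_rename(source_code: str, rule: str) -> str:
    """Apply a `# Copied from ... with Old->New` rename rule to copied code.

    A simplified mimic of what the repo-consistency check does: replace
    every occurrence of the old name with the new one before comparing
    the copied code against the original.
    """
    old, new = rule.split("->")
    return source_code.replace(old, new)


# Renaming BEiT code for the data2vec vision port.
beit_code = "class BeitSelfOutput:\n    pass\n"
print(apply_copied_from_rename(beit_code, "Beit->Data2VecVision"))
# class Data2VecVisionSelfOutput:
#     pass
```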
@alaradirik @NielsRogge Could you (re-)review?