Aritra Roy Gosthipaty
@Rocketknight1 The changes you have suggested are taken care of! @ydshieh the tests pass as well! @gante @amyeroberts should we talk about the deprecated code used [here](https://github.com/huggingface/transformers/pull/18020#discussion_r974312323), or should we...
FYI: The failing tests are not related to the `GroupViT` model.
@gante over to you now! 🤗
Thanks @gante, I will give it a try myself!
@gante while running `transformers-cli pt-to-tf --model-name nvidia/groupvit-gcc-yfcc` I get the following error:

```
List of maximum hidden layer differences above the threshold (5e-05):
text_model_output.hidden_states[1]: 9.155e-05
text_model_output.hidden_states[2]: 9.155e-05
text_model_output.hidden_states[3]: 9.155e-05
text_model_output.hidden_states[4]: ...
```
@gante thanks for the help. The PR for the TF weights on the HF Hub is [here](https://huggingface.co/nvidia/groupvit-gcc-yfcc/discussions/1).
@gante I have taken care of the `copied from` inconsistency. I think we are good to go.
@ypereirars you are right! Good catch :)
That was part of the Keras Sprint. CC: @martin-gorner
Hey @joao-alcindo thanks for your interest in the work! You could access the encoder and decoder individually like so 👇

```python
encoder = mae_model.encoder
decoder = mae_model.decoder
```

Once you...