Depth tokenizer
Hi everyone, thanks for the nice work. I am considering using your pretrained depth tokenizer to extract precomputed tokens (features) for further training, and I have a few questions.
I cloned ml-4m and installed the diffusers library, but I get the error `AttributeError: module 'diffusers.models' has no attribute 'unet_2d_blocks'`. Could you specify the prerequisites for using your repo, and which diffusers version you used?
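In case it helps to see what I tried: if I understand the diffusers refactor correctly, newer releases moved the UNet block modules under `diffusers.models.unets`, which would explain the missing attribute. Below is the compatibility shim I have been experimenting with before importing the tokenizer code. This is only my own guess at a workaround, not something from your repo, so please correct me if the intended fix is simply pinning an older diffusers version.

```python
# Workaround I tried (my own guess, assuming the only incompatibility is that
# newer diffusers releases moved unet_2d_blocks under diffusers.models.unets).
import sys
import diffusers.models

if not hasattr(diffusers.models, "unet_2d_blocks"):
    # Newer diffusers keep this module at diffusers.models.unets.unet_2d_blocks
    from diffusers.models.unets import unet_2d_blocks

    # Alias it back to the old location so both attribute access and
    # "from diffusers.models.unet_2d_blocks import ..." resolve again.
    diffusers.models.unet_2d_blocks = unet_2d_blocks
    sys.modules["diffusers.models.unet_2d_blocks"] = unet_2d_blocks
```

That said, I would prefer to just pin the exact diffusers version you tested with, if you can share it.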
Also, how many tokens per image do we get from your pretrained depth tokenizer checkpoint?
Is the uploaded pretrained depth tokenizer an encoder-decoder model, or an encoder-only model that would directly give me the required tokens?
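To make these two questions concrete, here is roughly how I was planning to extract the tokens and check their count. The checkpoint id, class, and method names below are guesses on my part based on the README and Hugging Face pages, so please correct anything that is off.

```python
import torch
from fourm.vq.vqvae import DiVAE  # guessing this is the right class for the depth tokenizer

# Checkpoint id guessed from the Hugging Face hub page -- is this the one you mean?
tok = DiVAE.from_pretrained("EPFL-VILAB/4M_tokenizers_depth_8k_224-448").eval()

# Dummy normalized depth map, B x 1 x H x W (224x224 used here as an example resolution)
depth = torch.randn(1, 1, 224, 224)

with torch.no_grad():
    # I am assuming there is an encode/tokenize entry point like this that returns
    # discrete token indices; the method name is my guess.
    tokens = tok.tokenize(depth)

print(tokens.shape)  # the per-image token count I am asking about
```

In particular, I would like to know whether I can run only the encoder side when precomputing tokens, without loading the decoder.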
What normalization did you use for the depth data?
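For reference, this is what I am currently doing on my side before feeding depth to the tokenizer. It is purely my own guess, not taken from your code, so please tell me what the tokenizer actually expects.

```python
import torch

def normalize_depth(depth: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # My current guess: per-image standardization in log space.
    # Not taken from the ml-4m code -- asking whether this matches your preprocessing.
    d = torch.log(depth.clamp(min=eps))      # log-depth
    return (d - d.mean()) / (d.std() + eps)  # zero mean, unit std per sample
```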
Thanks a lot!