New examples requested
Hi everyone, @svekars and I are looking to increase the number of new contributions to pytorch/examples. This might be especially interesting to you if you've never contributed to an open source project before.
At a high level, we're looking for new interesting models.
So here's what you need to do:
- Check out our contributing guide: https://github.com/pytorch/examples/blob/main/CONTRIBUTING.md
- Pick a model idea (I've listed a few below) and comment on this task so others know you're working on it
- Implement your model from scratch using PyTorch; no external dependencies are allowed, to keep the examples as educational as possible
Your implementation needs to include:
- A folder with your code, which needs to define:
- Your model architecture
- Training code
- Evaluation code
- An argparser
- Make sure your script runs in CI so it doesn't break in the future, by adding it to run_python_examples.sh
- A README describing any usage instructions
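To make the expectations above concrete, here is a minimal sketch of what an example's `main.py` could look like. Everything here (the model, the file layout, the flag names) is purely illustrative, not a required structure; it just shows the four pieces the checklist asks for (architecture, training code, evaluation code, an argparser) in one self-contained script.

```python
# Hypothetical skeleton for an example's main.py (all names are illustrative).
import argparse

import torch
import torch.nn as nn


class TinyNet(nn.Module):
    """Model architecture: a deliberately tiny stand-in."""

    def __init__(self, in_dim=8, out_dim=2):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim)

    def forward(self, x):
        return self.fc(x)


def train(model, data, target, epochs, lr):
    """Training code: plain SGD with cross-entropy on the given tensors."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(data), target)
        loss.backward()
        opt.step()
    return loss.item()


def evaluate(model, data, target):
    """Evaluation code: accuracy on the given tensors."""
    with torch.no_grad():
        pred = model(data).argmax(dim=1)
    return (pred == target).float().mean().item()


def main():
    # The argparser: expose the usual knobs so CI can override them.
    parser = argparse.ArgumentParser(description="Toy example skeleton")
    parser.add_argument("--epochs", type=int, default=5)
    parser.add_argument("--lr", type=float, default=0.1)
    args = parser.parse_args()

    # Synthetic data keeps the sketch self-contained; a real example
    # would load an actual dataset here.
    torch.manual_seed(0)
    data = torch.randn(64, 8)
    target = torch.randint(0, 2, (64,))

    model = TinyNet()
    loss = train(model, data, target, args.epochs, args.lr)
    acc = evaluate(model, data, target)
    print(f"final loss {loss:.4f}, accuracy {acc:.2f}")


if __name__ == "__main__":
    main()
```

Because the script is driven entirely by its argparser, adding one line invoking it to run_python_examples.sh is enough to keep it exercised in CI.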
As an example, this recent contribution by @sudomaze is a good one to follow: https://github.com/pytorch/examples/pull/1003/files
Here are some model ideas:
- [ ] Controlnet - Guided diffusion
- [ ] NeRF
- [x] Graph Neural Network @JoseLuisC99
- [ ] Diffusion Model, stable diffusion or any variant of the architecture you like
- [x] Vision Transformer
- [ ] Video model
- [ ] Toolformer
- [ ] Differentiable physics
- [ ] Flownet
- [ ] Dreamfusion or any 3d model
- [ ] Language Translation
- [ ] Swin transformer
But I'm quite open to anything we don't have that's cool
Hey! I'm interested in contributing to the vision transformer model, but I don't have any prior open source contribution experience. Would it be okay for me to proceed with this project?
Yes please go for it
I am interested as well. What is a video model? I am looking at some video examples from TensorFlow and Keras. Would a spinoff of these suffice? That dataset looks like a standard intro video dataset.
Which of these problems can be comfortably exercised on a 2080 Ti (12 GB VRAM)?
@msaroufim I would like to contribute to examples related to graph neural networks. Is there a specific dataset I should choose for this, or can I pick any dataset of my own choice?
Yes to all of the above
@msaroufim I am interested in PyTorch open-source contribution. Thank you for sharing "New examples requested". I would like to contribute to the Diffusion Model (stable diffusion) and Vision Transformer sections and will keep you posted as my work progresses. Please let me know your thoughts on my taking up this project. Thank you. Regards, Aditi
Hey! I would love to contribute to Stable Diffusion. Can I take this up?
Hi @Krish2002 yes please go for it
I would love to contribute to FlowNet. Can I take this up?
Hi @IMvision12 please do!
I would like to add a video vision transformer model.
Edit: Video ViTs are already present in torchvision. Can I still go ahead with this idea? Thanks
@abhi-glitchhg please do! Just keep in mind the implementation has to be from scratch and not just a call to the torchvision constructor.
I would like to contribute to Graph Neural Network. However, do you have a specific task or model in mind, or can I choose any?
Hi @msaroufim I would like to implement language translation using encoder-decoder architecture. Can I take this?
@JoseLuisC99 any task or model you like! As long as it's from scratch in pure PyTorch
Hey @msaroufim I would like to work on stable diffusion and some other topics as well. Thanks
@guptaaryan16 @HemanthSai7 @JoseLuisC99 @abhi-glitchhg assigned some models to you, lemme know if you need any help to get it over the finish line. Thanks!
~~Can I take up implementing Controlnet - guided diffusion?~~
Apologies, I could not complete it. If someone else is interested, feel free to take it up - I am no longer working on it.
@msaroufim can I use the transformers library to use the tokenizers?
@msaroufim can I take up NeRF?
@msaroufim I'd like to implement a text-to-3D model. At the moment, I'm deciding between Text2Mesh and CLIP-Forge. DreamFusion seems a little complex.
@HemanthSai7 yes, but the model should be pure PyTorch; bonus points for a from-scratch tokenizer! @bhavyashahh Sure! @QasimKhan5x Sounds good, either of those models works
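On the from-scratch tokenizer point, here is one possible minimal sketch: a plain word-level tokenizer with special tokens, in pure Python. The class name and API are entirely illustrative (this is not a prescribed interface, just the general shape such a thing could take).

```python
# Minimal from-scratch word-level tokenizer (illustrative, not a prescribed API).
class Tokenizer:
    def __init__(self, texts):
        # Reserve ids 0-3 for special tokens, then assign ids to words
        # in order of first appearance across the corpus.
        self.vocab = {"<pad>": 0, "<unk>": 1, "<bos>": 2, "<eos>": 3}
        for text in texts:
            for word in text.lower().split():
                if word not in self.vocab:
                    self.vocab[word] = len(self.vocab)
        self.inv = {i: w for w, i in self.vocab.items()}

    def encode(self, text):
        # Unknown words map to <unk>; wrap the sequence in <bos>/<eos>.
        unk = self.vocab["<unk>"]
        ids = [self.vocab.get(w, unk) for w in text.lower().split()]
        return [self.vocab["<bos>"]] + ids + [self.vocab["<eos>"]]

    def decode(self, ids):
        # Drop the four special tokens (ids 0-3) when reconstructing text.
        return " ".join(self.inv[i] for i in ids if i > 3)
```

A real example would likely swap the whitespace split for learned subword merges (BPE-style), but the encode/decode round trip and special-token handling stay the same.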
Please reference this issue in your PRs, like "Re #1131".
@msaroufim Sorry, but owing to university examinations I will not be able to participate this time. However, if anybody wants to take FlowNet, they are welcome to do so. :)
Wanted to give an update on my task. I have completed preparing the dataset (tokenization, data loading, etc.) for the translation task and will start on positional embeddings and the other layers.
Hey! I am wondering whether the Vision Transformer model is taken or not. I am willing to contribute. Or otherwise, would you be interested if I work on the Swin Transformer model? Many thanks.
Thanks @HemanthSai7
@yishengpei the vision transformer was already completed, but I would be happy to review a Swin Transformer.
I'm seeing a lot of nan values when I print the attn_output_weights in nn.MultiheadAttention in the decoder block. Is it expected or is it due to a fault in the logic?
```python
import torch

def generate_square_subsequent_mask(seq_len):
    # Lower-triangular boolean matrix: position i may attend to positions j <= i.
    mask = (torch.triu(torch.ones(seq_len, seq_len)) == 1).transpose(0, 1)
    # Allowed positions become 0.0 (no change to the attention score);
    # disallowed positions become -inf, which softmax sends to zero weight.
    mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
    return mask
```
I'm unable to understand this way of generating masks used in the source code.
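For what it's worth, printing the mask for a small `seq_len` makes the pattern easier to see, and it's straightforward to check one plausible (but unconfirmed, just a guess on my part) source of `nan`: if a whole row of scores ends up `-inf`, e.g. a causal mask combined with a padding mask that covers every position in the row, softmax over that row is `nan`.

```python
import torch

# Build the causal mask for seq_len=4: row i is the query position, column j
# the key position. 0.0 means "may attend", -inf means "masked".
seq_len = 4
mask = (torch.triu(torch.ones(seq_len, seq_len)) == 1).transpose(0, 1)
mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, 0.0)
print(mask)
# The lower triangle (including the diagonal) is 0.0, so each query position
# can only see key positions at or before itself.

# Unconfirmed guess about the nan question: softmax over an all -inf row
# divides zero by zero and yields nan for every entry.
print(torch.softmax(torch.full((4,), float('-inf')), dim=0))
```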
> I'm seeing a lot of nan values when I print the attn_output_weights in nn.MultiheadAttention in the decoder block. Is it expected or is it due to a fault in the logic?
Could you please share a repro? That's certainly not expected