
A set of examples around PyTorch in Vision, Text, Reinforcement Learning, etc.

210 issues, sorted by recently updated

Hi, an error occurs when attempting to compile the sample code at the line `push_back(Functional(torch::log_softmax, 1, torch::nullopt));`, and the error is `error: no matching function for call to 'torch::nn::Functional::Functional(, int, const...

good first issue

Creating this issue for tracking purposes. Example details are TBD. It could include some popular large NLP models. @pritamdamania87 @aazzolini

enhancement
distributed

Hi PyTorch team, I think the elapsed-time measurement is not right. Both the training and validate functions show progress if `i % args.print_freq == 0` in imagenet/main.py. Currently,...

good first issue
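
For context, here is a minimal sketch of the per-batch timing pattern that imagenet/main.py follows (an `AverageMeter` running average, reset after every iteration); the issue questions whether the measured interval is the right one. The loop body, sleep, and `print_freq` value below are placeholders, not the example's actual code.

```python
import time

class AverageMeter:
    """Keeps the latest value and a running average of it."""
    def __init__(self):
        self.val = self.sum = self.count = self.avg = 0.0

    def update(self, val, n=1):
        self.val = val
        self.sum += val * n
        self.count += n
        self.avg = self.sum / self.count

batch_time = AverageMeter()
print_freq = 10                    # stand-in for args.print_freq
end = time.time()
for i in range(50):                # stand-in for iterating over the training DataLoader
    time.sleep(0.01)               # stand-in for data loading + forward/backward/step
    batch_time.update(time.time() - end)   # elapsed time for this iteration
    end = time.time()              # reset so the next measurement covers one iteration only
    if i % print_freq == 0:
        print(f"Time {batch_time.val:.3f} ({batch_time.avg:.3f})")
```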

In the README file of the "DCGAN Example with the PyTorch C++ Frontend" it says: _The training script periodically generates image samples. Use the display_samples.py script situated in this folder...

bug
help wanted

The code sets the seed before calling `spawn()`, like below:
```
torch.manual_seed(412)
mp.spawn(worker, nprocs=...)

def worker(...):
    x = torch.rand(3)
    print(x)
```
I found that `x` has different values in different processes,...

distributed
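
A minimal sketch of the usual fix, assuming `torch.multiprocessing.spawn`: each spawned worker runs in a fresh Python process, so the seed has to be set inside the worker itself (the rank argument makes it easy to choose identical or per-rank seeds). The seed values below are illustrative.

```python
import torch
import torch.multiprocessing as mp

def worker(rank):
    # Seed inside the worker: a seed set in the parent before spawn() does not
    # carry over, because each worker starts with its own RNG state.
    torch.manual_seed(412)            # same seed in every process -> identical x
    # torch.manual_seed(412 + rank)   # per-rank seed -> different x per process
    x = torch.rand(3)
    print(rank, x)

if __name__ == "__main__":
    mp.spawn(worker, nprocs=2)
```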

Sometimes, the training process will simply get stuck at testing.
~~~
Epoch: [0][5000/5005] Time 0.100 (0.335) Data 0.000 (0.244) Loss 5.9800 (6.5614) Prec@1 1.953 (0.735) Prec@5 7.812 (2.896)
Test: [0/196]...
~~~

bug
need review

In `fast_neural_style/neural_style/neural_style.py` line 55, if the style image has an alpha channel, then the generated tensor has 4 dimensions and this causes `utils.normalize_batch` to throw due to a tensor dimension...

help wanted
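
A minimal sketch of one possible workaround, assuming the style image is loaded with PIL as in the example's utils module: converting to RGB drops the alpha channel, so the resulting tensor has three channels before batching. The in-memory RGBA image below is a stand-in for a real style image.

```python
from PIL import Image
from torchvision import transforms

# An in-memory RGBA image stands in for a style image that has an alpha channel.
style = Image.new("RGBA", (64, 64))
style = style.convert("RGB")                  # drop the alpha channel before ToTensor
style_tensor = transforms.ToTensor()(style)   # shape (3, 64, 64) instead of (4, 64, 64)
print(style_tensor.shape)
```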

The Actor Critic example (which is actually an implementation of REINFORCE-with-baseline, as pointed out in https://github.com/pytorch/examples/issues/573) does not use the discount rate properly. The loss should include \gamma^t,...

good first issue
triaged
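
A minimal sketch of the loss term the issue describes, under the assumption of episodic REINFORCE-with-baseline: the return G_t is discounted, and the per-step policy-gradient term is additionally scaled by gamma**t. The rewards, log-probabilities, and baseline values below are placeholders, not the example's network outputs.

```python
import torch

gamma = 0.99
rewards = [1.0, 0.0, 1.0]                                   # placeholder episode rewards
log_probs = [torch.tensor(-0.5, requires_grad=True),        # placeholder log pi(a_t | s_t)
             torch.tensor(-0.7, requires_grad=True),
             torch.tensor(-0.2, requires_grad=True)]
values = [0.3, 0.1, 0.5]                                    # placeholder baseline V(s_t)

# Discounted returns: G_t = sum over k >= t of gamma**(k - t) * r_k
returns, G = [], 0.0
for r in reversed(rewards):
    G = r + gamma * G
    returns.insert(0, G)

# Per-step policy-gradient loss, including the gamma**t factor the issue says is missing.
policy_loss = sum(-(gamma ** t) * log_p * (G_t - v)
                  for t, (log_p, G_t, v) in enumerate(zip(log_probs, returns, values)))
policy_loss.backward()
```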

Here is the code from reinforce.py: `for action, r in zip(self.saved_actions, rewards): action.reinforce(r)` And here is the code from actor-critic.py: `for (action, value), r in zip(saved_actions, rewards): reward =...

reinforcement learning
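
For reference, `Variable.reinforce()` was removed from PyTorch in favor of `torch.distributions`; below is a minimal sketch of the equivalent update written with `Categorical.log_prob`. The probabilities and reward are placeholders, not the example's policy output.

```python
import torch
from torch.distributions import Categorical

probs = torch.tensor([0.3, 0.7], requires_grad=True)   # placeholder policy output for one state
dist = Categorical(probs)
action = dist.sample()
reward = 1.0                                            # placeholder return for this action

# Equivalent of the removed `action.reinforce(reward)`: build the loss explicitly
# from the log-probability and backpropagate through it.
loss = -dist.log_prob(action) * reward
loss.backward()
print(probs.grad)
```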

Hi, I have a question about the learning rate in the "word_language_model" example: the initial lr is 20, which seems very large. Can you tell me why lr is set...

help wanted
nlp
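
A minimal sketch, assuming the word_language_model recipe, of why such a large initial lr can work: plain SGD is applied by hand (no momentum), gradients are clipped, and the learning rate is divided by 4 whenever the validation loss stops improving. The tiny model, data, and validation pass below are placeholders, not the example's LSTM.

```python
import torch

model = torch.nn.Linear(10, 10)                  # placeholder for the example's LSTM
criterion = torch.nn.CrossEntropyLoss()
inputs = torch.randn(32, 10)
targets = torch.randint(0, 10, (32,))

lr, clip = 20.0, 0.25
best_val_loss = None
for epoch in range(5):
    model.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
    for p in model.parameters():
        p.data.add_(p.grad, alpha=-lr)           # manual SGD step, no momentum

    val_loss = criterion(model(inputs), targets).item()   # placeholder validation pass
    if best_val_loss is None or val_loss < best_val_loss:
        best_val_loss = val_loss
    else:
        lr /= 4.0                                # anneal when validation stops improving

print("final lr:", lr)
```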