
A modular, primitive-first, python-first PyTorch library for Reinforcement Learning.

254 issues, sorted by recently updated

## Motivation

It would be great to be able to use OpenSpiel's environments with TorchRL.

## Solution

Its `rl_environment` interface is basically identical to that of gym, so the integration...

enhancement

## Describe the question

In the test suites, LSTMModules are created with inputs and outputs before the environment is created. As such, the primer is added to the environment before...

bug

## Describe the bug

Unable to install torchrl via pip for Python 3.12.

```
>>> pip install torchrl
ERROR: Could not find a version that satisfies the requirement torchrl (from versions:...
```

bug

## Describe the bug

I wasn't sure whether to open this issue on torchRL, agenthive, or the robohive repo. Apologies if it's in the wrong place. I'm trying to train...

bug

I managed to run the code, but in the process I realized that the maximum step count for each batch is only 50: `steps: 50, loss_val: 0.1930, action_spread: tensor([26, 24], device='cuda:0')`...

enhancement

I found a problem in the section USING PRETRAINED MODELS: the author did not hit an error because the data stays on the CPU the whole time, but if the model...

enhancement
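The device mismatch this issue describes can be illustrated with a minimal plain-PyTorch sketch (this is not the tutorial's actual code; the model and shapes here are arbitrary): once the model lives on the GPU, inputs created on the CPU must be moved to the same device before the forward pass.

```python
import torch

# Minimal sketch of the device mismatch described above (not the
# tutorial's code): a model moved to an accelerator raises on CPU
# inputs unless the data is moved to the same device first.
model = torch.nn.Linear(4, 2)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

data = torch.randn(1, 4)   # created on CPU by default
data = data.to(device)     # without this line, forward() fails on GPU
out = model(data)
print(out.shape)           # torch.Size([1, 2])
```

On a CPU-only machine both tensors are on the same device by default, which is why the mismatch only surfaces once the model is moved to the GPU.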

## Description

Adds a Hindsight Experience Replay ([HER](https://arxiv.org/pdf/1707.01495.pdf)) transform.

## Motivation and Context

This is a first draft of the HER transform. However, I am not sure whether it should be a `Transform`...

enhancement
CLA Signed
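The core of HER, independent of how the transform is ultimately structured, is goal relabeling. A minimal, framework-free sketch of the "final" relabeling strategy from the HER paper (all names here are illustrative, not TorchRL's API):

```python
# Hindsight relabeling sketch ("final" strategy): replace each step's
# desired goal with the goal actually achieved at the end of the
# trajectory, and recompute the sparse reward accordingly.
def her_relabel(trajectory):
    """Each step is a dict with keys: achieved_goal, desired_goal, reward."""
    final_goal = trajectory[-1]["achieved_goal"]
    relabeled = []
    for step in trajectory:
        new_step = dict(step)
        new_step["desired_goal"] = final_goal
        # Sparse reward: 0 when the relabeled goal is achieved, -1 otherwise.
        new_step["reward"] = 0.0 if step["achieved_goal"] == final_goal else -1.0
        relabeled.append(new_step)
    return relabeled

traj = [
    {"achieved_goal": (0, 0), "desired_goal": (3, 3), "reward": -1.0},
    {"achieved_goal": (1, 1), "desired_goal": (3, 3), "reward": -1.0},
    {"achieved_goal": (2, 2), "desired_goal": (3, 3), "reward": -1.0},
]
new_traj = her_relabel(traj)
print(new_traj[-1]["desired_goal"], new_traj[-1]["reward"])  # (2, 2) 0.0
```

The failed trajectory toward `(3, 3)` becomes a successful one toward `(2, 2)`, which is what makes the relabeled data useful for learning from sparse rewards.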

## Describe the bug

The [documentation](https://pytorch.org/rl/reference/trainers.html#checkpointing) doesn't match its [implementation](https://github.com/pytorch/rl/blob/main/torchrl/_utils.py#L197). When using the trainer as described, the trainer cannot save a checkpoint, since the provided path is a directory and...

bug
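The directory-vs-file mismatch described above can be sketched with a hypothetical helper (this is not TorchRL's implementation; `save_checkpoint` and `default_name` are made up for illustration): accept either kind of path and derive a file name when a directory is given.

```python
import os
import pickle
import tempfile

# Hypothetical helper illustrating one way to tolerate both a file path
# and a directory path when saving a checkpoint.
def save_checkpoint(state, path, default_name="checkpoint.pt"):
    if os.path.isdir(path):
        path = os.path.join(path, default_name)
    with open(path, "wb") as f:
        pickle.dump(state, f)
    return path

with tempfile.TemporaryDirectory() as d:
    out = save_checkpoint({"step": 10}, d)  # a directory is accepted here
    print(os.path.basename(out))            # checkpoint.pt
```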

## Motivation

When RNNs are used in isolation, creating a TensorDictPrimer transform for the environment to populate the TensorDicts with the expected tensors is pretty straightforward:

```python
from torchrl.modules import...
```

enhancement
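Conceptually, the primer's job is just to seed the step data with default (typically zeroed) recurrent-state entries before the first environment step, so the RNN module always finds the keys it expects. A framework-free sketch of that idea (names like `prime` and the key names are illustrative only; this is not TorchRL's `TensorDictPrimer`):

```python
# Sketch of what a primer does conceptually: a fresh reset produces a
# step dict without recurrent state, so default entries are inserted
# for any missing keys before the RNN module reads them.
def prime(step, primers):
    """Insert default values for any keys the step dict is missing."""
    for key, default in primers.items():
        step.setdefault(key, default)
    return step

hidden_size = 4
primers = {
    "recurrent_state_h": [0.0] * hidden_size,
    "recurrent_state_c": [0.0] * hidden_size,
}

step = {"observation": [1.0, 2.0]}  # fresh reset: no recurrent state yet
step = prime(step, primers)
print(sorted(step))  # ['observation', 'recurrent_state_c', 'recurrent_state_h']
```

On subsequent steps the keys are already present, so `setdefault` leaves the carried hidden state untouched.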