
A modular, primitive-first, python-first PyTorch library for Reinforcement Learning.

Results: 254 rl issues

## Motivation I'm trying out DDPG on an RL task and, while looking at this repo and its docs, came across different solutions for the actor implementation. I would like...

enhancement
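One common shape for a DDPG actor, independent of any particular torchrl module, is a plain MLP with a tanh output rescaled to the action bounds. The sketch below is illustrative only; `DDPGActor`, the layer sizes, and `max_action` are all assumptions, not code from this repo:

```python
import torch

class DDPGActor(torch.nn.Module):
    # hypothetical deterministic actor: MLP -> tanh -> scale to action bounds
    def __init__(self, obs_dim: int, act_dim: int, max_action: float = 1.0):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(obs_dim, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, act_dim), torch.nn.Tanh(),
        )
        self.max_action = max_action

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # tanh keeps the raw output in (-1, 1); scaling maps it to the bounds
        return self.max_action * self.net(obs)

actor = DDPGActor(obs_dim=4, act_dim=2)
action = actor(torch.randn(5, 4))
print(action.shape)  # torch.Size([5, 2])
```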

## Description I've added a check to all `is_in` functions in TensorSpec for the datatype, as specified in #793. ## Motivation and Context Why is this change required? What problem...

enhancement
CLA Signed
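The dtype guard described in this PR can be sketched on a toy spec class. This is not torchrl's `TensorSpec` implementation; `BoundedSpecSketch` and its fields are invented here purely to show where such a check would sit inside `is_in`:

```python
import torch

class BoundedSpecSketch:
    # toy stand-in for a bounded tensor spec (hypothetical, not torchrl code)
    def __init__(self, low: float, high: float, dtype=torch.float32):
        self.low, self.high, self.dtype = low, high, dtype

    def is_in(self, val: torch.Tensor) -> bool:
        # the added datatype check: reject values of the wrong dtype outright
        if val.dtype != self.dtype:
            return False
        return bool((val >= self.low).all() and (val <= self.high).all())

spec = BoundedSpecSketch(-1.0, 1.0)
print(spec.is_in(torch.zeros(3)))                      # True
print(spec.is_in(torch.zeros(3, dtype=torch.int64)))   # False: dtype mismatch
```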

@matteobettini there's a way to make vmap functional calls much, much faster! We'll need to make sure this works across the board, but if it does speed up could be...

CLA Signed
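The kind of vmap-over-functional-call pattern being discussed can be sketched with the standard `torch.func` ensembling recipe: stack the parameters of several identical modules and evaluate them all in one batched call instead of a Python loop. This is the generic recipe from `torch.func`, not whatever specific change the comment proposes:

```python
import copy
import torch
from torch.func import functional_call, stack_module_state, vmap

# three identical small modules whose parameters we want to batch over
models = [torch.nn.Linear(4, 2) for _ in range(3)]
params, buffers = stack_module_state(models)

# a "meta" template module: its weights are supplied by `params` at call time
base = copy.deepcopy(models[0]).to("meta")

def fmodel(p, b, x):
    return functional_call(base, (p, b), (x,))

x = torch.randn(4)
# map over the stacked parameter dim (0), broadcast the shared input (None)
out = vmap(fmodel, in_dims=(0, 0, None))(params, buffers, x)
print(out.shape)  # torch.Size([3, 2]): one row per ensemble member
```

The win is that the three forward passes run as one batched kernel launch rather than three sequential module calls.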

## Describe the bug In the paper "Noisy Networks for Exploration" they say (section 3.1): "A noisy network agent samples a new set of parameters after every step of optimisation."...

bug
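The behaviour the paper describes, resampling the noise after every optimisation step, can be sketched with a minimal factorised-noise linear layer. This is an illustrative sketch after Fortunato et al., not torchrl's `NoisyLinear`:

```python
import math
import torch

class NoisyLinear(torch.nn.Module):
    # minimal factorised-Gaussian-noise linear layer (sketch, not torchrl code)
    def __init__(self, in_f: int, out_f: int, sigma0: float = 0.5):
        super().__init__()
        bound = 1.0 / math.sqrt(in_f)
        self.mu_w = torch.nn.Parameter(torch.empty(out_f, in_f).uniform_(-bound, bound))
        self.sigma_w = torch.nn.Parameter(torch.full((out_f, in_f), sigma0 / math.sqrt(in_f)))
        self.mu_b = torch.nn.Parameter(torch.zeros(out_f))
        self.sigma_b = torch.nn.Parameter(torch.full((out_f,), sigma0 / math.sqrt(in_f)))
        self.reset_noise()

    @staticmethod
    def _f(x: torch.Tensor) -> torch.Tensor:
        # the f(x) = sign(x) * sqrt(|x|) transform from the paper
        return x.sign() * x.abs().sqrt()

    def reset_noise(self) -> None:
        # factorised noise: one vector per input, one per output
        out_f, in_f = self.mu_w.shape
        eps_in = self._f(torch.randn(in_f))
        eps_out = self._f(torch.randn(out_f))
        self.eps_w = eps_out.outer(eps_in)
        self.eps_b = eps_out

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weight = self.mu_w + self.sigma_w * self.eps_w
        bias = self.mu_b + self.sigma_b * self.eps_b
        return torch.nn.functional.linear(x, weight, bias)
```

Per section 3.1, the training loop would call `layer.reset_noise()` once after each optimiser step, so every forward pass between updates uses the same noise sample and each update sees a fresh one.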

## Describe the bug A dataclass in "next" doesn't get copied over in `step_mdp`.

## To Reproduce

```python
import dataclasses
from tensordict.tensordict import TensorDict
import torch
from torchrl.envs import step_mdp
...
```

bug
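The expectation behind this report can be sketched in pure Python without torchrl: `step_mdp` is supposed to promote everything under "next" to the root so it becomes the input of the following step. The sketch below shows those expected semantics only (the function name, `Info` dataclass, and dict representation are all invented here, not torchrl's implementation):

```python
import dataclasses

@dataclasses.dataclass
class Info:
    # a non-tensor payload living under "next", as in the report
    success: bool

def step_mdp_sketch(td: dict) -> dict:
    # keep root entries except the transition-specific ones, then promote
    # every entry under "next", tensor or not, to the root
    out = {k: v for k, v in td.items() if k not in ("next", "action")}
    out.update(td["next"])  # the reported bug: non-tensor entries were dropped here
    return out

td = {
    "observation": 0.0,
    "action": 1,
    "next": {"observation": 1.0, "info": Info(success=True)},
}
nxt = step_mdp_sketch(td)
print(nxt["info"])  # the dataclass should survive the step
```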

## Description Describe your changes in detail. ## Motivation and Context Why is this change required? What problem does it solve? If it fixes an open issue, please link to...

CLA Signed

I have just implemented an RL agent for a custom environment (wrapped into a TorchRL env). I am trying to reimplement the [RAPS](https://github.com/mihdalal/raps) algorithm using SAC and for that I...

bug

Depends on https://github.com/pytorch/tensordict/pull/685

enhancement
CLA Signed

## Description This draft PR proposes a `Transform` for recording histories of observations, which is a common practice in robotic tasks. It should also address #1676: to record observation-action history,...

enhancement
CLA Signed
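The observation-history idea behind this draft PR can be sketched with a fixed-length ring buffer that stacks the last `n` observations into one tensor. This is a generic sketch, not the PR's `Transform`; `HistorySketch` and its zero-padding for the initial steps are assumptions:

```python
from collections import deque
import torch

class HistorySketch:
    # hypothetical history recorder: keep the last n observations,
    # zero-padded before the buffer fills
    def __init__(self, n: int, obs_shape: tuple):
        self.buf = deque([torch.zeros(obs_shape)] * n, maxlen=n)

    def step(self, obs: torch.Tensor) -> torch.Tensor:
        self.buf.append(obs)  # oldest entry falls off automatically
        return torch.stack(list(self.buf))  # shape: (n, *obs_shape)

h = HistorySketch(n=4, obs_shape=(3,))
out = h.step(torch.ones(3))
print(out.shape)  # torch.Size([4, 3]); last row is the newest observation
```

A real transform would additionally rewrite the env's observation spec so downstream modules see the stacked shape, which is presumably part of what the draft PR handles.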