rl
A modular, primitive-first, python-first PyTorch library for Reinforcement Learning.
## Motivation
I'm trying out DDPG on an RL task and, while looking at this repo and its docs, came across different solutions for the actor implementation. I would like...
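For reference, a minimal sketch of one common way to build a deterministic DDPG-style actor in TorchRL, assuming a plain MLP policy and the default `observation`/`action` keys (the sizes and network below are illustrative, not taken from the issue):

```python
import torch
from torch import nn
from tensordict import TensorDict
from torchrl.modules import Actor

obs_dim, act_dim = 8, 2  # illustrative sizes, not from the issue

# A deterministic actor: an MLP with a Tanh squashing layer, wrapped so it
# reads "observation" and writes "action" in a TensorDict.
net = nn.Sequential(
    nn.Linear(obs_dim, 64), nn.ReLU(),
    nn.Linear(64, act_dim), nn.Tanh(),
)
actor = Actor(net, in_keys=["observation"], out_keys=["action"])

td = actor(TensorDict({"observation": torch.randn(obs_dim)}, []))
print(td["action"])
```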
## Description
I've added a check to all `is_in` functions in TensorSpec for the datatype, as specified in #793.
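As an illustration of the intended behaviour (a sketch only, not the PR's test code; the spec class and values below are assumptions), `is_in` should now reject a value whose dtype differs from the spec's dtype rather than only checking bounds or membership:

```python
import torch
from torchrl.data import DiscreteTensorSpec

spec = DiscreteTensorSpec(4, dtype=torch.int64)  # values in {0, 1, 2, 3}

print(spec.is_in(torch.tensor(2)))    # True: correct dtype and in range
print(spec.is_in(torch.tensor(2.0)))  # False once the dtype check is in place
```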
@matteobettini there's a way to make vmap functional calls much much faster! we'll need to make sure this works across the board but if it does speed up could be...
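For context, the usual recipe for fast batched functional calls in PyTorch is to stack the parameters of the model copies once and `vmap` a single `functional_call` over them; a sketch under that assumption (the small ensemble below is illustrative, not from this thread):

```python
import torch
from torch import nn
from torch.func import functional_call, vmap

# Three copies of the same small network, evaluated in one batched call
# instead of a Python loop over modules.
models = [nn.Linear(4, 1) for _ in range(3)]
base = models[0]

# Stack each named parameter along a new leading "ensemble" dimension.
params = {
    name: torch.stack([dict(m.named_parameters())[name] for m in models])
    for name, _ in base.named_parameters()
}

def call_one(p, x):
    return functional_call(base, p, (x,))

x = torch.randn(8, 4)
out = vmap(call_one, in_dims=(0, None))(params, x)
print(out.shape)  # (3, 8, 1): one output per ensemble member
```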
## Describe the bug
In the paper "Noisy Networks for Exploration" they say (section 3.1): "A noisy network agent samples a new set of parameters after every step of optimisation." ...
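For reference, the behaviour the paper describes would look roughly like the sketch below, assuming TorchRL's `NoisyLinear` layers and a hand-rolled training step (the network, loss, and sizes are illustrative):

```python
import torch
from torch import nn
from torchrl.modules import NoisyLinear

# Illustrative Q-network built from noisy layers.
qnet = nn.Sequential(NoisyLinear(4, 64), nn.ReLU(), NoisyLinear(64, 2))
optim = torch.optim.Adam(qnet.parameters(), lr=1e-3)

obs, target = torch.randn(32, 4), torch.randn(32, 2)
loss = nn.functional.mse_loss(qnet(obs), target)
loss.backward()
optim.step()
optim.zero_grad()

# "samples a new set of parameters after every step of optimisation":
# resample the noise variables of every noisy layer before the next step.
for layer in qnet.modules():
    if isinstance(layer, NoisyLinear):
        layer.reset_noise()
```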
## Describe the bug
A dataclass in "next" doesn't get copied over in step_mdp.
## To Reproduce
```
import dataclasses
from tensordict.tensordict import TensorDict
import torch
from torchrl.envs import step_mdp
...
```
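For readers unfamiliar with `step_mdp`, a minimal sketch of what "copied over" means here, with purely illustrative keys and shapes (tensor entries only; the dataclass case from the report is what fails):

```python
import torch
from tensordict import TensorDict
from torchrl.envs import step_mdp

td = TensorDict(
    {
        "observation": torch.zeros(3),
        "action": torch.zeros(1),
        "next": TensorDict(
            {
                "observation": torch.ones(3),
                "reward": torch.ones(1),
                "done": torch.zeros(1, dtype=torch.bool),
            },
            [],
        ),
    },
    [],
)

# step_mdp promotes the entries under "next" to the root of a new tensordict,
# so the result can be fed to the policy at the next step.
td_next = step_mdp(td)
print(td_next["observation"])  # tensor([1., 1., 1.])
```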
I have just implemented an RL agent for a custom environment (wrapped into a TorchRL env). I am trying to reimplement the [RAPS](https://github.com/mihdalal/raps) algorithm using SAC and for that I...
Depends on https://github.com/pytorch/tensordict/pull/685
## Description
This draft PR proposes a `Transform` for recording histories of observations, which is a common practice in robotic tasks. It should also address #1676: to record observation-action history,...
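For context, a small sketch of the closely related frame-stacking that TorchRL already ships (`CatFrames`); the proposed transform would generalise this idea to joint observation-action histories (the environment and keys below are assumptions for illustration only):

```python
from torchrl.envs import GymEnv, TransformedEnv
from torchrl.envs.transforms import CatFrames

# Keep the last 4 observations, concatenated along the last dimension.
env = TransformedEnv(
    GymEnv("Pendulum-v1"),
    CatFrames(N=4, dim=-1, in_keys=["observation"]),
)
td = env.reset()
print(td["observation"].shape)  # 4x the original observation size on the last dim
```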