# pytorch/rl

A modular, primitive-first, python-first PyTorch library for Reinforcement Learning.

Results: 254 issues, sorted by recently updated

## Motivation Using CatFrames for inference is fairly straightforward and is already well documented. That being said, using CatFrames to reconstruct a stack of frames when sampling from the replay...

enhancement
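A minimal pure-Python sketch (illustrative names, not the TorchRL `CatFrames` API) of the reconstruction problem the issue raises: rebuilding the stack of frames that ends at a sampled index from a buffer of single frames, padding with the first frame at the start of the buffer:

```python
def stack_frames(frames, index, stack_size=4):
    # frames: a flat buffer of individually stored frames.
    # Returns the `stack_size` frames ending at `index`, clamping
    # indices below 0 so the earliest frame is repeated as padding.
    idx = [max(i, 0) for i in range(index - stack_size + 1, index + 1)]
    return [frames[i] for i in idx]

buffer = [f"f{i}" for i in range(5)]
stack_frames(buffer, 2)  # near the start: f0 is repeated as padding
stack_frames(buffer, 4)  # mid-buffer: the last four frames
```

A real implementation would also have to stop the window at episode boundaries rather than only at the start of the buffer, which is exactly what makes doing this at sampling time non-trivial.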

## Describe the bug Creating a sync data collector with a replay buffer passed to its constructor crashes when yielding from the collector. ## To Reproduce 1....

bug

## Describe the bug Creating an instance of `MultiaSyncDataCollector` crashes when we pass `replay_buffer=my_replay_buffer` to its constructor. The following log is observed: ```bash python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be...

bug

## Motivation 1. When dealing with logging, I found it hard to grasp how to use the different loggers and classes. In particular, the Recorder makes it difficult to grasp the idea...

enhancement

## Motivation Currently you cannot implement a custom sampling technique for the `ActionDiscretizer` transform. ## Solution Bring `custom_arange` out of `transform_input_spec` and make it a method of `ActionDiscretizer`. Wrappers around...

enhancement
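An illustrative sketch (hypothetical class and method names, not the TorchRL implementation) of what the proposal amounts to: making the bin-generation step an overridable method so a subclass can swap in a custom sampling technique:

```python
import math

class ActionDiscretizerSketch:
    """Toy discretizer mapping the range [low, high] onto bin centers."""

    def __init__(self, low, high, num_intervals):
        self.low, self.high, self.num_intervals = low, high, num_intervals

    def custom_arange(self):
        # Overridable hook producing bin centers; linear spacing by default.
        step = (self.high - self.low) / (self.num_intervals - 1)
        return [self.low + i * step for i in range(self.num_intervals)]

class LogSpacedDiscretizer(ActionDiscretizerSketch):
    def custom_arange(self):
        # A custom sampling technique via subclassing, as the issue proposes.
        lo, hi = math.log(self.low), math.log(self.high)
        step = (hi - lo) / (self.num_intervals - 1)
        return [math.exp(lo + i * step) for i in range(self.num_intervals)]
```

The design point is that once the hook is a method rather than a local of `transform_input_spec`, wrappers and subclasses can change binning without re-implementing the whole transform.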

## Describe the bug When using SliceSampler with `strict_length=False`, the documentation recommends the use of `split_trajectories`. However, if two samples from the same episode are placed next to each other,...

bug
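A pure-Python sketch (not TorchRL's `split_trajectories`) of splitting a flat sample into contiguous runs by episode id, which makes the reported ambiguity visible: two slices drawn from the same episode that land next to each other are indistinguishable from one longer slice and merge into a single run:

```python
def split_by_episode(episode_ids):
    # Split a flat sequence of per-step episode ids into (start, end)
    # half-open ranges of contiguous equal ids.
    runs, start = [], 0
    for i in range(1, len(episode_ids) + 1):
        if i == len(episode_ids) or episode_ids[i] != episode_ids[i - 1]:
            runs.append((start, i))
            start = i
    return runs

split_by_episode([1, 1, 2, 2])  # two distinct episodes: two runs
split_by_episode([7, 7, 7, 7])  # two adjacent slices of episode 7: one run
```

Distinguishing the two cases requires extra information (e.g. a per-slice boundary marker), not just the episode id.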

## Motivation Hello, thanks for writing this great library and providing so many great open source tools! I am looking into using the MPPIPlanner and it seems the implementation is...

enhancement

## Describe the bug The [docstring](https://github.com/pytorch/rl/blob/b4b59444a5e894711ba6d062f9cddc6aafa0e095/torchrl/envs/transforms/transforms.py#L277-L287) of `Transform._call()` says it will be called by `TransformedEnv.step()` and `TransformedEnv.reset()`. However, resetting the transformed environment does not trigger `_call()`. ## To Reproduce ```python...

bug
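A minimal pure-Python sketch (toy classes, not the TorchRL implementation) of the hook pattern in question: a transform `_call` hook that the wrapper applies on `step` but, as the issue reports, not on `reset`:

```python
class Transform:
    def _call(self, obs):
        # Toy transform applied to observations.
        return obs + 1

class TransformedEnvSketch:
    def __init__(self, transform):
        self.transform = transform
        self._state = 0

    def step(self):
        self._state += 1
        return self.transform._call(self._state)  # hook fires on step

    def reset(self):
        self._state = 0
        return self._state  # hook NOT applied: the mismatch with the docstring
```

With this shape, the observation returned by `reset()` bypasses the transform entirely, which is the discrepancy between the documented and actual behavior.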

## Describe the bug The `torchrl.objectives.SACLoss` module is currently broken when the input type of `qvalue_network` is a `List[TensorDictModule]`. Note also the discrepancy between the docstring type `TensorDictModule` and the...

bug

Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #2550

CLA Signed