Alexandre Brown
I only see this log + "Killed". If I do not pass the replay buffer to the data collector, the creation works, but it crashes when it tries to...
Yes, everything works fine with SyncDataCollector (both with and without passing the replay buffer to the data collector, it works as expected).
@vmoens I tried setting the multiprocessing start method to fork and to spawn, and both crash. But I can no longer reproduce the initial error when I pass my replay buffer. Now...
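For context, the fork/spawn choice above refers to Python's multiprocessing start methods. A minimal, self-contained sketch (stdlib only, no TorchRL) of running a worker under a given start method; `run_with_start_method` is an illustrative helper, not part of any library:

```python
import multiprocessing as mp


def worker(q):
    # The child process puts a value on the queue so the parent can verify it ran.
    q.put("ok")


def run_with_start_method(method: str) -> str:
    # Use a fresh context instead of mp.set_start_method, which may only be
    # called once per process.
    ctx = mp.get_context(method)
    q = ctx.Queue()
    p = ctx.Process(target=worker, args=(q,))
    p.start()
    result = q.get()
    p.join()
    return result


if __name__ == "__main__":
    # "spawn" works on all platforms; "fork" is POSIX-only.
    print(run_with_start_method("spawn"))  # ok
```

When a collector crashes under one start method but not the other, a small isolated check like this can help separate a start-method problem from a collector/buffer interaction problem.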
Sure, here is how I create it:

```python
import torch
from omegaconf import DictConfig
from torchrl.data import ReplayBuffer
from torchrl.data import TensorDictReplayBuffer
from torchrl.data import LazyMemmapStorage
from torchrl.data import ...
```
@pseudo-rnd-thoughts @RedTachyon @vmoens @matteobettini As a member of both the Gymnasium and TorchRL communities, I want to thank all of you for working together on this. It’s great to see...
CUDA support is currently not possible if we have any metrics-logging code (which would require `.to("cpu")`), see: https://github.com/pytorch/rl/issues/2644#issuecomment-2625706891
> How are the Maniskill spaces? Native gym ones? Or specialized for PyTorch?

They are native gym ones, so I had to convert the dtype, e.g.:

```python
states_spec = Unbounded(...
```
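A hedged sketch of the dtype conversion mentioned above; `to_float32` is an illustrative helper (not the actual code from the gist) that narrows a float64 Box-style `low`/`high`/`dtype` triple to float32, which is what PyTorch models expect by default:

```python
import numpy as np


def to_float32(low, high, dtype):
    # Native gym spaces often default to float64; narrow to float32
    # before building the corresponding TorchRL spec.
    if np.dtype(dtype) == np.float64:
        dtype = np.float32
    low = np.asarray(low, dtype=dtype)
    high = np.asarray(high, dtype=dtype)
    return low, high, np.dtype(dtype)


low, high, dt = to_float32([-1.0, -2.0], [1.0, 2.0], np.float64)
print(dt)  # float32
```

The narrowed bounds and dtype can then be handed to the spec constructor (e.g. `Unbounded`/`Bounded`) instead of the raw gym values.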
@vmoens You can find the current implementation here: https://gist.github.com/AlexandreBrown/0df62a6c5653ac961d11734984867756 In practice, a lot of boilerplate could be removed if we inferred the observation_spec automatically.
> > > Hi Aaiguy,
> > > You can, for example, initialize the model as follows:
> > > ```python
> > > from dinov2.models.vision_transformer import vit_large
> > > model = vit_large(...
> > > ```
Hi @joeycouse, I think it depends on the training phase.

> The classification head is implemented by a MLP with one hidden layer at pre-training time and by a...