Federico Belotti
Hi @chrisgao99! The problem you see seems related to the `frame_skip` and `action_repeat` values, which differ from the defaults. Can you try leaving them at their default values?
Hi @LucaVendruscolo, do you have any empirical evidence that everything is working? Some plots, for example?
Hi @defrag-bambino, could you please elaborate on the issue? Which version of SheepRL are you using? Which steps did you run before encountering the error? Thank you
I'm trying, but I'm not able to replicate it: which torch version are you using?
Hi @defrag-bambino, here is a screenshot showing that calling `state_dict` on `world_model.encoder`, which is a `_FabricModule`, returns the correct module:  I'm...
@michele-milesi One problem is the observation-normalization statistics: if one wants to test an algorithm trained with normalized observations, then they also need to apply the same statistics to the...
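A minimal sketch of the point above, assuming a Gymnasium-style normalization wrapper whose running statistics are available at the end of training (the names `stats` and `normalize` are illustrative, not SheepRL's actual API): the statistics estimated during training must be persisted and re-applied verbatim at test time, instead of letting a fresh wrapper re-estimate them from scratch.

```python
import numpy as np

np.random.seed(0)

# Train time: suppose these are the observations seen during training
# (shape: (num_steps, obs_dim)); a normalization wrapper's running
# statistics converge to their per-dimension mean/var.
train_obs = np.random.randn(10_000, 3) * 5.0 + 2.0
stats = {"mean": train_obs.mean(axis=0), "var": train_obs.var(axis=0)}
# In practice one would persist them alongside the checkpoint,
# e.g. np.savez("obs_stats.npz", **stats).

def normalize(obs, stats, eps=1e-8):
    """Apply the SAME training-time statistics to a test-time observation."""
    return (obs - stats["mean"]) / np.sqrt(stats["var"] + eps)

# Test time: an observation one std above the training mean in dim 0,
# at the mean in dim 1, one std below in dim 2.
obs = np.array([7.0, 2.0, -3.0])
norm_obs = normalize(obs, stats)
```

Without this, the policy at test time sees observations on a completely different scale than it was trained on.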
Another issue: observation and reward normalization is done per-env, since the wrappers are created inside the `make_env` method, which is then called in the agent code by `SyncVectorEnv` or `AsyncVectorEnv`....
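One possible fix for the per-env statistics (a sketch under my own assumptions, not SheepRL's implementation): normalize at the vectorized level rather than inside each `make_env`, keeping a single set of running statistics that is updated with the batched observations the vector env returns, so every worker shares the same mean/var.

```python
import numpy as np

class SharedObsNormalizer:
    """One running mean/var shared across all vectorized workers.

    Fed with batched observations of shape (num_envs, *obs_shape),
    i.e. what a SyncVectorEnv/AsyncVectorEnv-style step returns,
    instead of one independent wrapper per environment.
    """

    def __init__(self, obs_shape, eps=1e-8):
        self.mean = np.zeros(obs_shape, dtype=np.float64)
        self.var = np.ones(obs_shape, dtype=np.float64)
        self.count = 1e-4
        self.eps = eps

    def __call__(self, batched_obs):
        # Chan et al. parallel update of running mean/var with the new batch.
        b_mean = batched_obs.mean(axis=0)
        b_var = batched_obs.var(axis=0)
        b_count = batched_obs.shape[0]
        delta = b_mean - self.mean
        tot = self.count + b_count
        self.mean = self.mean + delta * b_count / tot
        self.var = (self.var * self.count + b_var * b_count
                    + delta**2 * self.count * b_count / tot) / tot
        self.count = tot
        return (batched_obs - self.mean) / np.sqrt(self.var + self.eps)

np.random.seed(0)
norm = SharedObsNormalizer(obs_shape=(4,))
for _ in range(200):
    fake_batch = np.random.randn(8, 4) * 3.0 - 1.0  # pretend 8 envs stepped
    out = norm(fake_batch)
```

The same idea applies to reward normalization; the key property is that the statistics no longer depend on which worker produced the sample.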
Hi @ogulcankertmen, I've tried the exact same command on the `main` branch on my machine and training runs fine: can you please share more info about the error? Maybe...
Hi @ogulcankertmen, I've tried on my Windows machine and nothing happens: I'm not able to replicate it. Could you please also share your env? I've seen from your error that the...
Hi @Winston-Gu, thank you for your kind words. The Dreamer-V3 authors have extensively benchmarked various algorithms:
* https://www.nature.com/articles/s41586-025-08744-2
* https://arxiv.org/abs/2301.04104

Bear in mind that our current version of Dreamer-V3 does...