Prasad Chalasani

131 comments by Prasad Chalasani

> As per my experience with this library, the length of the data (forecasting instances) given to the `predict` method must be equal to or greater than (min_encoder_length + min_prediction_length). This is very...
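The constraint described above can be checked up front. A minimal sketch, assuming illustrative values for the dataset parameters (the real values come from the dataset configuration):

```python
# Assumed example values; not taken from any actual configuration.
min_encoder_length = 24
min_prediction_length = 6

def enough_rows(n_rows: int) -> bool:
    """Return True when the data passed to predict() has at least
    encoder-window + prediction-horizon rows."""
    return n_rows >= min_encoder_length + min_prediction_length

print(enough_rows(30))  # 30 >= 24 + 6, so True
```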

> > When I try to do `model.predict(df)` and `df` does not contain the target, I get an error. In predict mode, it shouldn't care if the "target" is present...

This is exactly what I'm looking for! Would be great to have this feature

Indeed, I agree it should be user-configurable. The scenarios I am worried about are those where continued training results in overfitting and hence a deterioration of the validation reward (graph B)...

Sure, this change may only make sense when using early stopping. I've been customizing `exp_manager` for my application, so I've been adding early stopping to `TrialEvalCallback`, stopping trials early, and reporting the best...

That would be nice, but I didn't see a way to do that. E.g. here is what I am doing to add early stopping to `TrialEvalCallback`:

```python
stop_train_callback = StopTrainingOnNoModelImprovement(...
```
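The no-improvement logic behind that kind of callback can be sketched in plain Python (names and the patience threshold are illustrative, not the stable-baselines3 API):

```python
def should_stop(rewards, patience=3):
    """Return True when the best evaluation reward has not improved
    over the last `patience` evaluations."""
    if len(rewards) <= patience:
        # too few evaluations to judge improvement
        return False
    best_so_far = max(rewards[:-patience])
    # stop if none of the last `patience` evals beat the earlier best
    return max(rewards[-patience:]) <= best_so_far

print(should_stop([1, 2, 3, 3, 3, 3]))  # no improvement in last 3 evals: True
```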

Yes, of course there is a workaround where you insert a dummy target. But I actually started using the DARTS package, which seems more intuitive, and the devs are...
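The dummy-target workaround mentioned above amounts to adding a placeholder column before calling `predict`. A sketch with hypothetical column names (in practice this would be a pandas DataFrame with the same schema as the training data):

```python
# Hypothetical columns for new data passed to predict(); a plain dict
# of columns stands in for a DataFrame here.
df = {"time_idx": list(range(5)), "series": ["a"] * 5}

# Work around the target check by adding a placeholder column; the
# values are never used in prediction mode, only the column's presence
# matters to the dataset validation.
df["target"] = [0.0] * 5
```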

> when using enjoy, this is already the case:
>
> https://github.com/DLR-RM/rl-baselines3-zoo/blob/89d4e0c757f2309d506f32b6bf97eaaddf091209/utils/utils.py#L246

Yes, `enjoy.py` is fine. With this fix, one can expect to see *exactly identical* results between an evaluation...

> Not really related with the core of the PR:
>
> Why this syntax
>
> ```python
> if len(local_normalize_kwargs) > 0:
>     local_normalize_kwargs["norm_reward"] = False
> else:
> ...
> ```
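Presumably the reviewer's point is that the branch is unnecessary: assigning a key behaves the same whether the dict is empty or not. A minimal illustration:

```python
# A key assignment works identically on empty and non-empty dicts,
# so guarding it with `if len(d) > 0: ... else: ...` is not needed.
empty, nonempty = {}, {"norm_obs": True}
empty["norm_reward"] = False
nonempty["norm_reward"] = False
print(empty, nonempty)
```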

I just updated based on the comment by @qgallouedec. Incidentally, I am having issues installing `pybullet`, so I can't run `make pytest` locally. I assume the CI actions will run...