FinRL-Tutorials
Tutorials. Please star.
Error while executing [FinRL_Ensemble_StockTrading_ICAIF_2020.ipynb](https://github.com/AI4Finance-Foundation/FinRL-Tutorials/blob/master/2-Advance/FinRL_Ensemble_StockTrading_ICAIF_2020.ipynb). Running the block of code below raises a `ValueError`:

```
df_summary = ensemble_agent.run_ensemble_strategy(A2C_model_kwargs, PPO_model_kwargs, DDPG_model_kwargs, timesteps_dict)
```

```
ValueError                                Traceback (most recent call last)
...
```
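For context, a minimal sketch of how the notebook wires the inputs to this call together. The hyperparameter values, `processed` DataFrame, `rebalance_window`, `validation_window`, and `env_kwargs` are assumptions taken loosely from the tutorial, not a fix for the `ValueError`:

```python
from finrl.agents.stablebaselines3.models import DRLEnsembleAgent

# illustrative hyperparameters, not the notebook's exact values
A2C_model_kwargs = {"n_steps": 5, "ent_coef": 0.005, "learning_rate": 0.0007}
PPO_model_kwargs = {"ent_coef": 0.01, "n_steps": 2048, "learning_rate": 0.00025, "batch_size": 128}
DDPG_model_kwargs = {"buffer_size": 10_000, "learning_rate": 0.0005, "batch_size": 64}
timesteps_dict = {"a2c": 10_000, "ppo": 10_000, "ddpg": 10_000}

# `processed`, the date constants, the window sizes and `env_kwargs`
# are assumed to be defined earlier in the notebook
ensemble_agent = DRLEnsembleAgent(
    df=processed,
    train_period=(TRAIN_START_DATE, TRAIN_END_DATE),
    val_test_period=(TEST_START_DATE, TEST_END_DATE),
    rebalance_window=rebalance_window,
    validation_window=validation_window,
    **env_kwargs,
)

df_summary = ensemble_agent.run_ensemble_strategy(
    A2C_model_kwargs, PPO_model_kwargs, DDPG_model_kwargs, timesteps_dict
)
```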
While executing the notebook Stock_NeurIPS2018_2_Train.ipynb, the following line raises an exception:

```
model_a2c = agent.get_model("a2c")
```

The traceback points into `/usr/local/lib/python3.10/dist-packages/stable_baselines3/common/base_class.py`, in `__init__(self, policy, env, learning_rate, policy_kwargs, tensorboard_log, verbose, device, support_multi_env, monitor_wrapper, seed, use_sde, sde_sample_freq, ...)`
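For reference, a minimal sketch of the surrounding setup as the notebook does it; the `A2C_PARAMS` values are illustrative and `env_train` is assumed to be the SB3-wrapped training environment created earlier, while the exception itself is raised inside Stable-Baselines3's `BaseAlgorithm.__init__`:

```python
from finrl.agents.stablebaselines3.models import DRLAgent

# env_train: the StockTradingEnv wrapped for Stable-Baselines3 earlier in the notebook
agent = DRLAgent(env=env_train)

# hyperparameters shown for illustration only
A2C_PARAMS = {"n_steps": 5, "ent_coef": 0.01, "learning_rate": 0.0007}
model_a2c = agent.get_model("a2c", model_kwargs=A2C_PARAMS)
```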
Hi, I tried running the first tutorial notebook in Google Colab, and everything at the beginning ran without errors until this line:

```
df = YahooDownloader(start_date = TRAIN_START_DATE, ...
```
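A minimal sketch of what the full call looks like in the tutorial, assuming the current package layout and the Dow 30 ticker list from `config_tickers` (the date constants are placeholders defined earlier in the notebook):

```python
from finrl import config_tickers
from finrl.meta.preprocessor.yahoodownloader import YahooDownloader

df = YahooDownloader(
    start_date=TRAIN_START_DATE,   # e.g. "2009-01-01"
    end_date=TRADE_END_DATE,       # e.g. "2021-10-31"
    ticker_list=config_tickers.DOW_30_TICKER,
).fetch_data()
```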
I just transferred FinRL-Tutorials/3-Practical/FinRL_MultiCrypto_Trading.ipynb to a file (main.py).

Error 1: if I want to debug my script (main.py), I get an error: ...
The file [FinRL_HyperparameterTuning_Optuna.ipynb](https://github.com/AI4Finance-Foundation/FinRL-Tutorials/blob/master/4-Optimization/FinRL_HyperparameterTuning_Optuna.ipynb) gives me the following error:

```
TypeError                                 Traceback (most recent call last)
Cell In[17], line 16
      3 env_kwargs = {
      4     "hmax": 100,
      5     "initial_amount": 1000000,
   (...)
...
```
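A sketch of an `env_kwargs` dict that matches recent `StockTradingEnv` versions; a `TypeError` at this cell often means the dict is missing arguments (e.g. `num_stock_shares`) or passes scalars where newer FinRL releases expect per-stock lists, but treat the exact field names below as assumptions to check against your installed version:

```python
from finrl.meta.env_stock_trading.env_stocktrading import StockTradingEnv

stock_dimension = len(train.tic.unique())
state_space = 1 + 2 * stock_dimension + len(INDICATORS) * stock_dimension

# newer versions expect one entry per stock
buy_cost_list = sell_cost_list = [0.001] * stock_dimension
num_stock_shares = [0] * stock_dimension

env_kwargs = {
    "hmax": 100,
    "initial_amount": 1_000_000,
    "num_stock_shares": num_stock_shares,
    "buy_cost_pct": buy_cost_list,
    "sell_cost_pct": sell_cost_list,
    "state_space": state_space,
    "stock_dim": stock_dimension,
    "tech_indicator_list": INDICATORS,
    "action_space": stock_dimension,
    "reward_scaling": 1e-4,
}

e_train_gym = StockTradingEnv(df=train, **env_kwargs)
```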
I have been changing the amount of data downloaded from the Alpaca API, and I am wondering if anyone can help with how I can plot this, in the same way ...
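Since the target plot is cut off in the question, here is only a generic matplotlib sketch; the column names `timestamp` and `close` are assumptions about the DataFrame returned by the Alpaca download:

```python
import matplotlib.pyplot as plt

# df: assumed to be the downloaded Alpaca DataFrame, one row per bar,
# with 'timestamp' and 'close' columns
df = df.sort_values("timestamp")

plt.figure(figsize=(12, 5))
plt.plot(df["timestamp"], df["close"])
plt.xlabel("Date")
plt.ylabel("Close price")
plt.title("Downloaded Alpaca data")
plt.tight_layout()
plt.show()
```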
turbulence_threshold and risk_indicator_col are used during backtesting but not during training; will this cause a train/backtest mismatch problem?
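For reference, a sketch of how the NeurIPS2018 notebook splits the two environments: the training env is built without a turbulence threshold, while the trading (backtest) env sets one. Argument names follow the tutorial; the threshold value and import path are assumptions:

```python
from finrl.meta.env_stock_trading.env_stocktrading import StockTradingEnv

# training environment: no turbulence-based risk control
e_train_gym = StockTradingEnv(df=train, **env_kwargs)

# trading / backtest environment: positions are liquidated when the
# risk indicator column (here VIX) exceeds the threshold
e_trade_gym = StockTradingEnv(
    df=trade,
    turbulence_threshold=70,
    risk_indicator_col="vix",
    **env_kwargs,
)
```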
This command fails:

```
!pip install git+https://github.com/AI4Finance-Foundation/FinRL.git
```

Output:

```
Running command git clone --filter=blob:none --quiet https://github.com/AI4Finance-Foundation/FinRL.git 'C:\Users\o00494123\AppData\Local\Temp\pip-req-build-9ezxx0oj'
Running command git clone --filter=blob:none --quiet https://github.com/AI4Finance-Foundation/ElegantRL.git 'C:\Users\o00494123\AppData\Local\Temp\pip-install-2rq3ux93\elegantrl_88282259f5a841b29297adcf2c823715'
Running command git clone --filter=blob:none --quiet https://github.com/quantopian/pyfolio.git ...
```
I tested the NeurIPS2018 demo with Stable-Baselines3, using the SAC agent and training on a GPU. When I increased the batch size from 128 to 512, I saw no change in GPU ...
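A sketch of how the batch size is typically set through the DRLAgent wrapper, with the other hyperparameter values shown only as assumptions. With a small MLP policy the bottleneck is often environment stepping and replay sampling on the CPU, so GPU utilization may barely move when the batch size changes:

```python
from finrl.agents.stablebaselines3.models import DRLAgent

SAC_PARAMS = {
    "batch_size": 512,       # raised from 128
    "buffer_size": 100_000,
    "learning_rate": 1e-4,
    "learning_starts": 100,
    "ent_coef": "auto_0.1",
}

agent = DRLAgent(env=env_train)
model_sac = agent.get_model("sac", model_kwargs=SAC_PARAMS)

# confirm which device Stable-Baselines3 actually selected
print(model_sac.device)
```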
In the NeurIPS2018 demo, close/low/high values are used as states; doesn't this introduce future (look-ahead) leakage?