tforce_btc_trader
IndexError when running hypersearch.py
Hi,
The following error occurs right at the end of a training run when running hypersearch.py with no other arguments:
```
100% Custom: 0.885 Sharpe: 0.000 Return: 0.000 Trades: 0[<0] 10000[=0] 0[>0]
Running no-kill test-set
Traceback (most recent call last):
  File "hypersearch.py", line 852, in <module>
    main()
  File "hypersearch.py", line 848, in main
    y_list=Y
  File "I:\toolkits\tforce_btc_trader\gp.py", line 193, in bayesian_optimisation2
    y_list.append(loss_fn(params))
  File "hypersearch.py", line 799, in loss_fn
    reward = hsearch.execute(vec2hypers(params))
  File "hypersearch.py", line 624, in execute
    env.train_and_test(agent, self.cli_args.n_steps, self.cli_args.n_tests, -1)
  File "I:\toolkits\tforce_btc_trader\btc_env.py", line 619, in train_and_test
    self.run_deterministic(runner, print_results=True)
  File "I:\toolkits\tforce_btc_trader\btc_env.py", line 587, in run_deterministic
    next_state, terminal, reward = self.execute(runner.agent.act(next_state, deterministic=True))
  File "I:\toolkits\tforce_btc_trader\btc_env.py", line 425, in execute
    pct_change = self.prices_diff[step_acc.i + 1]
IndexError: index 12864 is out of bounds for axis 0 with size 12864
```
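The shape of that error is a classic off-by-one: `prices_diff` has size 12864, and on the final step the env reads `step_acc.i + 1`, which is exactly one past the last valid index. A minimal sketch of the pattern (names hypothetical, not the actual `btc_env.py` code):

```python
import numpy as np

# A diff array always has one fewer element than the price series it came
# from, so the last step cannot look one index ahead.
prices = np.arange(1.0, 12866.0)             # 12865 prices
prices_diff = np.diff(prices) / prices[:-1]  # 12864 percent changes

i = 12863                  # last valid index of prices_diff
_ = prices_diff[i]         # fine
try:
    _ = prices_diff[i + 1]
except IndexError as e:
    # "index 12864 is out of bounds for axis 0 with size 12864"
    print(e)
```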
The issue is here, where `limit` is hard-coded for `mode == TEST`. It should be simple to set it to `n_test` or some fraction of it, but I was still hitting the error when setting it to `n_test` or `n_test - 1`; I never got around to investigating fully. Might not get to it for a bit, if you wanna take a stab.
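Until the `limit` logic is sorted out properly, one defensive option is to clamp the lookahead index so the last step reuses the final diff instead of overrunning the array. This is a hypothetical guard, not the repo's actual fix; `prices_diff` and `i` stand in for the env's attributes:

```python
def safe_pct_change(prices_diff, i):
    """Return the next-step percent change, clamping at the final index.

    Hypothetical helper: on the last step, i + 1 would be out of bounds,
    so we reuse the final element rather than raise IndexError.
    """
    next_i = min(i + 1, len(prices_diff) - 1)
    return prices_diff[next_i]
```

This masks the off-by-one rather than fixing the episode-length bookkeeping, so it's a stopgap at best.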
Hi, it only works like the example dataset once you have over 1M records; you probably had 128640. I decreased 10000 to 1000 in line 403 of btc_env (`n_steps = n_steps * 1000`), and I also removed a zero in line 230: `limit = 4000 if full_set else 1000`. So I don't know which of the two made it work.
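For reference, the two tweaks above amount to something like the following. Line numbers may differ between copies of `btc_env.py`, and the pre-change values are inferred from "decreased 10000 to 1000" and "removed a zero", so treat this as a sketch:

```python
def scaled_n_steps(n_steps):
    # btc_env.py ~line 403: was n_steps * 10000; smaller multiplier keeps
    # a ~128k-row dataset from being overrun.
    return n_steps * 1000

def record_limit(full_set):
    # btc_env.py ~line 230: was (inferred) 40000 if full_set else 10000.
    return 4000 if full_set else 1000
```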
I'm using data from Poloniex as my base, since it's super easy to pull data from them and I want the test/live data to at least come from the same exchange. The downside is that the 5-minute granularity returns around 350k records, so I had to implement the tweak from techar above.
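For anyone wanting to reproduce the data pull: Poloniex's legacy public API exposed a `returnChartData` command where `period=300` gives 5-minute candles. A hedged sketch of building the request URL (verify the endpoint against whatever Poloniex currently offers before relying on it):

```python
import time
import urllib.parse

def chart_data_url(pair="USDT_BTC", days=365, period=300):
    """Build a legacy-Poloniex returnChartData URL.

    period=300 seconds -> 5-minute candles, the granularity mentioned
    above. The endpoint and parameter names are from the old public API.
    """
    end = int(time.time())
    start = end - days * 86400
    params = urllib.parse.urlencode({
        "command": "returnChartData",
        "currencyPair": pair,
        "start": start,
        "end": end,
        "period": period,
    })
    return f"https://poloniex.com/public?{params}"
```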
I more or less merged in the changes from the post referenced by the NotImplementedError when using live-ish mode. It warms up OK, but after it pulls the next record following an update it crashes with an index error; I suspect it may be related to this issue.