Julian Z.

Results: 9 issues by Julian Z.

Can anybody provide an example or demo of MARL within FinRL? Thanks.

help_wanted

After numerous rounds of trial and error, I can confirm that training with ElegantRL only runs if the input covers more than one year of trading data; otherwise, the training enters an infinite...

bug
good first issue
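
A hedged workaround for the apparent one-year threshold described above: check the length of the input data before handing it to the ElegantRL trainer. The 252-day cutoff and the `date` column name are assumptions, not confirmed values.

```python
# Sketch of a guard against too-short inputs before training with ElegantRL.
# MIN_TRADING_DAYS = 252 (roughly one trading year) is an assumed cutoff;
# the exact threshold that triggers the hang has not been confirmed.
MIN_TRADING_DAYS = 252

n_days = df["date"].nunique()  # df: the preprocessed FinRL dataframe (assumed)
if n_days <= MIN_TRADING_DAYS:
    raise ValueError(
        f"Only {n_days} trading days in the input; "
        "training with ElegantRL may enter an infinite loop."
    )
```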

I am training and predicting with ElegantRL using TD3, but somehow training seems to enter an infinite loop. I am wondering if there is a threshold on how much data...

bug
good first issue

Hi AI4Finance, when going through the code in the tutorials, I found that ElegantRL trains and saves models simultaneously, but SB3 does not. I want to reuse the models trained...

help_wanted
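
For the SB3 side of the question above, models can be saved and reloaded explicitly. A minimal sketch, assuming a PPO agent and an already-built FinRL-style training env named `train_env` (both illustrative choices, not the tutorial's exact code):

```python
from stable_baselines3 import PPO

# Train, then persist the model so it can be reused without retraining.
model = PPO("MlpPolicy", train_env)
model.learn(total_timesteps=10_000)
model.save("trained_ppo")        # writes trained_ppo.zip to disk

# Later, e.g. in a separate trading script:
model = PPO.load("trained_ppo")
action, _ = model.predict(obs)   # obs: an observation from the trading env
```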

Hi everyone, in traditional RL, every step consists of (S, A -> S', R), but in the context of FinRL, an action taken, i.e., a buy or sell at whatever quantity,...

good first issue
discussion
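
For reference, the transition structure mentioned above maps onto the usual gym-style loop. A sketch assuming the classic 4-tuple `step` API and an already-built FinRL trading env called `env`; `agent.act` is a placeholder for whatever policy produces the action:

```python
# One (S, A -> S', R) transition in a gym-style trading env (illustrative only).
state = env.reset()
action = agent.act(state)                          # e.g., per-stock buy/sell quantities
next_state, reward, done, info = env.step(action)  # reward: change in portfolio value
```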

Hello everyone, I found that FinRL only learns which actions to take, i.e., buy or sell and in what quantity, via the neural networks during training. What about price? Which price...

discussion

Hi everyone, after going through the code carefully, I believe that with ElegantRL it is necessary to construct a new env for testing/trading after training with the old env. Training via...

bug
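
A minimal sketch of the pattern described above, assuming the tutorial-style `StockTradingEnv` and an `env_kwargs` dict; the date split and keyword names are illustrative, not verified against a specific FinRL version:

```python
# Build separate environments for training and for out-of-sample trading,
# instead of reusing the training env (which keeps its internal state).
train_df = df[df["date"] < "2021-01-01"]
trade_df = df[df["date"] >= "2021-01-01"]

train_env = StockTradingEnv(df=train_df, **env_kwargs)
trade_env = StockTradingEnv(df=trade_df, **env_kwargs)  # fresh env, fresh account state
```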

Hi everyone, when I ran the code from the SB3 tutorial, the resulting actions were only 100 and 0 (mostly 0, with 100 only on the first 2 days). I believed...

bug

An agent trained via ElegantRL, given the same input state, outputs different and even seemingly random actions on each prediction. Shouldn't an agent output deterministic actions after learning? Is...

bug
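
One way to check whether the randomness above comes from a stochastic policy rather than from training itself: for comparison, SB3 agents expose an explicit switch between stochastic and deterministic action selection at prediction time. A sketch assuming a trained SB3 model and an observation `obs`; whether ElegantRL has a direct analogue is not confirmed here:

```python
# With deterministic=False (the default), the policy may sample an action, so
# repeated calls on the same state can differ; deterministic=True returns the
# policy's mean/argmax action instead.
stochastic_action, _ = model.predict(obs, deterministic=False)
deterministic_action, _ = model.predict(obs, deterministic=True)
```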