gym-mtsim
New models and issues caused by signal_features
Hi, thank you for sharing your code. Could you provide more examples of models? The performance of A2C is not very good, so I am trying my own data (including newly added signal_features) and experimenting with different models, but they do not perform well either, especially the DQN model. This may be a silly question; please forgive my unfamiliarity with reinforcement learning. Additionally, I encountered errors when using the DDPG model. I have included my modifications and the error below. I would be grateful for your help.

```python
def _process_data(
    self,
    keys: List[str] = ['Stochastic_K_1', 'Stochastic_D_1', 'Stochastic_K_2', 'Stochastic_D_2',
                       'MACD_DIF', 'MACD_DEA', 'MACD_Histogram', 'Moving_Average'],
) -> np.ndarray:
    signal_features = {}

    for symbol in self.trading_symbols:
        # Look up the selected indicator columns for this symbol at a given time point.
        get_signal_at = lambda time: \
            self.original_simulator.price_at(symbol, time)[keys]

        if self.multiprocessing_pool is None:
            p = list(map(get_signal_at, self.time_points))
        else:
            p = self.multiprocessing_pool.map(get_signal_at, self.time_points)

        signal_features[symbol] = np.array(p)

    # data = self.prices
    # Stack the per-symbol feature arrays column-wise into a single 2D array.
    signal_features = np.column_stack(list(signal_features.values()))
    return signal_features
```
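A hedged aside on the algorithm choice (not part of the original posts): DQN in stable-baselines3 only supports discrete action spaces, while MtEnv, as far as I understand, exposes a continuous Box action space with Dict observations, so DQN is not expected to work well here. Off-policy continuous-control algorithms such as DDPG also place extra requirements on the action space (e.g. finite bounds), which could be related to the error mentioned above, though that is only a guess. The sketch below shows how one might sanity-check the spaces and try the on-policy algorithms the repository examples use; the environment id `forex-hedge-v0` and the toy step budget are assumptions, not taken from this issue.

```python
# A minimal, illustrative sketch (not from this issue) of checking the spaces
# and trying a couple of stable-baselines3 algorithms on a gym-mtsim env.
import gym
import gym_mtsim  # noqa: F401  -- importing it registers the environments
from stable_baselines3 import A2C, PPO

env = gym.make('forex-hedge-v0')  # assumed registered environment id

# DQN needs a Discrete action space; this prints a continuous Box instead,
# which is why DQN is not a good fit for this environment.
print('action space:', env.action_space)
print('observation space:', env.observation_space)

for algo in (A2C, PPO):
    model = algo('MultiInputPolicy', env, verbose=0)
    model.learn(total_timesteps=10_000)  # toy budget, only to verify it runs
    print(algo.__name__, 'finished')
```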
Hello @123xian123, apologies for missing this issue. I hope your problem has been fixed by now. Regarding the examples, I have since updated the code and examples. Creating an agent that works well on the data requires some modifications, especially in the features and reward components. This project is just a tool to achieve that goal, and I don't intend to build that agent here.
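To make the advice about customizing the feature component concrete, below is a hedged sketch, not the author's code, of subclassing the environment and overriding the same `_process_data` hook used in the question so the indicator columns are standardized before the agent sees them. The indicator column names are placeholders borrowed from the question, and the reward component would be adjusted analogously wherever the environment computes the step reward.

```python
# A hedged sketch (assumptions noted in comments) of the kind of feature
# customization the reply refers to: subclass MtEnv and override _process_data
# so the custom indicator columns are standardized before use.
from typing import List

import numpy as np
from gym_mtsim import MtEnv


class NormalizedFeaturesMtEnv(MtEnv):
    # These keys are assumed to exist as columns in the simulator's symbol data,
    # exactly like the custom indicators added in the snippet above.
    FEATURE_KEYS: List[str] = ['MACD_DIF', 'MACD_DEA', 'MACD_Histogram', 'Moving_Average']

    def _process_data(self) -> np.ndarray:
        signal_features = {}
        for symbol in self.trading_symbols:
            rows = [
                self.original_simulator.price_at(symbol, time)[self.FEATURE_KEYS]
                for time in self.time_points
            ]
            signal_features[symbol] = np.array(rows)

        features = np.column_stack(list(signal_features.values()))
        # Standardize each feature column (zero mean, unit variance) so that
        # indicators on very different scales do not dominate the observation.
        mean = features.mean(axis=0)
        std = features.std(axis=0) + 1e-8
        return (features - mean) / std
```

One would then construct `NormalizedFeaturesMtEnv` with the same arguments as `MtEnv` and train on it in place of the registered environment.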