Maxim Podkolzine
@KOLANICH Can you describe this code please?
Hi @baothienpp, I've implemented `LinearCurvePredictor`, which is a simple but rather effective method. In my experiments, it was good enough and saved ~50% of training time, though I haven't tried...
> It took a lot of computational power because of Monte Carlo calculation.

Yeah, I can imagine.

> I am trying to understand your method, could you tell me more...
Hi @baothienpp , Correct, the features are the *whole curve*. So the predictor doesn't try to learn trends or something like that, it compares the given curve to the set...
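To make the whole-curve idea concrete, here is a minimal, hypothetical sketch (not the actual `LinearCurvePredictor` code; the class name, `burn_in` parameter, and least-squares choice are my assumptions): it treats the observed prefix of each completed learning curve as a feature vector and fits a linear least-squares map to the final accuracy, so a new run's partial curve can be scored against the bank of finished curves.

```python
import numpy as np

# Hypothetical sketch: predict the final accuracy of a training run from the
# first `burn_in` points of its learning curve, using a linear model fit on
# previously completed curves. Illustrative only, not the library's code.
class SimpleCurvePredictor:
    def __init__(self, burn_in=5):
        self.burn_in = burn_in
        self.w = None

    def fit(self, curves):
        # curves: shape (n_curves, curve_len); each row is the full
        # accuracy-per-epoch history of a finished training run.
        curves = np.asarray(curves, dtype=float)
        X = curves[:, :self.burn_in]                   # features: the observed prefix
        X = np.hstack([X, np.ones((X.shape[0], 1))])   # bias column
        y = curves[:, -1]                              # target: final accuracy
        self.w, *_ = np.linalg.lstsq(X, y, rcond=None)

    def predict(self, partial_curve):
        # partial_curve: at least `burn_in` observed accuracy values
        x = np.append(np.asarray(partial_curve[:self.burn_in], dtype=float), 1.0)
        return float(x @ self.w)
```

A predictor like this can then decide early whether a run is worth finishing, which is where the training-time savings come from.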
By the way, I've added a [bunch of examples](https://github.com/maxim5/hyper-engine/blob/master/hyperengine/examples) lately. Please take a look; I'm looking forward to your feedback.
@baothienpp Sounds great. Looking forward to seeing your model in action. When you test it, take a look at [the tests](https://github.com/maxim5/hyper-engine/blob/master/hyperengine/tests/curve_predictor_test.py).
Hi @baothienpp That'll be great if you do this. Please use this code:

```
@article{podkolzine17,
  author  = {Maxim Podkolzine},
  title   = {Hyper-Engine: Hyper-parameters Tuning for Machine Learning},
  journal = {https://github.com/maxim5/hyper-engine},
  ...
```
This looks really impressive: a burn-in period of 5 is very low! Thanks for the update. If you can make a pull request or somehow share your code, I'd incorporate it...
Sorry, I forgot about your question: right now, the model itself can go multi-gpu and that's it. I'd implement distributed training on the library level, but I think the trivial...
OK. No problem. > is the portfolio strategy you used, kind of randomly choosing a utility function every iteration ? Yes, see [`BayesianPortfolioStrategy`](https://github.com/maxim5/hyper-engine/blob/master/hyperengine/bayesian/strategy.py#L153). It is possible to fix the distribution...
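As a rough illustration of the portfolio idea (a hypothetical sketch, not the actual `BayesianPortfolioStrategy` implementation; the utility functions and class below are made up for this example): each iteration one utility function is drawn from a distribution and used to score candidates, and fixing the weights to a one-hot distribution pins a single utility, which is the "fix the distribution" option mentioned above.

```python
import random

def ucb(mean, std, best, kappa=2.0):
    # upper confidence bound: optimism proportional to predictive uncertainty
    return mean + kappa * std

def improvement_proxy(mean, std, best):
    # crude stand-in for an improvement-based utility (illustrative only)
    return max(mean - best, 0.0) + 0.5 * std

class PortfolioStrategy:
    """Each call samples one utility from the portfolio and applies it."""

    def __init__(self, utilities, weights=None, seed=None):
        self.utilities = utilities
        self.weights = weights          # None => uniform; one-hot pins a utility
        self.rng = random.Random(seed)

    def score(self, mean, std, best):
        if self.weights is None:
            utility = self.rng.choice(self.utilities)
        else:
            utility = self.rng.choices(self.utilities, weights=self.weights, k=1)[0]
        return utility(mean, std, best)
```

With `weights=[1, 0]` the strategy degenerates to plain UCB; with uniform weights it hedges across utilities, which is the usual motivation for a portfolio.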