Xiaoyun Zhang

94 comments by Xiaoyun Zhang

Looks like the OOM happens even before training starts. @michaelgsharp, is `CountRows` using streaming data, or does it load the IDataView into memory?
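For reference, a minimal sketch of counting rows without materializing the data in memory, assuming the data is loaded with `LoadFromTextFile` (which streams lazily from disk); the `ModelInput` class, column layout, and file name are placeholders, not from the original issue:

```csharp
using System;
using Microsoft.ML;
using Microsoft.ML.Data;

var mlContext = new MLContext();

// LoadFromTextFile is lazy: rows are read from disk on demand.
// "train.csv" and ModelInput are placeholder names for this sketch.
IDataView data = mlContext.Data.LoadFromTextFile<ModelInput>(
    "train.csv", hasHeader: true, separatorChar: ',');

// GetRowCount() returns null when the count is not known without a scan.
long? knownCount = data.GetRowCount();

if (knownCount is null)
{
    // Counting via a cursor scans the file once but keeps only one row in memory.
    long count = 0;
    using var cursor = data.GetRowCursor(Array.Empty<DataViewSchema.Column>());
    while (cursor.MoveNext())
        count++;
    Console.WriteLine($"Rows: {count}");
}

public class ModelInput
{
    [LoadColumn(0)] public float Feature1 { get; set; }
    [LoadColumn(1)] public float Label { get; set; }
}
```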

@wil70 let us know if @luisquintanilla's comments resolve your issue. One tip: you can disable the LightGbm trainer, as it uses a lot of memory when the training data goes...
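As a rough illustration (not necessarily the exact steps from the linked Model Builder guidance), here is how the LightGbm trainer can be excluded when using the Microsoft.ML.AutoML experiment API directly; the time budget and the commented-out label column name are placeholders:

```csharp
using Microsoft.ML;
using Microsoft.ML.AutoML;

var mlContext = new MLContext();

var settings = new RegressionExperimentSettings
{
    MaxExperimentTimeInSeconds = 600 // placeholder budget
};

// Drop the memory-hungry LightGbm trainer from the candidate list.
settings.Trainers.Remove(RegressionTrainer.LightGbm);

var experiment = mlContext.Auto().CreateRegressionExperiment(settings);
// var result = experiment.Execute(trainData, labelColumnName: "Label");
```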

@wil70 I'm looking into this issue right now. I made a few changes in AutoML.Net which disable the cache and enable diskConvert in FastTree, and hopefully that can resolve...
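For anyone who wants to apply the same memory-saving switches by hand, a sketch is below. It assumes the `CacheBeforeTrainer` setting on the AutoML experiment settings and the `DiskTranspose` option on FastTree (the public counterpart of the "diskConvert" flag mentioned above); the column names and hyperparameter values are illustrative only:

```csharp
using Microsoft.ML;
using Microsoft.ML.AutoML;
using Microsoft.ML.Trainers.FastTree;

var mlContext = new MLContext();

// 1. Keep AutoML from caching the whole training set in memory.
var settings = new RegressionExperimentSettings
{
    CacheBeforeTrainer = CacheBeforeTrainer.Off
};

// 2. When training FastTree directly, let it transpose the data on disk
//    instead of holding the transposed feature matrix in memory.
var fastTree = mlContext.Regression.Trainers.FastTree(new FastTreeRegressionTrainer.Options
{
    LabelColumnName = "Label",          // placeholder column names
    FeatureColumnName = "Features",
    DiskTranspose = true,
    NumberOfTrees = 100                 // illustrative value
});
```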

@michaelgsharp Nope, still looking into that. @luisquintanilla maybe we should provide a memory-saving AutoML solution, as we have a lot of similar OOM issues in both Model Builder...

We are exploring the option of continued training in AutoML now. As for excessively long trials, they most likely happen in tree-based trainers when `NumberOfTrees` or `NumberOfIterations` is large....
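As a rough workaround until that lands, runaway experiments can at least be bounded at the experiment level. The sketch below assumes the `MaxExperimentTimeInSeconds` and `CancellationToken` members of the experiment settings, with illustrative values; note that the soft time budget is checked between trials, so a single long trial may still overrun it:

```csharp
using System;
using System.Threading;
using Microsoft.ML;
using Microsoft.ML.AutoML;

var mlContext = new MLContext();

// Stop the experiment from the outside after an absolute wall-clock limit.
using var cts = new CancellationTokenSource(TimeSpan.FromMinutes(30)); // illustrative

var settings = new RegressionExperimentSettings
{
    MaxExperimentTimeInSeconds = 1800, // soft budget, checked between trials
    CancellationToken = cts.Token      // external stop signal for the experiment
};

var experiment = mlContext.Auto().CreateRegressionExperiment(settings);
```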

Can you provide a minimal reproduction example, and also the following information: the version of TensorFlow you used to train the model

I'll take a look. In the meantime, to disable trainers you can refer to this comment: https://github.com/dotnet/machinelearning-modelbuilder/issues/1998#issuecomment-1026240486