Anthony Blaom, PhD

Results: 815 comments by Anthony Blaom, PhD

Makes sense. I expect the change would happen around here: https://github.com/JuliaAI/MLJBase.jl/blob/d2af6bd2ca7b85399db0cf79a1f96f738d342aea/src/show.jl#L72

In #841, items returned by `report(mach)` and `fitted_params(mach)` with empty named tuple values (or `nothing`) are dropped.
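
For illustration, the rule could be sketched like this (the helpers `_empty` and `drop_empty` below are hypothetical, not the actual MLJBase internals):

```julia
# Hypothetical helpers illustrating the filtering rule in #841:
# drop entries whose value is `nothing` or an empty named tuple.
_empty(v) = v === nothing || (v isa NamedTuple && isempty(v))

drop_empty(nt::NamedTuple) =
    (; [k => v for (k, v) in pairs(nt) if !_empty(v)]...)

drop_empty((a = 1, b = NamedTuple(), c = nothing))  # (a = 1,)
```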

Thanks @ericphanson for flagging this. There was a request for this a while ago by @CameronBieganek, but I can't find it just now. Sometimes this might introduce scaling issues, for...

> However that kind of design always feels like perhaps we aren't "inverting control to the caller" and that a more compositional flow might be better overall. E.g. I could...

I'm curious, what is your use case for collecting the out-of-sample predictions? Are you doing some kind of model stacking, perhaps? We do have `Stack` for that.
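
For context, here is a minimal `Stack` sketch; the model choices and synthetic data are my assumptions for illustration:

```julia
using MLJ

# load two base models and a metalearner (assumes the providing
# packages are in the active environment):
LinearRegressor = @load LinearRegressor pkg=MLJLinearModels
KNNRegressor = @load KNNRegressor pkg=NearestNeighborModels

stack = Stack(metalearner = LinearRegressor(),
              resampling = CV(nfolds = 3),
              knn = KNNRegressor(),
              linear = LinearRegressor())

X, y = make_regression(100, 3)   # synthetic regression data
mach = machine(stack, X, y)
fit!(mach)
predict(mach, X)
```

Internally, `Stack` trains the metalearner on the out-of-sample predictions of the base models, which is exactly the data being collected by hand above.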

Thanks for pointing this out. I am not aware that unbound type parameters matter for performance in method dispatch. Can you explain a bit more about that, or point out...
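
To make sure we are talking about the same thing, here is a toy illustration of an "unbound" type parameter (not code from this PR):

```julia
# `T` is declared but appears nowhere in the signature, so it is
# "unbound" -- dispatch never determines it:
f(x::AbstractVector) where T = length(x)

# the bound equivalent, where `T` is pinned by the argument type:
g(x::AbstractVector{T}) where T = length(x)
```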

Thanks for raising this and the work at #845. I never imagined serialization of surrogate machines. They were introduced as a way to improve on an older way of "exporting"...

I guess I was not clear in my question. For me, the raison d'être for learning networks is to define new composite model types. The fact that they can be...

No, one has [always been able to export](https://alan-turing-institute.github.io/MLJ.jl/dev/composing_models/#Method-II:-Finer-control) learning networks as standalone composite model types. PR #841 just makes it easier. I will shortly post a doc PR at MLJ...
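
As a rough sketch of the linked "Method II" pattern (hedged: the API details vary across MLJ versions, so treat the names below as illustrative only):

```julia
using MLJ

X, y = make_regression(100, 3)   # synthetic data

# build a simple learning network:
Xs = source(X)
ys = source(y)

stand = machine(Standardizer(), Xs)
W = transform(stand, Xs)

RidgeRegressor = @load RidgeRegressor pkg=MLJLinearModels
ridge = machine(RidgeRegressor(), W, ys)
yhat = predict(ridge, W)

fit!(yhat)   # trains the whole network
yhat(X)      # predictions from the trained network

# exporting: wrap the network in a surrogate machine, which can then
# back a standalone composite model type (see the linked docs):
mach = machine(Deterministic(), Xs, ys; predict = yhat)
```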

@davnn This PR is finished or very nearly so. I wonder if you can confirm that this will serve your purposes. You can see from the example how to add...