Jerry Ling
Anthony's solution worked for MLJ, but actually if I directly use XGBoost it now has CUDA OOM:
```julia
function cross_train(df_all; model = (; tree_method="hist", eta=0.08, max_depth=7, num_round=90), Nfolds=5)
    ...
```
I suspect the object returned by `xgboost()` somehow keeps a reference to the Array?
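not a real fix, but a minimal sketch of what I'd try, assuming the problem is just each fold's `Booster` keeping device buffers alive: pull out what you need inside the loop, drop the booster, and force a GC before the next fold. The `cross_train_lite` name and the exact call pattern are illustrative, not my actual code:

```julia
using XGBoost

# Hypothetical sketch: keep only the per-fold predictions instead of the Booster,
# and explicitly release the trained model so its GPU buffers can be reclaimed
# before training the next fold.
function cross_train_lite(folds; eta=0.08, max_depth=7, num_round=90)
    preds = Vector{Vector{Float32}}()
    for (train, test) in folds
        bst = xgboost(train; num_round, eta, max_depth, tree_method="hist")
        push!(preds, predict(bst, test))
        finalize(bst)   # assumption: the Booster's finalizer frees the underlying C handle
        GC.gc()         # encourage reclamation of device memory before the next fold
    end
    return preds
end
```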
yeah I know EvoTrees and I'm a big fan of the package as well as of Jeremie (we met at this year's JuliaCon). Although it looks like Jeremie is on a long holiday...
can you reproduce with my code above? all you need is a big dataframe: partition it in the loop and keep the returned object from...
another place where this matters is MLJ model tuning... this is using a whopping 20 GB of VRAM (if you train just one model it's like 4 GB of...
I was using it 6 months ago, not actively using it any more
not working with the system at all right now
sway has since added ICC profile support: https://github.com/swaywm/sway/issues/1486#issuecomment-2344740148
if we have auto-detect in this package, but certain cluster backends live in separate packages (e.g. LSF), what should the user do? I imagine we might want to further split the backends...
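to make the question concrete, here's a hypothetical sketch of what auto-detect could look like, just checking the environment variables each scheduler sets (`SLURM_JOB_ID`, `LSB_JOBID`, `PBS_JOBID`); none of this is the package's actual API:

```julia
# Hypothetical sketch of backend auto-detection via scheduler environment variables.
# If the detected backend lives in a separate package (e.g. LSF), the user would
# have to load that package first, or we'd error with a hint to do so.
function detect_backend()
    haskey(ENV, "SLURM_JOB_ID") && return :slurm
    haskey(ENV, "LSB_JOBID")    && return :lsf
    haskey(ENV, "PBS_JOBID")    && return :pbs
    return :local
end
```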
it's planned and I have a prototype working already (we're giving a talk on this topic at JuliaCon in a month: https://pretalx.com/juliacon-2025/talk/J9MSQU/). the current implementation doesn't run implicit multithreading, but...