Jerry Ling

Results 651 comments of Jerry Ling

Anthony's solution worked for MLJ, but if I use XGBoost directly it now hits a CUDA OOM: ```julia function cross_train(df_all; model = (; tree_method="hist", eta=0.08, max_depth=7, num_round=90), Nfolds=5)...
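Roughly what that loop looks like in full (a minimal sketch, not the exact code: it assumes a `:label` column, a random fold assignment, and XGBoost.jl's `xgboost((X, y); kwargs...)` form with an XGBoost build new enough to accept `device="cuda"`):

```julia
using XGBoost, DataFrames

# Minimal sketch of the cross-validation loop; :label column and fold split are hypothetical.
function cross_train(df_all; model = (; tree_method="hist", eta=0.08, max_depth=7, num_round=90), Nfolds=5)
    fold = rand(1:Nfolds, nrow(df_all))           # assign each row to a random fold
    boosters = Any[]
    for k in 1:Nfolds
        train = df_all[fold .!= k, :]
        X = Matrix{Float32}(select(train, Not(:label)))
        y = Float32.(train.label)
        bst = xgboost((X, y);
                      num_round = model.num_round,
                      eta = model.eta,
                      max_depth = model.max_depth,
                      tree_method = model.tree_method,
                      device = "cuda")
        push!(boosters, bst)                      # every returned Booster is kept alive here
    end
    return boosters
end
```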

I suspect the thing returned by `xgboost()` somehow holds a reference to the Array?
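One way to probe that suspicion (a sketch with made-up data; it assumes dropping the last reference and running the GC triggers the Booster's finalizer, and that XGBoost's allocations show up in the overall device usage reported by CUDA.jl):

```julia
using XGBoost, CUDA

X = rand(Float32, 100_000, 50)
y = rand(Float32, 100_000)

CUDA.memory_status()          # device usage before training
bst = xgboost((X, y); num_round=50, tree_method="hist", device="cuda")
CUDA.memory_status()          # usage after training

bst = nothing                 # drop the only reference to the Booster
GC.gc()                       # run finalizers, which should free the native handle
CUDA.reclaim()                # hand blocks cached by CUDA.jl back to the driver
CUDA.memory_status()          # does usage fall back to the baseline?
```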

Yeah, I know EvoTrees and am a big fan of the package, as well as of Jeremie (we met at this year's JuliaCon), although it looks like Jeremie is on a long holiday...

Can you reproduce with my code above? All you need is a big dataframe: partition it in the loop and keep the returned object from...

![image](https://github.com/user-attachments/assets/859215d5-7669-46d4-8c02-4d43a3b2c2bf) Another place where this matters is MLJ model tuning... this is using a whopping 20 GB of VRAM (if you train just one model it's like 4 GB of...
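For reference, roughly the kind of tuning setup I mean (a sketch with synthetic data, assuming MLJXGBoostInterface's `XGBoostRegressor`; the grid size and range are arbitrary):

```julia
using MLJ, DataFrames

XGBoostRegressor = @load XGBoostRegressor pkg=XGBoost verbosity=0

X = DataFrame(rand(Float32, 100_000, 20), :auto)
y = rand(Float32, 100_000)

model = XGBoostRegressor(tree_method="hist", eta=0.08, num_round=90)
r = range(model, :max_depth; lower=4, upper=10)

tuned = TunedModel(model=model, tuning=Grid(goal=14), resampling=CV(nfolds=5),
                   range=r, measure=rms)

mach = machine(tuned, X, y)
fit!(mach)   # ~14 candidates × 5 folds ≈ 70 trainings; if each keeps GPU state alive, VRAM climbs
```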

sway has since gained ICC profile support: https://github.com/swaywm/sway/issues/1486#issuecomment-2344740148

If we have auto-detect in this package but certain cluster backends live in separate packages (e.g. LSF), what should the user do? I imagine we might want to further split the backends...
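For what it's worth, the detection itself can stay cheap even if the backend implementations live elsewhere, e.g. something like this env-var sniffing (a sketch; the symbol names are made up, the environment variables are the ones the schedulers actually set):

```julia
# Hypothetical backend detection from scheduler environment variables.
function detect_backend()
    haskey(ENV, "SLURM_JOB_ID") && return :slurm   # set by Slurm inside allocations
    haskey(ENV, "LSB_JOBID")    && return :lsf     # set by LSF inside jobs
    haskey(ENV, "PBS_JOBID")    && return :pbs     # set by PBS/Torque
    return :local
end

# The open question: if detect_backend() returns :lsf but the LSF backend
# package isn't loaded, should we error, warn, or fall back to :local?
detect_backend()
```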

It's planned, and I have a prototype working already (we're giving a talk on this topic at JuliaCon in a month: https://pretalx.com/juliacon-2025/talk/J9MSQU/). The current implementation doesn't run implicit multi-threading, but...