Laurae
@takashioya It seems the service I was using was taken down due to overuse. I'll develop a new one (and ideally refresh the content if I find the time).
ping @huanzhang12 if you have any news
@mjmckp any news?
Newer results:

```sh
cd Downloads/Nim
rm -rf laser
source /opt/intel/mkl/bin/mklvars.sh intel64
export OMP_NUM_THREADS=1
git clone --recursive git://github.com/numforge/laser
cd laser
git checkout dbfb31d
git submodule init
git submodule update
cd build...
```
Seems GPU is not supported: * "Don't use hardware accelerators e.g. GPU, TPU": https://github.com/tensorflow/decision-forests/blob/main/documentation/migration.md#dont-use-hardware-accelerators-eg-gpu-tpu * "No support for GPU / TPU.": https://github.com/tensorflow/decision-forests/blob/main/documentation/known_issues.md#no-support-for-gpu--tpu
Note: EBM is a GLM with a quadratic design matrix, not a GBM. Not in the same category.
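A minimal sketch of what "a GLM with a quadratic design matrix" means here: main effects plus all pairwise interaction columns, fitted as an ordinary linear model. This is an illustration of the model class only (synthetic data, numpy, plain least squares), not the actual EBM training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 3
X = rng.normal(size=(n, d))

# Quadratic design matrix: intercept, main effects, and all pairwise products.
cols = [np.ones(n)] + [X[:, i] for i in range(d)]
cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
D = np.column_stack(cols)

# Synthetic target with a known interaction term: y = 2 + x0 - 3*x1*x2 + noise.
y = 2.0 + X[:, 0] - 3.0 * X[:, 1] * X[:, 2] + rng.normal(scale=0.1, size=n)

# Ordinary least squares on the quadratic design -- a plain (G)LM fit,
# no boosted trees involved.
beta, *_ = np.linalg.lstsq(D, y, rcond=None)
```

The point of the comment is that such a model lives in a different category than a GBM, which fits an ensemble of decision trees by gradient boosting.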
@RAMitchell It actually seems to be slower. I am using only dmlc/xgboost@84d992b (16 days before this post) because https://github.com/dmlc/xgboost/pull/4323 broke all my installation scripts/packages. 1x Quadro P1000: | ?gb...
@RAMitchell It is more likely that depth=10 (depth>6 on GPU) is what causes the slowdown. R and Python have near-identical runtimes. Note that the data features are very...
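A back-of-envelope for why depth matters so much: the potential leaf count of a tree doubles with each level, so per-tree work grows roughly exponentially in depth. This is a generic illustration, not a measurement of xgboost's GPU kernels:

```python
# Maximum number of leaves in a binary tree of the given depth.
def max_leaves(depth):
    return 2 ** depth

print(max_leaves(6))                    # 64
print(max_leaves(10))                   # 1024
print(max_leaves(10) // max_leaves(6))  # 16x more potential leaves at depth 10
```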
@RAMitchell GPU hist xgboost seems to have difficulties dealing with very sparse data. The 0.1M dataset (15.1 MB as sparse, 100K observations x 695 features) wants to gobble 659 MB...
@RAMitchell It occurs with `hist` on CPU also.