
TPU support for TPOT?

Open neel04 opened this issue 4 years ago • 7 comments

Is there any way to run TPOT on a TPU? I didn't find any info about this in the docs or the README. Can anyone shed some light on this?

neel04 avatar Dec 18 '20 10:12 neel04

Currently, TPOT does not support TPUs.

weixuanfu avatar Dec 18 '20 13:12 weixuanfu

Alright. Would you also happen to know how long TPOT takes on a regression problem (just a rough estimate) on a standard GPU like a V100 or P100? @weixuanfu

neel04 avatar Dec 18 '20 13:12 neel04

I assume you mean the "TPOT cuML" configuration, which uses GPU-accelerated estimators from RAPIDS cuML and DMLC XGBoost. Unfortunately, I do not know whether RAPIDS cuML supports the V100 or P100, and I have only limited experience using it on regression problems. I have tested "TPOT cuML" on a 2080 Ti with a regression benchmark of 50,000 samples and 50 features; it took 1-2 days to finish 100 generations with a population size of 100 and cv=5.
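
For reference, a minimal sketch of how such a run could be configured, assuming the TPOT cuML environment is installed; the synthetic dataset below just stands in for the benchmark described above:

from tpot import TPOTRegressor
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a 50,000-sample, 50-feature regression benchmark
X, y = make_regression(n_samples=50000, n_features=50, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# config_dict="TPOT cuML" restricts the search space to GPU-accelerated
# estimators from RAPIDS cuML and XGBoost
tpot = TPOTRegressor(
    generations=100,
    population_size=100,
    cv=5,
    config_dict="TPOT cuML",
    verbosity=2,
    random_state=42,
)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))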

weixuanfu avatar Dec 18 '20 14:12 weixuanfu

Is it compulsory to use TPOT cuML for GPU acceleration, or would the vanilla pip install use the GPU anyway?

neel04 avatar Dec 18 '20 16:12 neel04

No, it is not compulsory; the "TPOT cuML" configuration is optional, but the plain pip install will not use the GPU on its own. Please check this installation guide for TPOT cuML.

weixuanfu avatar Dec 18 '20 16:12 weixuanfu

Follow-up question: is it possible to use multiple GPUs while training TPOT?

carterrees avatar Feb 18 '21 00:02 carterrees

@carterrees Yes, you can do this by starting a Dask CUDA cluster and setting use_dask=True. This brief recording shows an example: https://www.youtube.com/watch?v=7z4OJQdY_mw

from dask.distributed import Client
from dask_cuda import LocalCUDACluster
cluster = LocalCUDACluster() # use every GPU on the machine by default
client = Client(cluster)
...
# TPOT as normal, passing use_dask=True and config_dict="TPOT cuML"
...
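
For completeness, a fuller sketch of the same pattern with an explicit TPOT call; the estimator settings and synthetic data are illustrative, and it assumes the TPOT cuML environment plus dask-cuda are installed on a machine with at least one NVIDIA GPU:

from dask.distributed import Client
from dask_cuda import LocalCUDACluster
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from tpot import TPOTRegressor

cluster = LocalCUDACluster()  # one Dask worker per visible GPU by default
client = Client(cluster)

X, y = make_regression(n_samples=10000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# use_dask=True distributes pipeline evaluations across the Dask CUDA workers;
# config_dict="TPOT cuML" limits the search to GPU-accelerated estimators
tpot = TPOTRegressor(
    generations=5,
    population_size=20,
    cv=5,
    config_dict="TPOT cuML",
    use_dask=True,
    verbosity=2,
)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))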

beckernick avatar Feb 20 '21 17:02 beckernick