Alankar Shukla

Results: 68 comments of Alankar Shukla

Hi @CrafterKolyan, I tried to write code that supports functions, but I ran into many difficulties, like differentiating between blocks such as ```DATE_FORMAT(date_complaint_raised,'%Y-%m')``` and ```Count(*)```; the parser gets confused...
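For what it's worth, a minimal sketch of one way to tell such function blocks apart, assuming the `sqlparse` library rather than the project's own parser (the query string is just an example):

```python
import sqlparse
from sqlparse.sql import Function

def find_functions(token_list):
    """Recursively collect function-call groups such as DATE_FORMAT(...) or COUNT(*)."""
    found = []
    for tok in token_list.tokens:
        if isinstance(tok, Function):
            found.append(str(tok))
        if tok.is_group:
            found.extend(find_functions(tok))
    return found

sql = "SELECT DATE_FORMAT(date_complaint_raised,'%Y-%m') AS month, COUNT(*) FROM complaints GROUP BY month"
statement = sqlparse.parse(sql)[0]
print(find_functions(statement))
# e.g. ["DATE_FORMAT(date_complaint_raised,'%Y-%m')", "COUNT(*)"]
```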

@Y0rkieQvQ, see this: I was having a similar problem because I was using the latest FastChat with the Vicuna 1.0 model. You can solve this issue by regenerating the Vicuna weights...

I'm also facing the same error now; last Friday it was running properly, and now it gives an error even though I installed the latest version. @rnyak, can you look into this?

Install this if using a notebook:
```
!pip install cudf-cu11==22.12 rmm-cu11==22.12 --extra-index-url=https://pypi.ngc.nvidia.com
!pip install cugraph-cu11==22.12 dask-cuda==22.12 dask-cudf-cu11==22.12 pylibcugraph-cu11==22.12 --extra-index-url=https://pypi.ngc.nvidia.com/
!pip install cuml-cu11==22.12 raft_dask_cu11==22.12 dask-cudf-cu11==22.12 pylibraft_cu11==22.12 ucx-py-cu11==0.29.0 --extra-index-url=https://pypi.ngc.nvidia.com
```
and the error goes away...
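As a quick sanity check that the RAPIDS wheels above actually landed (a minimal sketch; the versions are the ones pinned in the install commands):

```python
import cudf
import cuml

# Both should report the 22.12 release installed above.
print(cudf.__version__)
print(cuml.__version__)
```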

I was getting this error when training t4rec on custom data. Until last Friday cuDF was not mandatory to install, but now it is. > Install this if using...

Thanks for responding, @rnyak. I didn't do hyperparameter tuning; I used these params from [this repo](https://github.com/bschifferer/Kaggle-Otto-Comp/tree/master/01e_FE_Transformer), which uses the same data. The only difference is the batch...

I see, @rnyak, but I'm only using his code to preprocess the dataset, and after that I'm trying to use the model architecture given in one of the examples of this...

> @alan-ai-learner how are generating the schema file if you are not using NVTabular? thanks. 

I'm using this manual schema: https://github.com/bschifferer/Kaggle-Otto-Comp/blob/master/01e_FE_Transformer/test.pb
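In case it helps, a minimal sketch of how such a hand-written schema file can be loaded, assuming the `merlin_standard_lib.Schema` helper used in the transformers4rec examples and a local copy of test.pb (the column names below are placeholders, not the ones in that file):

```python
from merlin_standard_lib import Schema

# Path to a local copy of the hand-written protobuf-text schema.
SCHEMA_PATH = "test.pb"

schema = Schema().from_proto_text(SCHEMA_PATH)

# Narrow the schema down to the features the model should consume.
schema = schema.select_by_name(["item_id-list", "category-list"])
```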

Hey @awasthiabhijeet, are you training on a custom dataset? If yes, can you tell me how to create a custom dataset? Thanks.

Okay @awasthiabhijeet, in the Spider dataset, train_spider.json, each question contains a few keys, like query, question, sql, etc. So what exactly is that sql key, what does it contain, and is there...
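A minimal sketch for poking at that file to see what the sql key holds (assumes train_spider.json is in the working directory; the printed keys are examples, not an exhaustive list):

```python
import json

# Load the Spider training split and inspect one example.
with open("train_spider.json") as f:
    examples = json.load(f)

first = examples[0]
print(sorted(first.keys()))                 # e.g. db_id, query, question, sql, ...
print(first["query"])                       # the raw SQL string
print(json.dumps(first["sql"], indent=2))   # the structured, parsed form of that query
```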