Karan Purohit

Results 37 comments of Karan Purohit

I have tested on very small data (100 KB), and it showed results only after the end of each epoch. I want to see results at every step. As on bigger...
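Per-step visibility usually means moving the logging call inside the inner batch loop rather than after it. A minimal pure-Python sketch of the difference (the dummy loss and the loop sizes here are illustrative assumptions, not values from the actual pretraining script):

```python
def train(num_epochs=3, steps_per_epoch=4, log_every_step=True):
    """Toy training loop contrasting per-step vs. per-epoch logging."""
    logs = []
    for epoch in range(num_epochs):
        for step in range(steps_per_epoch):
            # Dummy stand-in for a real training step's loss.
            loss = 1.0 / (1 + epoch * steps_per_epoch + step)
            if log_every_step:
                logs.append((epoch, step, loss))  # one entry per batch
        if not log_every_step:
            logs.append((epoch, step, loss))      # one entry per epoch
    return logs

per_step = train(log_every_step=True)    # 12 entries (3 epochs x 4 steps)
per_epoch = train(log_every_step=False)  # 3 entries (one per epoch)
```

With small data an epoch finishes quickly, so per-epoch output looks frequent; on a bigger corpus the gap between log lines grows unless logging happens inside the batch loop.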

```
python run_pretraining.py \
  --albert_config_file=model_configs/base/config.json \
  --do_train \
  --input_files=albert/* \
  --meta_data_file_path=meta_data \
  --output_dir=model_checkpoint/ \
  --strategy_type=mirror \
  --train_batch_size=8 \
  --num_train_epochs=3
```

Have you checked GPU usage? In my case, the GPU is being utilized.

I am following the lesson 4 notebook from the sentiment analysis part, which starts with loading the vocab file: `TEXT = pickle.load(open(f'{PATH}/TEXT.pkl','rb'))`. Then I defined my custom dataset using `lang_model-arxiv.ipynb`. Then I was...
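The quoted line is a plain pickle round trip. A self-contained sketch of the same pattern, using a plain list as a stand-in for the real torchtext `TEXT` field (an assumption; unpickling the real field requires the same library version it was saved with):

```python
import pickle
import tempfile
from pathlib import Path

# Temporary directory standing in for the notebook's PATH.
PATH = Path(tempfile.mkdtemp())

# Stand-in for the vocab-carrying TEXT field saved by the notebook.
vocab = ["<unk>", "<pad>", "the", "paper"]
with open(PATH / "TEXT.pkl", "wb") as f:
    pickle.dump(vocab, f)

# Load it back, matching the line quoted in the comment.
with open(PATH / "TEXT.pkl", "rb") as f:
    TEXT = pickle.load(f)
```

If loading the real `TEXT.pkl` fails, a mismatch between the saving and loading environments (library versions, class definitions) is the usual cause.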

If possible, could you check whether you are able to load the encoder or not? When are you going to release the newer version?

@arrrrrmin thanks for pointing this out. I think in my case I need to regenerate the training data with a smaller `max_sequence_length`, which will lower the `total_train_examples`. Is there any...
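Pretraining scripts in this family typically derive the step count from the example count, so changing `total_train_examples` changes how long training runs. A sketch of that arithmetic (the function name is illustrative, not the script's own):

```python
def num_train_steps(total_train_examples, train_batch_size, num_train_epochs):
    """Standard step-count formula: examples per epoch divided by
    batch size, summed over epochs (integer division drops the
    final partial batch)."""
    return total_train_examples * num_train_epochs // train_batch_size

# With the flags from the command above (batch size 8, 3 epochs),
# 1000 examples would yield 375 training steps.
steps = num_train_steps(1000, 8, 3)
```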

Still, I need to complete at least one epoch to pass the whole dataset through the model, don't I?

@Danny-Google @beamind @penut85420 @0x0539 Were you able to solve it? I want to use Chinese ALBERT, and I am using the Hugging Face pipeline for sequence classification, which gives an error that spiece.model...

It's a trade-off: increasing the image size increases inference time, and vice versa. For lower inference time, use a GPU with more RAM.
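The trade-off follows from pixel count: compute for a convolutional forward pass scales roughly with height times width, so doubling each side roughly quadruples the work. A small sketch of that relationship (the base resolution of 224 is an illustrative assumption):

```python
def relative_cost(height, width, base=224):
    """Approximate inference cost relative to a base resolution,
    assuming cost scales with the number of input pixels."""
    return (height * width) / (base * base)

# Doubling both sides quadruples the pixel count, and hence
# (roughly) the inference time.
relative_cost(448, 448)
```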