
9 issues by kemalaraz

It seems there is no link for the pretrained BERT model. Where can I get it for inference? Thanks

It seems the Hugging Face repository contains only the base model; I couldn't find the fine-tuned model and tokenizer for named entity recognition. Where can I find the trained...

Hello, I am having trouble finding the datasets since they are not free. I am trying to reproduce your results in different languages for my thesis. Can you at least...

Hello there, I am using bert-base-uncased and haven't changed config.ini (I just commented out bert-large and am using bert-base instead), but in your README the reported performance on SemEval is 0.88, whereas...
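For reference, judging by the args dump in a later issue the relevant flag appears to be `large_bert`; the edit described above would look something like this in config.ini (key names assumed, not verified against the repository):

```ini
; large_bert = True    ; bert-large commented out
large_bert = False     ; fall back to bert-base-uncased
```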

I trained the model on the nyt10 dataset for 10 epochs and the reported accuracy exceeded 100%: I got 103%. Can you explain that? Thanks
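An accuracy above 100% usually means the denominator is wrong, e.g. the total number of correct predictions gets divided by the number of batches instead of the number of examples. A minimal sketch of that failure mode (the function names are illustrative, not taken from the repository's code):

```python
def buggy_accuracy(correct_per_batch):
    # Bug: averages the raw per-batch correct COUNTS, i.e. divides the
    # total number of correct predictions by the number of batches
    # instead of by the number of examples seen.
    return sum(correct_per_batch) / len(correct_per_batch)

def fixed_accuracy(correct_per_batch, batch_size):
    # Correct: divide by the total number of examples actually seen.
    return sum(correct_per_batch) / (len(correct_per_batch) * batch_size)

# Three batches of 32 examples with 30, 28 and 31 correct predictions:
print(buggy_accuracy([30, 28, 31]))      # ~29.67, a nonsensical "accuracy"
print(fixed_accuracy([30, 28, 31], 32))  # ~0.927
```

A mismatch between the batch size used for counting and the one used for dividing (e.g. a smaller final batch counted as full-size) produces the same symptom on a smaller scale, which would fit a value like 103%.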

When I try to load the trained model with model.load_state_dict(torch.load("ckpt")) I get this error: RuntimeError: storage has wrong size: expected -4883207186230854459 got 768. When I searched, it says it...
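For what it's worth, that RuntimeError usually points to a corrupted or truncated checkpoint file, e.g. an interrupted download or a Git LFS pointer file saved in place of the real weights. A quick, hypothetical integrity check to run before loading, comparing file size and checksum against whatever the author published:

```python
import hashlib
import os

def checkpoint_fingerprint(path):
    # A corrupted or truncated download is a common cause of
    # "storage has wrong size" errors; comparing the file size and
    # MD5 digest against the values published for the checkpoint is
    # a quick sanity check before calling torch.load on it.
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return os.path.getsize(path), digest.hexdigest()
```

If the size or digest differ from the published ones, re-download the checkpoint (with `git lfs pull` if the repo uses LFS) rather than debugging the load call.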

I am getting the warning "Token indices sequence length is longer than the specified maximum sequence length for this model (730 > 512). Running this sequence through the model will...
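BERT caps input at 512 token ids, so longer sequences must be truncated (or split into windows) before inference. A minimal, library-free sketch of head truncation that keeps the final token, assuming the id list already includes the special tokens (e.g. [CLS] at the start and [SEP] at the end):

```python
MAX_LEN = 512  # BERT's maximum sequence length

def truncate_ids(token_ids, max_len=MAX_LEN):
    # Keep the first max_len - 1 ids plus the final one, so a trailing
    # [SEP] survives; sequences already within the limit pass through.
    if len(token_ids) <= max_len:
        return list(token_ids)
    return list(token_ids[: max_len - 1]) + [token_ids[-1]]
```

For a 730-token sequence this yields exactly 512 ids; tokenizers from libraries such as Hugging Face transformers can do the same via a truncation option, but the slicing above shows what that amounts to.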

```
@@@@@@@@@@@ args @@@@@@@@@@@
{'metric': 'micro_f1', 'cuda_device': 3, 'seed': 31415926535897932,
 'opt': 'adam', 'use_cls': True, 'subject_1': False, 'large_bert': False,
 'continue_train': False, 'eval': False, 'add_subject_loss': False,
 'weight_decay': 1e-05, 'lr': 5e-05, 'max_epoch': 100, ...
```

Where can I get the pretrained BERT model that you used, as stated in your paper? I think you are referring to the standard BERT trained by Google Research, but...