Mayar

Results: 14 issues from Mayar

There is no tutorial about trainers

Could you please provide us with a complete step-by-step tutorial on building a config file: what should be included, what should not, and why? I have noticed that not...
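For later readers of this issue, here is a minimal sketch of the top-level sections an AllenNLP training config usually contains (dataset reader, data paths, model, data loader, trainer). All type names and paths below are placeholders rather than values from the tutorial, and the layout assumes an AllenNLP 1.x style config.

```python
# Minimal sketch of an AllenNLP 1.x training config, written as a Python dict.
# A .jsonnet config file compiles down to the same structure.
# Every type name and path here is a placeholder / assumption.
from allennlp.common.params import Params

config = {
    "dataset_reader": {"type": "my_reader"},        # how raw files become Instances
    "train_data_path": "data/train.txt",            # placeholder path
    "validation_data_path": "data/validation.txt",  # placeholder path
    "model": {"type": "my_model"},                  # the registered model and its hyperparameters
    "data_loader": {"batch_size": 32, "shuffle": True},
    "trainer": {                                    # optimization-loop settings
        "optimizer": {"type": "adam", "lr": 1e-3},
        "num_epochs": 10,
        "cuda_device": -1,                          # -1 = CPU
    },
}

# `allennlp train` builds an equivalent Params object from the .jsonnet file.
params = Params(config)
```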

We're going to look at the first two of these, but if you're interested in using AllenNLP to serve models in production, you should definitely take a look at allennlp...

"list of pre-written dataset readers": this link does not work

I got this error:

(base) arij@arij-HP-ProBook-450-G4:~/allennlp_tutorial$ allennlp train -f --include-package tagging -s /tmp/tagging/lstm configs/train_lstm.jsonnet
2020-09-16 07:13:12,680 - INFO - allennlp.common.params - random_seed = 13370
2020-09-16 07:13:12,681 - INFO - allennlp.common.params...

![image](https://user-images.githubusercontent.com/38237790/90313762-caed1b80-df17-11ea-9846-50b881d1f079.png)
![image](https://user-images.githubusercontent.com/38237790/90313787-e1937280-df17-11ea-95f0-cfb451f84cf9.png)
![image](https://user-images.githubusercontent.com/38237790/90313796-f243e880-df17-11ea-8946-99b8f287b198.png)

Could anyone please explain the difference to me? Which is better to use, and why?

When I run

processed_text = []
processed_title = []
for i in dataset[:N]:
    file = open(i[0], 'r', encoding="utf8", errors='ignore')
    text = file.read().strip()
    file.close()
    processed_text.append(word_tokenize(str(preprocess(text))))
    processed_title.append(word_tokenize(str(preprocess(i[1]))))

I got the error
---------------------------------------------------------------------------...
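A minimal sketch of the same loop with the usual fixes (explicit imports and a `with` block so the file is always closed). The `dataset`, `preprocess`, and `N` objects below are toy stand-ins for the poster's own, since they are not shown in the excerpt, and `word_tokenize` assumes nltk's "punkt" data has been downloaded.

```python
# Sketch of the same file-reading and tokenizing loop, with stand-ins for
# the poster's own `dataset`, `preprocess`, and `N`.
import nltk
from nltk.tokenize import word_tokenize

nltk.download("punkt", quiet=True)  # word_tokenize needs the punkt model

def preprocess(text: str) -> str:
    # stand-in for the poster's cleaning function
    return text.lower()

# stand-in dataset: a list of (filepath, title) pairs
with open("example.txt", "w", encoding="utf8") as f:
    f.write("Some example document text.")
dataset = [("example.txt", "An example title")]
N = len(dataset)

processed_text = []
processed_title = []

for item in dataset[:N]:
    path, title = item[0], item[1]
    # `with` guarantees the file handle is closed even if tokenization raises
    with open(path, "r", encoding="utf8", errors="ignore") as f:
        text = f.read().strip()
    processed_text.append(word_tokenize(str(preprocess(text))))
    processed_title.append(word_tokenize(str(preprocess(title))))
```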

Hi! Shouldn't

pow_frequency = np.array(list(self.word_frequency.values())) ** 0.5

be

pow_frequency = np.array(list(self.word_frequency.values())) ** 0.75

instead?
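For context, the 3/4 exponent is the one used by word2vec's negative sampling, where each word is drawn with probability proportional to its count raised to 0.75; this damps very frequent words less aggressively than a 0.5 exponent would. A small sketch, with a toy dictionary standing in for the `self.word_frequency` attribute from the issue:

```python
import numpy as np

# Toy stand-in for self.word_frequency: word -> raw count
word_frequency = {"the": 1000, "cat": 50, "sat": 20}

# word2vec negative sampling raises unigram counts to the 3/4 power
pow_frequency = np.array(list(word_frequency.values())) ** 0.75
sampling_probs = pow_frequency / pow_frequency.sum()

print(dict(zip(word_frequency, sampling_probs)))
```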

Hello! I am sorry if I am missing the obvious, but I did it as written. First, I have Win10 and I am working from PowerShell using cmd; I cannot implement the...

While trying to understand bert_text_classification.ipynb, this part of the notebook:

from allennlp.data.token_indexers import PretrainedBertIndexer

token_indexer = PretrainedBertIndexer(
    pretrained_model="bert-base-uncased",
    max_pieces=config.max_seq_len,
    do_lowercase=True,
)
# apparently we need to truncate the sequence here, which...
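In case it helps later readers: the truncation comment is about BERT's fixed input limit of 512 wordpieces, which includes the [CLS] and [SEP] tokens the indexer adds itself. A rough sketch of the truncation idiom that typically accompanies this indexer setup; the helper name is an assumption, and `token_indexer` and `config` are taken from the excerpt above:

```python
# Rough sketch of the truncation step, assuming `token_indexer` and `config`
# from the excerpt above. `config.max_seq_len` is assumed to be <= 512,
# BERT's hard limit including the [CLS] and [SEP] special tokens.
def tokenize_and_truncate(sentence: str):
    # Clip to max_seq_len - 2 so there is room left for [CLS] and [SEP],
    # which the indexer adds on its own.
    return token_indexer.wordpiece_tokenizer(sentence)[: config.max_seq_len - 2]
```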