Bilal Ghanem

Results: 15 comments by Bilal Ghanem

I solved it by installing `tensorflow-gpu==1.15.0`.

I think the authors were planning to use E11, E21, etc., but then changed the code to use # and $. What I have done to solve the issue is...

@sandeeppilania I changed it in the code. Simply, in the function `convert_examples_to_features`, before the line `l = len(tokens_a)`, use `.replace` on the input text to convert them, e.g. replace `'E11'` with `'#'` and so on; a rough sketch is below.
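
Something like this is what I mean (a rough sketch, not the authors' code: the helper name is mine, and I'm assuming E11/E12 wrap the first entity and map to `#` while E21/E22 wrap the second and map to `$`):

```python
def normalize_markers(text):
    # Replace the old-style entity markers with the "#" / "$" markers that the
    # rest of convert_examples_to_features actually looks for.
    # NOTE: the exact E11/E12/E21/E22 -> "#"/"$" mapping is my assumption.
    for old, new in [("E11", "#"), ("E12", "#"), ("E21", "$"), ("E22", "$")]:
        text = text.replace(old, new)
    return text

# e.g. inside convert_examples_to_features, before `l = len(tokens_a)`:
#   tokens_a = tokenizer.tokenize(normalize_markers(example.text_a))
print(normalize_markers("E11 Bill Gates E12 founded E21 Microsoft E22 ."))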

> @bilalghanem I am asking something silly here, sorry about that,
> but on line `tokens_a = tokenizer.tokenize(example.text_a)` in function `convert_examples_to_features`
> I tried printing out tokens_a and this is...

@sandeeppilania yes, exactly. And this line finds the end position of the entity in case it spans more than a single word: `e12_p = l - tokens_a[::-1].index("#") + 1`
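
A toy example of what the reversed-index trick is doing (made-up tokens, not data from the repo; the final `+1` is just the repo's indexing offset, I'm only reproducing the arithmetic):

```python
# Locate the closing "#" of a multi-word entity by indexing the reversed list.
tokens_a = ["the", "#", "new", "york", "#", "office"]
l = len(tokens_a)                            # 6
closing = l - 1 - tokens_a[::-1].index("#")  # 4 -> index of the last "#"
e12_p = l - tokens_a[::-1].index("#") + 1    # 6 -> the value the repo's line computes
print(tokens_a[closing], e12_p)              # '#' 6
```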

Are you sure that you loaded the trained model? Also, you don't need to use `.predict`. Just follow their example: `preds = model(["i loved the spiderman movie!", "pineapple on pizza...
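
The excerpt doesn't show which library this is, so as a stand-in here is the same pattern with a Hugging Face `transformers` pipeline, which is likewise callable on a list of strings (the model path is hypothetical; the point is calling the model object directly, no `.predict`):

```python
from transformers import pipeline

# Hypothetical path to the model you just trained.
model = pipeline("text-classification", model="path/to/your/trained/model")
preds = model(["i loved the spiderman movie!"])  # call the object directly
print(preds)
```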

> Yes, I just trained it before !!! also I tried ditching .predict and still not working ... I didn't upload to hub because I don't want to ... Weird!...

Simply follow what the error says: in your command line, run `polyglot download embeddings2.da`.
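
If you'd rather trigger the download from Python instead of the shell, something like this should work (a sketch, assuming polyglot exposes its NLTK-style downloader module; the package name comes straight from the error message):

```python
# Download the Danish embeddings programmatically instead of via the CLI.
from polyglot.downloader import downloader

downloader.download("embeddings2.da")
```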