kinoc

Results: 24 comments of kinoc

@SaMnCo here is my "convert_ascii_to_tx1.lua" script, which has the input and output hardcoded, so modify as needed. It is just the top of "eval.lua": it loads the ascii file, then writes out...

@soumith Any clues with the TX1? The dedicated Linux GPU box is still a month+ away, but the TX1 is here now. ;p

Facing the same error. The easiest fix is to convert "words" into a "ratio" estimate, since the above code is always valid for "ratio". This would be my workaround....

I had a similar problem, but it appears to make progress after a re-clone of the repository. I think the process does not like doing "--data full" after doing "--data small".

Experienced the same error. Switching to bert produces an IndexError....
> File "Initial_test_k_lm.py", line 198, in
> optimizer_type="adamw") #adamw /lamb
> File "/home/kino/.local/lib/python3.6/site-packages/fast_bert/learner_lm.py", line 143, in fit
> outputs = self.model(inputs,...

See: [Version 2.4.1 breaks run_lm_finetuning.py, version 2.3.0 runs fine](https://github.com/huggingface/transformers/issues/2737) For us in fast-bert, the fix is modifying the function mask_tokens in data_lm.py, changing this line:
> labels[~masked_indices] = -1
to this line:
> labels[~masked_indices]...
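
For reference, here is a minimal sketch of a mask_tokens-style function with the updated ignore index. It is not the actual fast-bert code, just an illustration following the linked issue: newer transformers releases expect unmasked positions to carry the label -100 (PyTorch's CrossEntropyLoss default ignore_index) instead of -1.

```python
import torch

def mask_tokens(inputs, tokenizer, mlm_probability=0.15):
    """Illustrative masked-LM masking step (not the exact fast-bert code).

    Unmasked positions get the label -100, which newer transformers
    versions (and torch.nn.CrossEntropyLoss by default) ignore in the loss.
    """
    labels = inputs.clone()

    # Sample which positions to mask, skipping special tokens
    probability_matrix = torch.full(labels.shape, mlm_probability)
    special_tokens_mask = [
        tokenizer.get_special_tokens_mask(val, already_has_special_tokens=True)
        for val in labels.tolist()
    ]
    probability_matrix.masked_fill_(
        torch.tensor(special_tokens_mask, dtype=torch.bool), value=0.0
    )
    masked_indices = torch.bernoulli(probability_matrix).bool()
    labels[~masked_indices] = -100  # was -1 before transformers 2.4.x

    # 80% of masked positions become [MASK]
    indices_replaced = (
        torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
    )
    inputs[indices_replaced] = tokenizer.convert_tokens_to_ids(tokenizer.mask_token)

    # 10% become a random token; the remaining 10% stay unchanged
    indices_random = (
        torch.bernoulli(torch.full(labels.shape, 0.5)).bool()
        & masked_indices
        & ~indices_replaced
    )
    random_words = torch.randint(len(tokenizer), labels.shape, dtype=torch.long)
    inputs[indices_random] = random_words[indices_random]

    return inputs, labels
```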

I had a problem with patents.google.com, where it seemed to execute the query, but "{{query}}" was displayed in the search box.
> https://patents.google.com/?q={{query}}&oq={{query}}
I changed it to the following and...

I think the easiest thing to do is just retrain a new model using the code in the workbook. spaCy had a dependency problem with their serialization code, and the...
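
As a rough illustration of that workaround (the pipeline and path below are hypothetical, not from the workbook): train a fresh pipeline under the currently installed spaCy version and re-serialize it, rather than loading a model saved by an older, incompatible release.

```python
import spacy

# Hypothetical example: start a fresh pipeline under the installed spaCy
# version instead of loading a model serialized by an older release.
nlp = spacy.blank("en")
# ... add components and run the training loop from the workbook here ...

# Serialize with the current version so it reloads cleanly later
nlp.to_disk("retrained_model")
nlp = spacy.load("retrained_model")
```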

+1 Similar here: I've been running 774M with SGD on a Titan RTX (24 GB) for a few weeks, and am able to run 1558M on the Titan. But when it comes to training, everything looks...

For anyone interested in this topic, it may be worth looking at a gist I wrote for training with the IBM Large Model Support package. [Fine-tune GPT-2 1558M on a Titan...