AI-writer_Data2Doc
PyTorch Implementation of NBA game summary generator.
Bumps [codecov](https://github.com/codecov/codecov-python) from 2.0.5 to 2.0.16. Changelog Sourced from codecov's changelog. 2.0.16 fixed reported command injection vulnerability. 2.0.15 add -X s3 to disable direct to S3 uploading 2.0.14 fixed coverage...
Bumps [numpy](https://github.com/numpy/numpy) from 1.13.3 to 1.22.0. Release notes Sourced from numpy's releases. v1.22.0 NumPy 1.22.0 Release Notes NumPy 1.22.0 is a big release featuring the work of 153 contributors spread...
Bumps [pyyaml](https://github.com/yaml/pyyaml) from 3.12 to 5.4. Changelog Sourced from pyyaml's changelog. 5.4 (2021-01-19) yaml/pyyaml#407 -- Build modernization, remove distutils, fix metadata, build wheels, CI to GHA yaml/pyyaml#472 -- Fix for...
When I try to predict words with the BiLSTM model I trained myself, the following error occurs: File "train.py", line 662, in predictwords encoder_hidden, encoder_hiddens = encoder(rt, re,...
Bumps [nltk](https://github.com/nltk/nltk) from 3.2.5 to 3.4.5. Changelog *Sourced from [nltk's changelog](https://github.com/nltk/nltk/blob/develop/ChangeLog).* > Version 3.5 2019-10-16 > * drop support for Python 2 > * create NLTK's own Tokenizer class distinct...
I ran into a problem while training the model: after about 90 minutes of training, the GPU reported an out-of-memory error. It should...
https://github.com/gau820827/AI-writer_Data2Doc/blob/master/train/dataprepare.py
```
for v in data_set:
    for triplet in v.triplets:
        # Example:
        # triplet ('TEAM-FT_PCT', 'Cavaliers', '68')
        #         ('FGA', 'Tyler Zeller', '6')
        rt.addword(triplet[0])
        re.addword(triplet[1])
        rm.addword(triplet[2])
        summarize.addword(triplet[2])
for v in data_set:...
```
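The loop above builds one vocabulary per triplet field (record type, entity, value). A minimal sketch of the `addword` pattern it relies on; the `Vocab` class here is a hypothetical stand-in, not the repo's actual class in `dataprepare.py`:

```python
class Vocab:
    """Minimal word-index vocabulary (hypothetical stand-in for the repo's class)."""
    def __init__(self):
        self.word2index = {}
        self.index2word = {}
        self.word2count = {}

    def addword(self, word):
        # Assign the next free index to unseen words; count repeated ones.
        if word not in self.word2index:
            idx = len(self.word2index)
            self.word2index[word] = idx
            self.index2word[idx] = word
            self.word2count[word] = 1
        else:
            self.word2count[word] += 1

# One vocabulary per triplet field, mirroring the loop above.
rt, re_, rm = Vocab(), Vocab(), Vocab()
for triplet in [('TEAM-FT_PCT', 'Cavaliers', '68'), ('FGA', 'Tyler Zeller', '6')]:
    rt.addword(triplet[0])   # record type
    re_.addword(triplet[1])  # entity
    rm.addword(triplet[2])   # value
```

Keeping separate vocabularies per field keeps each index space small and lets the encoder embed record types, entities, and values independently.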
TODO: Hierarchical BiLSTMs
Edit the predictwords function to reflect the features of our model: 1. Hierarchical / non-hierarchical model structure 2. Copy mechanism when encountering OOV words
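One common way to implement point 2 is pointer-style copying: whenever the decoder emits the unknown token, substitute the source record whose attention weight peaks at that step. A minimal sketch under that assumption; `resolve_oov`, the `<unk>` marker, and the argument names are illustrative, not the repo's actual API:

```python
UNK = '<unk>'

def resolve_oov(predicted_tokens, attention_weights, source_values):
    """Replace each <unk> with the source value receiving the most attention.

    predicted_tokens:  decoder output, one token per step
    attention_weights: one list of weights per step, aligned with source_values
    source_values:     the value field of each input record
    """
    resolved = []
    for token, weights in zip(predicted_tokens, attention_weights):
        if token == UNK:
            # Copy the source value under the attention peak.
            best = max(range(len(weights)), key=lambda i: weights[i])
            resolved.append(source_values[best])
        else:
            resolved.append(token)
    return resolved
```

Applied as a post-processing step inside predictwords, this keeps rare numbers and player names in the summary even when they fall outside the output vocabulary.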