fudan_mtl_reviews
UnrecognizedFlagError!!
Traceback (most recent call last):
File "src/main.py", line 7, in
any idea how to fix it?
python src/main.py build_data
Thanks, but it says embed300.trim.npy is missing?
I am also trying to find a solution to this problem...
Is embed300.trim.npy a word2vec file?
embed300.trim.npy is trimmed from the Google News word2vec vectors. The original file is too large, so I didn't upload it.
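In case anyone needs to rebuild it, here is a minimal sketch of how such a trimmed matrix could be produced with gensim. The file names (`GoogleNews-vectors-negative300.bin`, `vocab.txt`, `embed300.trim.npy`) and the handling of unknown words are assumptions for illustration, not the author's actual script.

```python
# Sketch: trim the Google News word2vec vectors to the task vocabulary
# and save them as a dense NumPy matrix. File names are assumptions.
import numpy as np
from gensim.models import KeyedVectors

# Load the (large) pre-trained binary word2vec file.
w2v = KeyedVectors.load_word2vec_format(
    'GoogleNews-vectors-negative300.bin', binary=True)

# Read the task vocabulary, one token per line (assumed format).
with open('vocab.txt', encoding='utf-8') as f:
    vocab = [line.strip() for line in f if line.strip()]

dim = 300
embed = np.zeros((len(vocab), dim), dtype=np.float32)
for i, word in enumerate(vocab):
    if word in w2v:
        embed[i] = w2v[word]
    else:
        # Words missing from word2vec get small random vectors (assumption).
        embed[i] = np.random.uniform(-0.25, 0.25, dim)

np.save('embed300.trim.npy', embed)
```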
You can use `--word_dim=50` to use the pre-trained 50-dim SENNA embeddings.
Thank you very much. I would like to try switching to a Chinese dataset; a lot of work to do!
@wangzhihuia @FrankWork I tried both of the commands below:
- `python src/main.py build_data --word_dim=300`
- `python src/main.py --word_dim=300 --build_data`

I still got the following error: `absl.flags._exceptions.UnrecognizedFlagError: Unknown command line flag 'word_dim'`
Could you help me? Thank you!
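For reference, absl only accepts flags that were defined with `flags.DEFINE_*` in some module imported before `app.run()` parses the command line; an unrecognized-flag error usually means no imported module defines that name. Below is a minimal sketch of how a `word_dim` flag would normally be declared; the default value and structure are assumptions based on this thread, not the repo's actual code.

```python
# Sketch of absl flag handling; flag name/default are assumptions.
from absl import app, flags

FLAGS = flags.FLAGS
flags.DEFINE_integer('word_dim', 300, 'dimension of the word embeddings')

def main(_):
    # A flag passed on the command line but never defined like the one
    # above triggers absl.flags._exceptions.UnrecognizedFlagError.
    print('word_dim =', FLAGS.word_dim)

if __name__ == '__main__':
    app.run(main)
```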
Solved. I changed the settings in the main() code:
- Change all 300 dims to 50
- Create the directories: `saved_models` in the project root folder; in the `data` folder, create `generated`; in the `saved_models` folder, create `fudan-mtl-adv`
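If it helps, the directories from the comment above can be created in one go; the paths are assumed to be relative to the project root, as described:

```python
# Sketch: create the directories mentioned above from the project root.
import os

for path in ('saved_models', 'data/generated', 'saved_models/fudan-mtl-adv'):
    os.makedirs(path, exist_ok=True)  # no-op if the directory already exists
```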
I removed all the flags.DEFINE calls in util.py, and it works. Maybe it is because only one file can define flags?
Hi, I tried every method above and I still got the Unknown command line flag error.
Can someone provide a step-by-step procedure to run this program?
P.S. I have also tried exactly what the author said in the README, but it's not working.
Please post the full error and traceback.
Hi, how can I use a Chinese dataset? How do I train the file "embed300.trim.npy"?
A .npy file is just a NumPy array file. Once you have trained a 300-dim word embedding, you can save it as a NumPy array using np.save(), but I would rather suggest using gensim. For a Chinese dataset, I highly suggest you follow the author's training data format and then run the code. There is another repo, https://github.com/andyweizhao/capsule_text_classification, which achieved a higher score.
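For example, here is a minimal sketch of training 300-dim vectors on a pre-tokenized Chinese corpus with gensim (4.x argument names) and dumping them with np.save(); the corpus file name, vocabulary file, and hyperparameters are assumptions, not part of this repo.

```python
# Sketch: train word2vec on a pre-tokenized Chinese corpus with gensim,
# then dump the vectors to a .npy matrix. File names are assumptions.
import numpy as np
from gensim.models import Word2Vec

# One sentence per line, tokens separated by whitespace (e.g. after jieba).
with open('chinese_corpus.txt', encoding='utf-8') as f:
    sentences = [line.split() for line in f if line.strip()]

model = Word2Vec(sentences, vector_size=300, window=5, min_count=5, workers=4)

# Stack the vectors in the model's vocabulary order and save them.
words = model.wv.index_to_key
embed = np.stack([model.wv[w] for w in words]).astype(np.float32)

np.save('embed300.trim.npy', embed)
with open('vocab.txt', 'w', encoding='utf-8') as f:
    f.write('\n'.join(words))
```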