TextSentimentClassification
data
Hello, there is no data.
Hello, you can download data from https://www.kaggle.com/c/ml-2017fall-hw4/data.
Thanks!
Can you share your data with me? I cannot download the data.
OK. Please send me your email address.
[email protected] thank you very much!
No such file or directory: 'data_helpers/dataset/training_v2i.json'
You need to run data_helpers/data_preprocessing.py first to get the new data. By the way, data_helpers/data_analysis.py is used for analyzing the new data after preprocessing, and data_helpers/wv_generation.py is used for generating word2vec word vectors and other related files. For GloVe word vectors, you need to download GloVe and follow its own instructions. After generation, you can use data_helpers/wv_evaluation.py to evaluate the word vectors on analogy problems. Then you should use data_helpers/utils.py to generate all the dicts you are asking for.
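For illustration, a vocab-to-index ("v2i") dict of the kind training_v2i.json appears to hold could be built like this. This is a minimal sketch under assumptions: the actual logic in data_helpers/utils.py may use different reserved indices, tokenization, and frequency cutoffs.

```python
import json
from collections import Counter

def build_v2i(texts, min_count=1):
    """Map each word to an integer index, most frequent words first.
    Indices 0 and 1 are reserved here for padding and unknown words
    (an assumption; the repo's actual conventions may differ)."""
    counts = Counter(w for t in texts for w in t.split())
    vocab = [w for w, c in counts.most_common() if c >= min_count]
    return {w: i + 2 for i, w in enumerate(vocab)}

# Toy corpus standing in for the preprocessed training data.
texts = ["i love this movie", "i hate this movie"]
v2i = build_v2i(texts)

# Persist the dict the same way a training_v2i.json file would be written.
with open("training_v2i.json", "w") as f:
    json.dump(v2i, f)
```

If the script that reads training_v2i.json fails with "No such file or directory", it usually means this generation step was never run.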