File "link.py", line 137, in <module>
l = l.extract('https://www.wired.com/2017/05/google-just-made-email-heckuva-lot-easier-deal/')
File "link.py", line 74, in extract
keywords = k.extract([text], None, None, 'news')[0]
File "C:\Users\dinom\Desktop\feature_engineering-master\features\keywords.py", line 187, in extract
vectorizer = self.train_tfidf(tokenizer, tfidf_corpus)
File "C:\Users\dinom\Desktop\feature_engineering-master\features\keywords.py", line 71, in train_tfidf
nltk_corpus += self.get_bbc_news_corpus()
File "C:\Users\dinom\Desktop\feature_engineering-master\features\keywords.py", line 51, in get_bbc_news_corpus
onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]
WindowsError: [Error 3] The system cannot find the path specified: './data/bbc/business/.'
This is the error I am getting.
The BBC news corpus files are not included with the code. Is there any chance of making those files available, or of providing instructions on where to find them or how to recreate them?
The same goes for all the models loaded in the Classifier.getModels function.
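For context, judging from the traceback, get_bbc_news_corpus seems to expect a layout like ./data/bbc/business/ with one plain-text article per file (the path and category name are taken from the error; everything else here is an assumption). A minimal sketch of a guarded version that fails with a clearer message when the corpus directory is missing, instead of the opaque WindowsError: [Error 3]:

```python
from os import listdir
from os.path import isfile, isdir, join

def list_corpus_files(mypath='./data/bbc/business'):
    # list_corpus_files is a hypothetical stand-in for the listing logic
    # in get_bbc_news_corpus. Check the directory exists first so a
    # missing corpus raises a clear, actionable error.
    if not isdir(mypath):
        raise FileNotFoundError(
            "BBC corpus directory not found: %s -- download the dataset "
            "and unpack it so each category is a subfolder of ./data/bbc"
            % mypath
        )
    # Same expression as the original line 51 of keywords.py:
    # keep only regular files, skipping subdirectories.
    return [f for f in listdir(mypath) if isfile(join(mypath, f))]
```

This only improves the error message; the underlying question of where to obtain the corpus files still stands.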