jodie
Dataset creation
The paper is rather vague about how the datasets are constructed. Let's focus on the Reddit dataset for starters.
Quoting directly from the paper,
Reddit post dataset: this public dataset consists of one month of posts made by users on subreddits [2]. We selected the 1,000 most active subreddits as items and the 10,000 most active users. This results in 672,447 interactions. We convert the text of each post into a feature vector representing their LIWC categories [35].
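To make the quoted selection step concrete, here is a minimal sketch of filtering a raw dump down to the most active subreddits and users. The field names (`user`, `subreddit`) and the dict-of-records layout are assumptions for illustration; the paper's counts (1,000 subreddits, 10,000 users) are shrunk to toy sizes here.

```python
from collections import Counter

def filter_most_active(interactions, n_items, n_users):
    """Keep only interactions involving the n_items most active
    subreddits and the n_users most active users, mirroring the
    paper's selection step (counts shrunk here for illustration)."""
    item_counts = Counter(i["subreddit"] for i in interactions)
    user_counts = Counter(i["user"] for i in interactions)
    top_items = {s for s, _ in item_counts.most_common(n_items)}
    top_users = {u for u, _ in user_counts.most_common(n_users)}
    return [i for i in interactions
            if i["subreddit"] in top_items and i["user"] in top_users]

# Toy example: 3 users, 3 subreddits
posts = [
    {"user": "a", "subreddit": "r/news"},
    {"user": "a", "subreddit": "r/news"},
    {"user": "b", "subreddit": "r/news"},
    {"user": "b", "subreddit": "r/pics"},
    {"user": "c", "subreddit": "r/rare"},
]
kept = filter_most_active(posts, n_items=2, n_users=2)
```

Note that this only pins down *which* interactions survive; whether the authors filtered users and subreddits jointly or iteratively is one of the unclear details.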
[2] leads to a file dump of Reddit data. The timestamps in the reddit.csv file used in this repo appear to indicate that the user-subreddit interactions were collected over a period of 31 days.
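The 31-day figure can be checked directly from the file. The sketch below assumes reddit.csv follows the JODIE release layout (a header line, with the timestamp in seconds in the third column); a hypothetical three-row miniature stands in for the real file.

```python
import csv
import io

# Hypothetical miniature of reddit.csv in the assumed layout:
# user_id,item_id,timestamp,state_label,features...
sample = """user_id,item_id,timestamp,state_label,f0,f1
0,0,0.0,0,0.1,0.2
1,0,86400.0,0,0.0,0.3
0,1,2678399.0,0,0.2,0.1
"""

def span_in_days(csv_text):
    """Return the span between the first and last interaction in days,
    assuming the timestamp column holds seconds since the first event."""
    reader = csv.reader(io.StringIO(csv_text))
    next(reader)  # skip header
    ts = [float(row[2]) for row in reader]
    return (max(ts) - min(ts)) / 86400.0

days = span_in_days(sample)
```

Running the same function over the real file (via `open("reddit.csv")`) is what suggests the 31-day window; it does not, however, reveal *which* month was used.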
Now come the questions:
- Which time frame of Reddit data is being utilized?
- What are the headers (column names) for the LIWC features?
- Also, it appears that the features were normalized, since some LIWC features are expected to be integer counts, yet the values in the file are not. These details are omitted from the paper entirely.
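Since the paper does not state which normalization (if any) was applied to the raw integer LIWC counts, one plausible reconstruction is per-feature min-max scaling into [0, 1]. The following is only a guess at the omitted preprocessing step:

```python
def min_max_normalize(rows):
    """Scale each feature column into [0, 1] independently.
    This is a guess at the omitted step: the paper does not say
    how the raw integer LIWC counts were scaled."""
    cols = list(zip(*rows))
    scaled_cols = []
    for col in cols:
        lo, hi = min(col), max(col)
        rng = hi - lo or 1  # avoid division by zero for constant columns
        scaled_cols.append([(v - lo) / rng for v in col])
    return [list(r) for r in zip(*scaled_cols)]

# Toy integer LIWC-style counts: three posts, two categories
raw = [[0, 5], [2, 10], [4, 5]]
norm = min_max_normalize(raw)
```

Other candidates (z-scoring, dividing counts by post length) would produce different value ranges, so inspecting the min/max of each column in reddit.csv could help narrow down which scheme was actually used.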
Knowing these specifics would be greatly beneficial for experimental verification and for robustness tests over longer time frames.
Agreed. Is there anywhere I can look for further details on how to create such a dataset?