pytorch-sentiment-analysis
6 - Transformers for Sentiment Analysis
Hello, I have some questions on task 6.
When we print like this:
print(vars(train_data.examples[6]))
we get:
{'text': [1042, 4140, 1996, 2087, 2112, 1010, 2023, 3185, 5683, 2066, 1037, 1000, 2081, 1011, 2005, 1011, 2694, 1000, 3947, 1012, 1996, 3257, 2003, 10654, 1011, 28273, 1010, 1996, 3772, 1006, 2007, 1996, 6453, 1997, 5965, 1043, 11761, 2638, 1007, 2003, 2058, 13088, 10593, 2102, 1998, 7815, 2100, 1012, 15339, 14282, 1010, 3391, 1010, 18058, 2014, 3210, 2066, 2016, 1005, 1055, 3147, 3752, 2068, 2125, 1037, 16091, 4003, 1012, 2069, 2028, 2518, 3084, 2023, 2143, 4276, 3666, 1010, 1998, 2008, 2003, 2320, 10012, 3310, 2067, 2013, 1996, 1000, 7367, 11368, 5649, 1012, 1000, 2045, 2003, 2242, 14888, 2055, 3666, 1037, 2235, 2775, 4028, 2619, 1010, 1998, 2023, 3185, 2453, 2022, 2062, 2084, 2070, 2064, 5047, 2074, 2005, 2008, 3114, 1012, 2009, 2003, 7078, 5923, 1011, 27017, 1012, 2023, 2143, 2069, 2515, 2028, 2518, 2157, 1010, 2021, 2009, 21145, 2008, 2028, 2518, 2157, 2041, 1997, 1996, 2380, 1012, 4276, 3773, 2074, 2005, 1996, 2197, 2184, 2781, 2030, 2061, 1012], 'label': 'neg'}
Why aren't the CLS (101) and SEP (102) tokens added to the text? And if I want to get the text as input_ids, attention_mask, and token_type_ids in a torchtext Field, how can I do that?
We don't get the CLS and SEP tokens because we use tokenizer.tokenize instead of tokenizer.encode. Ideally, I should have used tokenizer.encode, because the BERT model expects the CLS and SEP tokens and usually gives weird results without them, but I found it does fine in this case.
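Here is a minimal sketch of the difference, assuming the bert-base-uncased tokenizer from the transformers library (as used in the notebook); the ids shown in the comments come from that vocabulary:

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# tokenize only splits the string into wordpieces; no special tokens are added
tokens = tokenizer.tokenize('hello world')
print(tokens)                                   # ['hello', 'world']
print(tokenizer.convert_tokens_to_ids(tokens))  # [7592, 2088]

# encode converts straight to ids and wraps them in [CLS] (101) and [SEP] (102)
print(tokenizer.encode('hello world'))          # [101, 7592, 2088, 102]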
If we want the input_ids, attention_mask, and token_type_ids, then we can simply call the tokenizer on the string, e.g. tokenizer("hello world").
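For example (a sketch assuming the same bert-base-uncased tokenizer and a transformers version recent enough to support calling the tokenizer directly):

encoding = tokenizer('hello world')

# Calling the tokenizer returns a dict with all three fields
print(encoding['input_ids'])       # [101, 7592, 2088, 102]
print(encoding['token_type_ids'])  # [0, 0, 0, 0]
print(encoding['attention_mask'])  # [1, 1, 1, 1]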