ABSA-PyTorch
About fine-tuning
Hello, in your paper we can see:
> BERT embedding uses the pre-trained BERT to generate word vectors of sequence. In order to facilitate the training and fine-tuning of BERT model, we transform the given context and target to “[CLS] + context + [SEP]” and “[CLS] + target + [SEP]” respectively.
Can you tell me how you fine-tune the BERT model? Do you use the BERT ML model?
The pre-trained BERT module is a submodule of the BERT-based model, so all of its parameters are fine-tuned together with the rest of the model during training.
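In other words, no separate fine-tuning step is needed: because the pre-trained encoder is registered as a submodule, `model.parameters()` yields its weights alongside the task head's, and the single optimizer created in the linked train.py lines updates both. A minimal sketch of this idea (using a tiny stand-in `TinyEncoder` instead of the real `BertModel`, so the names here are illustrative, not the repo's actual code):

```python
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Hypothetical stand-in for the pre-trained BERT encoder."""
    def __init__(self, vocab_size=30522, dim=8):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)

    def forward(self, ids):
        return self.emb(ids)  # (batch, seq, dim)

class AbsaClassifier(nn.Module):
    """Task model holding the pre-trained encoder as a submodule."""
    def __init__(self, encoder, dim=8, polarities=3):
        super().__init__()
        self.bert = encoder               # registered submodule
        self.dense = nn.Linear(dim, polarities)

    def forward(self, ids):
        hidden = self.bert(ids)
        return self.dense(hidden[:, 0])   # use the first ([CLS]-style) token

model = AbsaClassifier(TinyEncoder())

# model.parameters() includes BOTH the encoder's embedding weight and the
# classifier head's weight/bias, so one optimizer fine-tunes everything
# jointly -- the same pattern as the optimizer setup in train.py.
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)
```

The optimizer step then backpropagates the task loss through the classifier head into the pre-trained encoder, which is what "fine-tuning" means here.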
https://github.com/songyouwei/ABSA-PyTorch/blob/6ba6e040c8bc7aa9e7294905d3254a3f79e46caf/train.py#L38-L39