
parameters

WinnaYuan opened this issue 4 years ago · 3 comments

Hi, can you share all the parameters of the clinical fine-tuned model? Thank you!

WinnaYuan avatar Dec 30 '19 08:12 WinnaYuan

The parameters are shared in the clinical model download. Look in your ~/.cache directory.
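A quick way to locate the cached download is to list the directories under `~/.cache`; the exact folder name the library uses is not stated in this thread, so this is only a sketch:

```shell
# List candidate cache directories (the subdirectory layout is a guess;
# adjust the depth or add a grep once you know the folder name).
find "${HOME}/.cache" -maxdepth 2 -type d 2>/dev/null | head -n 20
```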

Andriy

AndriyMulyar avatar Dec 30 '19 15:12 AndriyMulyar

Thanks for your reply. I only found the parameters below, which don't include the learning rate, number of epochs, or other training settings; did you use the defaults? And another question: where can I download the MED data? Thank you very much!

```json
{
  "attention_probs_dropout_prob": 0.1,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "max_position_embeddings": 512,
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "type_vocab_size": 2,
  "vocab_size": 28996
}
```
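For what it's worth, the config above describes only the model architecture (its vocabulary size of 28996 matches `bert-base-cased`); optimizer settings like the learning rate are never stored in this file. A small sketch showing what can and cannot be derived from it:

```python
# The architecture config from the cached download; training
# hyperparameters (learning rate, epochs) are not part of this file.
config = {
    "attention_probs_dropout_prob": 0.1,
    "hidden_act": "gelu",
    "hidden_dropout_prob": 0.1,
    "hidden_size": 768,
    "initializer_range": 0.02,
    "intermediate_size": 3072,
    "max_position_embeddings": 512,
    "num_attention_heads": 12,
    "num_hidden_layers": 12,
    "type_vocab_size": 2,
    "vocab_size": 28996,
}

# One derivable architectural fact: the per-head attention size.
head_size = config["hidden_size"] // config["num_attention_heads"]
print(head_size)  # 64
```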

WinnaYuan avatar Jan 03 '20 08:01 WinnaYuan

I never got around to publishing the training parameters because I haven't had time to clean up the code. The training parameters were identical to the pytorch-transformers sentence-pair CLS fine-tuning parameters.

AndriyMulyar avatar Jan 03 '20 21:01 AndriyMulyar