
f=0,p=0,r=0

Wangrulin-1128 opened this issue 1 year ago · 5 comments

When I run the model on my own dataset, the output is all zeros: f=0, p=0, and r=0. But when I use the dataset provided with PRGC, the output is normal. How can I solve this problem? Thanks

Wangrulin-1128 · Oct 08 '23 01:10

Me too...

beiyaoovo · Apr 10 '24 16:04

Me too.

youngsasa2021 · Apr 11 '24 07:04

Me too.

I found that this problem can be solved by modifying the learning rates. The default config is:

    # learning rate
    self.fin_tuning_lr = 1e-4    # LR for fine-tuning the pretrained encoder
    self.downs_en_lr = 1e-3      # LR for the downstream task layers
    self.clip_grad = 2.          # gradient clipping threshold
    self.drop_prob = 0.3         # dropout probability
    self.weight_decay_rate = 0.01
    self.warmup_prop = 0.1       # proportion of steps used for LR warmup
    self.gradient_accumulation_steps = 2
    

I changed it to:

    self.fin_tuning_lr = 5e-5    # lowered from 1e-4
    self.downs_en_lr = 5e-4      # lowered from 1e-3

You can adjust these values according to your needs.

But the F1 score is still very low, only around 0.2. If you have resolved the F1 issue, please let me know.
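For context on where these values take effect: the sketch below shows one common way two learning rates like fin_tuning_lr and downs_en_lr are split into separate optimizer parameter groups, a smaller rate for the pretrained encoder and a larger one for the freshly initialized downstream layers. The model class, layer sizes, and optimizer call are illustrative stand-ins, not PRGC's actual code; check the repo's training script for the exact grouping.

    # Sketch only: a stand-in model, not PRGC's real classes.
    import torch
    import torch.nn as nn

    class ToyRelationModel(nn.Module):
        def __init__(self):
            super().__init__()
            self.bert = nn.Linear(768, 768)     # stand-in for the pretrained encoder
            self.rel_head = nn.Linear(768, 18)  # stand-in for downstream layers

    model = ToyRelationModel()

    fin_tuning_lr = 5e-5  # small LR: protects pretrained encoder weights
    downs_en_lr = 5e-4    # larger LR: downstream layers train from scratch

    # Put the encoder and everything else into separate parameter groups.
    encoder_ids = {id(p) for p in model.bert.parameters()}
    optimizer = torch.optim.AdamW(
        [
            {"params": list(model.bert.parameters()), "lr": fin_tuning_lr},
            {"params": [p for p in model.parameters() if id(p) not in encoder_ids],
             "lr": downs_en_lr},
        ],
        weight_decay=0.01,
    )

A fine-tuning rate that is too high can wipe out the pretrained weights in the first few steps, after which the model predicts nothing and precision, recall, and F1 all collapse to 0, which matches the symptom in this issue.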

beiyaoovo · Apr 11 '24 11:04

Thanks for your suggestion. I'll try it!


youngsasa2021 · Apr 12 '24 01:04

Although I modified the settings as you suggested, the results (f, p, r) are still 0.


youngsasa2021 · Apr 12 '24 08:04