晴天小飞猪丶
The actual code should be at the bottom of thinking-in-spring-boot-samples/spring-framework-samples/spring-framework-5.0.x-sample/src/main/java/thinking/in/spring/boot/samples/spring5/context/event/GenericEventListenerBootstrap.java.
@claude-zhou Dear Zhou, I am attempting to implement your paper with PyTorch. When I read your code, I found that the output you used in calculating the test loss (perplexity) was the output...
```python
# Dynamic decoding
infer_outputs, _, infer_lengths = seq2seq.dynamic_decode(
    decoder,
    maximum_iterations=maximum_iterations,
    output_time_major=True,
    swap_memory=True,
    scope=decoder_scope
)
if beam_width > 0:
    self.result = infer_outputs.predicted_ids
else:
    self.result = infer_outputs.sample_id
self.result_lengths = infer_lengths
```
I am thinking that we should use `infer_outputs` to calculate the cross-entropy and, from that, the perplexity.
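For reference, here is a minimal PyTorch sketch of what I have in mind — the tensor shapes, the `pad_id` masking, and the function name are my assumptions, not taken from your code:

```python
import math

import torch
import torch.nn.functional as F

def perplexity(logits, targets, pad_id=0):
    """Perplexity = exp(mean per-token cross-entropy), ignoring padding.

    logits:  (batch, seq_len, vocab_size) decoder outputs
    targets: (batch, seq_len) reference token ids
    """
    loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),  # flatten to (batch*seq_len, vocab)
        targets.reshape(-1),                  # flatten to (batch*seq_len,)
        ignore_index=pad_id,                  # do not count padding tokens
        reduction="mean",
    )
    return math.exp(loss.item())
```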
I would appreciate it very much if you could give some advice. @claude-zhou
When I ran my experiments on the datasets 'SE0714', 'Olympic', and 'PsychExp', all of which are multi-class classification tasks, the resulting F1 score was much lower than the one reported in the paper.
Hello, have you solved this problem?
@rezwanh001 As huggingface mentioned in the README file, the code in the 'script' folder is used to process the raw data in the 'data' folder. I think 'tweets.2016-09-01' may be...
Maybe you should run the script 'convert_all_datasets.py' in the 'script' folder.
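In case the exact command helps, something like the following should work from the repository root (the path is my guess based on the repo layout):

```python
import subprocess

# Hypothetical invocation; equivalent to running
#   python script/convert_all_datasets.py
# from the repository root.
subprocess.run(["python", "script/convert_all_datasets.py"], check=True)
```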
Hello, may I ask how you set your learning rate? My program sets the learning rate to 0.00001, but it seems that the loss never decreases. @munikarmanish
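For context, this is roughly how I am setting it — a minimal sketch assuming a PyTorch model and the Adam optimizer (the model itself is just a placeholder):

```python
import torch

# Placeholder model; the point is only where the learning rate goes.
model = torch.nn.Linear(10, 2)

# lr=1e-5 is what I am currently using; with Adam this is often too small
# for the loss to move visibly -- a starting point like 1e-3 may behave better.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
```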