robotzheng

Results: 20 comments of robotzheng

loss_plan_col=dict(type='PlanCollisionLoss', loss_weight=1.0). Can I make the "loss_weight" above bigger?

I train the model from resnet50-19c8e357.pth with the "VAD_base_e2e.py" config. Is the "loss_weight" not important?
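For reference, a minimal sketch of how such an override could be written as an mmcv-style config that inherits from VAD_base_e2e.py. Only the loss_plan_col entry comes from the snippet above; the `_base_` inheritance, the `pts_bbox_head` key, and the value 5.0 are illustrative assumptions that would need to match the real config layout:

```python
# Sketch only, not the official VAD config: raising the planning-collision
# loss weight via mmcv-style config inheritance.
_base_ = ['./VAD_base_e2e.py']

model = dict(
    pts_bbox_head=dict(  # head key is an assumption for illustration
        loss_plan_col=dict(type='PlanCollisionLoss', loss_weight=5.0),  # raised from 1.0
    ),
)
```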

@StevenJ308, I have checked my log file and did not find a re-download of the resnet50 model. Maybe some hyperparameters are not the same as in the paper.

Now the best result is: Val BLEU (2,3,4): 0.95 / 0.935 / 0.921; Iteration 126600: loss 0.092868.

Same options as the paper: use model cnn_deconv, use 3 conv/deconv layers.
{'restore': True, 'layer': 3, 'fix_emb': False, 'log_path': './log', 'sent_len2': 129, 'substitution': 's', 'sent_len4': 30, 'filter_size': 300, 'max_epochs': 100, ...}

Now the best result is: Val BLEU (2,3,4): 0.985 / 0.98 / 0.975; Iteration 396000: loss 0.022921.

======================================================================
FAIL: test_lm_score_may_fail_numerically_for_external_meliad (__main__.LmInferenceTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/notebook/code/personal/80306170/AGI/alphageometry/lm_inference_test.py", line 82, in test_lm_score_may_fail_numerically_for_external_meliad
    self.assertEqual(
AssertionError: Lists differ: [-1.1633697, -1.122621] != [-1.1860729455947876, -1.1022869348526]

First differing element 0:
-1.1633697...
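For what it's worth, the failing check is an exact assertEqual on floating-point log-probabilities, which the test name itself flags as something that "may fail numerically" on an external meliad build. A minimal sketch of a tolerance-based comparison; the 0.05 delta is my own assumption, not a value from the alphageometry repository:

```python
import unittest

class LmScoreToleranceSketch(unittest.TestCase):
    # Illustration only: the score values are copied from the failure above,
    # while the 0.05 delta is an assumed tolerance.
    def test_scores_close(self):
        actual = [-1.1633697, -1.122621]
        expected = [-1.1860729455947876, -1.1022869348526]
        for a, e in zip(actual, expected):
            # The absolute differences here are about 0.02, within the tolerance.
            self.assertAlmostEqual(a, e, delta=0.05)

if __name__ == "__main__":
    unittest.main()
```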

I0124 14:01:54.993820 140581835039232 alphageometry.py:565] LM output (score=-0.872653): "r : C d l r 28 D d r l r 29 ;"
I0124 14:01:54.994065 140581835039232 alphageometry.py:566] Translation: "ERROR: Traceback (most recent...

SophiaG worked, but its performance is not better than Adam's, maybe because of the bias in its Hessian estimate. So I want to try SophiaH, which does not have that bias.
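A minimal sketch of what that swap looks like in a PyTorch training script, assuming the sophia.py module from the official Sophia repository is importable. The SophiaH import and constructor are hypothetical, since that variant is not part of the official module, and the hyperparameter values shown are illustrative:

```python
import torch
from sophia import SophiaG  # sophia.py from the official Sophia repository

model = torch.nn.Linear(128, 10)  # stand-in model for illustration

# Baseline that SophiaG is being compared against:
# optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)

# SophiaG estimates the Hessian diagonal with the Gauss-Newton-Bartlett
# estimator; hyperparameter values here are illustrative, not prescriptive.
optimizer = SophiaG(model.parameters(), lr=1e-4, betas=(0.965, 0.99),
                    rho=0.04, weight_decay=1e-1)

# Hypothetical SophiaH variant (Hutchinson estimator of the Hessian diagonal);
# the import and constructor below are assumptions, not the official API:
# from sophia import SophiaH
# optimizer = SophiaH(model.parameters(), lr=1e-4, rho=0.04, weight_decay=1e-1)
```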