pykan
Running Example 2: Deep Formulas on my computer yielded inconsistent results
This is the original drawing
This is my drawing
As you can see, the splines in my graph appear to be the opposite of those in the original graph.
In addition, my loss was far from the original result:
Tutorial results
My results
Can you help me solve the above problem?
Your drawing is consistent with my plot (up to node permutation and sign). Not sure about the training loss result. Note that you may need to remove different edges because your diagram is microscopically different (albeit macroscopically consistent) from mine. If you removed the wrong edge, the loss will be quite crappy. To be safe, you may choose not to remove any edge. Could you please provide training code and how you did edge pruning (if you did)?
Hello, thank you very much for your reply. But my code is exactly the Example 2: Deep Formulas code that you provided with the tutorial. I didn't make any changes, yet the loss value is very bad, different from your example. I don't know how to solve this.
Hi, the results of Example 2 were produced before the code release. After a few merges, it could be that random seeds have a slightly different effect. To avoid removing edges, you may change
remove_edge = True
to
remove_edge = False
Hopefully this solves the problem.
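For clarity, here is a minimal sketch of how that flag is typically used in the notebook; the guarded call and the indices below are illustrative assumptions, not the exact tutorial code.

# Skipping manual edge removal avoids cutting an edge that only
# looks weak because a different seed produced a different diagram.
remove_edge = False  # was: remove_edge = True
if remove_edge:
    # remove_edge(l, i, j) deactivates the activation on the edge from
    # input node i to output node j of layer l; indices are placeholders.
    model.remove_edge(1, 0, 0)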
Thank you very much for your advice. Following your tip, the result is very close to your example; the problem seems to be solved.
Hello author, when I run your hellokan file, the final fitted function is inconsistent with your document. What is the cause of this?
Hello, I seem to have encountered the same problem you mentioned. You can try raising the threshold of model.prune() (the default threshold is 1e-2) to model.prune(5e-2); that solved it for me.
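For reference, a minimal sketch of that change; note that prune() returns the pruned model (as in the code quoted later in this thread), so reassigning is needed.

# Raise the pruning threshold from the 1e-2 default so that slightly
# stronger spurious edges/nodes are also pruned away.
model = model.prune(threshold=5e-2)
model.plot()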
Thank you.
Hello, as per your change, the result came out as 1.0*e^(1.0*x_2^2 + 1.0*sin(3.14*x_1)) + 0.01. Is this normal? If you have time, please get back to me, thank you.
Hello, I just tried it: changing the threshold to 5e-2 reproduces the result of the example. You can compare the following part of the code with yours and modify accordingly.
model.train(dataset, opt="LBFGS", steps=20, lamb=0.01, lamb_entropy=10.)
model.plot()
model.prune(threshold=5e-2)
model.plot(mask=True)
model = model.prune(threshold=5e-2)
model(dataset['train_input'])
model.plot()
It is OK, thank you very much.
@GaoLei0 @KindXiaoming I don't know if the current code (version 0.2.3) supports reproducing the results of Example 3: Deep formula. After downloading and executing the code, the results are as follows: for some seeds, the loss doesn't even decrease, and it seems that optimization is not happening.
When I changed the seed to 3, the loss decreased slightly, but I still couldn't reproduce the official results.
- Update: Under version 0.2.3, I am also unable to reproduce Example 5 and Example 12. It seems that only the simplest example,
torch.exp(torch.sin(torch.pi*x[:,[0]]) + x[:,[1]]**2)
works well; other, more complicated examples cannot be trained properly.
- Update: :sunflower: When I switched to the version 0.2.1 source code, made the modifications based on issues 362 and 331, and then installed it, the aforementioned examples worked fine. I am curious what changes were made from version 0.2.1 to 0.2.3 that caused KAN to fail to train properly.
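For context, here is roughly the self-contained setup for that simplest example; the width, grid, and training settings below are illustrative assumptions, not the tutorial's exact values.

import torch
from kan import KAN, create_dataset

# Target: f(x1, x2) = exp(sin(pi*x1) + x2^2)
f = lambda x: torch.exp(torch.sin(torch.pi * x[:, [0]]) + x[:, [1]] ** 2)
dataset = create_dataset(f, n_var=2)

# Hyperparameters are placeholders; results are seed-dependent.
model = KAN(width=[2, 5, 1], grid=5, k=3, seed=0)
model.fit(dataset, opt="LBFGS", steps=20, lamb=0.01)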
In my experience, this example can be very seed-dependent. Please try other seeds, e.g. KAN(..., seed=42).
However, I've tried dozens of different seeds on versions 0.2.3 and 0.2.4, but I haven't been able to reproduce similar results. There are always redundant functions, and the loss never reaches the 1e-2 level. As I mentioned earlier, when I revert to version 0.2.1, I can easily reproduce all the examples. So I'm a bit confused.
The default regularization metric has been changed. I think 0.2.1 uses reg_metric='edge_forward_n', while 0.2.4 uses reg_metric='edge_backward'. Please try model.fit(..., reg_metric='edge_forward_n') or model.fit(..., reg_metric='edge_forward_u').
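A minimal sketch of the suggested call, reusing the training settings quoted earlier in the thread (whether these match the tutorial's exact hyperparameters is an assumption):

# Restore the 0.2.1-era regularization behavior on 0.2.4 by
# passing the regularization metric explicitly.
model.fit(dataset, opt="LBFGS", steps=20, lamb=0.01,
          lamb_entropy=10., reg_metric='edge_forward_n')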