Ziming Liu
Hi, in this case it looks like the network fails to prune down to a single hidden neuron; instead there are two duplicate neurons. You could try increasing `lamb_entropy` or `lamb`, or change...
Just fixed a bunch of CUDA-related issues, and CUDA now seems to run much faster (a 20x speedup) than CPU for a [4,100,100,100,1] KAN: https://github.com/KindXiaoming/pykan/blob/master/tutorials/API_10_device.ipynb
Related: https://github.com/KindXiaoming/pykan/issues/258
Hi, what is the value of `symbol_mask` and `numerical_mask` in your case?
I see, the NaN loss seems like the real bug. Please open another issue for the NaN loss if the problem persists.
Very cool! I wonder what that 75-term expression looks like? And how accurate is it? Maybe you could try deeper/wider KANs to get more accuracy at the cost of some simplicity. Would be...
In this case, it might be more reasonable to try `model = KAN(width=[1,10,1], grid=5, k=1, seed=0)` (and possibly increase `grid` as well), but this is just a workaround that may not...
This might be useful: https://kindxiaoming.github.io/pykan/API_demo/API_4_extract_activations.html Also, function names are stored in `model.symbolic_fun[l].funs_name` and the affine coefficients in `model.symbolic_fun[l].affine`, where `l` is the layer index.
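For anyone who wants to walk those attributes programmatically, here is a minimal sketch of the traversal. It assumes the per-layer layout described above (`symbolic_fun[l].funs_name` and `symbolic_fun[l].affine`); the mock objects below stand in for a trained pykan model so the snippet runs without pykan installed, and the example names/coefficients are made up.

```python
from types import SimpleNamespace

# Mock of a (hypothetical) one-layer model exposing pykan-style attributes:
# funs_name[out][in] is the symbolic function name on each edge, and
# affine[out][in] holds the (a, b, c, d) coefficients of c*f(a*x + b) + d.
model = SimpleNamespace(symbolic_fun=[
    SimpleNamespace(
        funs_name=[["sin", "x^2"]],
        affine=[[[1.0, 0.0, 1.0, 0.0],
                 [2.0, 0.0, 0.5, 0.0]]],
    ),
])

def extract_symbolic(model):
    """Collect (layer_index, function_names, affine_coeffs) per layer."""
    return [(l, layer.funs_name, layer.affine)
            for l, layer in enumerate(model.symbolic_fun)]

for l, names, coeffs in extract_symbolic(model):
    print(f"layer {l}: names={names}, coeffs={coeffs}")
```

The same loop should work on a real model by replacing the mock with a trained `KAN` instance, assuming the attribute names match your pykan version.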
Hi, at high precision the results can be quite sensitive to random seeds. At least when I made the plot, `noise_scale_base=0.0` was used by default, and the default has now become...