
Grid learning with gradient descent

Open Indoxer opened this issue 9 months ago • 1 comments

My question is about this line in the paper:

"Other possibilities are: (a) the grid is learnable with gradient descent, e.g., [16];"

Is there an implementation of this?
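One way such a learnable grid could look (this is just a hedged sketch of the general idea, not pykan's actual implementation): parameterize unconstrained logits and map them through a softmax and cumulative sum, so the knot positions stay strictly increasing while remaining trainable by gradient descent. `LearnableGrid` and all its parameters are hypothetical names for illustration.

```python
import torch
import torch.nn as nn

class LearnableGrid(nn.Module):
    """Sketch of a spline grid whose knots are trained by gradient descent.

    Instead of fixed uniformly spaced knots, we learn unconstrained logits
    and map them to strictly increasing grid points via softmax + cumsum.
    Hypothetical illustration only, not pykan's API.
    """

    def __init__(self, num_knots: int, lo: float = -1.0, hi: float = 1.0):
        super().__init__()
        # Unconstrained parameters; all-zero logits give a uniform grid.
        self.logits = nn.Parameter(torch.zeros(num_knots - 1))
        self.lo, self.hi = lo, hi

    def forward(self) -> torch.Tensor:
        # Softmax -> positive spacings summing to 1; cumsum -> monotone knots.
        spacings = torch.softmax(self.logits, dim=0)
        interior = torch.cumsum(spacings, dim=0)[:-1]  # points in (0, 1)
        pts = torch.cat([torch.zeros(1), interior, torch.ones(1)])
        return self.lo + (self.hi - self.lo) * pts  # rescale to [lo, hi]

grid = LearnableGrid(num_knots=5)
g = grid()  # strictly increasing knots with endpoints at -1 and 1
```

Because the knots are a differentiable function of `self.logits`, any loss computed from spline evaluations on this grid backpropagates into the grid positions themselves.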

Also, I noticed that torch.linalg.lstsq only supports full-rank matrices on CUDA. Does anyone know of a CUDA implementation that handles rank-deficient matrices?
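For context: on CUDA, `torch.linalg.lstsq` uses the `gels` driver, which assumes the input has full rank. One common workaround (a sketch, not necessarily the fastest option) is to fall back to the SVD-based pseudoinverse, which tolerates rank deficiency at extra cost:

```python
import torch

def lstsq_any_rank(A: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
    """Minimum-norm least-squares solve that works for rank-deficient A.

    torch.linalg.lstsq on CUDA requires full-rank A (gels driver); the
    SVD-based pseudoinverse handles any rank and runs on GPU tensors too.
    """
    return torch.linalg.pinv(A) @ B

# Rank-deficient example: the second column duplicates the first,
# so gels on CUDA would not be applicable here.
A = torch.tensor([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
B = torch.tensor([[2.0], [4.0], [6.0]])
X = lstsq_any_rank(A, B)  # minimum-norm solution: x1 = x2 = 1
```

The same call works unchanged with `A.cuda()` / `B.cuda()`, since `torch.linalg.pinv` has a CUDA backend.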

PS: I'm working on a "library" where I'm testing various ideas that I found in issues, e.g. training on MNIST, CIFAR10, KAN as a convolution (in progress), etc. (https://github.com/Indoxer/LKAN). I am open to contributions.

Indoxer avatar May 06 '24 20:05 Indoxer

Thank you, I want to test with your GitHub repo. But it seems that even though KAN is based on the Kolmogorov-Arnold representation theorem, I feel like they are still universal approximators in the UAT sense.

kolmogorov-quyet avatar May 07 '24 08:05 kolmogorov-quyet

Closing for now; feel free to reopen if there are further updates.

KindXiaoming avatar Jul 14 '24 04:07 KindXiaoming