
Support for Adding Constants as Extra Parameters in the Model

Open · Mixpap opened this issue 6 months ago · 1 comment

Hello, and thank you for this amazing work!

I believe that a valuable feature, particularly for interpretability and inverse problem cases, would be the ability to add “constants” as extra parameters to the model. This would allow us to incorporate known constants directly into the symbolic expression, rather than relying on the model to discover them through optimization.

For example, consider a dataset describing the position of a falling object over time (t, y). We want our KAN model to recover the equation y(t) = y(0) - (1/2) g t^2. Using the current methodology as described in the examples, the model would likely discover the exact symbolic expression. However, what if we already knew that g = 9.87? Instead of relying on the layer functions' parameters to approximate this value, it would be helpful to fix, or freeze, an input variable containing this constant.
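As a concrete sketch of the proposed workaround, the known constant could be fed to the model as an extra input column alongside t, so that a 2-input KAN sees g directly rather than having to learn it. This is plain torch data preparation, not pykan's API; all names here are illustrative.

```python
import torch

# Known physical constant we want to inject rather than learn
g = 9.87
y0 = 10.0  # initial height (illustrative)

# Falling-object dataset: times t and positions y(t) = y0 - (1/2) g t^2
t = torch.linspace(0.0, 2.0, steps=50).unsqueeze(1)  # shape (50, 1)
y = y0 - 0.5 * g * t**2

# Augment the input with a constant column holding g, giving a
# two-feature input (t, g) that a KAN with two input nodes could consume.
x = torch.cat([t, torch.full_like(t, g)], dim=1)  # shape (50, 2)
```

The constant column carries no gradient signal of its own; the model simply has g available as a feature at every sample.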

I attempted a workaround: fixing a symbolic activation function to a constant using the '0' function and forcing the node's affine parameters as follows:

model.fix_symbolic(0, 0, 0, fun_name='0', fit_params_bool=False, a_range=(0, 0), b_range=(0, 0))
model.get_parameter('symbolic_fun.0.affine').data[0] = torch.tensor([0.0, 0.0, 0.0, 9.87])  # constant activation function
model.get_parameter('symbolic_fun.0.affine').requires_grad = False

This freezes all parameters in layer 0, transferring the fitting to the subsequent layers. However, the optimizer then returns NaN values, and I couldn't get this approach to work.
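For reference, the standard torch pattern for freezing a constant is to mark it non-trainable and pass only trainable parameters to the optimizer, so the frozen value is never touched by an update step. The snippet below is a generic torch sketch of that pattern, not pykan's internals; the module and names are illustrative.

```python
import torch

# A small trainable module plus a frozen constant (e.g. a known g).
lin = torch.nn.Linear(2, 1)
const = torch.nn.Parameter(torch.tensor([9.87]), requires_grad=False)

# Pass only trainable parameters to the optimizer; the frozen constant
# is excluded and therefore can never be corrupted by an update.
trainable = [p for p in lin.parameters() if p.requires_grad]
opt = torch.optim.SGD(trainable, lr=0.1)

x = torch.randn(8, 2)
y = lin(x) + const  # the constant enters the forward pass
y.sum().backward()
opt.step()

# The frozen parameter accumulates no gradient at all.
assert const.grad is None
```

If pykan's internal optimizer still iterates over the frozen affine tensor, in-place freezing after model construction might interact badly with it, which could be related to the NaNs observed above.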

Is there an alternative, perhaps simpler, way to incorporate known constants into the model? Or am I missing something in the implementation? Thanks!

Mixpap avatar Aug 23 '24 16:08 Mixpap