
Linear Algebra Error

Open daratanxe opened this issue 3 years ago • 8 comments

Received the following error when the code was "checking for generalized symmetry" and "trying (0, 1)":

LinAlgError: Array must not contain infs or NaNs

Is there any way to fix this?

daratanxe avatar Mar 28 '21 07:03 daratanxe

I'm having the same problem.

Initially I was only getting it when applying AIF to data covering big numbers (values ranging from 1 to 10^5 in the same column), so I was able to avoid it by "normalizing" the data to numbers of order 1 (I just divided every column by the average of that column).

But when I increased the number of NN training epochs from 500 to 5000, it came back, even on the "normalized" data.
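The normalization described above can be sketched as follows (a minimal example with made-up data; the file name and values are hypothetical, not from the issue):

```python
import numpy as np

# Hypothetical data with columns on very different scales
# (one column of order 1, another of order 10^5), as described above.
data = np.array([
    [1.0, 2.0e5],
    [3.0, 1.0e5],
    [2.0, 3.0e5],
])

# Divide every column by its average so all entries are of order 1.
normalized = data / data.mean(axis=0)

# Each column of the result now has mean 1.
# np.savetxt("example_normalized.txt", normalized)
```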

rdmatheus-ift avatar Apr 01 '21 21:04 rdmatheus-ift

Could you please share the input file so I can reproduce the error?

SJ001 avatar May 16 '21 18:05 SJ001

Same error with input file being this one: rocketData.txt

patrickfleith avatar May 18 '21 14:05 patrickfleith

One example where I had this problem (when using 5000 NN training epochs):

exampleM2uB.txt

The function used to generate the table is of the form:

const - x0*x1/x2 - x1*x2/x0 - x0*x2/x1

rdmatheus-ift avatar May 18 '21 19:05 rdmatheus-ift

I am running into the same problem.

JulesBlubb avatar May 19 '21 12:05 JulesBlubb

Exactly the same problem for me.

a) @SJ001, did you manage to reproduce the error with the dataset given by @patrickfleith? Is it generic across every OS, or specific to some installations of BLAS (as suggested here: https://github.com/numpy/numpy/issues/4291)?

b) I got a "results.dat" file but no "solution_{}" files in the 'results' folder. Is this because of the early stopping caused by the LinAlgError?

By the way, I very much like this "understandable" (and physically usable) ML work. Congrats!

koskr avatar Jun 29 '21 22:06 koskr

Ran into the same issue. I'm inclined to think this might be caused by vanishing or exploding gradients, especially since you mentioned that playing with the epoch count or normalization only seemed to prolong the inevitable. Depending on the root cause, np.nan_to_num might be a quick fix: it replaces the NaNs and infs before they reach the linear algebra routine. Another quick fix might be to add some batch normalization layers to the NN.
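For illustration, here is what the np.nan_to_num workaround could look like (a sketch on a made-up array, not the actual AI-Feynman code; the replacement bounds 1e6 are arbitrary):

```python
import numpy as np

# Hypothetical array as it might look after NN training blows up:
# the infs/NaNs here are what later trip numpy.linalg with
# "Array must not contain infs or NaNs".
a = np.array([1.0, np.inf, -np.inf, np.nan, 2.0])

# nan_to_num replaces NaN and +/-inf with finite values; the explicit
# keyword arguments keep the replacements bounded.
cleaned = np.nan_to_num(a, nan=0.0, posinf=1e6, neginf=-1e6)

assert np.isfinite(cleaned).all()
```

Note this only masks the symptom: if the gradients really are diverging, the cleaned values will still be garbage, so fixing the training (e.g. with batch normalization) is the more principled route.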

futurisold avatar Sep 15 '21 11:09 futurisold

Has anyone got it working after facing this error?

ahmad-u avatar Feb 07 '23 15:02 ahmad-u