
Error from SMT --> ValueError: setting an array element with a sequence.

Val4on opened this issue 1 year ago • 4 comments

After upgrading to version 2.4.0, I'm seeing the following errors:

TypeError: only length-1 arrays can be converted to Python scalars
krg_based.py", line 1127, in _reduced_likelihood_gradient
    grad_red[i_der] = (
ValueError: setting an array element with a sequence.

Is this a real issue, or am I using KPLS incorrectly?

Val4on avatar Mar 20 '24 18:03 Val4on

Hi. Thank you for reporting. I think we've got an issue here. How do you call KPLS()? What options are you using? What is the dimension of your training inputs, and what is the value of n_comp?

In 2.4, we've changed the default internal optimizer from COBYLA to TNC. The latter uses gradients; I suspect that if you switch back to COBYLA with KPLS(hyper_opt="Cobyla", ...) you will get a working setup again.
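
For reference, a quick untested sketch of switching the optimizer back, with made-up training data (replace it with your own data and options):

import numpy as np
from smt.surrogate_models import KPLS

# made-up 5D training set, purely for illustration
xt = np.random.rand(20, 5)
yt = np.sum(xt**2, axis=1).reshape(-1, 1)

# switch the internal hyperparameter optimizer back to COBYLA
sm = KPLS(n_comp=2, hyper_opt="Cobyla", print_global=False)
sm.set_training_values(xt, yt)
sm.train()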

relf avatar Mar 21 '24 07:03 relf

Hi, thanks for getting back. Here's how I'm calling it:

self.surrogate = KPLSK(print_global=False,
                       n_comp=num_params,
                       theta0=t0s,
                       print_prediction=False,
                       corr='squar_exp')

Here I'm setting the number of principal components to the number of input parameters, and theta0 has the same dimension. I do see a bunch of COBYLA failures, along with this message:

fmin_cobyla failed but the best value is retained
Optimization failed. Try increasing the nugget

That is what gets printed before the error. So it sounds like I should go back to TNC, correct?

Val.

Val4on avatar Mar 21 '24 13:03 Val4on

  1. the point of using KPLS or KPLSK is to choose n_comp < num_params to get actual dimension reduction, otherwise you'd better use KRG. What is the value of num_params? What is the shape of your training data (n_samples, n_dim)? What is the shape/value of t0s? (See the sketch after this list.)

  2. what version of SMT worked for you before?

  3. did you try increasing the nugget as suggested (option nugget=1e-8) when testing with Cobyla?

  4. If you go back to TNC you get an error, correct? At the moment I cannot reproduce the error you've got, so without an actual example it is difficult to help you more on this.
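
For points 1 and 3, here is a rough, untested sketch of what I have in mind, with made-up dimensions and data (assuming theta0 takes one entry per retained component):

import numpy as np
from smt.surrogate_models import KPLSK

n_dim = 10                      # stand-in for num_params
xt = np.random.rand(50, n_dim)
yt = np.sum(np.sin(xt), axis=1).reshape(-1, 1)

sm = KPLSK(
    n_comp=3,                   # fewer components than n_dim, for actual reduction
    theta0=[1.0] * 3,           # sized to n_comp, not to n_dim
    corr="squar_exp",
    hyper_opt="Cobyla",
    nugget=1e-8,                # increased nugget, as suggested
    print_global=False,
)
sm.set_training_values(xt, yt)
sm.train()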

relf avatar Mar 21 '24 15:03 relf

Answers below. FYI, I will be out tomorrow and all of next week, so my responses will be delayed.

Val.

  1. the point of using KPLS or KPLSK is to choose n_comp < num_params to get actual dimension reduction, otherwise you'd better use KRG. What is the value of num_params? What is the shape of your training data (n_samples, n_dim)? What is the shape/value of t0s?

ANS: I figured n_comp is the number of principal components to retain; our default was simply to set it to num_params, but you're right, we should change that. num_params is the size of the domain, which can vary from 2 to 30. theta0 has the same length as the number of params; we're currently using discrete value indexes, so it defaults to 1.0.
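
Something like this is what I'll move to, untested and with placeholder numbers, assuming theta0 should then be sized to n_comp:

from smt.surrogate_models import KPLSK

num_params = 30                 # upper end of our domain sizes
n_comp = 4                      # retain only a few components instead of num_params
t0s = [1.0] * n_comp            # placeholder theta0 sized to n_comp

surrogate = KPLSK(print_global=False,
                  n_comp=n_comp,
                  theta0=t0s,
                  print_prediction=False,
                  corr='squar_exp')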

  2. what version of SMT worked for you before?

ANS: 2.0.1

  3. did you try increasing the nugget as suggested (option nugget=1e-8) when testing with Cobyla?

ANS: I did try, but I still saw errors. I will re-run to verify.

  4. If you go back to TNC you get an error, correct? At the moment I cannot reproduce the error you've got, so without an actual example it is difficult to help you more on this.

ANS: If I set hyper_opt to TNC, I still see:

Optimization failed. Try increasing the nugget
fmin_cobyla failed but the best value is retained

But it doesn't crash. The call is:

KPLSK(print_global=False, hyper_opt='TNC', n_comp=num_params, theta0=t0s,
      print_prediction=False, corr='squar_exp')

Val4on avatar Mar 21 '24 18:03 Val4on