
Fitting functions for spectral mismatch factor models

Open RDaxini opened this issue 1 year ago • 5 comments

Is your feature request related to a problem? Please describe. The spectrum.spectral_factor_* functions accept model coefficients, but pvlib currently has no functionality for users to derive those coefficients from their own data.

Describe the solution you'd like I'd like to implement surface fitting functions for:

- spectrum.spectral_factor_pvspec()
- spectrum.spectral_factor_jrc()
- spectrum.spectral_factor_firstsolar()

and a polynomial fitting function for:

- spectrum.spectral_factor_sapm()
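For the SAPM case specifically, the spectral factor is a fourth-order polynomial in absolute air mass, so deriving coefficients from measured data reduces to an ordinary polynomial least-squares fit. A minimal sketch, using invented illustrative data (not real measurements):

```python
import numpy as np

# Hypothetical measured data: absolute air mass and the corresponding
# measured spectral mismatch factor M (normalised to 1 at reference
# conditions). These numbers are made up for illustration only.
ama = np.array([1.0, 1.5, 2.0, 3.0, 4.0, 5.0])
m_meas = np.array([1.000, 0.995, 0.990, 0.975, 0.960, 0.940])

# The SAPM spectral modifier is a degree-4 polynomial in absolute air
# mass, so OLS polynomial fitting recovers its coefficients. numpy
# returns coefficients highest order first; reverse to get [a0..a4].
coeffs = np.polyfit(ama, m_meas, deg=4)[::-1]
```

The result can be checked by re-evaluating the polynomial at the measured air mass values and inspecting the residuals.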

Additional context Some questions:

  1. Following on from the discussion about references and fitting functions in #1979 (related: #1898), would the fitting functions here need to reproduce the fitting methods adopted in the original publications, or would a generic fitting tool be okay? By a generic tool, I mean, for example, a common method such as Ordinary Least Squares using established Python packages like scipy.optimize or statsmodels to fit the published model parameterisation.
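To illustrate what such a generic tool might look like, here is a sketch using scipy.optimize.curve_fit on a hypothetical two-variable parameterisation in air mass and precipitable water. The model form, parameter values, and data below are invented for illustration and are not any published parameterisation:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical parameterisation, loosely in the style of the published
# two-variable spectral mismatch models (illustrative only).
def mismatch_model(X, c0, c1, c2):
    ama, pw = X
    return c0 + c1 * ama + c2 * pw

# Synthetic "measured" data generated from known parameters.
rng = np.random.default_rng(0)
ama = rng.uniform(1.0, 5.0, 50)
pw = rng.uniform(0.5, 4.0, 50)
true_params = (1.02, -0.01, 0.004)
m_meas = mismatch_model((ama, pw), *true_params)

# Generic nonlinear least-squares fit of the parameterisation.
popt, pcov = curve_fit(mismatch_model, (ama, pw), m_meas)
```

With noise-free synthetic data the fitted parameters should recover the known values, which is one basic sanity check on the fitting approach itself.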

  2. If the former (reproducing the published method), but the precise method is not mentioned in the reference, would communication from the author confirming the adopted method be sufficient?

  3. If a generic tool is okay, then what would a suitable reference be? A maths/stats paper corroborating the method's validity? Reputable examples (PV or non-PV?) of its application?

RDaxini avatar Oct 08 '24 17:10 RDaxini

I don't think there is consensus on this topic. I'll give my 2 cents. In general, I think determining model parameter values is just as deserving of reference and validation as the models themselves are, and in pvlib we should strive for high rigor for both models and parameter estimation methods.

would the fitting functions here need to reproduce the fitting methods adopted in the original publications, or would a generic fitting tool be okay?

I'd say both approaches can result in worthy additions. We just would need to make sure the function is documented and named accordingly. Either way, a suitable reference is needed.

If the former (reproducing the published method), but the precise method is not mentioned in the reference, would communication from the author confirming the adopted method be sufficient?

Lacking a reference means no specification and no validation. Communication with the author could potentially address the former, but the latter would likely remain unresolved. I think the answer here is that it probably depends on the method, with some amount of case-by-case judgement call being required.

If a generic tool is okay then what would a suitable reference be? A maths/stats paper corroborating the method's validity? Reputable examples (PV or non-PV?) of its application?

I don't think a reference validating OLS (or whatever) is very helpful. I'd want to see something that somehow relates the underlying math (OLS, etc) with the application (e.g. the PVSPEC model), for example by showing that it produces reasonable values for a range of PV technologies, climates, etc, or why some transformation (e.g. performing OLS in log space) was chosen, or what motivated those specific optimization bounds, or...

kandersolar avatar Dec 03 '24 16:12 kandersolar

Perhaps a gallery example?

adriesse avatar Dec 03 '24 16:12 adriesse

In my opinion, validating a parameter estimation method should be done in a manner that demonstrates that the method has two properties:

  1. the model fitting does not introduce a prediction bias, and
  2. the fitting is robust in the presence of measurement error.

One way to demonstrate the first property is as follows:

  1. select model parameters
  2. calculate model output
  3. fit the model to the calculated output
  4. re-predict the model output with the fitted parameters.

Comparison of the selected and fitted parameters, and of the calculated and predicted output, should reveal whether the method itself is a source of bias.
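The noise-free bias check above can be sketched as follows, using a simple polynomial as a stand-in for a spectral factor model (all parameter values are illustrative, not published coefficients):

```python
import numpy as np

# Step 1: select "true" model parameters (illustrative quadratic).
true_params = np.array([1.01, -0.02, 0.002])  # [a0, a1, a2]

# Step 2: calculate model output over an input grid.
ama = np.linspace(1.0, 6.0, 40)
m_true = np.polyval(true_params[::-1], ama)

# Step 3: fit the model to its own noise-free output.
fitted = np.polyfit(ama, m_true, deg=2)[::-1]

# Step 4: re-predict with the fitted parameters. Any disagreement
# between (true_params, fitted) or (m_true, m_pred) would indicate
# bias introduced by the fitting method itself.
m_pred = np.polyval(fitted[::-1], ama)
```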

The second property can be shown by constructing a model that generates representative measurement error, repeating steps #1 and #2 above, generating many realizations of calculated model output with error applied, fitting the model to each realization, and examining statistics for the fitted model.
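A sketch of this Monte Carlo robustness check, again with an illustrative polynomial stand-in model and an assumed noise level:

```python
import numpy as np

rng = np.random.default_rng(42)

# Steps 1-2: known parameters and noise-free model output.
true_params = np.array([1.01, -0.02, 0.002])
ama = np.linspace(1.0, 6.0, 40)
m_true = np.polyval(true_params[::-1], ama)

# Generate many realizations with representative measurement error
# (here assumed Gaussian with ~0.5% standard deviation) and fit the
# model to each realization.
fits = []
for _ in range(500):
    noisy = m_true + rng.normal(0.0, 0.005, m_true.shape)
    fits.append(np.polyfit(ama, noisy, deg=2)[::-1])
fits = np.array(fits)

# Statistics of the fitted parameters: the mean should sit close to
# the true values (low bias); the standard deviation quantifies the
# method's sensitivity to measurement error.
param_mean = fits.mean(axis=0)
param_std = fits.std(axis=0)
```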

cwhanse avatar Dec 03 '24 16:12 cwhanse

Thanks all for the feedback. I want to keep this issue open for now. At some point, I will review the discussion in more detail and propose a contribution that meets the requirements/suggestions discussed here. A gallery example is a fair idea. I think it would be nice to have the functionality embedded in pvlib, but if that does not work out then I could go down the gallery example route.

RDaxini avatar Jan 14 '25 20:01 RDaxini

Some related discussion took place here (thanks @cwhanse). I will (slowly...) begin working on an example gallery addition and open a PR once I have something developed enough for an initial review.

RDaxini avatar Jul 28 '25 19:07 RDaxini