
Sector 18 interp window length error while using targetdata

Open • rachelbf opened this issue 3 years ago • 2 comments

I am running eleanor 2.0.2 on Python 3.7 and have come across this issue for more than 20 stars so far, all of which belong to sector 18, though not all sector 18 stars are affected. Here are a few TIC IDs in case you want to reproduce the error: 117806985, 410866027, 252851316

[Screenshots attached: Screen Shot 2021-04-09 at 15 12 41, Screen Shot 2021-04-09 at 15 12 50]
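
For anyone trying to reproduce this, here is a minimal sketch; the TargetData keyword arguments are illustrative (borrowed from the sector 17 example further down the thread), and any of the TIC IDs above should behave the same way:

import eleanor

# Reproduction sketch for one of the affected sector 18 targets.
star = eleanor.Source(tic=117806985, sector=18)
data = eleanor.TargetData(star, do_psf=True, do_pca=True)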

rachelbf · Apr 09 '21 22:04

I have the same issue with sector 17, TIC IDs 5643787 and 5643824.

OS: Linux (CentOS)
Python: 3.7.4
eleanor: 2.0.4
scipy: 1.4.1

Example code:

import eleanor
star_5643824 = eleanor.Source(tic=5643824, sector=17)
data_5643824 = eleanor.TargetData(star_5643824, height=15, width=15, bkg_size=31, do_psf=True, do_pca=True, regressors='corner')

Output:

No eleanor postcard has been made for your target (yet). Using TessCut instead.
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-16-93bc34df89cc> in <module>
      1 import eleanor
      2 star_5643824 = eleanor.Source(tic=5643824, sector=17)
----> 3 data_5643824 = eleanor.TargetData(star_5643824, height=15, width=15, bkg_size=31, do_psf=True, do_pca=True, regressors='corner')

~/.venvs/tess-anomalies-jupyter/lib/python3.7/site-packages/eleanor/targetdata.py in __init__(self, source, height, width, save_postcard, do_pca, do_psf, bkg_size, aperture_mode, cal_cadences, try_load, regressors, language)
    229                 self.create_apertures(self.tpf.shape[1], self.tpf.shape[2])
    230 
--> 231                 self.get_lightcurve()
    232 
    233                 if do_pca == True:

~/.venvs/tess-anomalies-jupyter/lib/python3.7/site-packages/eleanor/targetdata.py in get_lightcurve(self, aperture)
    625                 norm = np.nansum(self.all_apertures[a], axis=1)
    626                 all_corr_lc_pc_sub[a] = self.corrected_flux(flux=all_raw_lc_pc_sub[a]/np.nanmedian(all_raw_lc_pc_sub[a]),
--> 627                                                            bkg=self.flux_bkg[:, None] * norm)
    628                 all_corr_lc_tpf_sub[a]= self.corrected_flux(flux=all_raw_lc_tpf_sub[a]/np.nanmedian(all_raw_lc_tpf_sub[a]),
    629                                                             bkg=self.tpf_flux_bkg[:, None] * norm)

~/.venvs/tess-anomalies-jupyter/lib/python3.7/site-packages/eleanor/targetdata.py in corrected_flux(self, flux, skip, modes, pca, bkg, regressors)
   1242         f   = np.arange(0, brk, 1); s = np.arange(brk, len(self.time), 1)
   1243 
-> 1244         lc_pred = calc_corr(f, cx, cy, skip)
   1245         corr_f = flux[f]-lc_pred + med
   1246 

~/.venvs/tess-anomalies-jupyter/lib/python3.7/site-packages/eleanor/targetdata.py in calc_corr(mask, cx, cy, skip)
   1182             # temp_lc = lightcurve.LightCurve(t, flux).flatten()
   1183             tmp_flux = np.copy(flux[np.isfinite(flux)], order="C")
-> 1184             tmp_flux[:] /= savgol_filter(tmp_flux, 101, 2)
   1185             SC = sigma_clip(tmp_flux, sigma_upper=3.5, sigma_lower=3.5)
   1186 

/apps/python/3.7.4/lib/python3.7/site-packages/scipy/signal/_savitzky_golay.py in savgol_filter(x, window_length, polyorder, deriv, delta, axis, mode, cval)
    339     if mode == "interp":
    340         if window_length > x.size:
--> 341             raise ValueError("If mode is 'interp', window_length must be less "
    342                              "than or equal to the size of x.")
    343 

ValueError: If mode is 'interp', window_length must be less than or equal to the size of x.
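
For context, the failure comes from scipy's savgol_filter, whose default mode='interp' requires window_length <= x.size. Per the traceback, eleanor applies a fixed 101-point Savitzky-Golay filter to the finite flux values, so any target where fewer than 101 finite cadences survive the np.isfinite cut will raise this error. A minimal sketch of the failure and one possible guard (illustrative only, not eleanor's actual fix):

import numpy as np
from scipy.signal import savgol_filter

# Simulate a light curve with fewer than 101 finite cadences.
flux = np.random.default_rng(0).normal(1.0, 0.01, size=80)
flux[::7] = np.nan

tmp_flux = np.copy(flux[np.isfinite(flux)], order="C")

# This mirrors targetdata.py line 1184 and raises the ValueError above,
# because the fixed window_length (101) exceeds tmp_flux.size:
#     tmp_flux[:] /= savgol_filter(tmp_flux, 101, 2)

# One possible guard: clamp the window to the largest odd length that fits
# (savgol_filter needs an odd window longer than the polynomial order).
window = min(101, tmp_flux.size - (1 - tmp_flux.size % 2))
if window > 2:
    tmp_flux[:] /= savgol_filter(tmp_flux, window, 2)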

jennywwww · Mar 25 '22 06:03

Hi Jenny,

Funny timing, I was just talking about this issue with someone else this week. Can you try the latest version of the code straight from GitHub and see if it works for you? I get the same error with an older version of eleanor, but I am unable to reproduce it with the GitHub version, so I think a fix (or a squashed bug) along the way has since solved this issue!
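
For anyone following along, installing straight from GitHub usually means something like pip install git+https://github.com/afeinstein20/eleanor.git (repository URL assumed here), and it is worth confirming which copy actually gets imported before re-running the snippet above:

# Sanity check on the installed version; pkg_resources is used here because
# this environment is Python 3.7, which predates importlib.metadata.
import pkg_resources
print(pkg_resources.get_distribution("eleanor").version)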

benmontet · Mar 25 '22 08:03