
How to build ePSF with provided star fits

oxno2 opened this issue 4 years ago • 19 comments

I tried to obtain the star PSF for a very large HST image. Because the original image was huge, I extracted the stars first and put them into individual FITS files. After subtracting the background and masking some contaminating sources, I wanted to use EPSFBuilder to combine these processed star images and generate an effective PSF (ePSF). There were nearby sources around the original stars, so I masked these regions with numpy.nan.

I followed the same method as in the example documentation. I read the values of the stars, stored them in a list called star_sci, and then combined the data with np.hstack.
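(For reference, star_sci here is the list of background-subtracted cutout arrays read back from the individual FITS files; a sketch of that step, with the directory and filenames purely illustrative:)

import glob
from astropy.io import fits

star_sci = []
for fname in sorted(glob.glob('star_cutouts/*.fits')):  # illustrative path
    with fits.open(fname) as hdul:
        # each cutout is background-subtracted with contaminants set to NaN;
        # all cutouts must share the same number of rows for np.hstack below
        star_sci.append(hdul[0].data.astype(float))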

import numpy as np

data = np.hstack(star_sci)

# Then I just followed the same code from the example documentation.

import matplotlib.pyplot as plt
from astropy.visualization import simple_norm
from photutils.detection import find_peaks

peaks_tbl = find_peaks(data, threshold=5000)
peaks_tbl['peak_value'].info.format = '%.8g'  # for consistent table output
print(peaks_tbl)
size = 25
hsize = (size - 1) / 2
x = peaks_tbl['x_peak']  
y = peaks_tbl['y_peak']  
mask = ((x > hsize) & (x < (data.shape[1] - 1 - hsize)) &
        (y > hsize) & (y < (data.shape[0] - 1 - hsize)))
from astropy.table import Table
stars_tbl = Table()
stars_tbl['x'] = x[mask]  
stars_tbl['y'] = y[mask]  

from astropy.stats import sigma_clipped_stats
mean_val, median_val, std_val = sigma_clipped_stats(data, sigma=2.)  
data -= median_val  
#The extract_stars() function requires the input data as an NDData object. An NDData object is easy to create from our data array:

from astropy.nddata import NDData
nddata = NDData(data=data)  

from photutils.psf import extract_stars
stars = extract_stars(nddata, stars_tbl, size=25) 
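The ePSF itself was then built with EPSFBuilder, following the same example; a sketch of that step (oversampling and maxiters are the documentation's values, not necessarily the exact ones used):

from photutils.psf import EPSFBuilder

epsf_builder = EPSFBuilder(oversampling=4, maxiters=3, progress_bar=False)
epsf, fitted_stars = epsf_builder(stars)

norm = simple_norm(epsf.data, 'log', percent=99.)
plt.imshow(epsf.data, norm=norm, origin='lower', cmap='viridis')
plt.colorbar()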

The stars looked OK: [screenshot of the extracted star cutouts]. However, the final ePSF was very strange; I didn't know what had happened or why the shape looked so irregular: [screenshot of the resulting ePSF].

oxno2 avatar Aug 23 '19 03:08 oxno2

I was successfully using EPSFBuilder with photutils v0.6 to build the PSF for some WFI images. Now with v0.7 I am obtaining results similar to @oxno2's, even on images where the previous version worked.

cloud182 avatar Sep 22 '19 20:09 cloud182

@oxno2 @cloud182 - any chance that either of you can provide the datasets you ran into these problems with? (I.e., if the WFI images are in some archive, @cloud182, or if you can provide the specific HST images you were using to get this, @oxno2?) Having a fully working example that can be tested and debugged would make this a lot easier to investigate.

eteq avatar Sep 23 '19 18:09 eteq

I am afraid the data I am working on is still proprietary and I cannot share them.

cloud182 avatar Sep 23 '19 19:09 cloud182

Hi @cloud182 @oxno2, I am looking into this now and will report back once I have done some digging. Once I think I might have a fix in place, would you be willing to re-run your analyses on the updated master branch to test how the changes affect your results?

Onoddil avatar Sep 23 '19 19:09 Onoddil

I think I can do it, though I cannot promise how quickly I can give feedback. But before doing that, I would also need some clarification on the behavior of the oversampling parameter. I don't really understand how it changed from v0.6 and why it is not possible to recover a PSF without oversampling (oversampling = 1).

cloud182 avatar Sep 23 '19 19:09 cloud182

Hi @cloud182 @oxno2, apologies for the delay in getting back to you. I had a look into this issue, using some HST calibration data used in the construction of the original, canonical HST ePSFs. I have found a few minor issues which I will be looking to fix in the coming weeks, but overall I am not able to reproduce this effect (i.e., the calibration dataset makes an ePSF which looks convincingly like the one available "officially", when put through our v0.7 version of the pipeline).

I am unsure of which WFI you refer to, @cloud182, there being one on a future space-based telescope and one on a ground-based telescope, but certainly for HST the issue lies in the use of a single image, @oxno2. One of the main issues, as discussed in the original Anderson & King (2000) paper, is that of the initial guesses -- position and flux -- which are used to begin the ePSF creation process. Typically aperture photometry is used to get a preliminary flux, and some kind of centre-of-mass-like algorithm is used to estimate the positions of the sources. However, these position algorithms introduce biases (an effect the authors call 'pixel phase'), and the loop of creating an ePSF from stars with incorrect positions, fitting the stars with that same ePSF, using the new positions to update the ePSF, ... results in feedback which destroys the end ePSF, as you have seen. The key, therefore, is to use dithered images, with multiple observations of the same N stars across M observations. The positions in each frame can then be averaged in sky coordinates and transformed back into each pixel coordinate system for the M images, averaging out these errors in the recorded positions (see Figures 1 and 2, and step 3 of Figure 8, of the paper in question).

You therefore cannot, as I understand it, reliably create an ePSF from a single image for critically sampled or undersampled images such as HST's, as you cannot account for and correct the effects of an initially poor source position. I believe the problem here is more a lack of proper documentation and a proper "dos and don'ts" guide for how to go about the creation of ePSFs, especially for space-based telescopes. I will be looking into more quantitative measures of how many dithers and individual sources are "good enough" (although the tests I performed on the calibration data used 750 sources, each in 8 dithers, as a minimum threshold for comparison), and writing up some more solid tutorials and documentation to ensure that users are aware of the issues with undersampled or oversampled telescope observations ("space" vs "ground").
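For reference, in photutils terms that dithered-frame workflow looks roughly like the sketch below: a list of NDData objects (one per frame, each with a valid WCS) plus a single sky-coordinate catalogue makes extract_stars link the cutouts of each star across frames (LinkedEPSFStar objects). The file list, extension name, and catalogue here are placeholders.

from astropy.io import fits
from astropy.nddata import NDData
from astropy.table import Table
from astropy.wcs import WCS
from photutils.psf import extract_stars, EPSFBuilder

# one NDData per dithered frame, each carrying its WCS
nddata_list = []
for fname in dithered_frames:  # placeholder list of M calibrated exposures
    with fits.open(fname) as hdul:
        nddata_list.append(NDData(data=hdul['SCI'].data,
                                   wcs=WCS(hdul['SCI'].header)))

# a single catalogue of the N stars in sky coordinates; extract_stars then
# links the cutouts of the same star across the M frames
stars_tbl = Table()
stars_tbl['skycoord'] = star_skycoords  # placeholder SkyCoord array

stars = extract_stars(nddata_list, stars_tbl, size=25)
epsf, fitted_stars = EPSFBuilder(oversampling=4, maxiters=10)(stars)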

Once I have some working examples and more coherent explanations in place to add to documentation, it would be great if it were possible for both of you to provide your feedback, as users who have encountered issues with this problem, so as to help make any tutorials/guides/documentation as informative as they can be.

Onoddil avatar Oct 02 '19 22:10 Onoddil

@oxno2's image (and possibly @cloud182 issue) suggests to me that the new version has a bug resulting in a diverging algorithm somewhere.

I disagree with the main points in https://github.com/astropy/photutils/issues/951#issuecomment-537707417 about effects of initially poor positions (unless those errors are larger than 1 pixel which should not happen) and importance of dithered observations.

  1. ... you cannot account for and correct the effects of an initially poor source position.

    One of the main purposes of Jay Anderson's method is to improve position estimates and, being an iterative method, it starts with rough estimates as illustrated in the first step in Figure 8 and gradually refines those coordinates. If one knew exact positions, this method would not be needed.

  2. The key, therefore is, to use dithered images, with multiple observations of the same N stars across M observations.

    While having a large (~10-100s) set of dithered images cannot hurt, it should not be a critical issue. For example, initial star coordinates found using a center-of-mass algorithm in each of the dithered images would be affected by the same bias effects as in single images. Dithering provides no intrinsic advantage for initial position accuracy.

  3. The positions in each frame can then be averaged in sky coordinates and transformed back into each pixel coordinate system for the M images, averaging out these incorrectnesses in recorded positions.

    I do not believe this step is part of the original algorithm in Jay Anderson's paper as it requires WCS transformations that are not mentioned in the paper. It also requires/assumes that WCS of each image be accurate (i.e., no pointing errors). If WCS is inaccurate, this step would actually result in higher centering errors and lower quality ePSF.

In order to build an oversampled ePSF from (many) under-sampled stars one needs to sample a star (which is a representation of the PSF) at different positions. These samplings can come from multiple images ("dithers") or from different stars in the same image. In the latter case the "dithering" is mimicked by the random positions of the stars in the image (see the 4th paragraph in section 5.2: "We can make use of the fact that nature should distribute stars randomly with respect to the pixel boundaries ...")

Dithering can help deal with PSF variability with position. For example, if the PSF is changing across the image, one cannot use all the stars in a single image to build a PSF. In that case, when constructing a PSF from stars, one would need to limit oneself to the stars in the vicinity of some star whose position one wants to measure (hence the 9 fiducial ePSF positions in Fig. 7, section 4.2.2). Dithering is relatively free from the effects of spatial variations of the PSF (at least for small dithers) because it allows the stars (PSF) to be sampled at almost the same position within the chip, but it suffers from the effects of time-dependent variations of the PSF (see, e.g., "breathing" effects in section 4.8).

To summarize: IMO dithering is helpful but not crucial for computing a high quality ePSF, as long as the single image contains a large number of stars in (relatively) small regions of the image (in order to minimize the effects of spatial variability of the PSF).
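One quick way to check whether a single image provides that subpixel sampling is to look at the pixel-phase distribution of the estimated star centers: if the fractional parts cover (0, 1) roughly uniformly, the oversampled grid is constrained everywhere. A sketch (x_centers is a placeholder for whatever centroid estimates are available, in detector pixels):

import numpy as np

x_phase = np.asarray(x_centers) % 1.0

# with oversampling=4, each detector pixel maps onto 4 oversampled-grid
# columns, so every quarter-pixel bin should contain a healthy number of stars
counts, edges = np.histogram(x_phase, bins=4, range=(0.0, 1.0))
print(counts)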

mcara avatar Oct 03 '19 06:10 mcara

Hi everyone,

Thanks for your help. To answer the question from @Onoddil about the instrument, I am talking about the ground-based WFI at the ESO 2.2m telescope. However, regarding the explanation of the error, I think I agree more with @mcara, for a simple reason: the old version of the algorithm worked, and the results were pretty good. If the reason it is not working now were an error in the input, the older versions should have failed in the same way. Nonetheless, I agree with @Onoddil that the documentation could be improved.

Of course, I am available to try the new version of the code whenever it is ready.

cloud182 avatar Oct 03 '19 12:10 cloud182

Hi @mcara, thanks for your comments.

As I said before, I have validated the current version of the ePSF framework within photutils, and find I am able to reproduce ePSFs using F160W HST data of Omega Cen, from calibration dataset CAL-13606. I therefore do not believe there is a bug within the algorithm itself, as presumably this dataset would also fail and diverge if the issue lay within the codebase.

Re: 1) yes, of course if positions were known intrinsically the iterative process would not be necessary, but I was referring to the issue of the initial pixel phase effect, where the initial positions of the sources are wrong, so some of their positions "move" them into the wrong residual spot, so the created ePSF is wrong (containing within some of its residuals sources whose contributions belong in a different part of the grid), which is then used to fit those same stars again... One of the main problems I think lies towards the end of section 2.2 of Anderson & King (2000), beginning "Before setting out to find a better PSF, however...", as they tested their PSFs by injecting artificial stars created from a model PSF and then re-fitting them, exactly as our current documentation lays out. Doing this you do not get any pixel phase error, and so the algorithm can correctly fit the PSF with few stars, which is misleading. Indeed, at the end of the penultimate paragraph of Section 2 the paper states "In short, we will tease apart the shape of the PSF and the positions of stars by incorporating observations made of the same star at different dither positions (different placements with respect to pixel boundaries). This iterative process is the major contribution of this paper.". Again, in ISR WFC3 2016-12, on which I based my tests of the F160W dataset, section 4.2 says "If we in turn use those positions to re-derive the PSF, then we are just reinforcing these [positional] biases."; thus, to my mind, for undersampled images, where the pixel phase error is large, the dithering is important (but I will return to that in a future point below).

Re: 2) No, stars found in each dithered image are not affected by the same bias as in the single images, because the stars are randomly placed in each image (indeed, this is sort of the point made in section 5.2 as you mention). Thus each version of the star has a slightly incorrect position, but because they're randomly placed across the pixel phase (Fig 2), the average position is not subject to bias (or at least it's reduced to some better level). Thus, the initial positions of each source are not subject to these errors, after correcting their individual frame positions with this averaged position. The errors also do not need to be of the order 1 pixel, they only need to be of the order half of the oversampled pixel distance (e.g., for oversampling=4 anything approaching pixel phase of ~0.12 pixels is enough to move a significant number of samplings into the wrong residual space; for the calibration program in question, using HST's F160W filter, the initial pixel phase errors reported in ISR WFC3 2016-12 are at this threshold).

Re: 3) Yes you are correct that no WCS transformations occur, but the M-1 frames are transformed onto the frame of the first image, so some transformation and averaging occurs in Anderson & King (2000) and ISR WFC3 2016-12. We do use WCS transformations, diverging slightly from this result (I do not know if Jay has also changed how he handles this transformation). You're right that a poor WCS -- or any kind of transformation -- would have a large effect on the results, but certainly for high level calibrated HST data I am reasonably confident that the WCSs are fairly trustworthy; this is a point that would need emphasising for non-HST data though, of course, and could very well explain why some users are having problems with other datasets.

Re: dithering point (below 3), Section 5.2 of Anderson & King (2000) is mostly attempting to come up with ways to get around not having dithered images. Dithering is preferred, but perhaps I was a bit too forceful in my comment before; I also forgot to mention the obvious fact that if you have M observations of N stars, you really also get M*N sources to play with, which could also just be an improvement on large number statistics. This also comes back to the extension of use cases (thanks @cloud182 for clarifying which WFI you meant, that's really useful): dithering becomes less and less important as your pixel phase errors decrease, so if you are using a ground-based telescope with nice, oversampled PSFs relative to your pixel size, your "bad" positioning algorithms will not be that wrong (i.e., the amplitude of the pixel phase error will shrink, and this systematic scatter become negligible relative to the statistical scatter). In that regime, you don't care about dithering, and do just need lots of samplings. (I also forgot to ask @cloud182 and @oxno2 for the number of sources in each of your images you are fitting, sorry!). I have indeed run the first of those quantifying tests I said above I was going to run, just fitting a single HST F160W image for sources, and you're right that ~700 sources in a single image is almost good enough: the ePSF does not fail to converge, but you're also sort of wrong in the sense that the converged result still suffers from systematic errors in its final shape, which the correcting of the positions from multiple dithers would have improved, increasing the accuracy and/or precision of the final ePSF. It does still run to a (by eye) sensible looking, non-complete gibberish PSF though!

Re: spatial variability, I'm not 100% sure I understand your point here; the current ePSF framework unfortunately does not handle spatial variability (at least in and of itself; you can do as you say and limit the area of fitting as a first-order version, without the interpolation of the ePSF to each star), but dithering helps by increasing the number of observations (by a factor M), so that each of the 9 PSFs does not suffer from small-number-statistics effects... Yes, again, it would be useful to point out to users that dithers must be on small detector scales to avoid issues with focal-plane variability within each set of observations you wish to deal with.

Re: your summary, I agree here completely, with the caveat that the transition from 'helpful' to 'crucial' is one of the level to which you suffer undersampling, and thus the amplitude of your pixel phase error relative to the size of your oversampled (in the sense of the oversampling factor, instead of PSF-pixel scale sampling) grid. My above comment was primarily aimed at HST data (from @oxno2) and not the ground-based WFI (which @cloud182 has now clarified they are using). Hopefully all of this will be better distilled into the tutorial/documentation.

To hazard a guess at why the old algorithm worked but this new, matches-the-paper-version algorithm doesn't work, @cloud182, I think there are two things it might be: first, the original version of the algorithm only did the step where the residuals between the flux-normalised star data and the ePSF evaluation are computed (as well as the smoothing and recentering) once, whereas now there is recentering_maxiters, which defaults to >1 (it's currently much too high, and I'm not sure why, so one definite step to improve the default use case is to fix the bizarre default iteration count that slipped through review). Thus, effectively, the old algorithm was slower, so it fell over more slowly. If you bumped up the maxiters by a factor of 10 or so the old algorithm might also diverge, just on longer timescales. The other difference (and the main one) is how the pixel scales were handled: previously, pixel positions were in the oversampled (again, in the sense of the oversampling factor, not space- vs ground-based undersampling) grid, so x went from 0 to 100 when in detector pixel space there were supposed to be 25 pixels on each side. The new v0.7 version fixed this so that everything is in detector pixel units, as the ePSF should be (which is what changed the previous flux-normalisation issue that depended on the oversampling factor), but it means that any centroid algorithms are now effectively inflating their errors by a factor of oversampling, which could mean this pixel-phase error was previously under-represented. I haven't done any deep-dive tests on the difference between the v0.6 and v0.7 algorithms to confirm whether that may be an issue, but I can confirm that the algorithm is capable of producing results that compare favourably with the ePSFs Jay Anderson has made available for download (the residuals I get look exactly like those in the bottom middle panel of Figure 20 of ISR WFC3 2016-12, remembering that I am creating an average ePSF, not a spatially varying one).
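For anyone wanting to test the first of those hypotheses, the iteration count in question is exposed as an EPSFBuilder keyword, so the old single-pass recentering behaviour can be approximated with something like the sketch below (stars is the EPSFStars object from extract_stars; other arguments left at their defaults):

from photutils.psf import EPSFBuilder

# recentering_maxiters controls how many times the star centres are
# re-estimated per build iteration; 1 mimics the single recentering
# pass of the v0.6 algorithm
epsf_builder = EPSFBuilder(oversampling=4, maxiters=10, recentering_maxiters=1)
epsf, fitted_stars = epsf_builder(stars)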

Again, apologies for the lack of explanation available to users in a way that doesn't involve having to read the original papers and make issues to get solved, so I hope to have this information distilled into something a bit more concise and understandable, to avoid the documentation being actively misleading. Thanks again @mcara for your useful feedback as well.

Onoddil avatar Oct 03 '19 14:10 Onoddil

@Onoddil My understanding of the algorithm agrees with @mcara also. Jay Anderson's method is fundamentally an iterative virtuous cycle improving the star centers and ePSF model at each step. From working directly with Jay and @mcara on implementing the initial ePSF builder, my understanding is that having dithered images is not required or key. In fact, we added that feature later (LinkedEPSFStar) after getting the single-image version to work. The key to building a good ePSF is having good sampling in the output (oversampled) grid. What this means in practice is having a large number of stars (in a single image, or a region of a single image if geometric distortions are important) whose (actual) centers well-sample subpixel positions (the oversampled grid). A large number of stars guarantees this should work because of the random positions (as mentioned in Anderson & King).

To make this concrete, I made a simple simulation. I start with only 16 simulated stars in a single image, each with a different subpixel centering such that together they completely sample a 4x4 oversampled grid. In other words, their centers are at the 1/4-pixel locations (0.125, 0.375, 0.625, 0.875) in both x and y. I then build an ePSF from only these 16 simulated stars with 4x oversampling. The ePSF generated by photutils v0.6 reproduces the simulated star profile correctly. The ePSF generated by photutils v0.7 does not (even though the initial guesses are the exact star positions). The result actually looks similar to that of @oxno2 (and another report from @Johannes-Sahlmann). Further, the ePSF actually gets worse as the number of build iterations is increased, suggesting that the updated v0.7 code is diverging instead of converging (as also suggested by @mcara). IMHO, that suggests a bug was introduced in #817.

photutils v0.6: https://nbviewer.jupyter.org/gist/larrybradley/57371f860741e7d0189a48e085b32e63

[ePSF built by the v0.6 notebook]

photutils v0.7: https://nbviewer.jupyter.org/gist/larrybradley/edfe3880f43985cc0a3098e41852a485

[ePSF built by the v0.7 notebook]
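A minimal sketch of that test setup (the linked notebooks are authoritative; the image size, padding, and fluxes here are approximate reconstructions):

import numpy as np
from astropy.nddata import NDData
from astropy.table import Table
from photutils.psf import IntegratedGaussianPRF, extract_stars, EPSFBuilder

size = 25
phases = [0.125, 0.375, 0.625, 0.875]  # quarter-pixel phases in x and y
npix = 4 * size + 4                    # small pad so all cutouts fit
yy, xx = np.mgrid[0:npix, 0:npix]
data = np.zeros((npix, npix))

xc, yc = [], []
for i, py in enumerate(phases):
    for j, px in enumerate(phases):
        x0 = j * size + size // 2 + 2 + px
        y0 = i * size + size // 2 + 2 + py
        data += IntegratedGaussianPRF(sigma=3.0, x_0=x0, y_0=y0, flux=1.0)(xx, yy)
        xc.append(x0)
        yc.append(y0)

# exact positions as initial guesses, one star per 25x25 tile
stars_tbl = Table([xc, yc], names=('x', 'y'))
stars = extract_stars(NDData(data=data), stars_tbl, size=size)
epsf, fitted_stars = EPSFBuilder(oversampling=4, maxiters=10,
                                 progress_bar=False)(stars)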

I've asked Jay for his input, but he's away on vacation until next week.

larrybradley avatar Oct 03 '19 14:10 larrybradley

To be clear, the commonality that I'm seeing in the v0.7 ePSF bug reports and also my simulation above is the ~45 degree diagonal bright/dark banding in the output ePSF.

larrybradley avatar Oct 03 '19 15:10 larrybradley

I think I may have traced at least part of the origin of the disagreement here - see https://gist.github.com/426614e7b4b1a5973c19ddb8416db0e9 for proof of this: it works in 0.6, breaks in 0.7, but works again in one of @Onoddil's apparently unrelated fixes: https://github.com/Onoddil/photutils/commit/9822e3c14d74d313fd78185df0c9331242832235 . So it appears that @Onoddil's https://github.com/Onoddil/photutils/tree/daophot_testing branch addresses at least one variant of this problem. There's still a problem of the normalization, but that looks like it's an unrelated problem to do with an incorrect oversampling factor (if @Onoddil agrees?)

On the broader set of issues, it seems to me like there's a lot of hypothesizing and not enough data on the "how many dithers are needed" question. Fortunately, we can just try it! I'll make a second issue about that, and we can move the discussion there.

eteq avatar Oct 03 '19 17:10 eteq

This is what I get running the v0.7 notebook, albeit on my "bleeding edge" dev branch of photutils, which should also be the last option of @eteq's link above. Perhaps the largest change is the issue of whether only the nearest-neighbour residual is used versus everything within a pixel; see Figure 5 of Anderson & King (2000). The normalisation fix is unrelated (as @eteq said), and is simply a consequence of the default (space-based...) norm_radius of 5.5 being too small for sigma=3. Upping that to norm_radius=20 gets good agreement again, with a final residual plot of the order of 1e-8.

That said, the 'truth' IntegratedGaussianPRF box is wrong for v0.7: it should be

# should be "truth" ePSF
m = IntegratedGaussianPRF(sigma=3., x_0=12.5, y_0=12.5, flux=1)
yy, xx = np.mgrid[0:101, 0:101]
xx = xx / 4
yy = yy / 4
data2 = m(xx, yy)
plt.imshow(data2)
plt.colorbar()

remembering that the pixel values should all be in detector pixel space, rather than oversampled pixel space (again, oversampled in terms of the factor, not resolution!). This is exactly the problem with the original v0.6 version, which should not sum to 1 (but in this case 16).
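As a quick numerical check of that normalisation argument (a sketch continuing from the data2 array above): the samples sit on a grid with a spacing of 1/4 detector pixel, so a plain sum over-counts the flux by a factor of oversampling**2.

oversampling = 4
print(data2.sum())                    # ~16, i.e. ~oversampling**2
print(data2.sum() / oversampling**2)  # ~1, recovering the input flux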

This looks like a good place to start hunting for any bugs, however, so I will look into this test some more from a 'why does adding extra residual points fix everything magically' perspective. Thanks for the unit test @larrybradley!

Onoddil avatar Oct 03 '19 17:10 Onoddil

@oxno2 and @cloud182 - we think #974 should have solved this problem. Can you give the latest development build of photutils a try? (Or if you aren't set up to install from source, try it when 0.7.2 comes out.) If that addresses your problems here, we can close this issue.

eteq avatar Nov 15 '19 18:11 eteq

Hi,

I'm not set up to use the development build, but as soon as 0.7.2 comes out I'll give it a try.

Thanks for your work, Enrico


cloud182 avatar Nov 15 '19 19:11 cloud182

I know this issue is pretty old now, but I am getting the same output as @oxno2, although using Photutils 1.0.2. I even tried duplicating only one light source many times, but got the same output.


@Onoddil same issue?

TSA26 avatar Mar 11 '21 09:03 TSA26

Hi @TSA26,

Can you add a minimal working code example to your comment so I/we can look at this? It would be great if it mocks up some data I can also generate and feed into your call to the builder, but at the very least the call signature you're using would be good.

I believe we closed this with the pinning down of the default centroiding function recentering_func (see #969), so I will definitely need to know what function you're passing to that kwarg.

Onoddil avatar Mar 11 '21 09:03 Onoddil

Hi @Onoddil.

Ok, so I am actually trying to detect these 6 LEDs and measure the centroid of each of them as accurately as possible (subpixel accuracy). Around 50% of the visible LEDs are oversaturated at their centers, with a pixel intensity of 65532. Around the center, close to the edge, an overflow of light can be seen which, I assume, can be used to precisely determine the centroid of each LED.

Here is the code:

import numpy as np
import matplotlib.pyplot as plt
from astropy.table import Table
from astropy.stats import sigma_clipped_stats
from astropy.nddata import NDData
from astropy.visualization import simple_norm
from photutils.psf import extract_stars
from photutils.centroids import centroid_sources, centroid_2dg
from photutils import EPSFBuilder

# initial position of only one LED, duplicated 20 times
leds_tbl = Table()
leds_tbl['x'] = [178 for i in range(20)]
leds_tbl['y'] = [445 for i in range(20)]

# load the image and subtract the background
image = np.load("temp.npy")
image = image.astype('float')
mean_val, median_val, std_val = sigma_clipped_stats(image, sigma=2.)
image -= median_val
nddata = NDData(data=image)

leds = extract_stars(nddata, leds_tbl, size=70)

nrows = 5
ncols = 4
fig, ax = plt.subplots(nrows=nrows, ncols=ncols, figsize=(12, 12), squeeze=True)
ax = ax.ravel()
for i in range(nrows * ncols):
    norm = simple_norm(leds[i], 'log', percent=99.)
    x, y = centroid_sources(leds[i].data, 35, 35, box_size=70,
                            centroid_func=centroid_2dg)
    ax[i].scatter(x, y, marker='+', color='red')
    ax[i].imshow(leds[i], norm=norm, origin='lower', cmap='viridis')

epsf_builder = EPSFBuilder(oversampling=4, maxiters=10, progress_bar=True)
epsf, fitted_leds = epsf_builder(leds)

norm = simple_norm(epsf.data, 'log', percent=99.)
plt.figure()
plt.imshow(epsf.data, norm=norm, origin='lower', cmap='viridis')
plt.colorbar()

I just noticed that if I decrease the number of iterations from 10 to 3 for EPSFBuilder, I get a meaningful result. Here is the zipped numpy array of the image: temp.npy.zip


TSA26 avatar Mar 11 '21 11:03 TSA26

Thanks for posting the code.

I am still hopeful that it's centroid-function related. The issue you've got seems similar to those we've seen in other users' cases (but within EPSFBuilder for those people), but also with a new recentering function, so it would be good to check whether centroid_2dg is also an issue for centroid_sources or not. Can you try removing the centroid_func kwarg from your centroid_sources call, so that it uses the default centroid_com?
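Concretely, the check being suggested is something like the sketch below (based on the code posted above; centroid_com is already the default in both places, so passing it explicitly here is only to make the comparison unambiguous):

from photutils.centroids import centroid_com, centroid_sources
from photutils import EPSFBuilder

# redo the initial position estimate on one cutout with the simpler centroider
x, y = centroid_sources(leds[0].data, 35, 35, box_size=70,
                        centroid_func=centroid_com)

# and make sure the builder itself also recenters with centroid_com
epsf_builder = EPSFBuilder(oversampling=4, maxiters=10,
                           recentering_func=centroid_com, progress_bar=True)
epsf, fitted_leds = epsf_builder(leds)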

I am admittedly less convinced that this is the solution (as it would just be a poor initial input position, and I assume you then use centroid_com within EPSFBuilder since you're on 1.0.2), but it's good to check the low-hanging fruit before getting into the nitty gritty.

But yes, the "more iterations makes it worse" is a calling card for whatever this issue is, where the algorithm ends up in an unstable minimum or something, and just with each call drifts away from "true" rather than getting closer to the "right" value, so that's to be expected, and doesn't solve the underlying issue unfortunately!

Onoddil avatar Mar 11 '21 11:03 Onoddil