Casting back to int after motion correction
Hi everyone! I had a quick question about casting back to int16 when saving after motion correction. During motion correction the data must be in float for interpolation, but saving to binary (e.g. to pass to a sorter) generally means going back to int16 for disk space. I'm wondering if anyone has looked into the effect of this on the motion-corrected data?
I guess if a spike is in the range ~0-200 uV it's feasible that casting back to int will lose some useful precision, but I've not measured it. For example, if 100.5 uV is rounded to 101 uV you lose ~0.5% precision, ~1% at 50 uV, etc. Also, if you were to motion-correct, then whiten (now the data is in a small uV range), then cast back to int16, there would definitely be some issues. I don't have time to test this at the moment but was curious if anyone had ever looked into it?
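For anyone curious, here's a quick sketch of the magnitude of this effect (names and the uniform "fractional part" model are just illustrative assumptions, not from any real pipeline): interpolation produces fractional values, and rounding back to int16 introduces at most 0.5 bit (here 0.5 uV at a 1 uV/bit gain) of absolute error per sample:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated trace in uV, stored as int16 with an assumed gain of 1 uV/bit
traces_int = rng.integers(-200, 200, size=10_000).astype(np.int16)

# Motion-correction interpolation produces fractional values; model that
# here by adding a uniform offset in [-0.5, 0.5) to each sample
frac = rng.uniform(-0.5, 0.5, size=traces_int.shape).astype(np.float32)
traces_float = traces_int.astype(np.float32) + frac

# Cast back to int16 by rounding (plain truncation would bias the error)
traces_back = np.round(traces_float).astype(np.int16)

# The absolute error is bounded by half a quantization step (0.5 uV)
abs_err = np.abs(traces_float - traces_back.astype(np.float32))
print(f"max abs error: {abs_err.max():.3f} uV")

# Relative error scales inversely with amplitude: 0.5 uV on a 100.5 uV
# sample is ~0.5%, on a 50.5 uV sample ~1%
print(f"relative error at 100.5 uV: {0.5 / 100.5:.3%}")
print(f"relative error at  50.5 uV: {0.5 / 50.5:.3%}")
```

So the worst-case error is fixed in uV, which is why the relative hit is worse for small spikes, and much worse if the data has already been rescaled into a small range (e.g. by whitening) before the cast.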
Just happened to see this and wanted to offer a thought. I am a big fan of saving to float16 rather than using Kilosort's "scaleproc" strategy of `200 * int16`, since the dynamic range is a better fit for our preprocessed data (not for raw data in uV, but by that point in most pipelines the data has usually been rescaled by whitening or standardization). But I am not sure whether this is relevant for the scenario you're describing @JoeZiminski -- maybe you mean data that is still in the same int16 range it was originally saved in?
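A rough comparison of the two storage strategies for whitened (roughly unit-variance) data might look like this. This is only a sketch under that distributional assumption; the factor 200 follows the "scaleproc" convention mentioned above, and real error magnitudes depend on the actual data range:

```python
import numpy as np

# Whitened/standardized data: roughly zero-mean, unit variance
rng = np.random.default_rng(0)
x = rng.standard_normal(100_000).astype(np.float32)

# Strategy 1 (Kilosort-style "scaleproc"): multiply by 200, store int16.
# Quantization step is 1/200 = 0.005, so max round-off error is 0.0025.
x_int16 = np.clip(np.round(x * 200), -32768, 32767).astype(np.int16)
err_int16 = np.abs(x - x_int16.astype(np.float32) / 200).max()

# Strategy 2: store directly as float16. Error is relative to magnitude
# (about 10 bits of mantissa), not a fixed absolute step.
err_f16 = np.abs(x - x.astype(np.float16).astype(np.float32)).max()

print(f"scaleproc int16 max error: {err_int16:.5f}")
print(f"float16 max error:         {err_f16:.5f}")
```

The key difference is that the int16-with-fixed-scale error is a constant absolute step (and the scale must be chosen so the data neither clips nor wastes range), whereas float16's error tracks the magnitude of each sample, which is what makes its dynamic range more forgiving for rescaled data.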