
matlab - pull_chunk for eeg - timestamps not equidistant

Open marvnmtz opened this issue 4 years ago • 10 comments

Hey everybody, I'm working with EEG data that I want to process while reading it. Therefore I use the pull_chunk function to read the data chunk by chunk. My problem is that the difference between the timestamps is not constant. To reduce my problem to this issue, I used the example code "ReceiveDataInChunks.m" provided with liblsl-Matlab.

```matlab
% instantiate the library
disp('Loading the library...');
lib = lsl_loadlib();

% resolve a stream...
disp('Resolving an EEG stream...');
result = {};
while isempty(result)
    result = lsl_resolve_byprop(lib,'type','EEG');
end

% create a new inlet
disp('Opening an inlet...');
inlet = lsl_inlet(result{1});
time = [];
ende = 0;
disp('Now receiving chunked data...');
while true
    % get chunk from the inlet
    [chunk,stamps] = inlet.pull_chunk();
    for s=1:length(stamps)
        time(ende+s) = stamps(s);
    end
    ende = ende + length(stamps);
    pause(0.05);
end
```

I let it run for some time and then I plot

plot(diff(time))

[image: pull_chunk error — plot of diff(time)] https://user-images.githubusercontent.com/72555114/95454927-5f4e8a00-096d-11eb-8c17-c84532d02879.png

So normally that should be a straight line at y = 0.0004883 (~1/2048 Hz). I stream the EEG data from a BioSemi device, if that is of interest. Does someone know why I get these discontinuities and how to resolve them? Thanks a lot in advance, Marvin

marvnmtz avatar Oct 08 '20 11:10 marvnmtz

Hi Marvin,

I think the full answer depends on what EEG device you're using and how much you trust the hardware. Generally, 'research grade' EEG amps are trustworthy and sample properly (correct frequency and no dropped frames/data), so you can simply linearly interpolate the timestamps (this is done automatically in load_xdf.m in 'HandleJitterRemoval'). The discrepancies in your data seem to occur at equal intervals... maybe this happens when LSL synchronizes the clock drift between the two systems (if you are using two systems)? If you're interested, you can find the stream's synchronization field and check whether the timestamps match up at the points where the discrepancies occur.
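As an illustration of that linear interpolation idea, a minimal sketch follows. This is my own simplification, not the actual load_xdf.m code (the real 'HandleJitterRemoval' logic is more robust, e.g. it handles breaks in the recording):

```matlab
% Hypothetical sketch of linear dejittering for a trusted amp:
% fit timestamp = a*index + b by least squares and replace the
% jittered stamps with the fitted, equidistant ones.
idx = (0:numel(time)-1)';            % sample indices
p = polyfit(idx, time(:), 1);        % straight line through the stamps
time_dejittered = polyval(p, idx);   % equidistant timestamps
% p(1) is the estimated sampling interval (~1/2048 s for this amp)
```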

Best, Clement Lee Applications Programmer Swartz Center for Computational Neuroscience Institute for Neural Computation, UC San Diego 858-822-7535


cll008 avatar Oct 08 '20 16:10 cll008

Hi @mrvnmtz ,

Can you please let us know which device you're using and how it streams data via LSL (vendor-provided app, community app, custom, etc)?

In order of decreasing importance:

  • If the hardware provides timestamps, then the application should be using those timestamps (converted to LSL's clock) when pushing data to the stream, and you should use them without postprocessing (other than clock offset adjustment).
  • If the hardware does not provide timestamps but you trust it to have an equal sampling interval (as Clement mentioned), then you can dejitter. This happens automatically in both the MATLAB and Python XDF importers, but you can also do it online by calling set_postprocessing on the inlet after it is created. This is the most likely solution.
  • If you don't trust the hardware to have a more regular sampling interval than the application pushing the data, then the provided timestamps may be the most accurate, uneven intervals and all.
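For the second case, enabling online dejittering in liblsl-Matlab looks roughly like the sketch below. The flag values mirror liblsl's processing options, but verify the constants against your liblsl/liblsl-Matlab version before relying on them:

```matlab
% Hedged sketch: enable liblsl's built-in post-processing on an inlet.
% Flag values follow liblsl's processing options; check them against
% your binding's documentation.
proc_clocksync = 1;   % map remote timestamps onto the local clock
proc_dejitter  = 2;   % smooth jitter assuming a regular sampling interval
inlet = lsl_inlet(result{1});
inlet.set_postprocessing(bitor(proc_clocksync, proc_dejitter));
[chunk, stamps] = inlet.pull_chunk();  % stamps are now dejittered
```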

cboulay avatar Oct 08 '20 16:10 cboulay

Adding to those comments, to me these stamps actually don't look all that bad, as in, there's a good chance that dejitter will pretty much fix that up for you.

I sometimes use the following rough back-of-the-envelope calculation: it looks like the hardware is natively providing about 15 chunks per second (the spikes), and if you use online dejitter that'll be smoothed out over a minute-long sliding window (effectively a bit more since it's exponentially weighted, but we're going to conservatively assume a minute). You can get a rough idea of the residual timing error after dejitter by taking the magnitude of those spikes (17ms) and dividing it by the number of spikes that are averaged (that's the effective number of time measurements; it ignores the flat portions, which are not actual timing measurements but filled in based on 1/srate), which makes ca. 15x60 measurements in total over a minute. So 17ms/(15x60) comes out at roughly 0.019ms (i.e., far less than a sample at 2048 Hz). It'll be a bit worse than that in the first minute after turning on the stream since there's less data.
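That estimate can be reproduced numerically; this is just the arithmetic from the paragraph above, not library code:

```matlab
% Back-of-the-envelope residual jitter after a one-minute dejitter window.
spike_magnitude = 0.017;   % observed spike size in seconds (~17 ms)
chunks_per_sec  = 15;      % chunks = real timing measurements per second
window_sec      = 60;      % effective averaging window
n_measurements  = chunks_per_sec * window_sec;   % ~900 measurements
residual = spike_magnitude / n_measurements;     % ~1.9e-5 s (~0.019 ms)
fprintf('residual jitter ~ %.3f ms (sample period = %.3f ms)\n', ...
        residual*1e3, 1000/2048);
```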

Now, in case you don't actually need those stamps in real time, but you just want to record them to disk (eg to an XDF file), then that dejitter would instead be done on the whole recording at import time, in which case the timing error due to jitter will be even smaller (miniscule).

chkothe avatar Oct 08 '20 18:10 chkothe

The fact that you have irregular time stamps does not mean that you have missing data samples. So it is fine to interpolate/dejitter the time stamps as if they were perfectly regular.

The issue is when you have an event stream at the same time and need to make sure the latency of the event stream can be aligned with the data streams. This is more problematic; ideally you would have the events both in the EEG (as an extra channel) and in the LSL event stream, so you can compare the two latencies and run some optimization. In my opinion, make sure the computer that records the EEG is the same as the one generating the events (to minimize network delays). Also check the code segment that writes events to the event stream and sends a TTL pulse to the EEG amplifier, to ensure there is no buffer delay.

Cheers,

Arno


arnodelorme avatar Oct 08 '20 19:10 arnodelorme

Thanks to all for trying to help me. The device is a BioSemi device; it should be this one: https://www.biosemi.com/ad-box_activetwo.htm. I connect the device using the software that I think came with it to open the stream.

So far I would have said that it looks quite trustworthy.

@cll008 I normally connect my BioSemi device to a different PC than the one I run Matlab on, but that's just because the lab is arranged like that. I also tried running Matlab on the same PC that the EEG is connected to, and it seems to make no difference. Where do I find the stream's synchronization field?

@cboulay So far I haven't done any postprocessing. I sometimes used inlet.time_correction();. I also tried to apply inlet.set_postprocessing(); but then it says "Unrecognized function or variable 'lsl_set_postprocessing'".

@chkothe Unfortunately I do need the data online, and I even have to classify it online. How do I dejitter when processing the data online rather than saving it to an XDF file?

@arnodelorme Eventually I will have a second stream that will be sent from Matlab, but so far I got stuck on this issue.

While writing these answers I figured out that when I put all my chunks together one after another in one matrix, my time series looks good, in the sense that I do not have any missing data.

Thanks again for all your answers, but right now it looks like this issue won't be a problem for me any longer.

marvnmtz avatar Oct 09 '20 12:10 marvnmtz

> I also tried to apply inlet.set_postprocessing(); but then it says "Unrecognized function or variable 'lsl_set_postprocessing'"

That’s something we need to fix. My Matlab license just expired. Any volunteers?

cboulay avatar Oct 09 '20 12:10 cboulay

Well I just bought a Matlab license so I guess this goes on my todo pile.

I did implement this some time ago (https://github.com/labstreaminglayer/liblsl-Matlab/commit/2ad59f172b06f910610640d4c8feb7c4ce1071ad) but it might not be working on every conceivable combination of Matlab/OS/etc. @mrvnmtz, can you please tell me which version of Matlab and liblsl you are using? It could be that you just need to build liblsl-Matlab on your system.

dmedine avatar Oct 11 '20 22:10 dmedine

@dmedine Sorry for my late response, but if you still want to take a look at it, I'm using Matlab R2020a and I actually just updated liblsl when investigating this problem. But like I said, I figured out that it is actually not a problem for my results. I still wonder why LSL or my BioSemi device behaves like that, but as long as it works for me, I'm fine.

marvnmtz avatar Oct 16 '20 13:10 marvnmtz

Hello, I have the same issue. When I stream my EEG data (CGX) at 500 Hz, the difference between two consecutive samples is not close to 0.002 s (as you can see in the figure below). However, the number of received samples is close to what the sampling rate predicts; for instance, for a 5-second recording, I get around 2479 samples. Your feedback is greatly appreciated!

[image: plot of the differences between consecutive timestamps]

mrsaeedpour avatar Jun 23 '23 18:06 mrsaeedpour

@mrsaeedpour, it looks like the Cognionics LSL integration is calling push_sample (instead of push_chunk), but the app is receiving samples from the device in chunks, so the first sample in a chunk has some non-zero interval from the end of the previous chunk while the remaining samples in the chunk have a near-zero interval. But this is fine!

The uneven sampling intervals are dejittered automatically when loading the file via an XDF importer, and they can be fixed online automatically by LSL's built-in dejittering by setting the postprocessing flags.

However, do you really need the timestamps to be dejittered? LSL's dejittering simply assumes that devices have consistent inter-sample intervals at the source even if their timestamps in the stream don't have consistent intervals. If you simply process the data without paying attention to the timestamps then you are effectively making the same assumption.

If you need to align these jittered data with other streams and your precision requirement is < 10 msec then yes you'll need to dejitter. But this is a very unusual requirement for online analysis. Most online analyses will simply want to run things as fast as possible so you'll align the most recent EEG with the most recent other stream.
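As a sketch of that "most recent with most recent" alignment, assuming two already-created inlets (`eeg_inlet`, `other_inlet`) and user-defined `process`/`combine` functions, all of which are illustrative names rather than a prescribed API:

```matlab
% Hedged sketch: pair the latest EEG chunk with the latest sample of
% another stream without dejittering, simply using most-recent data.
[eeg_chunk, eeg_stamps]  = eeg_inlet.pull_chunk();
[other_sample, other_ts] = other_inlet.pull_sample(0.0);  % non-blocking
if ~isempty(eeg_stamps) && ~isempty(other_ts)
    % treat both as "now"; any residual offset is bounded by chunk jitter
    feature  = process(eeg_chunk);              % user-defined processing
    combined = combine(feature, other_sample);  % user-defined fusion
end
```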

cboulay avatar Jul 07 '23 06:07 cboulay