Size of the inlet.pull_chunk() buffer
Hi, first of all I want to know if pull_chunk does what I think it does: does pull_chunk get all the samples that are currently in the LSL buffer? Second, how many samples can be stored, or how much memory does the buffer have?
This is a great question. It is all documented in the C++ API, which all the other language APIs reduce to, but I will attempt to explain.
There are two things to think about here. The first is the inlet, the second is the outlet. I will illustrate the issues at hand in terms of the C++ API: https://github.com/sccn/labstreaminglayer/blob/master/LSL/liblsl/include/lsl_cpp.h
On the inlet side, there are two ways to pull a chunk. The first is to call inlet.pull_chunk (or inlet.pull_chunk_multiplexed in the case of multiplexed data) and provide it with an STL vector to put your samples in (lines 864-932). This method will allocate more memory to your vector as needed and keep putting as many samples as are currently available into the chunk vector until there are no more samples available. That is to say, memory management is automatic, but performance will be slower.

The other way is to call inlet.pull_chunk_multiplexed on a pre-allocated buffer (i.e. a pointer to char, float, or whatever). In this case, performance is better, but you need to manage your own memory. You also need to supply the number of elements to pull (the size of the buffer) and a timeout when you call pull. If there are more samples to pull than there is room in the buffer, the inlet will attempt to pull the remaining samples on the next go-around. If the timeout is hit before the buffer is filled, the unfilled slots in the buffer are filled with 0s. So, if the calls to pull are not well timed in relation to the pushing, there is a danger that data could be lost or non-contiguous.
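To make this concrete, here is a minimal sketch of both pull styles. It assumes liblsl's C++ header and a resolvable stream; the stream type ("EEG"), buffer capacity, and timeout are illustrative choices, not anything the API prescribes:

```cpp
#include <lsl_cpp.h>

#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    // Resolve a stream to connect to (type "EEG" is just an example).
    std::vector<lsl::stream_info> results = lsl::resolve_stream("type", "EEG");
    lsl::stream_inlet inlet(results[0]);

    // Style 1: STL vectors. The inlet grows the vectors as needed and keeps
    // appending until no more samples are immediately available.
    // Automatic memory management, but slower.
    std::vector<std::vector<float>> chunk;
    std::vector<double> stamps;
    inlet.pull_chunk(chunk, stamps);

    // Style 2: pre-allocated multiplexed buffer. Faster, but you manage the
    // memory and must choose the buffer size and timeout yourself.
    const std::size_t n_channels = inlet.info().channel_count();
    const std::size_t max_samples = 32;  // illustrative capacity
    std::vector<float> buf(n_channels * max_samples);
    std::vector<double> ts(max_samples);
    std::size_t n_elements = inlet.pull_chunk_multiplexed(
        buf.data(), ts.data(), buf.size(), ts.size(), /*timeout=*/1.0);
    std::cout << "pulled " << n_elements / n_channels << " samples\n";
}
```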
You can also specify chunk granularity (samples per chunk) when you create the inlet.
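Concretely, that is the max_chunklen argument of the stream_inlet constructor. A hedged sketch (the values are illustrative; 0 means "use the sender's granularity"):

```cpp
// max_buflen: how much data to buffer at the inlet, in seconds
// (x100 in samples if the stream has no nominal rate; default 360).
// max_chunklen: preferred chunk granularity in samples (default 0).
lsl::stream_inlet inlet(results[0], /*max_buflen=*/360, /*max_chunklen=*/4);
```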
On the outlet side, you can specify in seconds how long samples will stick around on the network when you create the outlet (lines 321-328). Unless you are pushing a huge amount of data or have very little RAM available, this usually isn't an issue, and the default is to let things stick around for 6 minutes (360 seconds; if the stream doesn't have a nominal sampling rate, the value is interpreted as ×100 in samples instead). But if you are pushing large amounts of data per sample, then 6 minutes' worth can be more data than the available memory in your computer. The same issue arises if there is very little RAM available: bad things happen and you don't get any data transmitted.
You can also specify chunk granularity when you create an outlet, but this can be overridden by the inlet.
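Both knobs live in the stream_outlet constructor. A hedged sketch, with a made-up stream name, channel count, and rate for illustration:

```cpp
// 8 channels of float data at a nominal 100 Hz (all values illustrative).
lsl::stream_info info("MyStream", "EEG", 8, 100.0, lsl::cf_float32,
                      "myuid1234");

// chunk_size: samples per transmitted chunk (0 = each push is one chunk).
// max_buffered: how long pushed samples stick around for consumers, in
// seconds (x100 in samples if there is no nominal rate); 360 s = 6 min.
lsl::stream_outlet outlet(info, /*chunk_size=*/2, /*max_buffered=*/360);
```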
The one thing I am not certain of is this: when you make an outlet with some chunk granularity (e.g. 2) and then connect an inlet to it with a larger chunk granularity (e.g. 4), will the chunk granularity of the outlet actually increase? Or is the buffering of samples on the outlet the real limit, so that the granularity doesn't matter as long as chunks are being pulled and the buffer isn't being over-filled? I rather think the latter is true, but I'm not actually certain. Christian, I'm sure, can supply the answer.