OpenBCI_WIFI
NTP local server, Poll frequency, Hardware timestamping
The document http://docs.openbci.com/OpenBCI%20Software/03-OpenBCI_Wifi_Server says
The WiFi Shield will connect to an NTP server to get the time once, and the WiFi Shield will then send the data in JSON.
Three questions:
- Can the NTP server be changed, e.g., to use a local server?
- Is it possible to increase the polling frequency? It often takes many NTP polls to reduce the offset below a millisecond.
- Is OpenBCI GUI using the timestamp from WIFI shield?
- Can the NTP server be changed, e.g., to use a local server?
Through the firmware, yes. This feature is not exposed as an endpoint; if you think it would be a good addition, feel free to open an issue requesting it.
- Is it possible to increase the polling frequency? It often takes many NTP polls to reduce the offset below a millisecond.
Through the firmware, I believe so. A different NTP library may be needed for higher accuracy; the current NTP client polls a remote server and only achieves second-level precision. If you want to, say, synchronize multiple WiFi Shields to a local server, I would recommend rolling your own NTP method using the TCP client/server socket already used for streaming.
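For reference, the clock-offset math such a roll-your-own method would need is the standard four-timestamp NTP calculation. A minimal sketch in Python; the function and variable names are mine, not from any OpenBCI code:

```python
def ntp_offset(t0, t1, t2, t3):
    """Standard NTP clock-offset and round-trip-delay estimate.

    t0: client send time      t1: server receive time
    t2: server send time      t3: client receive time
    (t0, t3 on the client clock; t1, t2 on the server clock)
    """
    offset = ((t1 - t0) + (t2 - t3)) / 2.0  # how far the client clock lags the server
    delay = (t3 - t0) - (t2 - t1)           # network round-trip time, excluding server processing
    return offset, delay
```

In a "burst", you would call this repeatedly and keep the offset from the exchange with the smallest delay, since that sample is least contaminated by network jitter.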
- Is OpenBCI GUI using the timestamp from WIFI shield?
No. The OpenBCI GUI timestamps each packet when it arrives at the GUI.
Thanks for adding this feature request. Using a local NTP server and allowing more frequent polls (e.g., 8 packets in the first 30 seconds at initiation, i.e., the "burst" option in the NTP configuration) can be essential for syncing time down to the 1-millisecond level, which is helpful for studies involving many precisely timed stimuli or social interactions.
Unfortunately, I am not savvy enough to code the firmware myself, but I would definitely appreciate such functionality in the WiFi Shield.
Totally agree! This will need a driver on the other end to start the burst. I think this is totally doable. We need a method that builds a map from system time on the driver to board time, or vice versa. There is already a feature on the Cyton for adding board time to every sample.
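One way to build that system-time-to-board-time map is a simple linear fit over (board_time, system_time) pairs collected during sync, which captures both offset and drift rate. A hypothetical sketch, not actual firmware or driver code:

```python
def fit_clock_map(board_times, system_times):
    """Least-squares fit of system_time ~= a * board_time + b.

    `a` captures clock drift rate (ideally ~1.0), `b` the offset.
    """
    n = len(board_times)
    mean_b = sum(board_times) / n
    mean_s = sum(system_times) / n
    cov = sum((b - mean_b) * (s - mean_s)
              for b, s in zip(board_times, system_times))
    var = sum((b - mean_b) ** 2 for b in board_times)
    a = cov / var
    b = mean_s - a * mean_b
    return a, b
```

Given the fitted `(a, b)`, any board timestamp maps to driver system time as `a * board_t + b`; refitting periodically would track slow drift.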
Just a few more notes:
-
after the initial "burst", the WiFi Shield may still need to poll the NTP server periodically (e.g., every 60 seconds) to correct for time drift. The period depends on the board's timing accuracy and network delay. For example, my Mac mini needs a poll every 60 seconds from a local server to keep the drift under 0.001 s over a couple of days.
-
I am not sure what you mean by "system time" and "board time." I use the Ganglion board, and it includes an index (ranging from 0 to 200) in each sample.
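For what it's worth, that rolling 0-200 index is already enough to count how many samples were dropped between consecutive packets. A sketch, assuming the index wraps from 200 back to 0 (i.e., modulus 201); the helper name is mine:

```python
def count_dropped(indices, mod=201):
    """Count samples missing from a stream of rolling sample indices.

    `indices` is the sequence of received sample IDs (0..mod-1).
    A step larger than 1 (mod `mod`) between consecutive IDs means
    that many samples were dropped in between.
    """
    dropped = 0
    for prev, cur in zip(indices, indices[1:]):
        step = (cur - prev) % mod
        dropped += step - 1  # step == 1 means no loss
    return dropped
```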
Another person just asked for this feature
@aj-ptw I increasingly think this is a useful feature (millisecond-level NTP sync). I am having a hard time recovering a regular (equal-interval) time series from the WiFi Shield + Ganglion. Because of the uncertainty of packet dropping and network delay, it is hard to pinpoint the exact timestamp of each sample, even with the help of the Ganglion's rolling 0-200 sample index.
Of course, the problem is not severe at this point, as I see about 1 to 5 seconds of time drift over an entire 90-minute recording.
@wliao229 are you using UDP or TCP? And then you are inputting in LSL from the GUI?
@aj-ptw I am using TCP (should I use UDPx3?). I am using the SavedData from the GUI.
@wliao229 A 5-second time drift for a 90-minute recording is A LOT!
might be worth trying!
Some inspiration on how it's done in LSL: https://github.com/sccn/labstreaminglayer/wiki/TimeSynchronization.wiki https://github.com/sccn/xdf/wiki/Specifications
my time drift = (end_timestamp - start_timestamp) - number_of_samples / sampling_rate
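Spelled out as code (a hypothetical helper): the drift is the wall-clock duration of the recording minus the duration implied by the sample count at the nominal sampling rate.

```python
def time_drift(start_timestamp, end_timestamp, number_of_samples, sampling_rate):
    """Wall-clock duration minus the duration implied by the sample count."""
    return (end_timestamp - start_timestamp) - number_of_samples / sampling_rate
```

Note that dropped samples inflate this estimate: a 90-minute recording missing 8,000 samples at 1600 Hz would show 5 s of apparent "drift" even with perfect clocks.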
@wliao229 Are there a lot of drops? What is your effective sampling rate?
@mesca I think a straightforward solution is: (a) hardware timestamping by the WiFi Shield, and (b) millisecond-level time sync with a local NTP server. Both are in the process of being implemented.
@mesca A major source of dropping in my case happens at the sample level (see #70): I customized the Ganglion firmware to read analog signals, and that causes the Ganglion to drop samples when running at 1600 Hz. My effective sampling rate is around 1400 Hz (~8-10% dropped samples), which I am still okay with.
The only problem is the unequal time interval, due to a mixture of (a) sample dropping, (b) packet dropping, and (c) network delay.
Network delays, i.e., intermittent long delays between sends, are a big problem with this RAM-starved chip. I was only able to statically allocate enough space for 200 raw packets, so if a delay longer than 100 ms happens (which it does), you are going to lose packets when streaming 16 channels at 1000 Hz.
The next feature I'm trying to add is dynamic memory allocation on the board right as the request to start streaming comes in. This way, I estimate we could get up to a 250 ms ring buffer at 1000 Hz with 16 channels.
I’m working on a new iteration of the shield with a chip that will have much more stack/heap space that will allow us to store seconds worth of data.
There is also a big potential speed-up for the Ganglion, where we can pack two samples per packet instead of sending empty space as we do now.
Sounds good! I do feel network delays or packet dropping would be much less problematic if high-precision hardware timestamping (or even just a monotonically increasing ID) were available in each sample. In that case, missing samples/packets can be marked, and the time series still has equal intervals. Otherwise, there is always uncertainty in a networked/wireless environment.
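As a sketch of that idea: given a rolling sample index, an equal-interval series can be rebuilt with dropped samples marked explicitly rather than silently compressing the timeline. All names, and the 201 wrap modulus, are assumptions for illustration, not OpenBCI API:

```python
def regularize(samples, mod=201, fs=1600.0, t0=0.0):
    """Rebuild an equally spaced time series from (index, value) pairs.

    Inserts None where the rolling sample index shows a dropped sample,
    so every entry sits at its nominal time t0 + n / fs.
    """
    out = [(t0, samples[0][1])]
    pos = 0
    for (prev_idx, _), (idx, val) in zip(samples, samples[1:]):
        step = (idx - prev_idx) % mod
        for _ in range(1, step):
            pos += 1
            out.append((t0 + pos / fs, None))  # dropped sample placeholder
        pos += 1
        out.append((t0 + pos / fs, val))
    return out
```

Downstream, the `None` gaps can be interpolated or excluded, but the grid of timestamps stays regular regardless of how many packets went missing.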