Hello, I’m streaming osc.udp://localhost:5000 from muse-io with the flag to add the timestamp from the headset, and I’m recording the values in Python. When I plot the data against the timestamps, I find that the samples are clustered rather than evenly spaced: about 16 samples occur within 1/100th of a second, followed by a gap of roughly 7/100ths of a second with no data. Over one second, the appropriate number of samples (~220) arrives. Can you tell me what accounts for this clustering?
The timestamps are generated on the receiving device rather than on the Muse itself, so the delta variance you are seeing is the buffering of the Bluetooth stack: data is received over Bluetooth and then processed in batches, which produces the clustering.
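You can verify this from a recording by looking at the inter-arrival deltas of the timestamps. A minimal sketch (the `timestamps` array here is synthetic, shaped like the bursts described above; in practice you would substitute the timestamps you logged):

```python
import numpy as np

# Synthetic receive-side timestamps (seconds): bursts of 16 samples landing
# within a few milliseconds, one burst every 16/220 s, mimicking the
# Bluetooth batching. Substitute your own recorded timestamps here.
timestamps = np.concatenate([
    start + np.arange(16) * 0.0005            # 16 samples within ~8 ms
    for start in np.arange(0.0, 2.0, 16 / 220.0)
])

deltas = np.diff(timestamps)

# Split the stream into bursts wherever the gap exceeds, say, 20 ms.
gap_threshold = 0.020
boundaries = np.concatenate(([True], deltas > gap_threshold, [True]))
burst_sizes = np.diff(np.flatnonzero(boundaries))

print("median intra-burst delta: %.4f s" % np.median(deltas[deltas <= gap_threshold]))
print("median inter-burst gap:   %.4f s" % np.median(deltas[deltas > gap_threshold]))
print("samples per burst:", burst_sizes)
```

If the batching explanation is right, the burst sizes should cluster around 16 and the bursts should repeat roughly every 16/220 s ≈ 73 ms.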
Interaxon have assured me that the data is sent at a constant rate (220 Hz for the 2014 Muse, 256 Hz for the 2016 Muse) and that, for the purposes of an FFT, we should treat it as such and average out the timestamps. See this post for details.
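Under that constant-rate assumption, one way to "average out" the timestamps before an FFT is to keep the samples in arrival order, discard the bursty receive-side stamps, and reassign a uniform 1/220 s grid anchored at the first arrival. A sketch with synthetic data (the array names are placeholders for whatever you record):

```python
import numpy as np

FS = 220.0  # 2014 Muse sampling rate; use 256.0 for the 2016 model

# Synthetic stand-ins for a recorded stream: bursty receive times and the
# EEG samples that arrived with them.
burst_starts = np.arange(0.0, 2.0, 16 / FS)   # one batch every ~73 ms
recv_times = np.repeat(burst_starts, 16)      # 16 samples share each stamp
samples = np.random.default_rng(0).standard_normal(recv_times.size)

# Treat the stream as uniformly sampled: anchor a grid at the first receive
# time and space the samples exactly 1/FS apart.
uniform_times = recv_times[0] + np.arange(samples.size) / FS

# An FFT then implicitly assumes exactly this uniform spacing.
spectrum = np.fft.rfft(samples)
freqs = np.fft.rfftfreq(samples.size, d=1.0 / FS)
```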
If you want to sync the data with an external time source, you can use the micro-USB port, connected as an auxiliary electrode, to inject raw data pulses. On the 2016 MU-02 Muse use pin 4; on the 2014 MU-01 Muse use pin 2. For safety, note that the maximum voltage range is 0 to 3.3 V referenced to ground.
I also see this issue. In addition, the data is not in order: some datapoints carry timestamps more than half a second after the previous one, followed by more data stamped only fractions of a second later.
I want to sync data in time across two or more Muses, and this seems nearly impossible without knowing the real time of each sample. While most of the data seems to be within 1/10th of a second (which is already loose), these half-second jumps worry me. I’m logging the data on an iPhone 6 Plus.
I would like to upvote the need for accurate timestamps.
It would be great if either:
- MuseLib would interpolate the actual (best-guess) timestamp and also provide its uncertainty, or
- the museStat examples would include code to calculate an accurate timestamp and its uncertainty.
This is needed for the data to be usable in any scientific experiment that involves events happening in real time, or that compares data from more than one Muse.
When designing the 2018 model, please have museIO send the time to the Muse and have the Muse stream back the actual timestamp (or milliseconds elapsed) so that an accurate time can be calculated.
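Until something like that exists in the SDK, a best-guess timestamp and uncertainty can be estimated on the receiving side: fit a least-squares line to (sample index, receive time); the slope is the effective sample period and the residual spread is a rough per-sample uncertainty. A sketch, assuming samples arrive in order (synthetic bursty timestamps stand in for a real recording):

```python
import numpy as np

FS_NOMINAL = 220.0  # 2014 Muse

# Synthetic receive-side timestamps: every sample in a 16-sample Bluetooth
# batch is stamped with the batch's processing time.
n_bursts = 30
recv_times = np.repeat(np.arange(n_bursts) * 16 / FS_NOMINAL, 16)
idx = np.arange(recv_times.size)

# Least-squares line through (sample index, receive time): the slope is the
# effective sample period, the intercept the stream start time.
period, t0 = np.polyfit(idx, recv_times, 1)
best_guess = t0 + period * idx

# The residual spread gives a rough 1-sigma timestamp uncertainty.
uncertainty = (recv_times - best_guess).std()

print("effective rate: %.1f Hz" % (1.0 / period))
print("timestamp uncertainty: ~%.1f ms" % (uncertainty * 1e3))
```

With real data the fitted rate should also absorb any clock drift between the headset and the logging device, which matters when aligning recordings from two Muses.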
The timestamp is generated on the receiving device at the point at which the data is processed, so there is no way to improve the accuracy with the current setup.
The only way for Interaxon to improve it would be for the Muse device itself to send a timestamp (or at least an internal sample counter), and that could only be done with a firmware update and possibly hardware changes.
So what is the recommended approach to using the 16 samples that are transmitted at nearly the same time? Can I assume that they arrive in order and were sampled at even intervals? Should I average the data? Should I eliminate 15 of the samples at random? I have no idea how to move forward here…
Use all the data. Work on the assumption that each sample was generated at the proper interval, and either ignore the timestamps or average out the timestamp deltas so the samples are evenly spaced.
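One concrete way to do that (a sketch, not an official recommendation): keep all 16 samples of each batch in arrival order and respread their timestamps at the nominal period, counting backwards from the batch's arrival time:

```python
import numpy as np

FS = 220.0   # nominal rate for the 2014 Muse
BATCH = 16   # samples per Bluetooth batch, as observed

# Synthetic example: 10 batches, every sample stamped with its batch's
# arrival time, mimicking the clustered timestamps from the stream.
arrivals = 1000.0 + np.arange(10) * BATCH / FS
recv_times = np.repeat(arrivals, BATCH)

# Within each batch, assume the k-th sample (k = 0..15) was actually
# measured (BATCH - 1 - k) / FS seconds before the batch arrived.
offsets = (np.arange(recv_times.size) % BATCH - (BATCH - 1)) / FS
respread = recv_times + offsets

# The respread timestamps are strictly increasing and evenly spaced at 1/FS.
```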