How to handle varying timestamp deltas?


#1

In muse-player, I’m seeing significant variance in the time deltas between successive EEG samples. For instance, I might get several samples in a row separated by about 0.00004 seconds (this happens 90% of the time), and the next one might be separated by 0.07 seconds (about 5% of the time). The mean separation over the entire session does end up converging to 0.0045 seconds, as expected (1/220 Hz sampling rate). I have two questions about this:

  1. How is it possible for Muse to generate samples that are only 0.00004 seconds apart, if it only samples every 0.0045 seconds, and if the time precision is 0.002 seconds as indicated in the developer FAQ?

  2. What is the best way to interpret/process this data? MuseLab doesn’t seem to plot the X axis directly from the raw timestamps. It seems to smooth out the variance (perhaps by applying a flat 0.0045 separation?). Is that the right thing to do? If not, what is the right thing to do here, if one wants to make the timestamps as accurate as possible?


#2

Hey Kartik,

  1. The timestamps are applied by MuseIO when the data arrives at the host computer. Much of the latency is due to buffering by the Bluetooth module. A buffer of packets might arrive all at once, making the samples look like they showed up simultaneously, or at least very quickly one after another. This would account for the smaller-than-usual time deltas you’re observing.

  2. Yes, the simplest thing to do is to assume that the sampling rate is actually constant at 220 Hz, which is more or less true (see the sketch at the end of this post).

What can make things tricky is samples getting dropped, which can happen if the Bluetooth connection is poor, because the Bluetooth buffer is pretty small. To account for this, we send out the /muse/eeg/dropped_samples and /muse/acc/dropped_samples messages, so you can see when samples are being dropped and how many. See here for info: https://sites.google.com/a/interaxon…ped_samples:-i
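
To make the constant-rate assumption concrete, here is a minimal sketch (not official InterAxon code; the struct and method names are just placeholders) that rebuilds evenly spaced timestamps at 220 Hz and uses the dropped-sample counts to leave gaps so later samples stay aligned:

// Sketch only: rebuild evenly spaced timestamps at an assumed constant 220 Hz,
// leaving gaps for samples reported lost via /muse/eeg/dropped_samples.
#include <cstdint>

const double SAMPLE_RATE_HZ = 220.0;

struct TimestampRebuilder
{
    double  startTime   = 0.0; // arrival time of the first sample, in seconds
    int64_t sampleIndex = 0;   // index of the next expected sample

    // Call once per /muse/eeg message; returns the reconstructed timestamp.
    double onEegSample(double arrivalTime)
    {
        if (sampleIndex == 0)
            startTime = arrivalTime; // anchor the series at the first arrival
        return startTime + sampleIndex++ / SAMPLE_RATE_HZ;
    }

    // Call when /muse/eeg/dropped_samples reports n lost samples.
    void onDroppedSamples(int64_t n)
    {
        sampleIndex += n; // skip ahead so later timestamps stay aligned
    }
};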


#3

Understood. Is there an option to turn off the buffering? I have a good Bluetooth connection, with no dropped packets being reported in the data streams.

The reason I’m asking is that I’d like to use the timestamps to synchronize the EEG data to reference events in ERP experiments. For example, I might display a particular image, record the response of FP1 from 0-500 ms after the image is displayed, repeat the trial several times, and average the data to plot a response curve for FP1 over 0-500 ms. If the Bluetooth buffering skews the EEG timestamps randomly, it could reduce the quality of this process, so I’m trying to think through any possible workarounds.
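
In code, the averaging I have in mind might look roughly like the sketch below. It assumes the samples already carry usable timestamps; the function name and the 220 Hz / 500 ms figures are just illustrative.

// Sketch only: average the FP1 response over 0-500 ms following each stimulus onset.
#include <algorithm>
#include <cstddef>
#include <vector>

const double SAMPLE_RATE_HZ = 220.0;
const std::size_t EPOCH_SAMPLES = static_cast<std::size_t>(0.5 * SAMPLE_RATE_HZ); // ~110 samples

// fp1        : one FP1 value per sample, in time order (same length as timestamps)
// timestamps : timestamp of each sample, in seconds, ascending
// onsets     : stimulus onset times, on the same clock as the timestamps
std::vector<double> averageEpochs(const std::vector<double>& fp1,
                                  const std::vector<double>& timestamps,
                                  const std::vector<double>& onsets)
{
    std::vector<double> mean(EPOCH_SAMPLES, 0.0);
    std::size_t trials = 0;
    for (double onset : onsets)
    {
        // first sample at or after the stimulus onset
        std::size_t start = std::lower_bound(timestamps.begin(), timestamps.end(), onset)
                            - timestamps.begin();
        if (start + EPOCH_SAMPLES > fp1.size())
            continue; // not enough data for this trial
        for (std::size_t i = 0; i < EPOCH_SAMPLES; ++i)
            mean[i] += fp1[start + i];
        ++trials;
    }
    if (trials > 0)
        for (double& v : mean)
            v /= trials; // average response at each offset after the stimulus
    return mean;
}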


#4

I have been wondering about this. What happens when samples are dropped? Does muse-io still send a value of 0 to /eeg over OSC, or is the sample simply gone?
I hope it’s the former… Otherwise I need some help reading /eeg properly…


#5

[B]Kartik[/B]: You can’t turn off buffering on the Bluetooth module, I’m afraid.

However, we’re working on adding a feature to the SDK that can estimate the correct absolute timestamps for the data fairly accurately - probably well enough to try some ERP experiments. Using that, you would then just need to make sure that the other data streams you are synchronizing with Muse are also properly labelled with absolute timestamps.
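To give a rough idea of the kind of estimation involved (this is only an illustration, not necessarily how the SDK feature will work internally), a simple least-squares fit of arrival time against sample index recovers an offset and an effective sample period, and the fitted line can then serve as the estimated absolute timestamp of each sample:

// Sketch only: least-squares fit of arrivalTime ~= offset + index * period.
// The fitted line gives an estimate of each sample's "true" timestamp.
#include <cstddef>
#include <vector>

void fitTimestamps(const std::vector<double>& arrivalTimes, double& offset, double& period)
{
    const std::size_t n = arrivalTimes.size();
    if (n < 2) { offset = n ? arrivalTimes[0] : 0.0; period = 1.0 / 220.0; return; }

    double sumX = 0, sumY = 0, sumXX = 0, sumXY = 0;
    for (std::size_t i = 0; i < n; ++i)
    {
        const double x = static_cast<double>(i); // sample index
        const double y = arrivalTimes[i];        // jittery host arrival time
        sumX += x; sumY += y; sumXX += x * x; sumXY += x * y;
    }
    period = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
    offset = (sumY - period * sumX) / n;
    // Estimated timestamp of sample i: offset + i * period
}
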

We might be able to get this up and running quite soon, so please stay tuned and we’ll let you know! Thanks for bringing the topic up, we know how important synchronization is to many developers and researchers.

[B]Rxz_14[/B]: MuseIO doesn’t send EEG data when samples are dropped. Instead, it sends out /muse/eeg/dropped_samples messages. Depending on your application, you are probably safe assuming the sampling rate is constant at 220 Hz. Kartik’s ERP experiments require fairly good time synchronization between a few different data sources, which is when accounting for dropped packets becomes really important.
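
For illustration, listening for those messages with oscpack might look roughly like the sketch below. The class name, the running counter, and the assumption of a single integer argument per message are mine, so check the documentation linked above for the exact message format.

// Sketch only: count dropped EEG samples reported by MuseIO (oscpack listener).
#include <cstring>
#include "osc/OscPacketListener.h"

class DroppedSampleListener : public osc::OscPacketListener
{
public:
    long totalDropped = 0; // running count of lost EEG samples (hypothetical bookkeeping)

protected:
    void ProcessMessage(const osc::ReceivedMessage& m,
                        const IpEndpointName& /*remoteEndpoint*/) override
    {
        if (std::strcmp(m.AddressPattern(), "/muse/eeg/dropped_samples") == 0)
        {
            osc::ReceivedMessageArgumentStream args = m.ArgumentStream();
            osc::int32 dropped = 0;
            args >> dropped;         // assumed: a single integer count per message
            totalDropped += dropped;
        }
    }
};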


#6

Great, thanks for the response Tom!


#7

My concern is that I’m reading 4 values from each /eeg message and putting one into each channel. So if a sample is dropped, the channels would get shifted…
Something like this, using oscpack, where DATA is just an array to store the latest input:
#include <cstring>
#include "osc/OscReceivedElements.h"
#include "ip/IpEndpointName.h"

// DATA and the OscDumpPacketListener class are declared elsewhere
void OscDumpPacketListener::ProcessMessage(const osc::ReceivedMessage& m, const IpEndpointName& remoteEndpoint)
{
    if (strcmp(m.AddressPattern(), "/muse/eeg") == 0)
    {
        osc::ReceivedMessageArgumentStream args = m.ArgumentStream();
        for (int i = 0; i < 4; ++i) // one value per EEG channel
        {
            float d;
            args >> d;   // read one channel value from the OSC message
            DATA[i] = d; // store the latest value for channel i
        }
    }
}


#8

If samples are dropped, the corresponding /eeg messages simply don’t get sent. An /eeg message is either sent with four values, one for each channel, or not sent at all, so the channels will never get shifted.


#9

Great, thanks.


#10

Hi Tom,

Is the code used in MuseLab available, so programmers can see how things are done?
This would be a great help for us all.


#11

Hi HvM,

We will release the source code for MuseLab shortly. It’s in the works! We’ll post about it here and send out an announcement to the mailing list when it’s available.


#12

Hi Tom, any update on this?


#13

Unfortunately I don’t have any updates to provide on the topic of ERPs yet, except to say that we’re aware of the need and intend to address it in the future. Apologies for the vagueness, but that’s about all I can say at the moment. If and when there’s an update, we won’t be shy about letting people know :slight_smile:


#14

Understood, thanks.


#15

I would like to upvote the need for accurate timestamps.

It would be great if either

  1. MuseLib would interpolate the actual (best-guess) timestamp and also provide the uncertainty, or
  2. the museStat examples would include code to calculate an accurate timestamp and its uncertainty.

This is needed for the data to be useful in any type of scientific experiment that involves events happening in real time, or that compares data from more than one Muse.

In designing the 2018 model, please have MuseIO send the current time to the Muse and then have the headset stream back the actual timestamp (or the milliseconds elapsed since), so that an accurate time can be calculated.
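
For what it’s worth, the kind of calculation such a feature would enable is the classic round-trip clock-offset estimate. This is just a sketch of the idea, not anything the current hardware supports:

// Sketch only: classic round-trip clock-offset estimate (NTP-style).
// t0 = host time when the request is sent
// t1 = device time when the request is received
// t2 = device time when the reply is sent
// t3 = host time when the reply is received
double clockOffset(double t0, double t1, double t2, double t3)
{
    // Positive result means the device clock runs ahead of the host clock.
    return ((t1 - t0) + (t2 - t3)) / 2.0;
}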

Thanks


#16

Sorry to bump an old thread… You mention the 2018 model; do you or anyone else know whether a new model will be available soon? If so, what might the differences be between the 2016 model and the new one?