In muse-player, I’m seeing significant variance in the time deltas between successive EEG samples. For instance, I might get several samples in a row separated by about 0.00004 seconds (roughly 90% of the time), and then one separated by about 0.07 seconds (roughly 5% of the time). The mean separation over the entire session does converge to 0.0045 seconds, as expected for the 220 Hz sampling rate (1/220 ≈ 0.0045 s).
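For reference, here is roughly how I’m measuring those deltas: a minimal sketch using made-up timestamp values that mimic the pattern I actually see (the real values come from muse-player’s output).

```python
import numpy as np

# Made-up raw timestamps (seconds) mimicking the pattern I observe:
# bursts of samples ~0.00004 s apart with an occasional ~0.07 s gap.
timestamps = np.array([10.000000, 10.000040, 10.000080, 10.000120,
                       10.070120, 10.070160, 10.070200, 10.070240])

deltas = np.diff(timestamps)
print("min delta:  %.6f s" % deltas.min())   # ~0.00004 s (the common case)
print("max delta:  %.6f s" % deltas.max())   # ~0.07 s (the occasional gap)
print("mean delta: %.6f s" % deltas.mean())  # approaches 1/220 over a full session
```

I have two questions about this: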
1. How is it possible for Muse to generate samples that are only 0.00004 seconds apart, if it samples only once every 0.0045 seconds and the time precision is 0.002 seconds, as indicated in the developer FAQ?
2. What is the best way to interpret/process this data? MuseLab doesn’t seem to plot the x-axis directly from the raw timestamps; it appears to smooth out the variance, perhaps by applying a flat 0.0045 s separation. Is that the right thing to do? If not, what is the right approach if one wants to make the timestamps as accurate as possible?
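To make question 2 concrete: is something like the following the intended post-processing? This is only my guess at the “flat separation” approach; `regularize`, `FS`, and the choice of anchoring at the first raw timestamp are my own assumptions, not anything documented.

```python
import numpy as np

FS = 220.0  # nominal Muse EEG sampling rate (Hz)

def regularize(raw_timestamps):
    """Discard per-sample jitter and assign uniform 1/220 s spacing.

    Anchoring at the first raw timestamp is an assumption on my part;
    MuseLab may anchor or fit the timeline differently.
    """
    t = np.asarray(raw_timestamps, dtype=float)
    return t[0] + np.arange(len(t)) / FS

raw = [10.000000, 10.000040, 10.000080, 10.070080]  # made-up values
print(regularize(raw))  # ~[10.0, 10.00455, 10.00909, 10.01364]
```

If that is roughly right, it still means throwing away whatever real timing information the raw timestamps carry, and it inherits any jitter in the first sample’s timestamp, which is why I’m unsure it’s the correct approach.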