My primary goal with Muse is to track and identify brain states while playing musical instruments; however, I thought I might want to use it for making music directly as well. For example, you might nod your head to create a beat, blink to throw in snares, and so on; it could be fun. Unfortunately, I seem to be receiving OSC messages (in Max/MSP) only about 10 times per second, and latency upwards of 100+ ms would make an application like this all but useless. Now I know the Muse is actually taking readings far more frequently than that, and the rate at which it sends messages is clearly variable (some messages, like battery info, transmit only every 10 seconds), which raises the question: is there a command of some kind that can alter this?
You can receive the raw EEG and process it yourself with an FFT to get “blink” (which shows up as a wide-band power jump). Similarly, the accelerometer data has no processing applied, but you can do some simple things to filter out “nods”. The latency of the EEG data is low (~15 ms), but any processing takes some time, and sending it back out takes more.
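A minimal sketch of that idea in Python (the thread’s own tools are Max/MSP and C#, so this is only an illustration): compute broadband power over short FFT windows of raw EEG and flag windows whose power jumps well above a running baseline. The 220 Hz sample rate comes from later in the thread; the window size, jump factor, and synthetic test signal are all made-up illustrative values, not Muse specifics.

```python
import numpy as np

FS = 220  # Muse raw EEG sample rate in Hz (per the thread)

def band_power(window):
    """Broadband spectral power of a short window of raw EEG samples."""
    spectrum = np.abs(np.fft.rfft(window - np.mean(window)))
    return float(np.sum(spectrum ** 2) / len(window))

def detect_blinks(samples, win=32, hop=16, jump=5.0):
    """Flag window starts whose power jumps well above a running baseline."""
    baseline = None
    hits = []
    for start in range(0, len(samples) - win, hop):
        p = band_power(samples[start:start + win])
        if baseline is None:
            baseline = p
        elif p > jump * baseline:
            hits.append(start)
        # slow exponential update so one blink doesn't drag the baseline up
        baseline = 0.95 * baseline + 0.05 * p
    return hits

# Synthetic check: quiet noise with one large transient ("blink") in the middle.
rng = np.random.default_rng(0)
sig = rng.normal(0, 1, 2200)
sig[1000:1040] += 80.0  # artificial blink-sized deflection
print(detect_blinks(sig))
```

With real data you would tune the window length and jump threshold per sensor; the point is just that a blink is a power jump you can threshold yourself instead of waiting for the DSP messages.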
It seems that since you need near real time, you will need to wait for the direct API release so you can avoid the OSC send-receive time lag…
If you don’t mind the lag, the best approach is to get the timestamp from the OSC message and then match it with your task.
EDIT: the timestamp seems to be from the send time, not the capture time?
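Matching a sample to a task event by timestamp can be a simple nearest-neighbor lookup in a sorted list. A hedged Python sketch; the units (seconds) and the example data are assumptions, and note the caveat above that the timestamp may reflect send time rather than capture time.

```python
import bisect

def nearest_sample(timestamps, values, t):
    """Return the (timestamp, value) pair closest to task time t.
    Assumes timestamps is sorted ascending, in seconds."""
    i = bisect.bisect_left(timestamps, t)
    if i == 0:
        return timestamps[0], values[0]
    if i == len(timestamps):
        return timestamps[-1], values[-1]
    before, after = timestamps[i - 1], timestamps[i]
    return (before, values[i - 1]) if t - before <= after - t else (after, values[i])

# Example: samples every ~100 ms, task event at t = 0.34 s
ts = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
vs = [10, 11, 12, 13, 14, 15]
print(nearest_sample(ts, vs, 0.34))  # → (0.3, 13)
```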
Thanks for the responses. I did realize that the accelerometer data was arriving more frequently than every ~100 ms; about the same as the EEG data, which, once I measure it in Max, falls in around the 5-40 ms range. Averaged out, this should be borderline tolerable; we’ll see. I had noticed the blink-signature dips in the side-sensor EEG and, before I even got the DSP data streaming, I had thrown together a bit of code to identify them. The DSP blink data was more accurate, though, once I got it streaming. Once FFTed, are the “wide (= low frequency?) band power jumps” starker/more identifiable than when you see them in the raw EEG? If so, it could be worth it to conduct my own FFT.
What is this direct API you speak of? It sounds intriguing.
Using --preset 10 --no-scale, I’m getting a lag of 4 seconds for EEG eye blinks, for example, and a real sample rate of about 60 readings per second for the EEG instead of 220?
In the future there should be an API that bypasses Muse-IO and gets the stream directly from the Muse, as with other consumer EEGs.
Do you mean 4 ms? What are you measuring these readings in, Muse Lab or what? Does the preset or --no-scale argument affect message frequency in some way?
Is there some place I can look up more details on this future API?
No, 4 seconds. I didn’t try without --no-scale; it should be faster since it’s the raw EEG dBs. I’m reading it through a custom app in C#.
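One way to sanity-check a figure like “60 readings per second instead of 220” is to log the local arrival time of each OSC message and compute the effective rate from the span. A small Python sketch; the arrival times below are fabricated for illustration:

```python
def effective_rate(arrival_times):
    """Effective messages per second from a sorted list of arrival times (seconds)."""
    if len(arrival_times) < 2:
        return 0.0
    span = arrival_times[-1] - arrival_times[0]
    return (len(arrival_times) - 1) / span if span > 0 else float("inf")

# e.g. 7 messages spread over 0.1 s → ~60 msg/s
times = [0.0, 0.0166, 0.0333, 0.05, 0.0666, 0.0833, 0.1]
print(round(effective_rate(times)))  # → 60
```

Note this measures message frequency, not samples: if each OSC message bundles several EEG samples, the sample rate can be higher than the message rate.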
The Muse team gives no further details or dates in other threads…
No, it doesn’t.
The scaling happens on the computer, and is just a multiplication with a scaling factor. The presets all have the same frequency.
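In other words, scaled and --no-scale data differ only by an element-wise multiply on the host, which doesn’t change the message rate. A trivial Python sketch; the factor used here is made up for illustration, not the real Muse scaling constant:

```python
SCALE = 0.5  # illustrative factor only; the actual Muse scaling constant differs

def scale_samples(raw, factor=SCALE):
    """Apply the host-side scaling: a plain per-sample multiplication."""
    return [x * factor for x in raw]

print(scale_samples([100, 200, 300]))  # → [50.0, 100.0, 150.0]
```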
Performance-wise, OSC is commonly used in audio production environments, where latency is pretty critical. It is also the supported, available API. Another API might not be that much faster.
Please be aware that this is about the frequency of the EEG and accelerometer samples; the artifact detection and general ‘DSP’ features are different. They do analysis on the received samples and always analyze time frames.
As MattC mentioned, roll your own FFT, or use whatever else you prefer for signal analysis. Our FFT is more of a convenience to get you started.