Comparing Muse output with Mindwave output


I received my Muse this past Saturday…my previous experience is with the Neurosky Mindwave. I am working on a music project that uses EEG information from the headset, converted to OSC data. Going from the Mindwave to the Muse has made a lot more information available, but not being a researcher, I’m having some difficulty finding even a basic correlation between the data generated by the two headsets. I’d appreciate some suggestions on how to use the Muse data in ways that are somewhat familiar, given my experience with the Mindwave.

There is a lot of data here. I almost don’t know where to start.

Also, it isn’t really clear why I would pick one preset over the others. Is there a good primer for becoming more familiar with this technology?

Sorry if this isn’t the right forum…I wasn’t quite sure which one was the most appropriate.



I’m curious, what kind of info did the Mindwave give you?

I’ve been trying to figure out which messages are best to use for quite a while also, but for now I’m using the processed relative readings they provide (I figure they must provide them for a reason). Continued experimentation is probably the best way to figure out what to do.


The Mindwave provides Alpha, Theta, High Beta, Low Beta, Low Gamma, High Gamma and a raw data stream.

I’m experimenting tonight with the Muse, reading the /muse/dsp/elements/ values. I am pulling all four channels of data, applying some simple equations, converting to MIDI, then sending to a soft synth. Occasionally, instead of a float, I’ll get the character string “nan”. Why would there be a “nan” in the data stream?
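For what it’s worth, here is a minimal sketch in Python of the band-power-to-MIDI step described above. The function name, note range, and the assumption that band values arrive as relative 0.0–1.0 floats (or as “nan”) are mine, not anything official from the Muse docs:

```python
import math

def band_to_midi_note(power, low_note=36, high_note=84):
    """Map a relative band-power value (0.0-1.0) onto a MIDI note range.

    Returns None for unusable samples, since the stream can occasionally
    deliver NaN (or even the literal string "nan") instead of a float.
    """
    try:
        value = float(power)  # handles both floats and the string "nan"
    except (TypeError, ValueError):
        return None
    if math.isnan(value):
        return None  # skip this sample rather than send a bogus note
    value = min(max(value, 0.0), 1.0)  # clamp to the expected range
    return int(round(low_note + value * (high_note - low_note)))
```

With that guard in place, a NaN sample simply produces no note instead of crashing the MIDI conversion.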


I am not sure if I’m in the right forum here, but I am only interested in viewing the different Hz cycles my brain is putting out during my meditation sittings. Can anybody recommend an interface/program I can use to get these Hz cycle numbers? Muse made reference to an “SDK” – can anyone provide more information? I use OSX and iOS. Thanks


Nice, yeah I’ve made some random ambient noise already myself based on the brainwaves, but I bet yours will sound better. I ran into those NaN readings too. I believe they occur when a sensor misses a reading, or something along those lines. NaN is actually treated as a number (probably -infinity, or the result of the hardware trying to divide by 0 at some point?), so I just filtered them out with a [split 0. 1.].

Look here


You will see NaN on your frequency band outputs during a noisy signal or an artifact like a blink. This is because the signal is too noisy to give a meaningful representation of the frequency bands, so we ignore it until the artifact is gone. For now, you should build your software to expect NaN during bad signals and artifacts and react to it accordingly.
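One common way to “react accordingly” is to hold the last good value whenever a NaN arrives, which is roughly what the [split 0. 1.] trick does in Max/Pd. A minimal Python sketch of that idea (the class name and default are my own choices, not part of any Muse API):

```python
import math

class HoldLastGood:
    """Pass samples through, but repeat the last valid value on NaN.

    During an artifact (blink, noisy signal) the band outputs go to NaN;
    holding the previous reading keeps downstream mappings stable.
    """

    def __init__(self, initial=0.0):
        self.last = initial

    def __call__(self, sample):
        if not math.isnan(sample):
            self.last = sample  # update only on valid readings
        return self.last
```

Dropping the sample entirely is the other obvious choice; holding the last value tends to sound smoother if the data is driving a synth parameter.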


May I know what kinds of artifacts were removed? Including blinks? Thanks!