Understanding Muse Capabilities



A few months ago, my friends gave me a Muse headband for my birthday. Great gift! I've been using it to train my calmness.

I'm also a visual artist, and I got excited when I saw that Muse could send its data over OSC. I use TouchDesigner (TD), which gives me an infinite variety of interactions, so I started imagining possibilities such as controlling the brightness or the color of a scene depending on the mental state.

I've also heard of other projects that use headbands to interact with the position of objects.
That made me think I could read the Muse data to detect when my mind is trying to focus on "left" or "right", for instance.

But when I get all the data into TD, I only see hundreds of fluctuating bars.
The FFT looks more or less like a 129-band audio visualizer, and I understand that each FFT group corresponds to one channel (left ear, left forehead…). Should I see any difference in how the bars fluctuate in each channel when I focus on "left" or "right"? Or when I have positive or negative emotions?

So far, the only real responses I see are blinks in the EEG data, and when I try to "squeeze" my head, the FFT bars drop toward 0.
Are the experimental focus and mellow channels reliable? What additional processing is applied to the waves?

From my understanding (I'm not a doctor), I don't see significant differences in the alpha, beta… data either.
I thought that if I stressed my mind, the gamma or beta values would rise while theta and delta would tend toward 0, and the opposite if I calmed down. But no, I see no difference…

As you can see, I'm a little lost here, so I'm looking for some guidance on understanding the capabilities of the device.
On which points am I right, and on which am I totally wrong?
Is it a matter of brain training?
Can it read midichlorian levels? :wink:

Thanks a lot for your time and attention.



Hi Cannavizion,

In raw EEG, or in the FFT of raw EEG, it is usually quite difficult to see obvious changes just by trying different things out and watching in real time. That's why most brain-computer interfaces have to employ special signal processing and machine learning techniques in order to detect changes or states.

The Muse SDK provides a lot of data which is very useful to people with expertise in those fields. Right now it only contains a few outputs, like blinks and jaw clenches (plus concentration and mellow, although those are only experimental, YMMV), that can be used right out of the box without additional processing. For simple band-power metrics like gamma and beta, averaging/smoothing may help quite a bit.
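As a minimal sketch of the averaging/smoothing idea, here is an exponential moving average applied to a noisy band-power stream. The values and the smoothing factor are illustrative, not taken from real Muse output:

```python
# Minimal sketch: smooth a jittery band-power stream with an exponential
# moving average (EMA). Smaller alpha = smoother but slower to react.

def ema(values, alpha=0.1):
    """Return the EMA-smoothed version of a sequence of band-power readings."""
    smoothed = []
    current = None
    for v in values:
        # First sample seeds the average; later samples blend in gradually.
        current = v if current is None else alpha * v + (1 - alpha) * current
        smoothed.append(current)
    return smoothed

# A jittery (made-up) "alpha power" stream settles into a readable trend:
raw = [0.2, 0.9, 0.1, 0.8, 0.3, 0.7, 0.25, 0.75]
print(ema(raw))
```

In TD you could do the equivalent with a Lag or Filter CHOP on the incoming OSC channel; the point is just that a smoothed value is far easier to map to visuals than the raw one.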

One really simple thing you can do is to close your eyes and relax for ten seconds or so, open them, and check to see if your alpha power has increased. It’s generally the case that it will, but play around with it and see what happens to get a feel for it.
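To make the eyes-closed alpha test concrete, here is a rough sketch of how you could check it offline with NumPy on a recorded trace. The two signals below are synthetic stand-ins (noise plus a 10 Hz sine of different strengths), not real recordings; with real data you would feed in logged raw EEG instead:

```python
import numpy as np

FS = 256  # samples per second (Muse raw EEG rate)

def band_power(signal, fs, lo, hi):
    """Average spectral power between lo and hi Hz using a plain FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return power[mask].mean()

# Synthetic stand-ins for ten seconds of EEG: the "eyes closed" trace has a
# stronger 10 Hz (alpha-band) component riding on the same background noise.
rng = np.random.default_rng(0)
t = np.arange(10 * FS) / FS
eyes_open = rng.normal(0, 1, t.size) + 0.5 * np.sin(2 * np.pi * 10 * t)
eyes_closed = rng.normal(0, 1, t.size) + 2.0 * np.sin(2 * np.pi * 10 * t)

alpha_open = band_power(eyes_open, FS, 8, 12)
alpha_closed = band_power(eyes_closed, FS, 8, 12)
print(alpha_closed > alpha_open)  # prints True for these synthetic traces
```

The same comparison works on the live data: average the alpha channel over a ten-second eyes-open window, then over an eyes-closed window, and compare.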


I am trying to figure this out too. I have set up MuseLab with separate channels, each showing a single band from one of the 4 sensors. This way it is much easier to read one line rather than multiple lines all crossing over each other.

Once you understand what the different brain frequencies (theta, gamma, etc.) are like, you can correlate the changes with whatever activity you are doing.


Say you start reading a book: you may see alpha go up and theta go down. Then you can check whether the same thing happens with other activities, and you'll know that's what concentration looks like for you.
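One simple way to do that correlation is to tag each logged band-power sample with the activity you were doing and compare per-activity averages afterwards. This is a hypothetical sketch; the activity names and readings are made up:

```python
# Sketch: tag each logged band-power sample with an activity label, then
# compare per-activity averages. Labels and values here are invented.
from collections import defaultdict

def activity_averages(samples):
    """samples: iterable of (activity, band, value) -> average per pair."""
    sums = defaultdict(lambda: [0.0, 0])
    for activity, band, value in samples:
        entry = sums[(activity, band)]
        entry[0] += value
        entry[1] += 1
    return {key: total / count for key, (total, count) in sums.items()}

log = [
    ("reading", "alpha", 0.8), ("reading", "theta", 0.3),
    ("reading", "alpha", 0.9), ("reading", "theta", 0.2),
    ("resting", "alpha", 0.4), ("resting", "theta", 0.6),
]
avgs = activity_averages(log)
print(avgs[("reading", "alpha")], avgs[("resting", "alpha")])
```

If "reading" consistently shows higher alpha and lower theta than "resting" across sessions, you have found a pattern worth mapping to your visuals.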

I am also trying to route the channels to audio so I can hear my brainwaves in real time; see the thread here:


p.s. the signals Muse is picking up ARE from the midichlorians, we are just being fooled this whole time :expressionless: