Parse raw data into alpha, beta, delta, etc., rather than the four channels TP9, TP10, Fp1, Fp2


[SIZE=12px]Hi Muse Developers (Tom & Farough)[/SIZE]

[SIZE=12px]I am a student currently writing my own MCP parser for the Muse data, and I would like to parse the raw data into alpha, beta, delta, gamma, and theta bands, similar to what your MuseIO -dsp command produces, so that I can plot them, rather than parsing the data into those four raw channels. The page describing the parsing process for compressed EEG packets is quite clear, and other posts by Hashmeer, DavidVivancos, and Eduardo have helped me a lot. There is a section about [/SIZE][B]Absolute Band Powers[/B][SIZE=12px], which sounds like what I’m looking for, but I didn’t find any further information on how to decode the raw data into alpha, beta, delta, etc. Is there any documentation I can read that would help me get what I want, or is it impossible to do? Could you point me in some direction so I can achieve my goal?[/SIZE]

[SIZE=12px]I hope I have explained what I want clearly enough. If there is any more information you need from me, please let me know and I will get back to you as soon as possible.[/SIZE]

[SIZE=12px]Thanks very much[/SIZE]


[SIZE=12px]And one more question: I get the values below from MuseIO. Are the four values after "ffff" the current readings from the four channels?[/SIZE]

[SIZE=12px]/muse/dsp/elements/alpha ffff 0.203090 nan 0.133217 0.064190[/SIZE]
[SIZE=12px]/muse/dsp/elements/beta ffff 0.240029 nan 0.288321 0.069284[/SIZE]
[SIZE=12px]/muse/dsp/elements/delta ffff 0.299745 nan 0.301209 0.706207[/SIZE]
[SIZE=12px]/muse/dsp/elements/gamma ffff 0.083992 nan 0.173692 0.030702[/SIZE]
[SIZE=12px]/muse/dsp/elements/theta ffff 0.173143 nan 0.103561 0.129617[/SIZE]


Hi Sean,

If you want to get power band information from the raw Muse data without using MuseIO, you’ll have to calculate it yourself using a Fast Fourier Transform library of some kind. Power band information is not directly encoded in the MCP packets; it has to be derived.
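As a rough illustration, here is a minimal sketch of deriving band powers from one channel of raw samples with numpy. It assumes a 220 Hz sampling rate, and the band edges below are illustrative conventions, not necessarily the exact ranges MuseIO uses:

```python
import numpy as np

# Illustrative EEG band edges in Hz (exact ranges vary by convention).
BANDS = {
    "delta": (1.0, 4.0),
    "theta": (4.0, 8.0),
    "alpha": (8.0, 12.0),
    "beta": (12.0, 30.0),
    "gamma": (30.0, 44.0),
}

def band_powers(samples, fs=220.0):
    """Return absolute power per band for one channel of raw EEG.

    samples: 1-D array of raw samples from a single channel.
    fs: sampling rate in Hz (220 Hz assumed here).
    """
    samples = np.asarray(samples, dtype=float)
    # Remove the DC offset and apply a Hann window to reduce spectral leakage.
    windowed = (samples - samples.mean()) * np.hanning(len(samples))
    spectrum = np.fft.rfft(windowed)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    psd = (np.abs(spectrum) ** 2) / len(samples)
    # Sum the power in each band's frequency range.
    return {
        name: psd[(freqs >= lo) & (freqs < hi)].sum()
        for name, (lo, hi) in BANDS.items()
    }

# Sanity check: a pure 10 Hz sine should land almost entirely in alpha.
t = np.arange(0, 1.0, 1.0 / 220.0)
powers = band_powers(np.sin(2 * np.pi * 10.0 * t))
```

In practice you would compute this over a sliding window of recent samples (e.g. one second, updated several times a second) rather than over the whole recording.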

If your goal is to obtain processed data from Muse, then I would recommend just using MuseIO rather than writing your own parser; it provides much more than raw data. You can always replace your MuseIO/OSC layer later with the native Bluetooth support that the SDK will eventually provide. If your goal really is the parser itself, then you’ll also have to implement the methods that analyze the raw data.

Those four values are the alpha/beta/etc. power values for each channel, from left to right around the head. Pretty much all the power band values that MuseIO provides are per-channel. MuseIO doesn’t provide an overall “alpha” reading for the entire head, as this throws away very useful spatial information, and can be derived from the per-channel values anyway.
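To make that concrete, here is a small sketch (hypothetical helper names) that parses one of the printed OSC lines from your post and derives an overall head-wide value by averaging the per-channel values, skipping NaNs such as the bad channel in your output:

```python
import math

# One line of MuseIO's printed OSC output, copied from the question above.
line = "/muse/dsp/elements/alpha ffff 0.203090 nan 0.133217 0.064190"

def parse_band_line(line):
    """Split a printed OSC line into (path, per-channel values).

    The four floats after the 'ffff' type tag are the per-channel
    band power values, one per electrode.
    """
    parts = line.split()
    path = parts[0]
    values = [float(v) for v in parts[2:]]  # float("nan") handles 'nan'
    return path, values

def head_average(values):
    """Average the per-channel values, ignoring NaNs (e.g. a bad channel)."""
    good = [v for v in values if not math.isnan(v)]
    return sum(good) / len(good) if good else float("nan")

path, values = parse_band_line(line)
overall_alpha = head_average(values)
```

Note that this only works on the text MuseIO prints; if you subscribe over OSC directly, you would use an OSC library to receive the message and get the four floats as typed arguments instead.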


Depending on the level of knowledge one already has on the matter, the following article from Adafruit’s “Learn” series may be interesting.

It is very basic and introductory, but it gave me a good understanding of the principles of converting signals from the time domain to the frequency domain.

It applies to sound, but the principles are the same, and it has practical applications with source code for cheap microcontrollers.

[B]FFT: Fun with Fourier Transforms[/B]

BTW, Sean … from your post I can see that you are still using the old SDK.
Download the latest SDK and you’ll also get other interesting band power values, such as Absolute, Relative, and Session_Score.
Also, get to know the meanings of Quantization, Horseshoe, and Is_good, so you know whether the values you get are reliable or not.


Thanks Tom, Thanks Eduardo

I think I will go the hard way and derive the algorithm to extract the different waves that I want myself. I actually have some background in signal processing and analysis, including FFT, DFT, and DSP, so I hope that will help me at least a bit.

As my project is intended to improve some of my software programming skills, I am not that interested in using other software that could be integrated into my project.

I am currently halfway through the parsing process, and I am also spending some time writing a proposal to apply for funding for next year’s research project, in which I could take this project into a bigger picture and work on something related to human cognition (this is the reason I need to extract those alpha and beta waves).

At the moment I am not sure what I should do before the data is properly parsed into separate channels, so I hope I can keep this post as a record, and that when I start to derive the “algorithm” you can still help me with your wisdom.

Many thanks for your replies.