Is there a way to see absolute band powers?


#1

I’ve just learned that the dsp/elements alpha, beta, theta, delta and gamma values are relative band powers. As I understand it, the numbers shown are relative to what the band power was before, which means that by looking at those numbers I cannot tell what the dominant brainwave band is at a given moment, right?

So is there a way to find out what the absolute band powers are, so that I know which of the bands is the dominant brainwave band at the moment?


#2

Where does it say that?


#3

Or are they absolute band powers and I need relative ones? I’m a bit confused.

The real question is: how can I determine which of the frequency bands is the dominant one at any given time?

Because if I look at the elements now, Delta always has the most power. But that doesn’t mean that Delta is the dominant brainwave band; it only means that slower waves carry more electrical power. So is there a method to see how active each band is and how each band relates to the others? Something like: Alpha 0.6 / Beta 0.4 / Theta 0.2 / Delta 0.1 <- not based on electrical power but on activity, so that in this example Alpha is the dominant brainwave band.
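
To put what I mean into code, roughly (just a sketch; the baseline values are placeholders, and how to get comparable per-band numbers in the first place is exactly my question):

    // Sketch of what I mean by "activity": divide each band's current power by a
    // per-band baseline so the bands become comparable to one another, instead of
    // comparing raw electrical power (where the slow bands always win).
    // The baseline values are placeholders, not anything Muse provides.
    static double[] activity(double[] currentPower, double[] baselinePower) {
        double[] ratios = new double[currentPower.length];   // alpha, beta, theta, delta
        for (int i = 0; i < currentPower.length; i++)
            ratios[i] = currentPower[i] / baselinePower[i];  // > 1 = more active than usual
        return ratios;                                       // e.g. { 0.6, 0.4, 0.2, 0.1 }
    }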


#4

What about writing some code to do a windowed FFT on the data if you want to compare specific frequencies relative to the total signal power? Then you would have full control over the start and stop frequencies and how they are calculated.

A snippet (for example) -

class WindowedFft {

    // Assumed to be declared elsewhere in the class: samples[][] (circular raw
    // EEG buffer), index, maxSize, windowSize, resolution (Hz per FFT bin),
    // highPass/lowPass cutoff frequencies, EEG_CH_PER_PACKET, BLACKMAN_ALPHA,
    // buffer, fft (an in-place real FFT, e.g. JTransforms' FloatFFT_1D), and
    // the WindowType enum and FftOutput holder class.

    // Multiply one sample by the window coefficient for position n.
    float calcWindowPoint(float value, int n, WindowType window) {
        switch (window) {
        case HANN:
            return (float) (value * (0.5 * (1 - Math.cos((2 * Math.PI * n)
                    / (windowSize - 1)))));
        case HAMMING:
            return (float) (value * (0.54 - 0.46 * Math.cos((2 * Math.PI * n)
                    / (windowSize - 1))));
        case BLACKMAN:
            return (float) (value * ((1 - BLACKMAN_ALPHA) / 2 - 0.5
                    * Math.cos((2 * Math.PI * n) / (windowSize - 1)) + (BLACKMAN_ALPHA / 2)
                    * Math.cos((4 * Math.PI * n) / (windowSize - 1))));
        case BLACKMAN_HARRIS:
            return (float) (value * (0.35875f - 0.48829f
                    * Math.cos((2 * Math.PI * n) / (windowSize - 1)) + 0.14128f
                    * Math.cos((4 * Math.PI * n) / (windowSize - 1)) - 0.01168f
                    * Math.cos((6 * Math.PI * n) / (windowSize - 1))));
        default:
            return value;
        }
    }

    // Copy len windowed samples of one channel into dest and remove the DC offset.
    public void getWindow(float dest[], int ch, int offset, int len) {

        double sum = 0;

        /* apply window */
        for (int i = index + offset, j = 0; j < len; i += 1, j += 1) {
            i %= maxSize;
            // pass the window position j, not len
            dest[j] = calcWindowPoint(samples[ch][i], j, WindowType.HAMMING);
            sum += dest[j];
        }

        double mean = sum / len;

        /* subtract dc offset */
        for (int j = 0; j < len; j += 1)
            dest[j] -= mean;
    }

    // Sum the bin magnitudes between the low and high cutoff frequencies.
    float calcBandPower(float buf[], float low, float high) {

        float power = 0;

        for (float f = low; f < high; f += resolution) {
            int bin = (int) (f / resolution);

            // magnitude of this frequency bin
            power += buf[bin];
        }

        return power;
    }

    void doFFT() {
        FftOutput result[] = new FftOutput[EEG_CH_PER_PACKET];

        /* for each channel */
        for (int ch = 0; ch < EEG_CH_PER_PACKET; ch += 1) {
            getWindow(buffer, ch, 0, windowSize);
            fft.realForward(buffer);

            result[ch] = new FftOutput();
            result[ch].fft = new float[buffer.length / 2];

            // convert the interleaved real/imaginary output to magnitudes
            for (int i = 0, j = 0; i < buffer.length; i += 2, j += 1) {
                result[ch].fft[j] = (float) Math.sqrt(buffer[i] * buffer[i]
                        + buffer[i + 1] * buffer[i + 1]);
            }

            // total power between the highpass and lowpass cutoffs
            result[ch].total = calcBandPower(result[ch].fft, highPass, lowPass);
        }
    }
}
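
Then comparing the bands is just a matter of calling calcBandPower with different edges and normalizing by the total, roughly like this (the band edges here are only the usual textbook ranges, nothing Muse-specific, and this would live in the same class):

    // Rough usage sketch: per-band power from one channel's magnitude spectrum,
    // normalized by the sum of the bands so they can be compared as fractions.
    void compareBands(FftOutput out) {
        float delta = calcBandPower(out.fft, 1f, 4f);
        float theta = calcBandPower(out.fft, 4f, 8f);
        float alpha = calcBandPower(out.fft, 8f, 16f);
        float beta  = calcBandPower(out.fft, 16f, 30f);
        float total = delta + theta + alpha + beta;
        if (total > 0)
            System.out.printf("delta %.2f theta %.2f alpha %.2f beta %.2f%n",
                    delta / total, theta / total, alpha / total, beta / total);
    }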


#5

blah… that doesn’t look nice. Should post on a repo I guess.


#6

thedas,

The scores given by the Muse elements are relative to themselves. That is, your alpha score will rise if your current alpha value is larger than your previously recorded alpha values. This means the scores are relative to yourself, but not to one another.

Electrical power is not the best way to categorize your frequency bands, as each frequency band tends to have its own scale. Beta may have much less power than alpha, delta and theta, as power tends to fall off logarithmically with frequency.

You may also find your delta is influenced by your eye movements and blinking, as artifacts are most present at the lowest frequencies. Although we do try to remove artifacts as best we can, it is possible they will still show up in your delta waves a little.

As MattC has said, you can access the raw frequency data as well to do your own frequency analysis if you wish.


#7

MattC & Farough, thanks so much for the replies. That clarifies a lot!

The frequency bands are divided something like this:

Beta: 16 - 30 Hz
Alpha: 8 - 15 Hz
Theta: 4 - 7 Hz
Delta: 1 - 4 Hz

So I need to be using the /muse/dsp/bandpower/raw_fft path, which is documented as:

130 decimal values with a range of roughly -4.0 to 2.0. This represents the FFT for the first channel, showing the absolute power on a log scale (dB) of each frequency from 0 Hz - 110 Hz, divided into 130 bins.

This means that I need to take, for example, the Theta band:

/muse/dsp/bandpower/raw_fft0 4 - 7
/muse/dsp/bandpower/raw_fft1 4 - 7
/muse/dsp/bandpower/raw_fft2 4 - 7
/muse/dsp/bandpower/raw_fft3 4 - 7

take the average of these numbers, and then I would get a number somewhere between -4.0 and 2.0, right?

Then measure this for all four bands and see which of them has the highest number between -4.0 and 2.0, to know which band is dominant at a given moment?
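
In code I imagine the averaging would look roughly like this (just a sketch; I'm assuming the 130 bins are spaced evenly from 0 to 110 Hz, so each bin covers about 0.85 Hz, and that fftChannels holds the four raw_fft arrays in dB):

    // Sketch of the averaging I describe above. Assumes 130 bins spread evenly
    // over 0 - 110 Hz (~0.85 Hz per bin) and fftChannels = the four raw_fft
    // arrays (raw_fft0..raw_fft3), with values in dB.
    static float averageBandDb(float[][] fftChannels, float lowHz, float highHz) {
        final float hzPerBin = 110f / 130f;       // my assumption about bin spacing
        int lowBin = Math.round(lowHz / hzPerBin);
        int highBin = Math.round(highHz / hzPerBin);
        float sum = 0;
        int count = 0;
        for (float[] channel : fftChannels) {     // average over the 4 channels
            for (int bin = lowBin; bin <= highBin; bin++) {
                sum += channel[bin];
                count++;
            }
        }
        return sum / count;                       // e.g. Theta: averageBandDb(fft, 4f, 7f)
    }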

I hope that I understood it correctly, and hope that you can confirm or explain further!

Thanks so much for your help!


#8

Hi Thedas,

As I stated above, you’ll find the higher frequencies tend to have lower total power, because EEG power degrades logarithmically with frequency. That is, Delta power will always be higher than Beta power; they are on entirely different scales, which is why they are plotted logarithmically as well. Comparing absolute power across frequency bands will not produce what you’re hoping to achieve: the Delta and Theta bands will dominate in dB power far more than the higher frequencies.

Additionally, you will find some frequency bands are more significant in certain channels, as each channel captures the signal from a different direction relative to the other channels, and some frequencies will be more dominant in some channels than in others. As such, taking an average across channels may not lead to the best results.

Ultimately the human brain is a very complex mechanism, and it is difficult to summarize the meaning of all the frequencies in all the channels simply. However, if you’re interested, you should read more about past research on the subject to gain a better understanding of the frequencies’ significance. EEG is a fairly old technology and there is a lot of information out there.

For the time being, our scores between 0 and 1 for each band are a useful metric for the comparisons you’re hoping to achieve. These scores are relative to their frequency band as well as to the user, and are broken into the individual channels so you can distinguish which channel is exhibiting the frequency power.
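
To give a rough idea of what "relative to yourself" means in general terms (this is an illustration only, not the formula we actually use), a band value can be rescaled against the range seen so far for that band on that channel:

    // Illustration only (not the actual Muse scoring method): the general idea
    // of a score that is relative to the user's own history for one band on one
    // channel, rescaling the current value against the min and max seen so far.
    class RelativeScore {
        private double min = Double.POSITIVE_INFINITY;
        private double max = Double.NEGATIVE_INFINITY;

        double update(double bandValue) {
            min = Math.min(min, bandValue);
            max = Math.max(max, bandValue);
            if (max == min) return 0.5;               // no spread observed yet
            return (bandValue - min) / (max - min);   // 0..1, relative to yourself
        }
    }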

Hope that helps


#9

Hi Farough,

Thanks, I indeed posted too quickly. I saw it when reading your previous post again. Sorry for the inconvenience.

But using the elements metrics I would still have higher scores for Delta and Theta, and couldn’t really see which of the bands is the dominant one, right?

Is there any possibility that you guys will develop dsp elements for measuring the dominant band in the near future? I think that would be great for a lot of developers, as these bands say something about the state of awareness, which in turn could be translated into apps with practical uses for end consumers.

Thanks for thinking this through with me!


#10

Farough,

Undoubtedly the brain is immensely complex and one could spend a lifetime uncovering the nuances in its function. I plan on doing that myself, in one way or another, as I find it all endlessly interesting. However, for the sake of effectively coding applications, it might be pertinent to cut to the chase and just ask:

The Calm app seems to do pretty well at identifying a certain brain state that I think a lot of us are after (call it, say, calm), so how exactly does it do it? Is it using the raw FFT, or the relative measurements? Is it taking into account only (high readings of) alpha, or also an absence of other waves, like beta? Is it averaging readings from the 4 sensors or doing something else, like using only the dominant reading at any given moment? How exactly does calibration work? Any other idiosyncrasies of the final method, things that took a lot of trial and error to hammer out and weren’t immediately obvious to begin with?

Seems to me that it’s in the company’s best interest to give developers as many details on this process as possible, even a default “here’s how we did it and where you can begin” sort of standardization model, as you wouldn’t want to see apps for your product entering the market that aren’t actually identifying brain states accurately (or as well as they could be). I know I’m certainly thirsty for more information. It’s fun to play around with and I’m sure with enough time and experimentation I can get it working pretty well on my own, but why, as they say, reinvent the wheel?

In any case, thanks for all the work I know you’ve put into it,
Ryan


#11

Hi to_the_sun,

Unfortunately, the exact methods used in the Calm app are proprietary. We won’t be posting the details of that process here on this forum.

That being said, we do care about the development community, and we have some exciting features coming in future SDK tools. I can’t get into the details of these features here, but they are designed to help developers and we’re excited to release them.


#12

Did you find an answer to this question? I thought looking at the relative EEG data does that, i.e. if you add Alpha, Delta, Gamma, Beta and Theta, it adds up to 1 every single time in a data set I recorded. So I assumed that the relative EEG given by Muse Direct gives the dominant brain wave - am I incorrect in that assumption?
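
For what it's worth, this is roughly how I'm treating those values (assuming one relative value per band per sample, which in my recording always summed to about 1):

    // Rough sketch of my assumption: if the five relative values for a sample
    // add up to ~1, the largest of them marks the dominant band for that sample.
    static String dominantBand(double delta, double theta, double alpha, double beta, double gamma) {
        String[] names = { "delta", "theta", "alpha", "beta", "gamma" };
        double[] values = { delta, theta, alpha, beta, gamma };
        int best = 0;
        for (int i = 1; i < values.length; i++)
            if (values[i] > values[best]) best = i;
        return names[best];
    }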