Why did Interaxon need machine learning for their Calm metric?


#1

I'm working on my own neurofeedback-based app, and I've been getting familiar with all the traditional techniques, such as reward and inhibit frequencies, theta/beta ratios, that sort of thing. However, as I understand it, Interaxon uses proprietary machine-learning techniques for their metric. Why was it necessary to bring in that level of complexity? What problem would it solve that traditional neurofeedback can't?
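
For concreteness, here is a minimal sketch of the kind of traditional metric I mean, a theta/beta ratio from Welch band powers. The band edges and sample rate are just the usual textbook assumptions, nothing to do with Interaxon's metric:

```python
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sample rate in Hz; adjust to your hardware

def band_power(signal, fs, lo, hi):
    """Average power spectral density between lo and hi Hz."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

def theta_beta_ratio(signal, fs=FS):
    """Classic reward/inhibit style metric: theta (4-8 Hz) over beta (13-30 Hz)."""
    return band_power(signal, fs, 4, 8) / band_power(signal, fs, 13, 30)

# one channel of fake EEG, just to show the call
eeg = np.random.randn(FS * 10)
print(theta_beta_ratio(eeg))
```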


#2

One equation may not fit all of our emotional states. For argument's sake, let's say 2×Alpha/Beta decides your calm state; for me it might be 2.8×Alpha/1.3×Beta… so I would think the machine learning is doing individual profiling and then deciding states from that. It's their secret, and don't expect them to tell us anything about it, for two reasons: you and I would then incorporate that method into our apps, making theirs less competitive, or someone might prove their theory wrong. I think they are doing a good job, and I am sure over time these things will become more transparent.
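
To make the individual-profiling idea concrete, here is a rough sketch of what fitting per-user weights on band powers could look like. The features, labels, and choice of logistic regression are purely my own assumptions for illustration, not anything Interaxon has disclosed:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-user calibration epochs: columns are [alpha power, beta power],
# label 1 = "calm", 0 = "not calm" (made-up numbers).
X = np.array([[8.2, 2.1], [7.9, 2.4], [3.1, 5.0],
              [2.8, 4.7], [9.0, 1.9], [3.3, 5.2]])
y = np.array([1, 1, 0, 0, 1, 0])

# The model learns its own weighting of alpha vs. beta for this user,
# instead of a fixed hand-picked ratio like 2×Alpha/Beta.
model = LogisticRegression().fit(X, y)
print("per-user weights:", model.coef_)
print("calm probability:", model.predict_proba([[7.5, 2.2]])[0, 1])
```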


#3

Machine learning might pick up on nonlinear patterns as well, like rewarding certain ranges of alpha, but not above or below, and only in combination with certain ranges of beta, and so on.

Or it might be a buzzword flim-flam marketing smoke screen.

Or a little of both.
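
On the first possibility (nonlinear patterns): a decision tree is one simple model that naturally learns exactly that kind of "this range of alpha, but only with that range of beta" rule. Everything below is made-up illustration, not Interaxon's method:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical epochs: columns [alpha power, beta power]; label 1 = reward, 0 = inhibit.
# The "reward" region is a middle band of alpha combined with low beta.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 2))
y = ((X[:, 0] > 4) & (X[:, 0] < 7) & (X[:, 1] < 3)).astype(int)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["alpha", "beta"]))  # recovers the range-based rule
```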


#4

The point of machine learning (ML) techniques is to make this easier.
For example, I know nothing about neuroscience, but by just throwing the data at an ML algorithm like an RNN and labelling it as 'eating chocolate', the algorithm might be able to find a strong pattern for that vs. 'twiddling my fingers'.
It's not that hard (fresh computer science graduates will probably have learned this), and it is free.
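
A minimal sketch of that "just throw the data at it" approach, using a small Keras LSTM on toy data. The window length, channel count, and labels are made up, and random data obviously won't learn a real pattern:

```python
import numpy as np
import tensorflow as tf

# Hypothetical 2-second EEG windows: (examples, samples, channels) at an assumed 256 Hz,
# label 1 = 'eating chocolate', 0 = 'twiddling my fingers'.
X = np.random.randn(100, 512, 4).astype("float32")
y = np.random.randint(0, 2, size=100)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(512, 4)),   # a simple recurrent layer
    tf.keras.layers.Dense(1, activation="sigmoid"),   # probability of 'eating chocolate'
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=16, verbose=0)
print(model.predict(X[:1]))  # predicted probability for one window
```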

Also, 'proprietary' ML algorithms are usually based on something commonly known, then adapted to fit the situation. Lastly, they can't give away how it works (if it does work), or else they can't make a business out of it.


#5

I don't really know anything about machine learning; however, my speculative contribution is this: everyone is slightly different, and there is no universal one-size-fits-all. The alpha rhythm, for example, sits within the upper and lower limits of the frequency band, but there is a fair bit of individual variability. It is common practice to calibrate an EEG to a participant's unique alpha frequency prior to beginning any lab work with that individual, so perhaps the machine learning helps provide that degree of automatic calibration for the individual user?
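
A rough sketch of what that automatic calibration could look like: estimate the individual alpha peak from a resting recording's power spectrum. The search band, sample rate, and window length here are just illustrative assumptions:

```python
import numpy as np
from scipy.signal import welch

def individual_alpha_frequency(eeg, fs, lo=7.0, hi=13.0):
    """Estimate the individual alpha peak frequency (Hz) from a resting recording."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 4))  # ~0.25 Hz resolution at 256 Hz
    mask = (freqs >= lo) & (freqs <= hi)
    return freqs[mask][np.argmax(psd[mask])]

# Eyes-closed resting EEG would normally show a clear alpha peak;
# here we fake one at 10.5 Hz on top of noise just to show the call.
fs = 256
t = np.arange(fs * 60) / fs
eeg = np.sin(2 * np.pi * 10.5 * t) + np.random.randn(t.size)
print(individual_alpha_frequency(eeg, fs))  # ~10.5
```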