Download raw data from Calm sessions


#1

Hi all,

Is there a way I can access the raw data recorded during my Muse sessions?

Thanks
Philipp


#2

Hey Philipp,

There is no access to the data from Muse app sessions apart from what you see in the app right now.

However, it is possible to access and record raw data from Muse using the SDK, which is free and available on our dev site at developer.choosemuse.com.
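To give a feel for it, the recording side of an app usually boils down to a listener callback writing samples to a file. The sketch below is illustrative only; the class and callback names are placeholders, not the actual SDK API, so check the interfaces on the dev site:

```java
// Sketch of a raw-EEG recorder. All Muse-related names here (the
// onRawSample callback, the four sensor columns) are hypothetical
// placeholders standing in for whatever the real SDK exposes.
import java.io.FileWriter;
import java.io.IOException;

public class RawEegRecorder {
    private final FileWriter out;

    public RawEegRecorder(String path) throws IOException {
        out = new FileWriter(path);
        out.write("timestamp_ms,tp9,af7,af8,tp10\n"); // CSV header: one column per sensor
    }

    // Would be called by the (hypothetical) SDK listener for every raw EEG packet.
    public void onRawSample(long timestampMs, double tp9, double af7, double af8, double tp10) {
        try {
            out.write(String.format("%d,%.3f,%.3f,%.3f,%.3f%n",
                    timestampMs, tp9, af7, af8, tp10));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public void close() throws IOException {
        out.close();
    }
}
```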


#3

My app, Muse Monitor, will let you record and download the raw data.


#4

Thanks Tom, I downloaded the SDK.

Enigma644, did you get your app approved for iOS yet? Really looking forward to this!


#5

Not yet. They’re having some problems sorting the MFi stuff out. Once they get all their ducks in a row, it should get approved pretty quickly, I hope.


#6

Hey Enigma - any luck with the MFi stuff? Can’t wait for your app!


#7

They’re working on it, but still not quite there yet. I can see the light at the end of the tunnel though!


#8

Any news, Enigma644?


#9

Nothing concrete… They’re still sorting out their MFi program. Once that’s sorted, it’ll be 7-10 days for my app’s MFi approval, then ~9 days for the Apple approval!

This is why I prefer Android development! It took a whole 2 hours for Google to approve my app and have it up on the Google Play Store!


#10

Thanks for the update. Is there any way I can run the Android app on my Mac? Or is that not possible? Or can’t I connect to your private app store? I would be happy to just pay you via PayPal or what have you.


#11

Technically you can run Android on a Mac: set up an Android virtual machine using Oracle’s VirtualBox (free) for the virtualisation and an Android image from android-x86.org. Configuring Bluetooth may be a pain, as you might need a separate USB Bluetooth dongle with OS X. I’ve done this VM on my PC and, once configured, it runs really nicely. It’s a bit much to go through just to run one app though!

I could make a private sale, but if I did, your version of the app would not automatically update from the Apple App Store. Once the app is properly released, you would have to remove the version I gave you and pay again to install it from the App Store. If you don’t mind paying twice, email me directly and we can sort it out. I will need the UDID of the device you wish to install on; you can find this in iTunes.


#12

Enigma644, I have sent you a private message.


#13

As a blind user, I hope the data from Muse Monitor can be read by VoiceOver, the screen reader built into iOS. Or I hope the data can be exported in an accessible form … Excel maybe?


#14

Firstly, regarding data export: yes, you can output to Excel-compatible CSV. I also support Muse’s Google Protocol Buffer format, if you want to render the data in Muse Lab.

I just turned on TalkBack on my Android device and, much to my own surprise, Muse Monitor works quite well in all its user interface elements! Unfortunately the main display is a single graphic, so there’s no info for it to read out when you tap it. However, it looks like it’s pretty simple for me to fix this for you: I just need to populate the content description field of the image.

If you could give me some pointers on what you would like, that would be really helpful.
With the default settings, the image display contains the following:
Top left - text name of each selected brain wave and its value.
Bottom left - image of the Muse headband showing which sensors are correctly adjusted. This vanishes once the headband is on correctly, unless you specify in the settings that it should remain.
Bottom right - battery level. This can be hidden in settings, but when it drops to (I think) 20%, it always appears.
Lastly, over the entire view is a scrolling graph of each brainwave.

What I propose is to change the content description value to include all the selected brain waves, any Muse headband sensors that are not on properly, and the battery level if it’s turned on; there’s a rough code sketch at the end of this post.

So when you tap on the view you would hear something like:
Headband left ear sensor has a bad connection.
Alpha 46.4dB.
Beta 12.5dB.
Gamma 45.6dB.
Battery 72.8%.

How does that sound to you?

I should note that this is all on Android; I’ve not done any accessibility testing on iOS yet, but the code base is the same, so provided the iOS screen reader works like the Android one, there shouldn’t be any differences.
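If you’re curious, the Android change really is that small. Here’s a sketch; `statusImage` and the value parameters are placeholder names for illustration, but setContentDescription() is the real View API that TalkBack reads from:

```java
// Build a spoken status string and attach it to the graph image so that
// the screen reader announces it when the view is tapped. "statusImage"
// and the getter parameters are illustrative placeholders;
// setContentDescription() is the real android.view.View API.
void updateAccessibilityDescription(android.widget.ImageView statusImage,
                                    boolean leftEarBadContact,
                                    double alphaDb, double betaDb, double gammaDb,
                                    double batteryPercent) {
    StringBuilder sb = new StringBuilder();
    if (leftEarBadContact) {
        sb.append("Headband left ear sensor has a bad connection. ");
    }
    sb.append(String.format("Alpha %.1f decibels. ", alphaDb));
    sb.append(String.format("Beta %.1f decibels. ", betaDb));
    sb.append(String.format("Gamma %.1f decibels. ", gammaDb));
    sb.append(String.format("Battery %.1f percent.", batteryPercent));
    statusImage.setContentDescription(sb.toString());
}
```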


#15

Hi. First of all, your proposed description is excellent! That’s exactly the sort of info I’d love to hear. :)

And it seems you know what a screen reader does. Don’t laugh, but that is really rare among sighted devs.

Thanks for including visually-impaired folks and I look forward to trying your app.

One question: I’m not a developer, but I’m curious to know whether the developer tools you use include accessibility support. When I write to software companies to discuss this stuff, it would be great to be able to reassure them that some of the tools they are already using include these things, that it’s easier to implement than they may believe, and so on.
So, just curious what you are using there.


#16

I use Eclipse to develop Android apps. In the layout editor there is a little warning icon next to items that are missing recommended attributes.
Before this discussion, my main view had a warning prompt reading: “[Accessibility] Missing content description attribute on image”.

I just ran a test, and it’s fairly easy for me to make the content description update along with the image. However, as a blind user you would have to keep touching the image to get the voice feedback, so I don’t think that’s a terribly good user experience.

I think what I’d like to do for you is make it so that touching the image reads out the status as I described above, but during normal runtime have a tone playing whose pitch represents the value. Then you can just hear it going up and down.
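For the tone itself, I’m thinking of something along these lines: a rough sketch using Android’s AudioTrack in streaming mode, where the value-to-frequency mapping (200-880 Hz for a roughly 0-100 value) is an arbitrary placeholder I’d still need to tune:

```java
// Continuously stream a sine tone whose pitch tracks a brainwave value.
// AudioTrack in MODE_STREAM is the real Android API; the value-to-frequency
// mapping is an arbitrary placeholder.
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

public class FeedbackTone {
    private static final int SAMPLE_RATE = 44100;
    private volatile double frequencyHz = 440.0;
    private volatile boolean running = true;

    // Map a band-power value (assumed roughly 0..100) onto an audible range.
    public void setValue(double value) {
        double clamped = Math.max(0.0, Math.min(100.0, value));
        frequencyHz = 200.0 + (clamped / 100.0) * 680.0; // 200 Hz .. 880 Hz
    }

    public void run() {
        int bufSize = AudioTrack.getMinBufferSize(SAMPLE_RATE,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, SAMPLE_RATE,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                bufSize, AudioTrack.MODE_STREAM);
        track.play();
        short[] buffer = new short[bufSize / 2];
        double phase = 0.0;
        while (running) {
            double step = 2.0 * Math.PI * frequencyHz / SAMPLE_RATE;
            for (int i = 0; i < buffer.length; i++) {
                buffer[i] = (short) (Math.sin(phase) * Short.MAX_VALUE * 0.5);
                phase += step; // phase-continuous so pitch changes don't click
            }
            track.write(buffer, 0, buffer.length);
        }
        track.stop();
        track.release();
    }
}
```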

Regarding this, I have a few questions.
Are there standard tones/frequencies I should use to represent these values?

By default I display five brainwave values. I assume that playing five different tones at once would be an unintelligible mess? Or should I allow that?

Right now there are options in settings to select which brain waves you want to display; you can further break each one down into its left/right component, its front/back component, and of course the average from all four sensors, which is what is displayed by default.
I’m thinking I would add a completely separate option to select a single brainwave average for which to play the feedback tone. Then you can listen to just one tone, while tapping the main view reads out the text description for all the values you have selected.

Thoughts?


#17

Hi. That’s another great idea.

Speaking for myself, as a musician with perfect pitch and someone who mixes/masters audio for folks, I would actually like hearing anywhere from one to five tones, selectable. The volume of each tone could rise/fall as the signal level fluctuates (as long as said tones spell out a pleasant-sounding chord, not an atonal mess). :) So pitch stays constant, but volume indicates dB, which makes sense.
Here’s a handy chart of the frequencies of musical notes:

Heh, add in some stereo panning and you can indicate positioning: strongest signal currently coming from this spot, etc.

There are so many ways someone could manipulate audio in an app like this, many of which I sent to InteraXon waaaay back…
You could have a sound moving left to right, with the movement slowing down as the intended brain state is reached, i.e. the goal is to keep the sound in the middle between the ears/speakers.

Or any modulation-type effect that pulses (phasing, flanging, chorusing, etc.), where the oscillation slows down as the ideal state is reached.

Or the sound has less echo on it as the state is reached, i.e. it sounds closer to the listener.

But that’s the audio geek in me speaking. :)

Most people would probably want to just focus on one or two tones and how they change over time, beta versus alpha for example. (I’m only guessing that’s what the Calm app is focusing on to determine active versus calm)

Or, here’s a better idea … rather than separate tones, why not use one fundamental and its first four or five harmonics?

How about playing one fundamental (pure sine wave) tone for one brainwave type, with the other brainwave types as upper partials (the harmonic series) of that tone?
That way, as the levels of the various waves fluctuate, the sound of that one tone gets richer or plainer (i.e. the timbre changes).
Does that make sense?

It’s the relative intensity of those harmonics of the fundamental that helps us distinguish the same pitch, middle C for example, on a guitar, piano, saxophone, etc. Same note, different blends of upper harmonics.
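Sketched in code, that timbre idea (combined with the earlier pitch-stays-constant, volume-shows-dB mapping) would be something like the following. The dB-to-gain conversion is the standard one, but the 220 Hz fundamental and the 0-60 dB normalisation are arbitrary placeholders:

```java
// Additive synthesis sketch: one fundamental plus its harmonics, where each
// partial's amplitude follows one brainwave band's dB value. The 220 Hz
// fundamental and the 0..60 dB normalisation are arbitrary placeholders.
public class HarmonicTimbre {
    private static final int SAMPLE_RATE = 44100;
    private static final double FUNDAMENTAL_HZ = 220.0; // A3, placeholder choice

    // Standard dB-to-linear-amplitude conversion, normalised so ~60 dB -> 1.0.
    private static double dbToGain(double db) {
        double norm = Math.max(0.0, Math.min(60.0, db)) - 60.0; // 0 dB gain at the top
        return Math.pow(10.0, norm / 20.0);
    }

    // Fill a buffer with the summed partials. bandDb[0] drives the fundamental,
    // bandDb[1] the 2nd harmonic, and so on (e.g. delta, theta, alpha, beta, gamma).
    public static void synthesize(short[] buffer, double[] bandDb, double startTimeSec) {
        for (int i = 0; i < buffer.length; i++) {
            double t = startTimeSec + (double) i / SAMPLE_RATE;
            double sample = 0.0;
            for (int h = 0; h < bandDb.length; h++) {
                double freq = FUNDAMENTAL_HZ * (h + 1); // integer multiples = harmonic series
                sample += dbToGain(bandDb[h]) * Math.sin(2.0 * Math.PI * freq * t);
            }
            // Scale by the partial count to keep the sum inside [-1, 1].
            buffer[i] = (short) (sample / bandDb.length * Short.MAX_VALUE * 0.8);
        }
    }
}
```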

OK, that’s enough geeking out over here. :)


#18

OK, I can’t help myself … one more audio pipe dream…

Along with regular left/right panning, add in some convolution impulse responses of head-related transfer functions (HRTFs) and you get almost too-real 3D positioning in headphones. Holy crap, that was right behind my left shoulder!
Note: this is not related at all to binaural beats.

Try this on your best headphones:

Or this:

Stephen King’s short story “The Mist”, as a radio play in binaural stereo:

Binaural audio is a recording technique, but you can also simulate it with an impulse-response reverb generator and a set of HRTF impulse responses.
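The simulation half is just convolution: take a dry mono signal and convolve it with a left-ear and a right-ear impulse response measured for the direction you want. A minimal direct-convolution sketch (real HRIRs would come from a measured dataset; here they are just parameters):

```java
// Direct (time-domain) convolution of a dry mono signal with a pair of
// head-related impulse responses (HRIRs) to produce a binaural stereo pair.
// Real HRIRs come from measured datasets; here they are placeholder inputs.
public class BinauralPan {
    // Convolve signal with one impulse response: out[n] = sum_k ir[k] * x[n-k].
    static double[] convolve(double[] signal, double[] ir) {
        double[] out = new double[signal.length + ir.length - 1];
        for (int n = 0; n < out.length; n++) {
            for (int k = 0; k < ir.length; k++) {
                int idx = n - k;
                if (idx >= 0 && idx < signal.length) {
                    out[n] += ir[k] * signal[idx];
                }
            }
        }
        return out;
    }

    // Returns {left, right} channels; pass the HRIR pair for the desired direction.
    static double[][] spatialize(double[] mono, double[] hrirLeft, double[] hrirRight) {
        return new double[][] { convolve(mono, hrirLeft), convolve(mono, hrirRight) };
    }
}
```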


#19

Muse Monitor is now available on the Apple App Store!


#20

Hi Enigma, would it be possible with your app to save just the average values at the end of a session?

So I would just get:

Session X - Duration MIN on DATE:TIME

Average values:

Alpha 46.4dB.
Beta 12.5dB.
Gamma 45.6dB.
Etc.

I have no idea whether this would be meaningful data, but I would just like to compare different brain states: hungover vs. excited, tired vs. motivated, etc.

Is that possible with your app?
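In the meantime, I guess the averages could be computed from the existing CSV export. A sketch, assuming a hypothetical layout of a header row naming each band followed by one numeric value per band per row (the real export format may differ):

```java
// Compute per-column averages from an exported session CSV. Assumes a
// hypothetical layout: a header row naming each band, then one numeric
// value per band per row. Adjust column handling to the real export format.
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class SessionAverages {
    public static void main(String[] args) throws IOException {
        try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
            String[] names = in.readLine().split(",");   // header row with band names
            double[] sums = new double[names.length];
            long rows = 0;
            String line;
            while ((line = in.readLine()) != null) {
                String[] cells = line.split(",");
                for (int i = 0; i < sums.length && i < cells.length; i++) {
                    sums[i] += Double.parseDouble(cells[i]);
                }
                rows++;
            }
            for (int i = 0; i < names.length; i++) {
                System.out.printf("%s average: %.1f dB%n",
                        names[i], sums[i] / Math.max(rows, 1));
            }
        }
    }
}
```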