CSV Data


I used MusePlayer to convert a .muse recording into a CSV file. If I wanted to replicate the graphs that MuseLab’s visualizer shows, how would I go about doing that based on my CSV? For instance, how would I use the data in any row of the uploaded spreadsheet sample screenshot to visualize it the way MuseLab’s visualizer does?


Hi Yotta,

You have a couple of options for this.

Firstly, you could separate your data into the different data types using Excel and then plot them using Excel graphs.

Secondly, if you like the way Muse-Lab displays it, Muse-Player will let you stream the data back into Muse-Lab the exact same way you did the first time. muse-player -f mymusefile.muse -s UDP:5000 will take the contents of mymusefile.muse and stream them out over OSC on port 5000 using UDP.

Let me know if that’s what you’re looking to achieve.


Well, we’d actually like to visualize the data using Mathematica. Could you maybe explain how the data correlates to the visualization so that we can achieve similar visualizations using Mathematica?


Hi Yotta,

I’m not personally familiar with Mathematica, but a quick Google search seems to imply that you can import csv files directly into Mathematica. Have you tried this already? What kind of issues did you encounter?


Yes, CSV data can easily be imported into Mathematica. Our problem is that we don’t understand how your data is organized. For instance, at one time step, you have several rows of EEG data with four columns of numbers for each row with no headers indicating what each column signifies. Could you break down what these are?


Oh, certainly. This should be documented here:

As well as all other messages from Muse-IO.

However, the EEG columns are as follows:
TP9, Fp1, Fp2 and TP10, in microvolts, named per the 10-20 electrode placement system.

DRL and Ref are as expected:
DRL and Ref in microvolts.
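If it helps, here is a minimal Python sketch of pulling the EEG channels out of such a CSV. The column layout (timestamp, OSC path, then the four channel values) and the sample rows are assumptions reconstructed from this thread, so adjust the indices to match your actual export:

```python
import csv
import io

# Hypothetical MusePlayer CSV excerpt: timestamp, OSC path, then the
# four EEG channel values (TP9, Fp1, Fp2, TP10) in microvolts.
SAMPLE = """\
1398.123,/muse/eeg,812.3,801.1,799.6,820.4
1398.123,/muse/eeg,810.7,803.9,798.2,818.0
"""

CHANNELS = ("TP9", "Fp1", "Fp2", "TP10")

def parse_eeg_rows(text):
    """Return a list of dicts mapping channel name -> microvolt value."""
    rows = []
    for row in csv.reader(io.StringIO(text)):
        # Keep only EEG rows; other message types have different columns.
        if len(row) >= 6 and row[1].strip() == "/muse/eeg":
            values = [float(v) for v in row[2:6]]
            rows.append(dict(zip(CHANNELS, values)))
    return rows

samples = parse_eeg_rows(SAMPLE)
```

From there each channel's list of values can be exported or plotted however you like (e.g. imported into Mathematica as four time series).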

Hope that helps.


This helps tremendously. Thank you so much. Also, could you perhaps explain the way this data is time-stepped? We don’t understand why there are sometimes multiple rows of the same data type at the same time step, and a different number of rows of that type at another time step. For example, 16 rows of EEG data all for one second, but 11 rows of EEG data associated with another second.


Certainly. At the moment Muse does not have an accurate way of measuring the timestamp on the headband itself. However, we are confident it samples at 220 Hz quite precisely, with some tolerance for temperature fluctuations in the crystal. We’ve very carefully made sure the sampling rate is even and as accurate as possible.

To this end, we currently timestamp the samples when they are parsed. Samples are sent 16 to a packet, so they may be clumped into sets of 16 sharing the same timestamp. It is valid to assume an even distribution of the EEG samples: if you get 220 samples in one second, it is safe to assume the time between consecutive samples is 1/220 s each, i.e., you can redistribute the timestamps evenly.

We understand this is not ideal and we intend to improve this in the future through better prediction metrics, but for most applications an assumption of equal distribution should suffice.
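A minimal Python sketch of that redistribution, assuming a clump of samples shares one parsed timestamp and the nominal 220 Hz rate:

```python
FS = 220.0  # nominal Muse EEG sampling rate in Hz

def redistribute(shared_ts, n_samples, fs=FS):
    """Spread a clump of samples that all arrived stamped with the same
    parse-time timestamp: sample i gets the time shared_ts + i / fs."""
    return [shared_ts + i / fs for i in range(n_samples)]

# A packet of 16 EEG samples stamped 10.0 s becomes 16 evenly spaced times.
times = redistribute(10.0, 16)
```

Whether the clump should be spread forward or backward from the shared timestamp is an assumption here; either way the spacing is 1/220 s.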


I also have some unrelated questions. Firstly, do the numbers for the four EEG graphs (0-3) correspond to the four electrodes TP9, Fp1, Fp2 and TP10, respectively?
Furthermore, is the raw FFT data used to compute the EEG data, or is it something else? Is /muse/elements/raw_fft0 thus associated with EEG 0 (TP9, by my guess)?

Also, is there a way to reset the headband’s accumulated data that the session scores are calculated from other than waiting the maximum time period the data is held? To my understanding, the session scores average each absolute band power. Thus, if two people were to use the same headband, this average would not accurately convey information about their separate data.

What is the explicit maximum time, if any, that the data is stored for averaging in the session score?


Hi Yotta,

Correct: 0-3 correspond to TP9, Fp1, Fp2 and TP10, respectively, and the same holds for raw_fft0 - raw_fft3.
It is the other way around: the FFTs are all computed from the raw EEG data. If you turn off the DSP option you will only see raw data such as EEG, accelerometer and battery information.

All session scores begin their history when the headband is successfully placed on a user’s head and they are cleared when the headband is removed. There is a flag sent from Muse-IO called headband_on. If this is set to 0, which happens if the headband is removed, the session scores are all reset.

The way the session scores operate is that they create a personalized history for the user. This history weights the present more heavily than the past. Ten times a second it takes a sample of the user’s live EEG and uses it as a band-power input. That input decays with a half-life of 10 seconds, which means that 10 seconds after it’s been read it affects the output score by half of what it originally contributed; its weight is halved.

In reality, all the data is held more or less indefinitely: since the history decays exponentially, entries in the history decay towards 0 forever without ever truly reaching 0 weight.

Just to clarify, session scores are not directly an average of absolute band power. The system keeps a history of all your absolute scores, weighting the most recent most heavily, and then returns a score between 0 and 1 telling you where you are with respect to your own personal history. It should not be possible to share history with another user.
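As a check on our understanding, here is a small Python sketch of that kind of exponentially weighted, personal-history score. The percentile-style scoring function and the constants are assumptions reconstructed from this description, not Muse's actual implementation:

```python
HALF_LIFE_S = 10.0  # an input's weight halves every 10 seconds
SAMPLE_DT = 0.1     # a new band-power input arrives 10 times per second
DECAY = 0.5 ** (SAMPLE_DT / HALF_LIFE_S)  # per-step weight multiplier

def session_score(history, current):
    """Return where `current` sits within the exponentially weighted
    history: the weighted fraction of past inputs it exceeds (0..1).
    `history` is a list of past band-power inputs, oldest first."""
    weight = 1.0
    below = total = 0.0
    # Walk newest-first so weight decays as we move into the past.
    for past in reversed(history):
        total += weight
        if past < current:
            below += weight
        weight *= DECAY
    return below / total if total else 0.5
```

A value far above everything in your own history scores near 1, a value below it all scores near 0, which matches the "relative to your personal history" behavior described above.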

Hope that all makes sense.


Yes, you’ve been a great help. We’ll make another post or something if any other questions arise. Thanks so much again!