Hi, according to the information online, the raw EEG data is quantized if it’s too large.
But when checking the data in the following dataset, I find that there are 2205 rows with 4 columns, while my raw EEG data has 35556 rows with 4 columns.

IXDATA.raw.eeg.quantization - amount quantized during lossy compression due to large data swings.

Can anyone tell me why?


According to the docs of the Muse Communication Protocol (MCP) at: https://sites.google.com/a/interaxon.ca/muse-developer-site/muse-communication-protocol

I did the following math and I think the numbers match:

1 - each Uncompressed Packet is followed by 8 Compressed Packets (and only the Compressed Packets carry quantization information), so:
2 - if you got 2205 rows of quantization data, you should have 2205 / 8 = 275.625 (rounded to 276) rows of Uncompressed EEG data
3 - also, each Compressed Packet contains 1 quantization value for 16 EEG readings (× 4, one for each channel)
4 - so 2205 * 16 = 35280 rows of compressed EEG data
5 - and 35280 + 276 = 35556 (that’s what you got)
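A few lines of Python (not part of any Muse tooling, just reproducing the counts above) confirm the arithmetic:

```python
# Check the packet arithmetic: one quantization row per compressed packet,
# 16 EEG samples per compressed packet, and one uncompressed packet
# (1 sample) leading every group of 8 compressed packets.
quant_rows = 2205                                   # observed quantization rows

compressed_packets = quant_rows                     # one quant. row each
uncompressed_packets = 1 + compressed_packets // 8  # first packet + 1 per group of 8

compressed_samples = compressed_packets * 16
total_samples = compressed_samples + uncompressed_packets

print(uncompressed_packets)  # 276
print(compressed_samples)    # 35280
print(total_samples)         # 35556
```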

I hope Tom or Farough can confirm my math, but it makes sense to me :slight_smile:



I can endorse Eduardo’s explanation.

The math is sound. There are 276 uncompressed EEG samples because the first packet is uncompressed: 1 + 2205/8 rounded down = 276.
The 2205 compressed packets carry 2205 * 16 = 35280 delta samples.

Therefore, you expect 35280 + 276 = 35556 EEG samples.

In case you’re just wondering why you have 2205 quantization data sets: regardless of whether your data is actually getting quantized, we send the quantization levels.
So no quantization is transmitted as level 1, quantization by 2 as level 2, by 4 as 4, and so on and so forth.

So yes, it’s possible for all 2205 × 4 values to be 1 if your data was always good.
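As an illustration only: if dequantization is a plain multiply by the transmitted level (that multiply is my assumption, not something stated in this thread; check the MCP docs), then level 1 leaves the data untouched:

```python
# Hypothetical dequantization step. The MCP sends one quantization level per
# channel with each compressed packet; level 1 means "no quantization applied".
# Multiplying each decoded delta by the level is an ASSUMPTION for
# illustration, not a confirmed detail of the protocol.
def dequantize(deltas, level):
    """Scale one channel's decoded deltas by its quantization level."""
    return [d * level for d in deltas]

print(dequantize([3, -1, 0, 2], 1))  # level 1: unchanged -> [3, -1, 0, 2]
print(dequantize([3, -1, 0, 2], 4))  # level 4: scaled    -> [12, -4, 0, 8]
```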

Hope that clarifies things for you if you were still uncertain after Eduardo’s response.


Thank you for your kind reply, it’s really helpful!



Does the uncompressed packet (heartbeat packet) contain only 1 sample of EEG data, while a compressed EEG packet contains 16 samples of EEG data?



That’s exactly correct.

Uncompressed is 1 EEG data point for each of the 4 channels. Compressed is 16 EEG data point deltas per channel, allowing you to calculate the next 16 data points for each of the 4 channels.

You will see one uncompressed EEG packet after every 8 compressed EEG packets, and the first packet will be an uncompressed packet.
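A minimal sketch of that reconstruction for one channel, assuming the 16 deltas are cumulative offsets from the previous absolute sample (the real MCP bit-packing and quantization steps are omitted here):

```python
# Rebuild one channel's samples from a baseline (the last absolute value,
# e.g. from an uncompressed packet) plus a compressed packet's 16 deltas.
def apply_deltas(baseline, deltas):
    samples = []
    current = baseline
    for d in deltas:
        current += d             # each delta moves the running value
        samples.append(current)
    return samples

deltas = [1, -2, 0, 3] * 4       # 16 example deltas for one channel
samples = apply_deltas(100, deltas)
print(len(samples))              # 16
print(samples[:4])               # [101, 99, 99, 102]
```

The last sample of each group then serves as the baseline for the next compressed packet.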


Hi Farough,

My apologies, I have no intention to correct you (it would be too pretentious of me :slight_smile: )

To my understanding, I see the opposite of what you say … I would say:

“You will see 8 compressed EEG packets after every 1 uncompressed packet”
and therefore I do not see any rounding needed, only an incomplete group of compressed packets read (or recorded) at the end.

In my diagram flow for my developments based on the MCP I have used the following line of reasoning:
(I’m doing it for the Arduino, so, very limited C++ resources, but it’s working at a rate around 7400-8200 bps, exactly like Muse-io)

  1. Just after start I have to wait for a synchronization packet (FF FF AA 55)
    The MCP documentation on the sync packet says:
    “This is a legacy packet and is really not needed by the protocol, but for now it remains in the firmware as it is harmless”
    but how could I start without some kind of initial mark?
    Only after that mark can I start parsing the other packets, and from it, knowing the size in bytes of each type of header, proceed without errors.
    And what if a failure occurs during transmission?
    In my flow I stop everything and wait for a new “sync packet” to restart … am I wrong?

  2. Then wait for an uncompressed EEG packet (E0 or E8):
    This kind of packet must come before the 8 compressed EEG packets (and not after), because the compressed packets contain only differences (which need a baseline to be applied to).

  3. Only then wait for 8 compressed packets (C0), to apply the differences, then a new uncompressed packet, and so on …
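The flow above can be sketched as a tiny header-dispatch loop (the header byte values are the ones quoted in this thread; packet body lengths and payload parsing are deliberately left out, so treat this as a flow diagram in code, not a working MCP parser):

```python
# Label a stream of MCP header bytes following the flow above:
# an uncompressed EEG packet (0xE0 / 0xE8) establishes a baseline, and each
# compressed packet (0xC0) is only usable once a baseline exists.
def classify_headers(headers):
    labels = []
    since_uncompressed = None        # None until the first baseline arrives
    for h in headers:
        if h in (0xE0, 0xE8):
            labels.append("uncompressed")
            since_uncompressed = 0   # reset the compressed-packet counter
        elif h == 0xC0:
            if since_uncompressed is None:
                labels.append("compressed (no baseline yet, discard)")
            else:
                since_uncompressed += 1
                labels.append("compressed #%d" % since_uncompressed)
        else:
            labels.append("other")
    return labels

print(classify_headers([0xE0, 0xC0, 0xC0]))
# ['uncompressed', 'compressed #1', 'compressed #2']
```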

So what happened to wymice is that he ended the recording after reading only 5 of the 8 expected final compressed packets. (This is not an error, but expected, since one can stop recording at any time.)

1 uncompressed packet -> 8 compressed packets

275 uncompressed packets -> 2200 compressed packets (275 * 8 = 2200)

+ 1 uncompressed packet -> + 5 compressed packets (the last 3 were not recorded)

276 uncompressed samples + (2205 * 16) compressed samples = 35556 total EEG samples.

Sorry for this long write-up (it’s not intended for advanced developers, but for beginners on development or on the MCP).

Thanks for any corrections or comments,



Hi Eduardo,

My understanding is that Bluetooth ensures against failures in transmission using some sort of reliable retransmit protocol—so the bytestream you see at the app layer has no gaps in it. You can immediately start parsing assuming you’re at the start of a packet, and keep going from there. A sync packet would only be helpful if bytes could be dropped at random, which isn’t the case currently.

Sync packets can also be mildly helpful while debugging your parser, so if you have a length wrong, you don’t have to wait a couple seconds to reestablish the connection. But in the end, you don’t want to be relying on them—you want your parser to have all the lengths right. :slight_smile:

Correct in principle. In practice, I believe there are cases where the firmware can start in the middle of a stream of compressed packets before getting to the first uncompressed packet. It’s reasonable to ignore these—there’s certainly nothing else you could do with them. You lose up to half a second of data at the very start of the connection in the worst case. Not so bad, since you’ve lost several hours to several days of data beforehand while the user wasn’t wearing the headband.