Interesting! I have some experience with BLE MIDI (I built some microcontroller-based interfaces on the nRF51822 and ESP32), and noticed that OSX, for example, enforces a fairly long connection interval (11.25ms), even though the MIDI spec should allow as low as 7.5ms for better latency. Do you think something similar is happening with the Muse - that for whatever reason, the BLE radio/stack combination on these desktop platforms can't operate at high enough rates?
Even at 7.5ms (the minimum connection interval BLE allows, IIRC), a sample rate of 256 Hz means multiple samples have to be packed together into each connection interval. Not ideal for “real” realtime data, but then again the raw sampled EEG data is probably not that interesting, and the processed wave signals - which go through quite a bit of frequency-domain processing - don't really need such high update rates anyway… I suppose you could run an FFT with a hop of just 1 sample, but the processing cost would be immense and most likely unnecessary for the signals of interest.
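To put numbers on the packing argument: a quick back-of-the-envelope sketch (the 256-point FFT length is my own assumption for illustration; the sample rate and intervals are the ones discussed above):

```python
# How many 256 Hz EEG samples accumulate per BLE connection interval,
# and how many sliding FFTs a hop of 1 sample would imply.

SAMPLE_RATE_HZ = 256

def samples_per_interval(interval_ms: float, rate_hz: int = SAMPLE_RATE_HZ) -> float:
    """Average number of samples produced during one connection interval."""
    return rate_hz * interval_ms / 1000.0

# BLE-minimum interval vs. the ~11.25 ms interval enforced on the desktop:
print(samples_per_interval(7.5))    # 1.92 -> ~2 samples packed per interval
print(samples_per_interval(11.25))  # 2.88 -> ~3 samples packed per interval

# A sliding FFT with a hop of 1 sample means one full FFT per sample:
fft_len = 256  # hypothetical window length, one second of data
hop = 1
ffts_per_second = SAMPLE_RATE_HZ / hop
print(ffts_per_second)  # 256 FFTs of length 256 every second
```

So at best you buffer about two samples per packet, and about three at the desktop interval - a small but real added latency on top of the interval itself.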
This makes me curious about how the headset transmits BLE data: does it stream everything all the time when active, or only what the driver app asks for? Briefly looking at the BLE profile (via PunchThrough's tool on iOS), I see a single writable characteristic and a lot of notify ones. So my guess is that you write certain commands to it, which then selectively activates the other characteristics as needed… but of course I don't need to muck around here since we already have a great tool (Muse Monitor) that does all this, so this is more of an academic exercise.
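For the academic exercise, the write-a-command-then-subscribe pattern could be sketched with a generic BLE library like bleak in Python. To be clear, everything specific here - the UUIDs, the command string, and the length-prefixed framing - is a guess for illustration, not the Muse's actual documented protocol:

```python
# Hypothetical sketch of the "write a command, then listen on notify
# characteristics" pattern suggested by the GATT profile. The UUIDs,
# command name, and framing below are assumptions, NOT the real protocol.
import asyncio

CONTROL_CHAR_UUID = "0000xxxx-0000-1000-8000-00805f9b34fb"  # placeholder
EEG_NOTIFY_UUID   = "0000yyyy-0000-1000-8000-00805f9b34fb"  # placeholder

def encode_command(cmd: str) -> bytes:
    """Guessed framing: length byte, ASCII command, newline terminator."""
    payload = (cmd + "\n").encode("ascii")
    return bytes([len(payload)]) + payload

async def stream(address: str) -> None:
    # bleak is assumed to be installed; imported here so the framing
    # helper above stays usable without it.
    from bleak import BleakClient

    def on_eeg(_sender, data: bytearray) -> None:
        print(f"notify: {data.hex()}")

    async with BleakClient(address) as client:
        # Subscribe first, then write a (hypothetical) "start" command to
        # the single writable characteristic, which presumably enables
        # the selected notify streams.
        await client.start_notify(EEG_NOTIFY_UUID, on_eeg)
        await client.write_gatt_char(CONTROL_CHAR_UUID, encode_command("start"))
        await asyncio.sleep(10.0)
        await client.stop_notify(EEG_NOTIFY_UUID)

# asyncio.run(stream("AA:BB:CC:DD:EE:FF"))  # requires real hardware
```

Sniffing the writes Muse Monitor makes to that characteristic would be the obvious way to fill in the real commands.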
Sorry, getting a bit off topic