I am a sufferer of severe tinnitus. Not the kind that makes your ears ring a little bit and might even annoy the hell out of you at night. More like jet engines jammed on either side of my head that you simply can’t get away from. I know it’s a hard idea to relate to. But it sucks and it makes it really hard to function.
Currently I’m participating in a medical study to test a new surgically implantable device (a VNS, or vagus nerve, stimulator) to see if it can treat the root cause of the disease. It’s made by a company called MicroTransponder. If you’re curious about this you can read more here:
What I’ve learned in going through the MicroTransponder tinnitus therapy is that they try to get you to match the ‘tones’ you’re hearing in your head with synthetically generated tones that they create. They also try to match the loudness level of your tinnitus. The process is slow, laborious, and difficult, and it may not even be that accurate, because getting the synthetic sounds to match what you’re experiencing is surprisingly challenging.
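To give a feel for what that matching step involves, here’s a toy sketch of generating a candidate tone at a given pitch and loudness for the patient to compare against. This is purely illustrative and not MicroTransponder’s actual software; the frequency and level values are made up:

```python
# Illustrative sketch only: generate a candidate comparison tone.
# The 8 kHz / -20 dB values are hypothetical, not clinical parameters.
import numpy as np

def synth_tone(freq_hz, level_db, duration_s=1.0, sample_rate=44100):
    """Return a sine tone scaled to a rough dB level (0 dB = full scale)."""
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    amplitude = 10 ** (level_db / 20)  # dB relative to full scale
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

# e.g. a 1-second tone at 8 kHz, -20 dB below full scale
tone = synth_tone(8000, -20)
```

In a real session the clinician would sweep the frequency and level up and down while the patient says “higher,” “lower,” “louder,” “softer” until it sounds closest to their tinnitus, which is exactly the slow, subjective part of the process.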
They then take the data from the therapy sessions and program that info into a laptop that controls the VNS device. My worry is: what if the data we’ve produced isn’t accurate? Will the therapy be less effective, or not effective at all, if that’s the case?
One aspect of the disease is that it involuntarily throws my body into a state of extreme stress. That leads to all kinds of social and family problems as you can imagine. And working is far more difficult than it needs to be.
So that’s why I ordered a Muse. To see if I can use the strategies you’ve developed to try and calm the hell down! Even a little bit would be nice.
But my question to you is: do you think that someone (maybe the developers at MicroTransponder?) could take a Muse and its SDK and come up with some way to automate the process of detecting the abnormal ‘phantom’ tones that are generated in the mind as a result of tinnitus? If so, that could improve the accuracy of their device and software quite a bit, I’d imagine.
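To make the question a bit more concrete, here’s a toy sketch of the kind of analysis I’m imagining as a starting point: estimating the dominant frequency in a short window of EEG data. The 256 Hz rate matches the Muse headband’s EEG stream, but the signal here is synthetic stand-in data, and whether any EEG signature of tinnitus is actually detectable this way is exactly the open question:

```python
# Hypothetical sketch only: find the strongest frequency component in one
# window of EEG-like data. The input here is synthetic, not real Muse data.
import numpy as np

SAMPLE_RATE = 256  # Hz, the Muse headband's EEG sample rate

def dominant_frequency(window):
    """Return the strongest frequency component of one signal window."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    freqs = np.fft.rfftfreq(len(window), d=1 / SAMPLE_RATE)
    return freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

# Stand-in for one second of EEG: a 10 Hz (alpha-band) oscillation plus noise
rng = np.random.default_rng(0)
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
window = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(SAMPLE_RATE)
peak = dominant_frequency(window)
```

Obviously a tinnitus “tone” isn’t sitting in the EEG as a literal frequency peak like this, but if researchers could find any reliable neural correlate, something along these lines might replace or cross-check the slow manual matching.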