I work at CTRL-labs, a startup focused on electromyography (EMG)-based control devices. This article has a bit more technical detail about what we do: https://www.wired.com/story/brain-machine-interface-isnt-sci...
EEG (reading signals from the brain) is pretty hard. But EMG (capturing muscle contractions in the arm) produces much cleaner data. This can then be fed into a variety of machine learning algorithms that map high-fidelity time series data to discrete signals or continuous gestures for which we have appropriate training data.
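To make that pipeline concrete, here's a minimal sketch of the general idea (not CTRL-labs' actual method, and all names here are hypothetical): slice a multi-channel recording into overlapping windows, reduce each window to a per-channel RMS amplitude feature, and classify windows against labeled gesture examples with a toy nearest-centroid model.

```python
import numpy as np

def rms_features(window):
    # Root-mean-square amplitude per channel: a simple, common EMG feature.
    # window has shape (samples, channels).
    return np.sqrt(np.mean(window ** 2, axis=0))

def sliding_windows(signal, width, step):
    # Slice a (samples, channels) recording into overlapping windows.
    return [signal[i:i + width] for i in range(0, len(signal) - width + 1, step)]

class NearestCentroidGestures:
    # Toy classifier: one mean feature vector per gesture label,
    # nearest centroid (Euclidean distance) at prediction time.
    def fit(self, feature_vectors, labels):
        self.labels_ = sorted(set(labels))
        self.centroids_ = np.array([
            np.mean([f for f, l in zip(feature_vectors, labels) if l == lab], axis=0)
            for lab in self.labels_
        ])
        return self

    def predict(self, feature_vector):
        dists = np.linalg.norm(self.centroids_ - feature_vector, axis=1)
        return self.labels_[int(np.argmin(dists))]

# Simulated 8-channel data: a relaxed arm (low amplitude) vs. a clenched
# fist (high amplitude). Real EMG features and models are far richer.
rng = np.random.default_rng(0)
rest = rng.normal(0.0, 0.1, size=(200, 8))
fist = rng.normal(0.0, 1.0, size=(200, 8))

windows = sliding_windows(rest, 50, 25) + sliding_windows(fist, 50, 25)
labels = (["rest"] * len(sliding_windows(rest, 50, 25))
          + ["fist"] * len(sliding_windows(fist, 50, 25)))
clf = NearestCentroidGestures().fit([rms_features(w) for w in windows], labels)
```

Real systems would of course use richer features (spectral, temporal) and far more capable models, but the shape of the problem — windowed time series in, discrete or continuous gesture out — is the same.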
Want to find out more? Come visit our offices in NYC!
I don't think I've ever used my brain signals in a useful way, so this is a step in the right direction.
If you find this topic interesting, Neuralink is hiring. We're looking for people from a lot of different backgrounds across applied physics, biomedical engineering, software, and hardware. Though Neuralink sounds cool (I know many people are skeptical), the reality is even cooler.
Especially if you're great at firmware, robotics control software, or computer vision, get in touch! Either through the links on the website, or via the email in my HN profile.
I'm not sure how this would work in practice. Thoughts are incredibly noisy, and any mechanism that could filter out the noise would basically have to decipher intent. I'd argue intent deciphering is the actual problem these devices are trying to solve (e.g. I wish I didn't have to type; I wish the computer just knew what I wanted to type, not that it simply typed out what I thought). Solutions like "oh, just keep thinking of the same thing over and over again" are highly error-prone and will almost certainly be slower than typing. Say you wanted to type "[the quick brown brown quick the quick brown quick the brown]": a strategy of repeatedly thinking the phrase will be error-prone regardless of the ML techniques you use, simply because what you wanted to type cannot be known in advance without knowing the intent.
Perhaps it'll pick it up as "the quick brown", or "quick brown the", and so forth.
Another problem can be illustrated below:
Say you had your brain device on now. You're ready to reply to this.
Oh, I guess you read the above and now have "horse poop" typed. Well, you can just remove that ---
I predict that BMIs are going to suffer from the same problem as AI, where the applications that are working in the short-term get very overestimated because they are confused with the long-term where you create a singularity. If you had a BMI that could read/write the entire brain on neuron-level resolution, you could create computer back-ups of people, and if hardware were fast enough you could create superhuman intelligence. If you just have cochlear implants and prosthetics, the best case is a world where nobody is impaired, which is good, but still very far from a singularity. The Neuralink version is that if you can do telepathy, that might be valuable in some situations, but it will probably just be like faster email until the computers become smarter than us.
I once met an ex-Apple engineer who created a hat that would read your thoughts and play the song you were thinking about. It only worked for certain people and had a limited playlist to choose from, but it was really cool watching your "brainwaves" on a screen, then thinking "Daft Punk Get Lucky" and having it play on the speakers.
For a good review of the brain learning to use BMIs, see:
Are we witnessing the very first steps on the long road to complete erosion of the privacy of our thoughts, the last remaining facet of privacy?
What if you could detect the vocalization somehow, instead of relying on a very noisy data source (brain signals), which I see becoming a roadblock? Subvocalization would be like being able to chat without typing. You would still be interacting with a UI that makes sure you don't give out your bank card number, etc.
Maybe even hold up your phone and have it beam some sort of ultrasound or laser to detect tiny movements in the larynx (I have no idea what I'm talking about), but it seems there's a patent in the works for physically attached sensors...
Back when I had to write a lot for my courses, I wondered the same about the usefulness of EEGs. At times all I wanted was to lie on my bed, point a projector at the ceiling, and write.
Alas, the tech/understanding of neuroscience just isn't here yet, but maybe it will be in a few decades?
There's a lot more in the way of interesting discussion in the link if you would like to read more.
This is one of the first broad-audience articles I've seen that actually acknowledges neuronal firing rates as an important practical consideration.
Usually it's simplified in explanations as "binary": on or off. That isn't wrong for any single instant in time (and is sometimes good enough for conceptual models), but in reality the firing rate varies as a function of the stimulus. Analog, if you like...
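This "binary at an instant, analog over a window" distinction can be sketched with a toy rate-coding simulation (all function names here are made up for illustration): each time step the neuron either fires or doesn't, but the probability of firing scales with stimulus intensity, so the rate measured over a window carries the analog information.

```python
import random

def simulate_spikes(stimulus, duration_s=1.0, dt=0.001, max_rate_hz=100.0, seed=0):
    # Poisson-like spiking: at any instant the output is binary (1 = spike,
    # 0 = no spike), but the per-step firing probability scales with the
    # stimulus intensity (clamped to [0, 1]).
    rng = random.Random(seed)
    p_fire = max(0.0, min(stimulus, 1.0)) * max_rate_hz * dt
    return [1 if rng.random() < p_fire else 0 for _ in range(int(duration_s / dt))]

def firing_rate_hz(spike_train, duration_s=1.0):
    # The "analog" readout: spikes per second over the whole window.
    return sum(spike_train) / duration_s

weak_train = simulate_spikes(0.2, seed=1)    # weak stimulus -> low rate
strong_train = simulate_spikes(0.9, seed=2)  # strong stimulus -> high rate
```

Every sample in either train is 0 or 1, yet `firing_rate_hz(strong_train)` comes out well above `firing_rate_hz(weak_train)` — the stimulus intensity is recoverable only from the rate, not from any single instant.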
I hope that before the end of my career I get to write code with just my thoughts. Add a method here, a loop there, refactor this block out, etc.
The big question for me, at least, is: are these signals uniform, or do they have some kind of similarity for specific concepts or actions from person to person?
If every brain has its own language, the effort is an order of magnitude higher.
Now if we could turn useful information into brain signals...
Ya most people's brain signals contain nothing of value ;P