Sabi, a California-based startup, has emerged from stealth with a brain-computer interface embedded in a beanie that the company says can convert a person's internal speech into words on a screen. CEO Rahul Chhabra told Wired that the first product will be available by the end of the year, with a baseball cap version to follow.
The device uses EEG, reading the brain's electrical activity through miniature sensors pressed against the scalp. Where a conventional EEG headset might carry a dozen to a few hundred electrodes, Sabi's beanie will pack between 70,000 and 100,000 of them. Wearable sensors have to listen through skin and bone, which attenuates neural signals considerably compared with surgically implanted electrodes. Sabi is betting that sheer sensor density can compensate for that loss. Chhabra says the high-density array lets the system pinpoint where neural activity is occurring with enough precision to decode what a person is thinking.
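Sabi has not published how its array processes signals, but the statistical intuition behind the density bet is standard: if many sensors observe the same underlying signal through independent noise, averaging them improves the signal-to-noise ratio roughly with the square root of the sensor count. The toy simulation below illustrates only that idea; all numbers are invented, and real scalp EEG noise is spatially correlated, so actual gains would be smaller.

```python
import math
import random

def snr_after_averaging(n_sensors, signal=1.0, noise_sd=10.0, trials=200):
    """Estimate SNR when averaging n_sensors noisy readings of one signal.

    Assumes independent Gaussian noise per sensor -- a simplification;
    correlated noise across a real scalp array would cap the benefit.
    """
    rng = random.Random(0)  # fixed seed for reproducibility
    squared_errors = []
    for _ in range(trials):
        readings = [signal + rng.gauss(0.0, noise_sd) for _ in range(n_sensors)]
        estimate = sum(readings) / n_sensors
        squared_errors.append((estimate - signal) ** 2)
    rmse = math.sqrt(sum(squared_errors) / trials)
    return signal / rmse

# Under ideal independence, SNR grows roughly with sqrt(n_sensors):
for n in (1, 100, 10_000):
    print(f"{n:>6} sensors -> SNR ~ {snr_after_averaging(n):.2f}")
```

The sqrt(N) scaling is why going from hundreds of electrodes to tens of thousands could matter, and also why it is no free lunch: a 100-fold density increase buys at most a 10-fold SNR gain even under these idealised assumptions.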
The company is targeting an initial typing speed of about 30 words per minute through thought alone. That is slower than most people type on a keyboard but faster than early results from most non-invasive speech decoding research. Chhabra says the speed will improve as users spend more time with the device.
Sabi’s approach inverts the sequence most BCI startups follow. Rather than building hardware first and collecting data later, the company started with the dataset. It has amassed 100,000 hours of brain data from 100 volunteers, which it used to train what it calls a brain foundation model — a large-scale AI trained on neural data from many people to learn patterns of activity that correlate with inner speech. The sensors were then designed around what the model needed for accurate decoding. The logic mirrors the scaling approach that produced large language models: build the data and the model first, then optimise the hardware around both.
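Sabi has disclosed nothing about its model architecture, so the sketch below is purely illustrative of the data-first idea, not a description of Sabi's system: pool synthetic "recordings" from many simulated users to learn shared per-word templates, then decode a held-out user with no calibration. Every name, dimension, and noise level here is invented.

```python
import random

rng = random.Random(0)
WORDS = ["yes", "no", "stop"]
DIM = 8  # toy feature dimension; real neural features are far larger

# Each word has a shared latent pattern -- the cross-user structure a
# foundation model is hoped to learn from pooled data.
base = {w: [rng.gauss(0, 1) for _ in range(DIM)] for w in WORDS}

def make_user():
    """A per-user offset: individual neural signatures differ."""
    return [rng.gauss(0, 0.6) for _ in range(DIM)]

def record(word, user_bias, noise=0.4):
    """Synthetic feature vector: shared pattern + user offset + noise."""
    return [b + u + rng.gauss(0, noise) for b, u in zip(base[word], user_bias)]

# "Pretraining": average many users' recordings into per-word templates,
# washing out individual offsets and leaving the shared pattern.
train_users = [make_user() for _ in range(50)]
templates = {}
for w in WORDS:
    samples = [record(w, u) for u in train_users for _ in range(5)]
    templates[w] = [sum(col) / len(samples) for col in zip(*samples)]

def decode(features):
    """Nearest-template classification (a stand-in for a real decoder)."""
    return min(WORDS, key=lambda w: sum(
        (f - t) ** 2 for f, t in zip(features, templates[w])))

# Zero-shot decoding for a user the templates never saw -- no calibration.
new_user = make_user()
correct = sum(decode(record(w, new_user)) == w for w in WORDS for _ in range(100))
print(f"zero-shot accuracy on unseen user: {correct / 300:.2f}")
```

The point of the sketch is the ordering of steps: the decoder is fit to pooled data before any individual user shows up, which is what would let hardware be designed around the model's needs rather than the reverse.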
The challenge Sabi faces is well understood in the field. Decoding imagined speech is harder than decoding motor intent because natural thought patterns vary enormously between people. Even two individuals thinking the same phrase will produce different neural signatures. Most high-accuracy speech decoding results to date have come from invasive cortical recordings. Meta’s Reality Labs demonstrated in a July 2025 Nature paper that a non-invasive EMG wristband could generalise across users for motor-based input at 20.9 words per minute — but that system decoded motor signals from the wrist, not imagined speech from the scalp.
JoJo Platt, an independent neurotech consultant based in San Francisco, told Wired that consumer brain-sensing devices will need to work right out of the box if developers want a viable product. Most BCIs require calibration before each use because brain signals shift with fatigue and focus. A consumer device cannot ask users to sit through a setup routine every time they put on a hat.
Chhabra says that when data leaves the device and is uploaded to the cloud, it is encrypted end to end, and that Sabi’s AI models train on the encrypted data rather than the raw neural signal. The company is consulting with neurosecurity experts at Stanford and elsewhere to audit its full technology stack. Chhabra told Wired that neural data is the most private kind of data a person could have, and that treating it without care would be unfair.
Vinod Khosla, founder of Khosla Ventures and a lead investor in Sabi, told Wired that a non-invasive wearable is the only viable path to mass BCI adoption, arguing that if a billion people are going to use brain-computer interfaces for everyday computing, the technology cannot require surgery. Other investors listed on the company's LinkedIn page include Accel, Initialized Capital, and Kevin Weil. The funding amount has not been disclosed.
Sabi enters a consumer BCI space that is beginning to fill out. Neurable has embedded EEG sensors into headphones. Elemind sells a neurostimulation headband for sleep. EMOTIV markets research-grade EEG headsets. None has shipped a thought-to-text product. On the implanted side, Neuralink, Paradromics and Synchron are all developing devices designed to sit invisibly under the skin or inside blood vessels. The form-factor divide — wearable versus implanted — increasingly defines two parallel markets: one pursuing clinical-grade accuracy for people with severe disabilities, the other chasing a broader consumer audience willing to trade precision for convenience.
Whether Sabi can deliver a working consumer product that reliably decodes imagined speech from outside the skull remains an open question. No peer-reviewed research describing the sensor technology, the foundation model’s architecture, or its decoding accuracy has been published. The company does not appear to have a public website. What it does have is a specific claim — 30 words per minute, non-invasive, from a beanie — that the BCI community will now expect it to demonstrate.