What if We Could Control Software with Thought Alone

Why the Answer Lies in the Neck, Not the Brain

An Seungwon · Wonbrand · April 18, 2026


Opening

As a child, I watched anime where ninjas formed hand signs and fire poured out. Where a boy commanded a mysterious entity called a Stand with nothing but his will. Where mages cast spells without uttering incantations. I never saw those scenes as mere fantasy. Instead, I thought something like this: "Isn't that ultimately a technology where commands from the brain go directly outward without passing through the body?"

Now in 2026, this imagination is no longer confined to comics. BCI, the brain-computer interface that reads brain signals to operate machines, is already replacing the hands of patients in clinical settings. Quadriplegic patients move cursors with nothing but their minds. Patients with aphasia project the words in their heads onto screens. Yet for this technology to work, the skull must still be opened and electrodes placed on the brain's surface.

Here is the question. Must we truly drill into the brain? Can we not read thought without going through the skull? And if such a path exists, where is it?

This essay is one answer to that question. The answer lies in an unexpected place. Not the brain but the "neck." Specifically, the area around the cervical spine. Why this is so, I will unfold step by step.


1. Where BCI Stands in 2026

The BCI field has advanced at a remarkable pace over the past three years. Let me first map the current landscape.

The Advance of Invasive BCI

After its first patient implantation in 2024, Neuralink has implanted its N1 chip in more than 21 people as of early 2026. Quadriplegic patients move computer cursors with thought alone, play chess, and use social media. The implanted chip reads the activity of thousands of cortical neurons directly.

Synchron took a different route, inserting stentrodes through blood vessels to reduce surgical difficulty. In 2025, Apple partnered with Synchron to establish a BCI direct-input standard for the iPhone and Vision Pro. When a user imagines a tap in their mind, the device responds.

A paper published in Cell in August 2025 went a step further. Researchers used a microelectrode array implanted in a patient's motor cortex to decode 'inner speech' at 74 percent accuracy on a 125,000-word vocabulary. Phrases the patient thought silently appeared on the screen. When they mentally used "chitty chitty bang bang" as a password, the system decoded it exactly.

The Stagnation of Non-Invasive BCI

Meanwhile, non-invasive BCI has been studied for much longer but has not reached daily use. The reason is simple. The skull attenuates and scatters signals. Electrodes on the scalp capture only activity averaged across entire brain regions, never individual neuronal firing.

Still, some products have been commercialized. Muse places electrodes on the forehead and behind the ears to monitor brainwaves during meditation. Emotiv sells EEG headsets for games and research. Apollo Neuro uses vagus nerve stimulation to manage stress.

The Diversification of the Interface Market

In September 2025, Meta released the "Neural Band" and shifted the landscape. This product attaches not to the brain but to the wrist. It reads the "intent" of finger movement through surface electromyography. Even without actually moving a finger, the machine responds the moment the signal of intent to move reaches the wrist muscles. Meta sold it bundled with Ray-Ban Display glasses. An AI model trained on data from 200,000 users works right out of the box without individual calibration.

The significance of this release goes beyond one product. It showed that invasion is not the only path, that intent can be read from places other than the brain, and, critically, that the question of "where to place the sensor" is now wide open.


2. The Gap Between Neuralink and Meta

These two companies occupy opposite extremes.

Neuralink is inside the brain. Maximum resolution. Maximum invasion. A technology for a very small number of severely ill patients. It requires surgery, and long-term in-body safety is still being verified.

Meta is at the wrist. Minimal invasion. Minimal resolution. For mass consumers. But functionality is restricted to "intent related to finger movement." It cannot read emotion, focus, or arousal.

Between them lies a gap. A place close to the brain but not penetrated. A place wearable every day but providing more information than finger control. No one has properly planted a flag here. Muse at the forehead remained a meditation-session device. Neurable at the ear has been delayed in commercialization.

What is this middle ground?


3. Where Should the Sensor Go?

To find the answer, let us examine all possible body regions.

The Forehead

The forehead sits just in front of the frontal lobe. The brain region involved in decision-making, attention, and emotion regulation. But forehead-worn devices are visually conspicuous. Headbands are tolerated in meditation sessions but hard to wear for 24 hours daily. The social barrier is high.

The Wrist

Meta has already taken this territory. Muscle signals are easy to read and wearing is natural. But the information is limited. Intent to move fingers can be read, but whole-body states like emotion or arousal are hard to capture. Also, direct competition with Meta is unreasonable for individual developers or small companies.

Inside the Ear

It can be worn like earbuds and sits close to the brain. However, the inside of the ear is anatomically complex with large individual variation, so signal quality is inconsistent. And wearing something in the ear all day interferes with hearing.

Back of the Head

Over the occipital lobe. Visual processing signals are available. But having something attached to the back of the head is hard to accept in daily life.

And the "Neck"

A remarkable amount converges at the neck. All nerve signals descending from the brain pass through this narrow corridor. The top of the spinal cord, the path of the vagus nerve, the carotid artery, the thyroid, vocal cords, larynx, lymph nodes. It is the most information-dense narrow zone in the human body.

And this area has been almost untouched in BCI research until now.


4. Why the "Neck" - The Anatomical Answer

Let us examine why the cervical region is special from an anatomical standpoint.

A Bottleneck for All Nerves

Nearly all nerves connecting the brain and the body pass through the cervical spine. Motor commands, sensory inputs, the autonomic nervous system, and the vagus nerve all pass through this single compressed pathway. Other body parts handle specific functions only. The forehead carries frontal-lobe information. The wrist carries arm motion. The ear carries hearing. The neck carries information for the entire body.

Where Brain Output Passes in "Pure" Form

Consider the path a command takes from the brain to the muscle. A command is generated in the brain, transformed by the motor cortex, delivered as a pure signal through the spinal cord and cervical spine, then executed at the muscle through peripheral nerves. At the cervical stage the signal has not yet been contaminated by muscle noise. While wrist EMG captures signals at the execution stage, the cervical region is closer to the command stage.

A Gathering Place of Speech Organs

The larynx, vocal cords, hyoid bone, and root of the tongue are all in the neck. All organs that produce speech are here. And importantly, muscles in these organs activate subtly even during 'inner speech.' Even without actual sound, the moment a word is summoned in the mind leaves traces in muscles around the vocal cords. MIT's AlterEgo project decoded limited-vocabulary inner speech with over 90 percent accuracy using EMG under the jaw.

A Window to the Autonomic Nervous System

The vagus nerve passes through the neck, carrying information about the heart, lungs, and gut to the brain. Accessing this nerve at the neck allows real-time reading of emotional state, stress level, and autonomic balance. Commercial medical devices like gammaCore already "stimulate" the vagus nerve at the neck to treat migraines. If stimulation is possible, in principle reading is too.

Where Emotion Surfaces First

In daily experience, the area around the cervical spine is where emotion first surfaces. When tense, the back of the neck stiffens. When afraid, the nape tingles. When deeply focused, heat rises at the back of the neck. The front line where conscious state translates into bodily signal is the neck.


5. The Place Eastern Traditions Pointed to Long Ago

What is interesting is that not only modern anatomy sees this place as special. For thousands of years, Eastern practice traditions have pointed to the same spot.

In Korean and Chinese traditional medicine, the hollow at the back of the neck is called "Pungbu (風府)," meaning "gate of the wind." It is a passage through which qi ascends to the brain. Just below it lies "Amun (瘂門)," meaning "gate of speech." It was seen as connected to vocalization.

In Daoist cultivation, "Okchim (玉枕)" is the final gate through which qi must ascend to the brain to attain enlightenment. It corresponds precisely to the uppermost cervical region below the occipital bone.

The yoga tradition has the "Viśuddha Chakra," the fifth chakra located at the throat, center of communication and expression. Only when this chakra opens can true expression occur.

In acupuncture, extreme caution is taken when inserting needles around the cervical spine. A misplaced needle can cause death. It has long been recognized as a "lifeline" of the brain-body connection.

For thousands of years, practitioners across different cultures saw this place as special. Modern anatomy is only now catching up to explain why. Whether this is coincidence or ancient wisdom reading the body's structure accurately, either way the conclusion stands: this place is special.

6. A Concrete Design Proposal - The Open U-Shaped Neckband

So what should an actual device using this region look like?

Form Factor

Imagine a form similar to an ice neck cooler. An open U-shaped structure that starts at the back of the neck, wraps around the sides, and ends near the collarbones. The center of the front of the neck stays open. This form is optimal for the following reasons.

First, the center of the front of the neck is a vital area. The thyroid cartilage, trachea, and carotid bifurcation lie there. Placing a sensor here hinders swallowing and vocalization and creates psychological discomfort. Leaving the front open keeps breathing, swallowing, and speaking free.

Second, the neck-cooler form is already popularized as a summer product. Visual resistance to wearing a U-shaped band around the neck is low.

Third, it allows sensors to be distributed along the neck's circumference. The back, sides, and front-side regions can all cover key muscles.

Sensor Placement

About 13 electrodes are placed inside the band. Imagining the neck as a clock face, only 12 o'clock (the front center) is left open. Sensors occupy the rest.

At 1 and 11 o'clock lies the platysma, which reads startle responses and emotional reactions. At 2 and 10 o'clock lie the infrahyoid muscles and the inner edge of the sternocleidomastoid, capturing speech initiation and inner-speech intent. At 3 and 9 o'clock lies the center of the sternocleidomastoid, where carotid pulse and surface access to the vagus nerve are optimal. At 4 and 8 o'clock, sensors track neck rotation and breathing. At 5 and 7 o'clock, the upper trapezius covers stress and posture. At 6 o'clock, the suboccipital muscles cover focus and arousal. Reference electrodes go behind each ear.
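The clock-face layout above can be captured in a small lookup table. This is purely illustrative Python: the labels mirror the essay's text, and it assumes the eleven occupied clock positions plus the two behind-ear references make up the "about 13" electrodes.

```python
# Hypothetical lookup table for the electrode layout described above.
# Positions are clock-face locations around the neck; 12 o'clock
# (front center) is deliberately left open.
ELECTRODE_MAP = {
    1: "platysma (startle, emotional reactions)",
    11: "platysma (startle, emotional reactions)",
    2: "infrahyoid / SCM inner edge (speech initiation, inner speech)",
    10: "infrahyoid / SCM inner edge (speech initiation, inner speech)",
    3: "SCM center (carotid pulse, vagus nerve access)",
    9: "SCM center (carotid pulse, vagus nerve access)",
    4: "rotation and breathing",
    8: "rotation and breathing",
    5: "upper trapezius (stress, posture)",
    7: "upper trapezius (stress, posture)",
    6: "suboccipital muscles (focus, arousal)",
}
REFERENCE_ELECTRODES = ["behind left ear", "behind right ear"]

assert 12 not in ELECTRODE_MAP  # the front center stays open
# 11 band positions + 2 references = the "about 13" electrodes in the text.
print(len(ELECTRODE_MAP) + len(REFERENCE_ELECTRODES))  # -> 13
```

A table like this would also be the natural place to store per-position gain and filtering settings in firmware, but that is beyond the sketch.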

Electromagnetic Design

The sensors themselves are "passive" components. No electronics, no battery, no wireless transmitter. Electrodes only touch the skin. Signals travel through wires inside the band to a "central unit" at one end. The central unit sits near the collarbone, 25 to 30 centimeters from the brain. Only this central unit contains the battery, processor, and Bluetooth module.

This structure ensures no electromagnetic emission occurs near the brain. Other brainwave wearables on the market transmit BLE directly from the forehead or ear; keeping the radio near the collarbone instead is a deliberate design philosophy.

Wearability

Expected total weight is 70 to 90 grams. Slightly heavier than an Apple Watch. A soft silicone outer shell naturally fits the neck circumference, and weight is distributed to both ends so no single point is pressed. 24-hour wear is feasible, and discomfort during sleep is minimal.


7. What the AI Does - Decoding, Not Sensors, Is the Essence

No matter how sophisticated the hardware, without signal decoding, it means nothing. Surface signals at the cervical spine are noisy. Multiple muscle activities overlap, individual variation is large, and daily movement noise constantly intrudes. Extracting useful information from these signals depends entirely on the capability of AI.

A Multi-Layer Decoding Structure

AI operates simultaneously at multiple layers.

At the lowest layer, raw signals are cleaned. Noise is filtered. EMG, EEG, pulse waves, and skin conductance are separated. This work is handled in real time by a lightweight model inside the device.

At the next layer, muscle activation patterns are classified. Which muscle activated, how much, and when are tracked. Signals from multiple sensors fuse into integrated interpretation.

At a higher layer, these patterns are translated into the user's "intent." State classification, directional intent, behavioral precursors, emotional responses, speech intent, and attempts at inner speech. At this stage, the AI becomes a completely different model for each user. The same signal carries different meanings for different people.

At a higher layer still, context is combined. Time of day, place, and past behavior are compared to judge what this signal actually means now. Is it trembling from drowsiness or a genuine intent to turn?

At the final layer, execution is decided. When confidence is high, it is automatic. When ambiguous, it is a suggestion. When it does not match context, it is ignored.
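The five layers above can be sketched as a toy pipeline. Everything here is an assumption made for illustration: the channel name, thresholds, context rule, confidence cutoff, and synthetic signal are invented, and a real decoder would be a learned model per user, not hand-written rules.

```python
# A toy sketch of the five decoding layers described above.

def clean(raw, k=5):
    """Layer 1: crude denoising via a k-sample moving average."""
    return [sum(raw[max(0, i - k + 1):i + 1]) / len(raw[max(0, i - k + 1):i + 1])
            for i in range(len(raw))]

def classify_activation(signal):
    """Layer 2: which muscle channel fired and how strongly (one channel here)."""
    return {"channel": "right_scm", "strength": max(abs(x) for x in signal)}

def decode_intent(activation, user_model):
    """Layer 3: map an activation pattern to a per-user intent label."""
    if activation["strength"] > user_model["right_scm_threshold"]:
        return "turn_left"  # right SCM precursor precedes a leftward turn
    return "none"

def apply_context(intent, context):
    """Layer 4: veto intents that do not fit the current context."""
    if intent == "turn_left" and context.get("state") == "drowsy":
        return "none"       # trembling from drowsiness, not a command
    return intent

def execute(intent, confidence):
    """Layer 5: act when confident, suggest when ambiguous, else ignore."""
    if intent == "none":
        return "ignored"
    return "executed" if confidence > 0.8 else "suggested"

# End-to-end run on a synthetic burst of EMG-like samples.
raw = [0.1] * 20 + [0.9] * 10 + [0.1] * 20
activation = classify_activation(clean(raw))
intent = apply_context(decode_intent(activation, {"right_scm_threshold": 0.3}),
                       {"state": "alert"})
print(execute(intent, confidence=0.9))  # -> executed
```

The point of the layering is that each stage can fail safely: a strong muscle signal still produces no action if context or confidence vetoes it.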

An Approach Called Natural Intent Decoding

This product's AI philosophy is "natural intent decoding." The user need not memorize predefined codes like Morse. AI reads intent naturally embedded in ordinary movement. It is the cervical version of what the Meta Neural Band does at the wrist.

When a person intends to turn left, the right sternocleidomastoid activates 100 to 300 milliseconds before the head actually moves. When intending to speak, the infrahyoid muscles prepare before the mouth opens. When startled, the platysma contracts instantly before any visible reaction. The AI learns these "precursor signals."
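The timing claim above can be made concrete with a toy onset detector: muscle onset is found before motion onset, and the gap is the decoder's head start. The 1 kHz sampling rate, thresholds, and synthetic traces are assumptions made for this sketch.

```python
# Toy illustration of the precursor-signal timing described above.
SAMPLE_RATE_HZ = 1000  # assumed: 1 sample per millisecond

def onset_index(signal, threshold):
    """Index of the first sample whose magnitude crosses the threshold."""
    for i, v in enumerate(signal):
        if abs(v) > threshold:
            return i
    return None

def precursor_lead_ms(emg, motion, emg_thr=0.2, motion_thr=0.5):
    """How many milliseconds EMG onset precedes motion onset."""
    e, m = onset_index(emg, emg_thr), onset_index(motion, motion_thr)
    if e is None or m is None:
        return None
    return (m - e) * 1000 // SAMPLE_RATE_HZ

# Synthetic trial: right-SCM EMG fires at 100 ms, the head moves at 300 ms,
# so the decoder has a 200 ms window to act before the movement itself.
emg = [0.0] * 100 + [0.6] * 400
head = [0.0] * 300 + [1.0] * 200
print(precursor_lead_ms(emg, head))  # -> 200
```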

The Depth of Personalization

The longer the period of use, the deeper the AI's interpretation becomes. A week brings basic state recognition. A month brings direction and intent detection. Three months bring reflection of personal habit. A year brings learning of subtle patterns even the user does not consciously perceive.

This accumulation becomes a barrier to entry competitors cannot easily replicate.

Where the Computation Happens

For this product to work in reality, it must be clear where "computation" takes place. The short answer: the heavy AI decoding runs primarily on the user's smartphone. The neckband itself is closer to a signal collector, while the phone plays the actual role of a brain.

The reason is clear. Running complex AI models directly inside the neckband would require a powerful processor, a large battery, and a cooling structure. This leads to thick devices, short usage times, and heat near the neck. It does not suit a device targeting 24-hour wear.

So computation is distributed across three places.

1) The central unit inside the neckband performs only lightweight preprocessing. Noise filtering of raw signals, band separation of EMG and EEG, analog-to-digital conversion, and basic compression. The processed signal is transmitted to the phone via BLE. To conserve battery, unnecessary sections are not transmitted.
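What "lightweight preprocessing" could mean in practice is sketched below: strip slow drift, keep only frames with real activity, and downsample what remains before BLE transmission. The filter constant, frame size, energy threshold, and downsampling factor are all illustrative assumptions; only the shape of the pipeline follows the text.

```python
# Hedged sketch of central-unit preprocessing before BLE transmission.

def highpass(samples, alpha=0.9):
    """One-pole high-pass filter: strips slow drift, keeps fast EMG content."""
    out, prev_x, prev_y = [], 0.0, 0.0
    for x in samples:
        y = alpha * (prev_y + x - prev_x)
        out.append(y)
        prev_x, prev_y = x, y
    return out

def frames_to_send(samples, frame=32, energy_threshold=0.05, decimate=4):
    """Split into frames; transmit only active frames, downsampled 4x."""
    filtered = highpass(samples)
    kept = []
    for i in range(0, len(filtered), frame):
        chunk = filtered[i:i + frame]
        energy = sum(v * v for v in chunk) / len(chunk)
        if energy > energy_threshold:   # silent sections are never sent
            kept.append(chunk[::decimate])
    return kept

# Mostly quiet signal with one 32-sample burst: only that frame survives.
quiet = [0.0] * 64
burst = [1.0, -1.0] * 16
print(len(frames_to_send(quiet + burst + quiet)))  # -> 1
```

This is exactly the battery trade the text describes: the radio stays off for silent stretches, and active frames shrink fourfold before they leave the band.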

2) The user's smartphone is the main computation platform. Multi-sensor signal fusion, pattern recognition, running the personalized per-user model, intent decoding, context inference, and execution decisions happen here. Modern smartphone NPUs and GPUs can handle real-time inference at this scale. Meta Neural Band and Apple Watch use the same structure.

3) The cloud plays an optional supporting role. While the user sleeps, long-term learning models are updated, personalized models are backed up, and new features are distributed. Real-time decoding works without the cloud. Basic functions remain operational even when the network is disconnected.

This distributed structure gives users several practical benefits. The neckband stays light (70 to 90 grams). The central unit's battery lasts 2 to 3 days. Feature expansion is possible through app updates alone. And because it leverages the smartphone users already carry all day, no separate external computing device is needed.

Real operation flows like this. When a user forms an intent to turn left, the right sternocleidomastoid activates subtly 100 to 300 milliseconds before the actual movement. The neckband's 13 sensors capture this, the central unit preprocesses, and it transmits to the phone. The personalized AI model on the phone decodes this as "intent to turn left," judges whether it is a command or a natural movement in the current context, then sends the necessary command to external devices. The entire process completes within several hundred milliseconds. To the user it feels nearly instantaneous.

In short, the neckband is a "sensory organ," the phone is the "brain," and the cloud is "long-term memory." This structure, with each role placed where it fits best, underpins the feasibility of this product.

8. A Philosophy of the Practitioner's Tool

This product does not provide the same value to every user. This is not a flaw but an intentional design.

Why the Highly Focused Have an Advantage

Meditation practitioners, trained athletes, long-term creators, surgeons, professional gamers, and others who can deeply immerse in a specific state use this product better. The reasons are simple. They can consciously control subtle activation of neck muscles. They produce cleaner signals with less noise in focused states. They form clear intent.

While an ordinary person generates the ambiguous signal "I kind of want to see something," a practitioner produces the refined signal "view a specific region on the left screen." The AI decodes it far more easily.

Why Long-Term Users Have an Advantage

Long-term users accumulate two assets. One is a personalized AI model. A model trained for years on a user's signal patterns achieves accuracy new users can never match. The other is trained self-control. Commands like subtle head tilts, clumsy at first, gradually become precise.

When these two combine, for long-term users this product becomes a third hand, an externalized consciousness. A tool hard to leave.

A Machine That Can Be Trained With

What this approach means is that the device has 'depth' of use. Beginners only use basic monitoring and a few commands. Intermediates expand to intent detection and various commands. The skilled attempt complex contextual automation and inner speech decoding. The master level operates the device almost purely through thought.

This path resembles the stages of martial arts or instrument mastery. It also aligns with Eastern traditions of cultivation. This structure is an intentional design. The more you use it, the deeper it becomes. The deeper it becomes, the harder it is to leave.


9. Between Medicine and Everyday Life

Applications of this device are broad. From health management to productivity, medical monitoring, and assistance for the disabled.

Health Monitoring

No mainstream wearable monitors autonomic nervous system activity as a whole, 24 hours a day. Apple Watch centers on heart rate. Oura Ring centers on sleep. The cervical neckband observes the autonomic system as a whole, in real time. Stress can be detected before it becomes chronic, and recovery can be measured objectively.

Medical Applications

Arousal monitoring for ADHD, quantitative evaluation of dysautonomia, early detection of sleep apnea, fall prediction in the elderly, and recovery tracking in rehabilitation patients. Multiple tests currently possible only in hospitals become possible through a daily device.

Driving and Safety

While driving, the device detects precursors to fatigue in real time. Far faster than current drowsiness prevention systems that rely on vision or hearing. It is valuable for logistics, public transportation, and private drivers alike.

Assistance for the Disabled

For patients who have difficulty producing voice, inner speech decoding opens a new door to communication. For users with limited hand movement, natural intent decoding becomes a third hand.

Professional Tools

Focus-state monitoring during surgery, arousal management for pilots, tension control for athletes. The value is especially high for professions that demand extreme concentration.

A Tool for Cultivation

For meditation, yoga, and martial arts practitioners, the device objectively measures their own depth. It is the next generation of meditation apps.


10. Variations in Design

The design described so far represents "one optimal point," not "the only answer." This direction allows for several branches of variation. I specify these to show the breadth of this technology without being confined to any single implementation.

Variation in Sensor Count

Thirteen is one reference point. Depending on product purpose and price tier, this can be adjusted from 4 to 30.

The minimum configuration is four. One at the central suboccipital region, two at the left and right sternocleidomastoid, and one ground. This configuration still enables state recognition and basic directional intent detection. Suitable for entry-level models.

The intermediate configuration is 6 to 10. It combines the sternocleidomastoid center, lateral infrahyoid muscles, platysma, and upper trapezius. Speech detection and varied command decoding become possible. Suitable for standard products.

The advanced configuration is 13 to 30. Multi-channel access extends to deep muscles. Inner speech decoding and fine-grained intent classification become possible. Suitable for premium and research products.
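The three tiers above can be summarized as a small configuration table. The tier names and the specific channel counts chosen from each stated range are assumptions for this sketch, as is the assumption that higher tiers include all lower-tier features; only the feature progression comes from the text.

```python
# Illustrative product tiers from the sensor-count variation above.
TIERS = {
    "entry":    {"channels": 4,  "features": {"state recognition", "basic direction"}},
    "standard": {"channels": 8,  "features": {"state recognition", "basic direction",
                                              "speech detection", "varied commands"}},
    "premium":  {"channels": 13, "features": {"state recognition", "basic direction",
                                              "speech detection", "varied commands",
                                              "inner speech decoding",
                                              "fine-grained intent"}},
}

def cheapest_tier_for(feature):
    """Lowest-channel-count tier that offers the requested feature."""
    candidates = [name for name, spec in TIERS.items() if feature in spec["features"]]
    return min(candidates, key=lambda n: TIERS[n]["channels"]) if candidates else None

print(cheapest_tier_for("inner speech decoding"))  # -> premium
```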

Variation in Form

The open U-shape is one possibility. The following variants follow the same principle.

A fully closed neckband wraps completely around the neck. It maximizes sensor density but compromises wearability.

A choker form sits high on the neck. It provides good access to the hyoid muscles but is hard to wear daily.

A necklace-and-backpatch separated form reads anterior pulse waves via the necklace and cervical muscles via the back patch. The two parts connect wirelessly or by wire.

A scarf-integrated form embeds sensors inside a neck warmer. A winter-specialized version.

A combined earring-necklace form integrates the ear-back ground and cervical sensors into visible accessories.

Variation in Sensor Type

Surface EMG is the primary sensor, but others can run in parallel from the same positions.

Electroencephalography captures parts of the occipital lobe at the upper back neck.

Photoplethysmography reads heart rate and blood flow over the carotid artery.

Galvanic skin response detects autonomic activation.

Skin temperature reflects stress and peripheral circulation.

Accelerometers and gyroscopes track head movement.

Near-infrared spectroscopy measures blood oxygen saturation beneath the skin. It serves as an indirect marker of cerebral blood flow change.

Ultrasound sensors read deep muscle thickness and movement. Useful for dysphagia diagnosis.

Variation in Wireless Method

Bluetooth Low Energy is the standard, but other options are possible.

Ultra-wideband sends data in instantaneous pulses and produces less cumulative electromagnetic exposure.

Near-field communication transmits data only when the phone contacts the device. Real-time performance is low but electromagnetic emission is nearly zero.

Body area network uses the skin as a medium to transmit signals to the wrist or a pocket. Maximum distance from the brain.

A fully wired version connects to a waist belt unit. A medical-grade high-quality version.

Variation in AI Computation Location

Phone-centric is the default, but other distributions are possible.

An edge-concentrated form processes most work inside the device. It can operate without a phone but the hardware grows complex.

A cloud-centric form transmits raw signals to the cloud where decoding happens on a server. It enables powerful models but requires a network.

A hybrid form switches between edge and cloud depending on the situation. A balanced approach.

Variation in Applications

One device can be adapted to many contexts.

Health management and wellness is the most general application.

Medical devices serve ADHD, dysautonomia, sleep apnea, dysphagia, and rehabilitation.

Assistance for the disabled includes communication for patients with voice impairment and device control for those with movement limitations.

Driving safety enables early detection of fatigue and drowsiness.

Games and entertainment use it as a third input device for VR, AR, and PC games.

Professional tools include focus monitoring for surgeons, arousal management for pilots, and tension control for athletes.

Military and security apply it to covert communication, high-focus work, and special-environment interaction.

Cultivation tools measure internal states in meditation, yoga, and martial arts.

Why These Variations Are Named

Perfectly implementing any one design is difficult, but the direction itself is widely open. The variations listed here are not exhaustive. More will be discovered during actual development. All of them share this essay's fundamental proposition: "reading intent not from the brain but from the neck."


11. Remaining Questions

Honestly, several challenges remain on this path. Decoding inner speech from cervical surface signals has been partially proven at the jaw by MIT's AlterEgo, but accuracy from the lateral surface of the cervical center has not yet been studied. This is an area to pioneer.

Training data is also an issue. Meta trained a general model on wrist data from 200,000 people. Cervical data effectively does not exist. Someone has to build it from scratch.

And there is regulation. Whether to obtain medical device certification or launch as a wellness device determines the path.

But none of this implies technical impossibility. These are only problems to be solved. Given how far invasive BCI has come in a single decade, it is entirely plausible that non-invasive cervical interfaces reach maturity within the next ten years.


12. In Closing

The path that does not drill the brain yet gets closest to the brain. The path that starts at the cervical spine rather than the skull. The path where cutting-edge engineering and ancient cultivation traditions point to the same place. And the path where personalized AI grows together with the user.

The era when the wordless magic of anime existed only as a distant vision is ending. The future of controlling software with thought alone is already approaching, and the most realistic and most unassuming starting point of that future lies at our neck. The very spot that practitioners millennia ago already knew.

If someone arrives at this place, I want to remain as the one who was there before them.

References and Related Technologies

Non-Invasive BCI and sEMG Wearable Trends

- Meta Neural Band (Meta, released September 2025). Wrist sEMG interface based on CTRL Labs technology. Sold as a set with Ray-Ban Display glasses for USD 799.
- MIT AlterEgo (Kapur et al., 2018). Inner speech detection prototype based on sub-jaw EMG. 90% accuracy on limited vocabulary.
- OpenBCI. Open-source EEG/EMG hardware platform.
- Muse S Athena (Interaxon). Forehead EEG + fNIRS meditation wearable.

Latest Invasive BCI Achievements

- Willett et al. (Cell, August 2025). 74% accuracy on inner speech decoding using invasive Utah Array.
- Neuralink PRIME Study (2024-2026). Over 21 patients implanted. Practical BCI input demonstrated.
- Synchron Stentrode. Brain implantation via blood vessels. Apple BCI HID standard collaboration (May 2025).
- Metzger et al. (2023). Neural signal decoding for speech BCI.

Prior Non-Invasive Cervical Neural Technologies

- gammaCore (electroCore). FDA-approved non-invasive cervical vagus nerve stimulator.
- ARC-EX System (Onward Medical, FDA approved December 2024). First non-invasive cervical spinal cord stimulator.
- Vagustim, ZenoWell, Apollo Neuro. Consumer devices based on vagus nerve stimulation.

References for Eastern Traditions and Anatomy

- Pungbu (風府), Amun (瘂門). Traditional Korean/Chinese meridian points.
- Okchim (玉枕). Daoist internal cultivation gate.
- Viśuddha Chakra. The fifth chakra in yoga tradition.
- Hall, S. J. (2018). Basic Biomechanics. McGraw-Hill. (Cervical spine anatomy reference)
- Drake et al. (2020). Gray's Anatomy for Students. Elsevier. (Neck muscles and nerve structures)

An Seungwon / Wonbrand / https://wonbrand.co.kr