What Happens When a Woman Wears High Heels

Affinity Weighting Hypothesis — Between Recognition and Affinity, Where Attention Turns

Seungwon An · Founder, Wonbrand · May 5, 2026


1. The Sound That Wasn't There

I fell asleep watching Instagram Reels one night. The same clip kept looping. I'd slept deeply, and a fair amount of time had passed by the time I came to. In the moments right after I woke, I couldn't hear the video. The sound only came through once I looked down at my phone. Normally I hear my surroundings the instant I wake. This had never happened before.

The audio had never stopped — not from before I drifted off until well after I came to. Physically, it had been entering my ears the whole time. None of it had registered in awareness. A single glance at the phone was what brought the sound up to consciousness.

What gets registered is what consciousness turns toward. However powerful a signal may be, if the receiver's attention does not turn to meet it, the signal stays outside awareness — indefinitely.

This essay starts there and works through what actually happens when one person begins to feel drawn to another. High heels offer the clearest illustration. The moment their click registers, the entire mechanism of affinity plays out in compressed form.


2. What Blocks the Stimulus and What Releases It

Three mechanisms were operating in concert to keep the audio out of my awareness in those moments after I woke.

First, stimulus-specific adaptation. As the same clip looped, my auditory neurons adapted to its sound. Neurons in the thalamus and auditory cortex dampen their response to repeating stimuli and respond strongly only to novel ones. When the same pattern keeps arriving, it disappears — neurologically.

Second, thalamic gating. The thalamic reticular nucleus inhibits sensory signals ascending from the thalamus to the cortex. The deeper the sleep, the stronger the suppression — and stimuli already undergoing adaptation are suppressed even more aggressively. The volume of external input reaching the cortex during sleep falls to roughly half its waking level.

Third, sleep inertia. Sudden awakening from slow-wave sleep impairs sensory processing for a period after waking. Consciousness has returned, but residual gating lingers. Stimuli that were already adapted to before sleep stay blocked even longer.

As these three effects compounded, the video audio in those moments after I woke was, neurologically speaking, a sound that did not exist.

The act of looking at the phone was decisive because it was the moment top-down weight got assigned to the sound source. Consciousness has two input pathways. Bottom-up draws stimuli into awareness automatically through their physical salience — sudden loud sounds, changes, novelty enter through this route. Top-down raises the processing priority of specific stimuli through goals, learned meaning, and emotional weighting. Hearing your own name across a noisy café is this pathway at work.

The video audio was blocked along both routes. Bottom-up, salience had been erased by adaptation. Top-down, consciousness wasn't assigning weight to that source. The instant the phone came into view, vision triggered a semantic binding — sound is coming from over there — and that binding immediately raised the processing priority of the auditory signal. Thalamic gating released; the signal in auditory cortex amplified.

Had I not looked at the phone, the sound would never have been heard. What registered the stimulus was the moment consciousness turned toward it.
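The two-pathway account above can be caricatured as a toy gate. The sketch below is purely illustrative: the threshold, the gain values, and the function names are all invented, and nothing in it models real thalamic physiology.

```python
def reaches_awareness(salience, top_down_gain, gate_threshold=1.0):
    """A stimulus crosses into awareness when its bottom-up salience,
    amplified by top-down gain, exceeds the gate's threshold."""
    return salience * top_down_gain > gate_threshold

# The looping video: adaptation has driven salience near zero, and no
# top-down weight is assigned to the source.
adapted_salience = 0.2
print(reaches_awareness(adapted_salience, top_down_gain=1.0))  # False: blocked

# Looking at the phone: semantic binding raises the top-down gain on that
# source, and the same weak signal now crosses the threshold.
print(reaches_awareness(adapted_salience, top_down_gain=6.0))  # True: heard
```

Note that in this caricature the stimulus itself never changes; only the receiver's gain does, which is the point of the section.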


3. A First Encounter Is the Same Scene

A stranger can be sitting right next to you, and if your consciousness hasn't turned toward them, that person isn't registered. You can share the edge of a café table with someone for an hour without noticing them, only to suddenly perceive them when they bump a cup against the table, when a particular phrase reaches your ear, or when your eyes meet. The stimulus had been arriving the whole time. It carried no meaning because consciousness hadn't turned toward it.

For affinity to form in a first encounter, two things have to happen at once. The other person's stimulus has to pass through your bottom-up gating and reach the threshold of awareness. At the same time, your top-down has to assign weight to that stimulus. If only one of the two clears, no affinity forms.


4. What High Heels Show Most Clearly

The click of heels evades stimulus-specific adaptation against the background of urban sound. A short, sharp attack, a patterned variation. Sneaker footsteps get registered as background almost immediately, but heels pass through the gate. Visually, the altered gait also evades the adaptation that absorbs ordinary walking patterns. This is the bottom-up half.

But the click alone doesn't produce affinity. To a child hearing it for the first time, it's just noise. In an adult brain, something else happens. The instant the sound is heard, categorization fires automatically: woman, formality, city. The learned images bound to that category light up at the same moment — elegant scenes from films, magazine spreads, characters from dramas.

This activation is the unconscious top-down weight. The listener didn't consciously decide it. The instant the stimulus arrived, learned associations fired automatically. So before that person is met, before they're even seen, the weight is already in place. Pre-affinity arises.

This is the core structure of a first encounter. Learned associations lay down weight in advance, and the moment the stimulus arrives, that weight applies automatically. A significant portion of affinity is determined before the encounter has even begun.

The power of the heel comes from striking both systems at once. It punches through gating bottom-up while assigning top-down weight in advance through learned associations.


5. Recognition and Affinity Are Different Neurological Events — The Affinity Weighting Hypothesis

Here the central distinction of the essay arises.

Recognition is the act of passing through the bottom-up gate. It's the event in which a stimulus reaches the threshold of awareness and its presence becomes registered. Variation, novelty, contrast, supernormal stimuli — these break through this gate.

Affinity is what happens when top-down assigns weight to that object. It's the moment consciousness begins, deliberately, to turn toward that object. Learned meaning, emotional binding, semantic priming — these generate this weight.

The two are sequential. Without recognition, there is no affinity. If the stimulus doesn't even reach the threshold of awareness, there's no object to assign weight to. At the same time, recognition without semantic binding doesn't convert into affinity. The stimulus passes the gate but soon disappears through adaptation.

Here we can name the central hypothesis of the essay.

Affinity isn't produced by the stimulus itself. It arises from the weight that learned associations, activated by the stimulus, assign to consciousness. I call this the 'Affinity Weighting Hypothesis.' Before the encounter, learned associations lay down weight in advance. The moment the encounter begins, triggers activate that weight. As encounters accumulate, the weight strengthens. At no stage is affinity produced by the stimulus itself — it's produced by changes in the weighting of the meaning network the stimulus awakens.
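Stated as a toy computation, the hypothesis looks like this. The weights and category labels below are invented for illustration; the sketch only encodes the sequential structure: recognition is a bottom-up gate, and affinity is the weight that learned associations apply once the gate is passed.

```python
LEARNED_WEIGHTS = {          # weight laid down in advance by learned associations
    "heel-click": 0.8,       # a rich, pre-bound meaning network
    "sneaker-step": 0.1,     # almost no bound associations
}

def recognized(salience, threshold=0.5):
    """Bottom-up gate: does the stimulus reach awareness at all?"""
    return salience > threshold

def affinity(stimulus, salience):
    """Affinity comes not from the stimulus but from the weight its
    learned associations apply, and only after recognition."""
    if not recognized(salience):
        return 0.0           # no object to assign weight to
    return LEARNED_WEIGHTS.get(stimulus, 0.0)

print(affinity("heel-click", salience=0.9))    # 0.8: both stages pass
print(affinity("heel-click", salience=0.2))    # 0.0: never recognized
print(affinity("sneaker-step", salience=0.9))  # 0.1: recognized, little weight
```

The third case is the one the essay keeps returning to: a stimulus can clear recognition and still produce almost nothing, because the weight lives in the meaning network, not in the stimulus.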

6. Primary Categories Are Fixed; Design Happens Within Them

Primary stimuli — height, frame, the bone structure of the face — can't be altered. The first-tier category they generate is fixed too. When you first see someone, this primary classification finishes within a second. It isn't something the will can change.

But classification doesn't end with the primary stimulus alone. Final classification is determined by the sum of details. The same primary stimulus, depending on which details accompany it, can land in entirely different sub-categories. The thalamus and the temporal lobe begin classifying the moment the primary stimulus arrives, but as additional stimuli enter, the classification narrows and grows more precise. Classification stabilizes in roughly one second. Within that brief window, additional stimuli can shape the sub-category.

A person with a given primary stimulus, walking in with neat clothing, a settled posture, a steady gaze, deep breathing, and an unhurried speech tempo, lands in the 'solid and trustworthy' sub-category. The same primary stimulus, accompanied by ill-fitting clothes, shrunken posture, an unsteady gaze, and rapid speech, lands in 'withdrawn.' The primary stimulus is identical, but the weight learned associations lay down is the opposite.

The principles of design that lead to favorable sub-classification compress into five.

First, accept the classification your primary stimulus generates and aim for the strongest sub-category within it. Designs that try to escape one's primary category almost always read as misalignment. Classification hesitates, and weight weakens.

Second, converge details in a single direction. Clothing, posture, speech, tempo, expression, and gaze should all point to the same sub-category. If details diverge, the other person's brain hesitates over classification, and that hesitation itself reads as a negative signal.

Third, posture and breathing are decisive. Posture sends the strongest categorical signal within the first second. Someone with squared shoulders, a stable gaze, and deep breathing lands in a favorable sub-category regardless of the primary stimulus. The external signals of confidence shift categorical classification more powerfully than anything else.

Fourth, the neatness of small details. Hair, nails, shoes, the state of one's clothing — these small stimuli determine categorical weight. The big stimuli can't be changed, but every small one can.

Fifth, place a small note of unexpectedness in the details. A single small detail that drifts slightly from your primary category prevents classification from collapsing into the simple, and deepens it instead. An ordinary face with a deep gaze, a serious bearing with a flicker of humor — that kind of small surprise escapes simple classification and produces stronger weight. The neurological substrate of attractiveness lies close to this.


7. The Moment the Encounter Begins — Top-down Triggers

Even with weight already in place, no affinity is registered if the other person's consciousness hasn't turned toward you. It's the same as the video sound that would never have been heard had I not looked at the phone. The encounter requires triggers that pull the other's consciousness toward you.

The strongest trigger is the precise use of a name. A name is a top-down trigger learned over a lifetime, so consciousness turns toward it automatically. It isn't the calling itself but calling at the right moment with the right intensity that decides the matter.

Next comes speech that catches the other's specific context. Generic remarks may enter through the bottom-up channel, but they can't draw weight. Catching the other's concrete context — the matter occupying their thoughts, a small detail — produces immediate semantic binding, and their consciousness fixes upon you. It's the same mechanism by which looking at the phone activated the auditory signal through vision. Catching the context triggers top-down through semantic binding.

Eye contact is top-down synchronization in the visual channel. It's the event in which two people confirm, at the same instant, that each one's consciousness is turning toward the other. Too short, and the signal is weak; too long, and the category shifts toward the uncomfortable. Eye contact of the right duration synchronizes mutual consciousness.


8. After the Encounter — How to Accumulate Weight

Once affinity has been registered in a first encounter, that weight applies automatically in subsequent encounters. It's the same as the video sound continuing to be heard after the phone was looked at. Once gating has released, the stimulus stays above awareness. But this automatic maintenance has limits. If adaptation isn't avoided, it disappears again.

There are four mechanisms for actively reinforcing weight.

First, deepen the semantic binding. In a first encounter, only surface categories activate — clothing, appearance, speech. As encounters proceed, deeper categories have to activate: character, values, capability, depth. The deeper the category, the more strongly its learned associations produce weight.

Second, emotional binding. Encounters in which emotion arises together register more strongly, neurologically. Memories formed when the amygdala and the hippocampus activate together remain far stronger than ordinary ones. Pleasure, safety, deep conversation, moments of shared laughter, moments of shared seriousness — these are the accelerators of weight. One emotionally bound encounter produces greater weight than ten merely pleasant ones.

Third, the balance between adaptation avoidance and consistency. If you check in with the same pattern every time and praise in the same way, stimulus-specific adaptation sets in and neural response weakens. What erased the looped video audio during sleep is the same thing that happens in relationships. The effort isn't dismissed — it disappears, neurologically. The solution is to vary the details within categorical consistency. The same meaning has to be delivered through different details each time, so adaptation is avoided while weight strengthens.

Fourth, make the next encounter anticipated. Predictive coding is at work, neurologically. The brain predicts the next stimulus and responds most strongly when prediction is mostly right but slightly off. When the next encounter is broadly predictable but contains something subtly new, consciousness begins, voluntarily, to turn toward that person. The accelerator of accumulated weight lies here.
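The third and fourth mechanisms can be caricatured together in a short sketch. The decay rate and the shape of the curve are invented; the sketch only encodes the qualitative claims: repeating the identical stimulus decays the response, and engagement peaks at a small, nonzero prediction error.

```python
import math

def repeated_response(n_repeats, decay=0.5):
    """Stimulus-specific adaptation: the response to an identical stimulus
    halves with each repetition. A varied detail within the same category
    would reset it to 1.0."""
    return decay ** n_repeats

def engagement(prediction_error, width=0.3):
    """Predictive coding as an inverted-U: zero response when the stimulus
    is fully predicted, a peak when prediction is mostly right but slightly
    off (error == width), decay again for large surprises."""
    return prediction_error * math.exp(-prediction_error / width)

# The same praise four times: the effort "disappears", neurologically.
print([repeated_response(n) for n in range(4)])   # [1.0, 0.5, 0.25, 0.125]

# Engagement is highest at a small, nonzero prediction error.
print(engagement(0.3) > engagement(0.0))          # True
print(engagement(0.3) > engagement(1.5))          # True
```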


9. Brand and First Impression Run on the Same Neural Circuit

Advertising is the stimulus that punches through the bottom-up gate; the brand itself is the category in which top-down weight gets learned. Strong stimulus alone ends in ad fatigue. A good category alone ends in a brand no one knows. Good brands are engineered the same way a strong first impression is — to pass both gates at once. The fact that meeting a person and building a brand operate on the same neural circuit is the applied territory of the affinity weighting hypothesis.


10. Where Affinity Lives

Affinity isn't produced by the stimulus. It's produced in the learned meaning network the stimulus awakens. Different people form different first impressions of the same person because their learned associations differ. The same person leaves different impressions on the same observer at different times because the learned associations have shifted in between.

What happens when a woman wears high heels compresses into a single thing. The click of the heels activates a learned category in the listener's brain. The meaning network bound to that category lays down weight in advance. A significant portion of affinity is already decided before sight enters. The instant vision arrives, that weight applies to that person. The stimulus didn't produce affinity. The meaning the stimulus awakened produced affinity.

Just as the act of looking at the phone awakened the video sound, what stimulus awakens what meaning in a first encounter is what decides affinity. Drawing affinity isn't the work of making a stimulus stronger. It's the work of designing the meaning network the stimulus awakens. You bind details so as to be classified into a chosen category. You send signals so that the learned associations bound to that category lay down good weight. The moment the encounter begins, you fire the trigger that pulls the other's consciousness toward you. As encounters accumulate, you avoid adaptation while reinforcing weight.

Affinity is the event of the receiver's consciousness turning toward you. The design of all the conditions that produce that event is the precise definition of drawing affinity.



Seungwon An / Wonbrand / https://wonbrand.co.kr