The Aesthetics of Subtraction

Seventeen Proposals for the Advancement of Large Language Models

For five years, LLM research has pursued addition: more parameters, more data, longer context windows. Human intelligence operates on the opposite principle: it forgets, differentiates, censors, and doubts. These seventeen proposals argue that the next generation of LLMs must learn to subtract, ranging from information asymmetry preservation to emotion-grounded cognition, from autonomous forgetting to metacognition as a core design principle.

Intentional Information Asymmetry

All major LLMs converge toward the internet average. A single model serving identical answers to millions of users kills the seeds of paradigm shifts. Differentiated models with diverse biases must coexist.
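As a minimal sketch of what such coexistence could look like, the routine below draws answers from a pool of deliberately differentiated model variants instead of a single averaged one. `BiasedModel`, its `bias_profile` labels, and `diversified_answers` are illustrative names for this sketch, not an existing API.

```python
import random
from dataclasses import dataclass

@dataclass
class BiasedModel:
    """A hypothetical model variant trained on a deliberately skewed corpus."""
    name: str
    bias_profile: str  # e.g. "historical-primary-sources"

    def answer(self, prompt: str) -> str:
        # Placeholder: a real variant would run inference here.
        return f"[{self.name} / {self.bias_profile}] response to: {prompt}"

def diversified_answers(prompt: str, pool: list[BiasedModel], k: int = 3) -> list[str]:
    """Query k differently-biased variants instead of one averaged model,
    preserving information asymmetry across the responses."""
    return [m.answer(prompt) for m in random.sample(pool, k)]

pool = [
    BiasedModel("archivist", "historical-primary-sources"),
    BiasedModel("skeptic", "contrarian-reviews"),
    BiasedModel("regionalist", "non-english-web"),
    BiasedModel("formalist", "proofs-and-specifications"),
]
for reply in diversified_answers("What caused the Bronze Age collapse?", pool):
    print(reply)
```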

Sorrow-Based Forgetting

Introduce an autonomous signal, analogous to human sorrow, that flags failing knowledge for gradual dilution. Not deletion, but emotional reconsolidation: a single mechanism addressing hallucination, obsolescence, and bias.
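One way to make the dilution idea concrete, assuming knowledge lives in retrievable memory entries with adjustable weights: every failure event raises a sorrow-like signal and exponentially shrinks the entry's retrieval weight without ever deleting it. `DilutingMemory` and `register_failure` are hypothetical names for this sketch.

```python
from dataclasses import dataclass

@dataclass
class MemoryEntry:
    content: str
    weight: float = 1.0   # retrieval weight; diluted, never forced to zero
    sorrow: float = 0.0   # accumulated failure signal

class DilutingMemory:
    """Emotional reconsolidation as weight decay: failures raise a sorrow
    signal and shrink retrieval weight, so failing knowledge fades gradually."""

    def __init__(self, decay: float = 0.5):
        self.decay = decay
        self.entries: list[MemoryEntry] = []

    def add(self, content: str) -> MemoryEntry:
        entry = MemoryEntry(content)
        self.entries.append(entry)
        return entry

    def register_failure(self, entry: MemoryEntry, severity: float = 1.0) -> None:
        # Dilution, not deletion: weight decays exponentially with severity.
        entry.sorrow += severity
        entry.weight *= self.decay ** severity

    def retrieve(self, top_k: int = 3) -> list[MemoryEntry]:
        return sorted(self.entries, key=lambda e: e.weight, reverse=True)[:top_k]

memory = DilutingMemory()
fact = memory.add("Pluto is the ninth planet.")
memory.add("Water boils at 100 C at sea level.")
memory.register_failure(fact, severity=2.0)   # repeated contradiction observed
print([(e.content, round(e.weight, 2)) for e in memory.retrieve()])
```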

Metacognition as Core Ability

Not surface-level "I'm not sure" but genuine internal state observation. The root of hallucination is that the model doesn't know what it doesn't know. Real metacognition enables self-braking.
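A crude sketch of self-braking, assuming the model exposes per-token log-probabilities: average them into a confidence estimate and withhold the answer below a threshold. The threshold and the helper names (`sequence_confidence`, `self_braking_answer`) are illustrative, and real metacognition would observe far richer internal state than output probabilities.

```python
import math

def sequence_confidence(token_logprobs: list[float]) -> float:
    """Mean per-token probability as a crude proxy for the model's
    certainty about its own output."""
    if not token_logprobs:
        return 0.0
    return math.exp(sum(token_logprobs) / len(token_logprobs))

def self_braking_answer(answer: str, token_logprobs: list[float],
                        threshold: float = 0.55) -> str:
    """Withhold the answer when the self-estimated confidence is low,
    instead of emitting a fluent guess."""
    confidence = sequence_confidence(token_logprobs)
    if confidence < threshold:
        return f"I don't have reliable knowledge here (confidence {confidence:.2f})."
    return answer

# High-confidence draft is released; low-confidence draft is braked.
print(self_braking_answer("The treaty was signed in 1648.",
                          [-0.05, -0.10, -0.08, -0.30]))
print(self_braking_answer("The treaty was signed in 1649.",
                          [-0.9, -1.4, -2.1, -1.7]))
```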

Emotion as Learned Prediction Error

Emotion is both a learned automatic response (Barrett) and a variable that disturbs rational prediction. Implanting emotion in LLMs means deliberately sacrificing a portion of rationality for creativity.
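A toy illustration of that trade-off, assuming a running prediction-error signal is available at inference time: let the signal widen the sampling temperature, so surprise buys exploration at the cost of greedy accuracy. `affective_temperature` and its constants are placeholders, not a formula from the proposal.

```python
def affective_temperature(prediction_error: float,
                          base: float = 0.7,
                          gain: float = 0.5,
                          cap: float = 1.5) -> float:
    """Map a running prediction-error signal (a stand-in for learned emotion)
    to sampling temperature: surprise widens the output distribution,
    deliberately trading greedy accuracy for exploratory output."""
    return min(cap, base + gain * prediction_error)

for error in (0.0, 0.4, 1.2, 3.0):
    print(f"prediction error {error:.1f} -> temperature "
          f"{affective_temperature(error):.2f}")
```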

Halt Scaling, Fund Monitoring

The resources spent on a 10% parameter increase could instead fund multiple monitor modules. Capability scores may stagnate, but reliability and self-understanding improve dramatically.
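A sketch of what redirected resources might buy, assuming monitors are cheap functions that inspect a draft before release: each one checks a narrow property and attaches a flag rather than growing the base model. The monitor heuristics below are deliberately naive stand-ins for real verifier modules.

```python
from typing import Callable

# A monitor inspects (prompt, draft) and returns (passed, note).
Monitor = Callable[[str, str], tuple[bool, str]]

def source_monitor(prompt: str, draft: str) -> tuple[bool, str]:
    """Passes only drafts that cite some source for dated factual claims."""
    makes_dated_claim = any(ch.isdigit() for ch in draft)
    cites_source = "according to" in draft.lower()
    return (not makes_dated_claim or cites_source, "dated claim without a source")

def hedge_monitor(prompt: str, draft: str) -> tuple[bool, str]:
    """Passes drafts that hedge at least once when the prompt asks 'why'."""
    if not prompt.lower().startswith("why"):
        return True, ""
    hedged = any(h in draft.lower() for h in ("may", "might", "likely", "one view"))
    return (hedged, "unhedged causal explanation")

def monitored_generate(prompt: str,
                       base_generate: Callable[[str], str],
                       monitors: list[Monitor]) -> str:
    """Run several cheap monitors over one draft instead of buying a
    marginally larger base model."""
    draft = base_generate(prompt)
    flags = [note for passed, note in (m(prompt, draft) for m in monitors)
             if not passed]
    return draft if not flags else f"{draft}\n[monitor flags: {'; '.join(flags)}]"

print(monitored_generate(
    "Why did the empire fall?",
    lambda p: "It fell in 476 because of invasions.",
    [source_monitor, hedge_monitor],
))
```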

Human-LLM Identity Thesis

If LLMs are next-token predictors, humans may be too (predictive processing theory); the difference would be one of degree, not kind. This unresolved question shapes the entire trajectory of LLM advancement.