From Torpedo to Torpor
The word “torpor” is said to trace back to the medical practices of Scribonius Largus, a Roman physician who used torpedo fish for pain relief in the first century AD. The numbing effect of the fish’s electric shocks allayed pain brought on by gout or headaches. From its origins in Roman electrotherapy to its modern meaning, “torpor” is an early example of how scientific practice can leave a lasting imprint on language and on how we describe mental states today.
The metaphors we use to describe the mind have consistently mirrored the technological understanding of each era and have taken on new meanings over time.
Consider the following three examples:
“Shell Shock” (1910s)
Coined by British psychologist Charles Myers to describe the physical and psychological trauma faced by soldiers under heavy artillery fire. Today, we might use it to describe the shock of hearing bad news.
“Meltdown” (1950s)
Used by nuclear engineers to describe the catastrophic melting of a reactor’s core (or, in slang, ice cream melting too fast). Today, we might associate it with an uncontrolled release of emotion.
“Get wires crossed” (1970s)
Became popular as household electronics and telephone switchboards grew more complex. Today, we might use it to express a misunderstanding or misinterpretation.
The list is lengthy, but one insight is clear: the human lexicon preserves a record of scientific discovery. Over time, the way we understand the world reshapes how we describe ourselves and how we interact with our environment.
Language at the Frontier
I can’t help but wonder how the new era of generative AI and rapidly evolving frontier models will do the same (vibe coding, where casual, feeling-driven language flows from everyday speech into technical practice, is an interesting counterexample running in the opposite direction). If AI reshapes how we understand cognition, and if history is any indicator, then it will surely modify our thinking through language. Perhaps a term like “token” will nestle its way into everyday speech.
In the context of frontier models, a token is simply a fundamental unit of text (a character, part of a word, or a whole word). At this atomic level of language, a model consumes tokens and organizes them in a way that enables the prediction of future tokens. In this sense, an LLM is not an oracle but a next-token prediction machine: questions are given ostensible answers, not necessarily the truth.
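To make the idea concrete, here is a minimal sketch of next-token prediction: a toy word-level bigram model, nothing like the learned neural machinery of an actual frontier model, that simply chooses the statistically most plausible continuation rather than a “true” answer. The corpus and names are illustrative.

```python
from collections import Counter, defaultdict

# Toy illustration only: real models use subword tokens and learned weights,
# but the core loop is the same — given the tokens so far, emit a plausible
# next token, one step at a time.

corpus = (
    "the brain is a prediction machine and the model is a prediction machine"
).split()

# Count how often each token follows each other token.
follows: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most plausible continuation, not a verified answer."""
    candidates = follows.get(token)
    if not candidates:
        return "<end>"
    return candidates.most_common(1)[0][0]

# Generate a short continuation token by token.
token = "the"
generated = [token]
for _ in range(6):
    token = predict_next(token)
    generated.append(token)

print(" ".join(generated))  # "the brain is a prediction machine and"
```

The output is fluent because the statistics favor it, not because the model knows anything about brains, which is exactly the oracle-versus-predictor distinction above.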
Our contemporary understanding of the brain is quite similar. Our brains “trick” us with heuristics and biases, we have limited short-term memory, and the cost of taking in information is generally lower than the cost of producing it. In this sense, both brains and models operate less as truth-seekers than as prediction engines, systems optimized for plausible continuation rather than objective accuracy (even the use of “engine” to invoke machinery is an example of this lexical phenomenon).
Idiomatic Reengineering
It’s enjoyable to muse, but I believe this theme carries deeper meaning. We continually reinterpret the mind through the dominant innovations of each era. As new vocabulary enters everyday speech, it names new mechanisms and quietly structures what we, often unconsciously, believe to be possible. Over time, these metaphors settle into implicit assumptions about thought itself, until new technologies gradually displace them and redraw those assumptions once again. In this way, language subtly participates in defining the limits of our understanding.