What Does it All Even Mean?

On Language, Parties, and Machine Intelligence

People’s conversations around AI consistently expose one anxiety or another. Some say AI is coming to either kill us all or take all our jobs. Others claim that the AI industry is a bubble, and when it inevitably pops, the economy will go down with it (which is the same as either killing us all or taking all our jobs, but with more steps). These are reasonable fears, but I wonder if they are each predicated on something deeper: AI, in the form of Large Language Models, steps into the human space of language-use.

Language is a distinctly human behavior, and while most other animals communicate to some extent, we’re alone in our ability to do so on deeper, more abstract levels. Our words and sentences are not just peculiarly arranged sounds, but meaningful actions we use to bind ourselves to one another, building social relations out of the very air we breathe. When a bot shows up one day and reproduces language, it’s understandably alarming.

Further, LLMs like ChatGPT and Claude are really good at reproducing language, writing code, and even reasoning. One would assume that learning how these machines pull off this magic trick would put people at ease, but it’s the opposite. I’ll spare the reader the technical explanation, but it’s all just math and scaling laws, i.e. a lot of compute. Claude can write a SQL query or a rhyming couplet about the CONCAT function because it’s crunching numbers in a data center outside some suburb in Indiana. In other words, it’s mechanical–it’s producing language by predictive algorithm.

When people produce language, we’re not just vocalizing the output of neurological events occurring in the 2.9lbs of electrified pâté that is a human brain. We’re vocalizing the output of our interactions with each other, our relations to our direct environments, and even our place in the whole of human history. In the domain of human life, meaning isn’t a secondary property rooted in something more basic. It’s not something discoverable like the elements that form water or gold. It’s not an event we’d observe like a chemical reaction. Meaning is constructed–and we are the ones who construct it.

Language follows rules. Those rules are rooted not in axioms, but in practices. We learn to respond to words with other words, not by calculating the best possible response, but by observing cultural norms. We aren’t simply connecting words to referents; we’re weighing various implicatures as we communicate with one another.

When I’m at a party and comment “It’s getting late” or even, since I’m a Midwesterner, just slap my knee and say “welp”, my listeners can infer I’m preparing to make an exit. I didn’t have to say it directly, though, as my listeners can find my meaning in the cooperative structure of communication. As language users we share an understanding of words, contexts, and elements of social behavior. Meaning is constructed from these shared understandings, not because of a prediction, but as an organic practice of participating in language as an activity.

We don’t participate in language for no reason, however. We do so as a means of surviving and even prospering. As an organism trying to find its evolutionary niche in nature, our words are sharper than our claws. We’re able to create a collective experience, one that gives us the ability to build the shelter we somewhat dramatically call “civilization.” When we participate in language, the stakes are high. Our words can bring us closer to people, or they can separate us from the communities that not only build out our individual identities but are essential for our day-to-day survival.

Machines simply don’t do this. What the LLMs are doing is predicting tokens based on a training set of example tokens. They imitate language, and their ability to make life-like imitations is honestly stunning at times. But when an LLM produces language, it’s turning a crank, making coordinated moves, empty of anything more. When it gets something wrong, it’s dismissed as a hallucination (an interestingly misleading metaphor, but I’ll leave that aside). The LLM lacks the table stakes inherent in human language-use. When calculating the next word or phrase, there’s no risk of social death or exclusion for the LLM. Where humans use language to draw closer to one another, the LLM is playing a very complicated game of pachinko.
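To make the crank-turning concrete: here is a deliberately tiny sketch of “predicting the next token from a training set.” It’s a bigram counter, nothing remotely like a real transformer, and everything in it (the toy corpus, the `predict_next` function) is invented for illustration–but it shows the shape of the move: no stakes, no meaning, just counts.

```python
from collections import Counter, defaultdict

# Toy "training set": a handful of tokens. Real models train on
# trillions, but the principle here is the same kind of bookkeeping.
corpus = "the party is fun the party is loud the night is young".split()

# Count which token follows which token in the training text.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(token):
    """Return the most frequently observed next token, or None if unseen."""
    counts = followers.get(token)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))    # "party" ("party" followed "the" twice, "night" once)
print(predict_next("young"))  # None ("young" never precedes anything in training)
```

The model “speaks” only by replaying statistics of what it has already seen; nothing in the counts cares whether the party is actually fun.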

All this is to simply point out that humans, as language users, and LLMs, as models, occupy different domains. I exist in a world bounded by my language; Claude exists within a calculation of tokens. This is an important but loaded distinction. It’s important because it maps out the lines between us and LLMs. It’s loaded as it requires some discussion of a spooky thing called “consciousness.”

When we examine our brains on a physical level, we don’t find anything describable as consciousness, yet when we introspect, we can’t deny that we do, in fact, have conscious experiences. We can describe our experience in subjective terms, but not in objective terms–there’s a gap. The gap isn’t due to a lack of physical understanding–we know what brains are made of–it’s that our experience of ourselves can exceed what our language can express. Language, in my view, is inherently public–a shared space of activity–so when language turns inward, it lacks the grammar needed to capture subjective experience. This isn’t a problem outside of dense philosophical discussions on some guy’s blog, though: the expression of one’s own experience sits quite happily in the domain of art, for just one example.

LLMs, on the other hand, aren’t mysterious in the same way. We may not be able to trace each output to a specific event in the LLM’s processing, but we do understand the process; the opacity is largely due to the scale and nature of probabilistic token selection. There’s no gap between the experience of being an LLM and its language-use: there isn’t any experience for language to fail to capture.

One could object that the same argument would apply to human infants and those with severe impairments who cannot participate in language-use. My response is two-fold. First, language-use is a human ability, like flight for many species of bird, but one wouldn’t say a sparrow with a broken wing isn’t a bird. Second, being human arguably comes with normative commitments, namely, to see all humans as part of a moral community. (And while some may wish to debate this point, I do not.)

This may read like a deflationary view of AI, reducing it to software running billions (trillions?) of calculations to produce an illusion of language-use. However, I am very much in favor of using AI as the powerful tool it clearly is. If I want to build a vector database, I can do most of my research and a fair amount of coding using Claude Code. After all, it’s clearly better at Python than I am. But if I want to write a poem, I’m much better served learning the names of flowers and packing light for a walk in the woods. And once I’m moved, I write.

If the underlying anxiety around AI is that it supplants humans from activities that are exclusively our domain, I feel strongly that it cannot. Human life and meaning are created out of human practices and ways of living. LLMs use our production, our language, to help us build what we deem worthy of building. An LLM can even plan a party if prompted correctly, but we’re going to be the ones staying later than we should when the party gets fun.