The opening words of David Chapman’s Better Without AI:
This book is a call to action. You can participate. This is for you.
Artificial intelligence might end the world. More likely, it will crush our ability to make sense of the world—and so will crush our ability to act in it.
AI will make critical decisions that we cannot understand. Governments will take radical actions that make no sense to their own leaders. Corporations, guided by artificial intelligence, will find their own strategies incomprehensible. University curricula will turn bizarre and irrelevant. Formerly-respected information sources will publish mysteriously persuasive nonsense. We will feel our loss of understanding as pervasive helplessness and meaninglessness. We may take up pitchforks and revolt against the machines—and in so doing, we may destroy the systems we depend on for survival.
Could this happen?
This essay is a commentary on Better Without AI, a free book available on the web. It’s not a review. I’m not interested in evaluating this text as a written artifact. If this essay pulls in arguments that weren’t in the book — well, it’s not like I’m grading his homework or anything. I just want to talk about whether this could happen.
Disclosure: I’m hardly an unbiased critic when it comes to David Chapman. I find In The Cells of the Eggplant, his in-progress book on meta-rationality, extremely lucid and meaningful. It’s played no small part in developing my intuition on what intelligence actually is. On the other hand, that intuition has led me to be a frequent critic of AI risk. I’m biased in opposite directions, which puts me in an interesting position to evaluate this claim.
No need to keep you in suspense: the reason for this divergence is that the risks that worry Chapman aren’t the ones people typically bring up. Better Without AI warns about “medium-sized apocalypses”:
This book considers scenarios that are less bad than human extinction, but which could get worse than run-of-the-mill disasters that kill only a few million people.
Previous discussions have mainly neglected such scenarios. Two fields have focused on comparatively smaller risks, and extreme ones, respectively. AI ethics concerns uses of current AI technology by states and powerful corporations to categorize individuals unfairly, particularly when that reproduces preexisting patterns of oppressive demographic discrimination. AI safety treats extreme scenarios involving hypothetical future technologies which could cause human extinction.

It is easy to dismiss AI ethics concerns as insignificant, and AI safety concerns as improbable. I think both dismissals would be mistaken. We should take seriously both ends of the spectrum.
However, I intend to draw attention to a broad middle ground of dangers: more consequential than those considered by AI ethics, and more likely than those considered by AI safety. Current AI is already creating serious, often overlooked harms, and is potentially apocalyptic even without further technological development.
AI ethics is something worth taking seriously — but weirdly enough, it has a lot less to do with AI than you’d think. Recall Desystemize #2, which showed US cancer diagnoses spiking at exactly age 65, not because of biology but because that’s the age Medicare makes diagnosis more accessible. We discussed the problems that arise from using this data:
Let’s say that some well-meaning hospital executive reads the same study we did and thinks — wow, okay, we need to fix these diagnosis inequities. Let’s use machine learning to predict which patients are most likely to test positive for cancer and proactively reach out and get them tested. We’ll ignore the insurance gap entirely and look purely at the data! Well — we know what looking purely at the data gets us, don’t we? We just finished figuring out that it shows a massive spike in cancer diagnoses at age 65, and sniffing out massive spikes is what machine learning does best. As far as a predictive model trained on data from the United States is concerned, there really is an exactly-age-65 specific time bomb in your body that causes a spike in cancer diagnoses.
How do you control for this bias in your model? Well...you don’t, really. You could artificially weight the scores to some target AoA, but what’s the “right” target for AoA, anyway? The fundamental problem is that you want your model to guess who has undetected cancer, but the only data you can feed it with are patients with detected cancer. So any correspondence break between cancer in the general population and the patients that actually get diagnosed can’t help but feed that bias into that model, compounding the tragedy of the original problem. The impact of insurance inequity is a group of 64 year olds who have undiagnosed cancer because they’re waiting for Medicare. The impact of doing statistical analysis on data generated by inequity and then using it to drive decisions is another group of 64 year olds who have undiagnosed cancer because they’re 64 year olds.
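To see the mechanism in miniature, here is a deliberately crude simulation of my own. Every number in it is invented (the smooth risk curve, the detection probabilities, the age cutoffs); the point is only that when diagnosis is the only label available, the insurance artifact gets learned as if it were biology.

```python
# Toy sketch (all numbers invented): a model trained on *diagnoses* inherits
# the insurance artifact, because diagnosis is the only label we have.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
age = rng.integers(50, 80, size=n)

# Pretend true cancer risk rises smoothly with age (no step at 65).
true_risk = 0.02 + 0.002 * (age - 50)
has_cancer = rng.random(n) < true_risk

# Pretend detection depends on access, which jumps when Medicare kicks in.
detection_prob = np.where(age >= 65, 0.80, 0.35)
diagnosed = has_cancer & (rng.random(n) < detection_prob)

for lo, hi in [(60, 64), (65, 69)]:
    band = (age >= lo) & (age <= hi)
    print(f"ages {lo}-{hi}: true rate {has_cancer[band].mean():.3f}, "
          f"observed rate {diagnosed[band].mean():.3f}")

# True rates barely differ across the 64/65 boundary; observed rates jump
# by roughly 3x. Any model fit to `diagnosed` learns that jump as if it
# were biology, and "proactive outreach" driven by it skips the 64-year-olds.
```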
This is a life or death AI ethics issue. But the actual problem is the conflation of cancer diagnoses with true incidence of cancer in a population. A non-AI, artisanal, hand-crafted algorithm on US healthcare data will face exactly the same selection biases. Work to make US health insurance more equitable would make all inference on the data (AI or otherwise) more accurate. The vital importance of getting this right is the essential thesis of Desystemize. But the ethical issue is not an apocalypse scenario that’s capable of suddenly snowballing out of control; it’s that AI will help preserve the same old power imbalances we already have. Often the actual capabilities of the AI system aren’t even relevant to the severity of the ethical issue, aside from people being more willing to trust AI that’s superficially more “powerful”. AI ethics issues are specific, contextual battles like ethics issues always are, not explosive tail risks.
Conversely, “AI safety” is all about explosive tail risks. The fear is that someone makes an AI to build a better AI for making better AIs, this iterates repeatedly over a short period of time, bam, godlike intelligence. This isn’t something I take seriously. Training data is not a training environment; without the ability to interactively touch the world and develop new ways of seeing, a retrospective look at past data has a lot of unavoidable fragility.[1] The importance of interaction is a very Chapman-esque point, which is why I was surprised to hear he was working on an AI safety book. In The Cells of the Eggplant is a major component of why I don’t trust static rationality; are you really scared of this?
Honestly, I’m not convinced he is. Better Without AI’s official position on superintelligence is something like “Hey, it could be possible, so it’s a good reason to stop AI! It’s less likely than other alternatives and by definition we can’t reason about the unthinkable, so I’m mostly going to focus on other potential risks relating to AI”. But once he’s in the “radical progress without Scary AI” section, pages like “What kind of AI might accelerate technological progress” and “limits of experimental induction”, meant to describe a vision of better scientific progress without AI, also just so happen to shed light on the exact flaws of the recursive superintelligence argument. I don’t begrudge Chapman his obliqueness here. If you’re making an anti-AI book for everyone, it’d be pretty stupid to take a pot shot at AI safety before going into your own arguments. But this is my commentary, and I can be direct: I’m not worried about recursive superintelligence, and this isn’t a book that acts worried about it either.
What’s the broad middle ground we should be paying attention to instead? Chapman tries to decouple discussions of agency or human-like minds from evaluating the risk of AI systems. AI started as a branch of cybernetics, a field spawned during war to create automatic weapon targeting systems and bomb computers. As those tools became more and more refined, exact control over how to act with lethal force was gradually taken away from the humans present in the moment and pushed back to the architects of algorithms, and eventually into the Byzantine folds of a flowchart no one can quite map the full extent of. Some people think that a flowchart that’s Byzantine enough will “wake up” and suddenly make agential decisions in a mind-like way, some people (me!) think that’s a mistaken belief, and Chapman says: the flowchart is in control of lethal force, so who cares whether it ever wakes up? Fear power, not intelligence.
Serendipity showed me this tweet while I was writing this draft:
Abundance dies twice - in the field, and in memory. One of the greatest struggles facing all restoration-initiatives in conservation is the tyranny of low expectations.
An abundance of human discretion dies twice: when we lose the ability to decide what happens in the world around us, and when we forget that the world was ever expected to be so responsive. On Christmas in 1914, spontaneous ceasefires sprouted along the trenches of World War I. Back then, men had the freedom to choose not to kill. Do you think our new bomb computers have an exception for Christmas programmed in? Is spontaneous peace still possible? Or must peace be made legible to the whole military-algorithmic complex before it can be enacted? How many values in how many databases must be updated for killing to stop for a day? Is that number low or high? Is it getting larger each year? Do you know how you’d go about finding that number?
Do you think anyone knows it?
The scariest section of the book is probably “At war with the machines”, which points out how much control we’ve already ceded, once you learn to look for it. Recommender engines determine what you see. Tracking scripts embed themselves on your machine. Automated phishing scams target your specific vulnerabilities. AIs make spam to try to get promoted by other AIs. When I post about this essay on Twitter, I’m not going to link to it in the first post, because the algorithm punishes external links. The AI is already controlling my behavior and how I communicate with my peers.
This isn’t “superintelligence”, of course. These are just semi-autonomous systems written by engineers. But that doesn’t mean the engineers control them. From the section “AI is out of control”:
Who or what is in control of Mooglebook’s[2] AI?
There’s no big red button anyone at Mooglebook can push to shut it down. Mooglebook can’t stop optimizing for ad clicks. There are people inside and outside the company who realize it has dire negative externalities, and they are trying to make those less bad, but they’ve brought water pistols to a tactical nuclear war. If Mooglebook’s executive team unanimously agree that its activities are harmful, and they want to get out of the advertising business and pivot the whole company to rescuing abused beagles, they cannot do that. They would be fired by the board immediately. If the board agreed, they would be fired by the shareholders. If somehow the advertising business did get shut down, the company would go bankrupt in a few months, and less scrupulous competitors would pick up the slack.
The institution has its own agency: its own purposes, plans, reasons, and logic, which are more powerful than the humans it employs. Those are subordinate in turn to the AI the company depends on for its survival. If enemies of Mooglebook’s AI—activists, regulators, competitors—try to harm it, the institution can’t not do everything in its power to defend it. As, in fact, it is currently doing.
Humans don’t have control over Mooglebook’s AI, not individually, nor as defined groups, nor perhaps even as a species.
Mooglebook AI is not plotting to destroy the world—but it may destroy the world unintentionally, and we may not be able to stop it.
This may seem more like an indictment of capitalism than AI. But humans have had institutions for a lot longer than they’ve had capitalism, and it’s not as though there’s a perfectly aligned institution that could safely use AI if it got the keys. Institutions will always have gaps in their definitions that require squishy improvisation. The relevant variable here is whether a human is able to use their squishy improvisation skills to directly address the situation, or whether they need to use their squishy improvisation skills to try to change values in databases so the algorithmic expression of the institution’s power hopefully, maybe, switches to doing the right thing. This is about whether the interface between the institution and the world is a human or not. This is about control.
There’s this job now called “prompt engineer”: people who know good words to try and whisper to the AI systems so they’ll behave for two seconds. Kids pass around all sorts of oral traditions on how one might become viral on TikTok. A constant stream of A/B testing leads to us swapping tales of our different realities on the same websites, wondering whose world will become canon and whose will be abandoned. Thanks to the tyranny of low expectations, this is all humdrum background noise of the modern world. We don’t expect to understand things down to their roots. Everything is turning into magic words and automatic recommendations.
“Fight DOOM AI with SCIENCE! And Engineering!!” tries to snap us out of this reverie:
“Science” means “figuring out how things work.” “Engineering” means “designing devices based on an understanding of how they work.” Science and engineering are good. Current AI practice is neither.
Most AI research is not science. In fact, the field actively resists figuring out how AI systems work. It aims at creating impressive demos, such as game-playing programs and chatbots, more often than attempting scientific understanding. The demos often do not show what they seem to.
Most applied AI work is not engineering, even when it produces practical applications, because it is not based on scientific understanding. It creates products by semi-random tweaking, rather than applying principled design methods. Consequently, the resulting systems are unreliable and unsafe.
Obviously AI research is a kind of science, and applied AI work is a kind of engineering. These models are things that did not exist before and now do. But there’s a distinct and meaningful sense in which the feats of these models, and our understanding of them, should be considered separately from our normal ideas of intelligence, science, and engineering.
But to go any further here, I need to develop an understanding of identity and individuality that requires some explanation. We’re going to talk about ants and their queens and their colonies as a strong example of the distinctions we’re interested in. We’ll worry about relating them to AI later and let ourselves zoom completely to ant scale so we can see things properly.
We’ll start with a fun trivia fact: what do leafcutter ants eat? It’s not leaves. Oh, they cut the leaves, sure, and haul them across makeshift highways to their nests. But the leaves are used to cultivate a fungus that grows inside the nest. The ant larvae won’t survive without the fungus, and the fungus has long since lost its ability to produce spores, so they need each other to survive. When a leafcutter ant queen strikes out on her own, she must bring a bit of fungus with her to seed the new nest. If a colony loses all its fungus, it must go to war with a neighbor to get some back or risk extinction.
At first, this might make you feel a kind of pity — those poor ants, so totally reliant on this one way of being. But this should instead fill you with a reverence bordering on terror. For tens of millions of years, it has worked. The genetic code of leafcutter ants doesn’t include information about the fungus itself. It has pointers to environmental configurations where you can access the information stored in the genome of the fungus. Queen, grab that fungus from your nest and take it with you. Workers, bring the leaves down to the chamber with that fungus. Larvae, eat that fungus. That fungus is always well-defined. Ants are little correspondence machines that make sure their immediate environments are always a certain way. Ant society is a tool to make sure that when you say “that fungus” and point, there’s always something there.
Next question: is this pointing being done by individual ants or the colony? Intuitively, you want to say the individual ants. They’re the little ones you actually see scurrying around doing the work. However, there’s an important sense in which the individual worker ants aren’t responsible for their own fate. They’re sterile and unable to reproduce. To propagate a better pointing strategy forward in time, they must ensure the success of their queen so she can give birth to more relatives who share their strategy. In the “selfish gene” model, with the replicating entity as the unit of focus, the actual physical ant bodies are relegated to something like our individual fingers, limbs instead of individuals.
As it happens, the answer is somewhere in between. The individual ant bodies are sort of individuals, but also, the colony itself is sort of an individual. We want to have a handle on how much “individuality” we should attribute to each, so we’ll turn to the information theory of individuality (ITI):
Work on social insects and on a number of plant, fungal and prokaryotic species demonstrates the possibility of individuality simultaneously at multiple organizational levels—physically distinct ants form aggregations called colonies and these colonies may be divided into spatially noncontiguous subsets (Gow et al. 2008; Esser et al. 2001). Furthermore, in many ant species the majority of worker ants do not replicate and the colony as a whole does not replicate, but contiguity between past and future is nonetheless a feature of the system. And, importantly, it is the combination of reproduction by a minority of colony members coupled to the industry of the majority that allows the colony as a whole to adapt in response to changes in the environment. Taken together, these two observations suggest it is possible to have individuality without replication and some forms of individuality benefit when replication is partial.
ITI treats individuality as a matter of degree: an individual is something with temporal stability, something that propagates information forward in time. (Information about fungus, for example.) We’ll use intuition instead of formalism here. Imagine a storyteller — a good storyteller, one who never uses more words than they need to.[3] They’re going to tell the complete story of one particular little ant. What would they need to talk about?
Perhaps the little ant is missing part of one antenna after an accident. Perhaps the little ant is currently off on her own, away from the rest of the colony. Perhaps the little ant has a nice big leaf in her jaws at this very moment. The storyteller would need to spend at least some words on the little ant herself. The little ant is at least a little bit an individual.
But then they’d need to start talking about why: why is she looking for leaves? Why are there highways for her to follow? Who made sure there was a fungus chamber for her to deposit the leaves in? This would take a lot more words than were needed for the individual little body. The little ant is part of a colonial individual, and it’s more of an individual than the little ant herself.
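(The storyteller is doing Shannon information in disguise; the footnote on the storyteller calls it a stylized description. As a gloss of my own, rather than the ITI paper’s exact machinery: an ideal teller spends about $-\log_2 p(x)$ bits on an outcome with probability $p(x)$, and the ant’s total story splits cleanly in two:

$$
H(\text{ant}) \;=\; \underbrace{I(\text{ant};\,\text{colony})}_{\text{the part of her story the colony explains}} \;+\; \underbrace{H(\text{ant} \mid \text{colony})}_{\text{the part that is hers alone}}
$$

She is an organismal individual to the degree the second term is nonzero, and part of a colonial individual to the degree the first term does most of the telling.)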
How does the colony point at the fungus? Through the little ants. The colony as a whole is an individual carrying a survival strategy forward in time. That strategy works by saying “that fungus” and having it mean something. The little ants get to be a little bit individual, but only as long as they make sure “that fungus” stays meaningful. They have no freedom to become more individual, because the colonial individual has the information needed to send the recipe for little ants forward. A little ant without the drive to point is a little ant that never was. How could she have been born?
The fungus is a meaningful pattern in the environment needed for survival. The colonial individual holds the key to exploiting that pattern so it can send its own pattern forward in time. The organismal individual is the link between the two. Hold this lesson tight as we zoom back out to human-scale.
Humans are not ants. The storyteller has all sorts of individual, organismal context to get through when they’re telling your story. Your injuries, your preferences, every little fact your memory takes forward in time. You are much more individual than the little ant.
But you’re part of a colonial individual, too. More than one! Some of your “whys” will be explained by yourself (and good for you), but others will be your job, or your community, or your country, or your ideology. Those stories will take a lot of telling as well, more than an ant colony’s would.
Don’t imagine a continuum from “organismal individuality” to “colonial individuality”, with humans closer to the left and ants closer to the right. Imagine individuality as something you can have more or less of in absolute terms. As a human, you have a lot of individual context to call your own, plus, also, additionally, you are part of several colonial individuals which each themselves have a lot more context than an ant colony. There’s just more individuality to go around, total! And maximizing the amount of context one person is free to accrue is pretty close to being a definition of human flourishing. We want to live long, with much freedom to choose what we learn and remember and build with others.
Ant society is a tool to point at fungus. Human society does that too, sometimes. Penicillin comes from a fungus. Odds are, you personally don’t know how to make it. You participate in a colonial individual that sends the knowledge of how to tend that fungus forward in time. This allows you to live longer than you would without it. In this way, you’re like a little ant in a colony.
But your colonial individuals are much more interactive and accessible to you than the ant-genes are to the ant-bodies. There are some individual people who do know how to make penicillin. You could learn, if you wanted to. Sterile ant workers are born of the colonial individual, sustain and are sustained by it, but they cannot change it. Humans join their cultural colonial individuals with unimaginably more intimacy. We too sustain and are sustained by them, but we can also give birth to new ones, choose which ones to devote ourselves to, take a chunk of them and hold them ourselves.
This is where human flourishing comes from. This is what’s at stake.
The AI risks literature generally takes for granted that superintelligence will produce superpowers, but which powers and how this would work is rarely examined, and never in detail. One explanation given is that we are more intelligent than chimpanzees, and that is why we are more powerful, in ways chimpanzees cannot begin to imagine. Then, the reasoning goes, something more intelligent than us would be unimaginably more powerful again. But for hundreds of thousands of years humans were not more powerful than chimpanzees. Significantly empowering technologies only began to accumulate a few thousand years ago, apparently due to cultural evolution rather than increases in innate intelligence. The dramatic increases in human power beginning with the industrial revolution were almost certainly not due to increases in innate intelligence. What role intelligence plays in science and technology development is mainly unknown; I’ll return to this point later.
Our colonial individuals are our power.
The AI safety literature also reasons that power consists of the ability to take effective action, and effective action derives from plans, and intelligence centrally features the ability to make plans, so greater intelligence means superintelligent AI’s actions would be more effective, potentially without limit. This greatly overestimates the role of planning in effective action. Power rarely derives from exceptional planning ability. The world is too complicated, too little known, and too rapidly changing for detailed plans to succeed. Effective action derives from skillful improvisation in specific situations. That is limited by unavoidably incomplete knowledge, regardless of intelligence.
Our power does not derive from our exceptional planning ability. It derives from our ability to embed ourselves in layers of individualism that touch real things. Penicillin was discovered serendipitously and is sustained by an organic web of obligations. That web makes us more powerful than chimpanzees. Losing it would be a disaster.
It’s this disaster that Chapman is afraid of. Years before Better Without AI, he wrote A bridge to meta-rationality vs. civilizational collapse and The collapse of rational certainty. The moral of those stories: technical rationality can never perfectly describe the world, but it is often sufficient to perform great miracles. If criticisms of rationality are totally ignored, we end up with a fragile understanding of what knowledge actually is, engaging in cargo cult science that neglects the actual tacit knowledge gluing things together. (Knowledge isn’t all in a single civilizational database; it’s distributed among many levels of individuals. It cannot be written down. It cannot be optimized by recursively iterating over what we have written down.) But we must take care to not fall into irrationality — that would be far worse. If we let the colonial individuals rot by replacing rationality with only personal, organismal experience, the amount of personal context we’re able to maintain will be inarguably diminished (through famine, disease, exposure, and every other ill we’ve found ways to mitigate); and, for many, totally destroyed. (If you are no longer pointing to the fungus, your line dies out.)
The individual ants within a colony communicate by vibrations and pheromone trails. Humans communicate all sorts of ways, but especially by words. We’ve generated an awful lot of words by now, and AI has proven able to imitate inclusion in the colonial individual using them. The traditional fear, on seeing the recent feats of AI, is that artificial systems will start to act as organismal individuals themselves, directly placing themselves into the physical world as an I.
But imagine having the same fear about ant colonies. What if the ant colony becomes self-aware and starts influencing the world directly? What do you even mean? An ant colony is a strategy to propagate information about an environment forward in time. Influencing the world through the medium of the workers is how it survives. What would it mean to say that it “wakes up”? That it consciously experiences the simultaneous sensory outputs of millions of workers and controls them explicitly? Ants get along with local, stochastic rules about how to behave. Imagine the overhead to actually run each individual ant. Would that even be possible? I don’t know. But we don’t think of social insects as striving desperately to form a single conscious mega-ant. The strategy works because it works when you run it in ants, not because it’s a crystalline artifact of pure logic that is primitively approximated in the movements of ants.
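If “local, stochastic rules” sounds hand-wavy, here is a throwaway sketch. Everything in it is invented for illustration (the one-dimensional world, the scent gradient, the 0.7 bias); the point is only that ants following nothing but local rules, with no map, no planner, and nothing resembling a colony-level mind, still reliably end up clustered on the fungus.

```python
# Throwaway sketch: purely local, stochastic rules produce a reliable
# colony-level outcome with no central controller and nothing "waking up".
import random

WORLD = 100           # cells 0..99
FUNGUS = 73           # where the fungus sits; no ant stores this coordinate,
                      # each one only smells its two neighbouring cells
scent = [1.0 / (1 + abs(c - FUNGUS)) for c in range(WORLD)]  # local gradient

def step(pos: int) -> int:
    """One ant, one move: sniff both neighbouring cells, usually walk toward
    the stronger scent, sometimes wander at random."""
    left, right = max(pos - 1, 0), min(pos + 1, WORLD - 1)
    if random.random() < 0.7:
        return left if scent[left] > scent[right] else right
    return random.choice([left, right])

ants = [random.randrange(WORLD) for _ in range(500)]
for _ in range(2000):
    ants = [step(p) for p in ants]

near = sum(abs(p - FUNGUS) <= 2 for p in ants)
print(f"{near}/{len(ants)} ants within two cells of the fungus")
# Run it and nearly all 500 end up on or next to the fungus, every time.
```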
The thing to fear is the hijacking of our colonial individuals. These are the things that have grown so quickly in a small handful of centuries, the things that have made us much more powerful than chimpanzees. The analogy with ants is a lot less comforting on this one. What if the ant colony forces the workers to submit to it, making survival without it impossible? That’s literally how it works, yeah. That’s how it’s been done for tens of millions of years. That’s how the strategy lives. That’s what it means to be an ant.
They cannot be saved.
We started on this tangent a while ago trying to explain how AI science wasn’t science and AI engineering wasn’t engineering. At last we’re ready: science and engineering are interactive interfaces with our colonial individuals. Oh, not perfectly interactive: stuff happens you don’t have access to, knowledge is generated in a way that can’t be made legible to your organismal individual self. But you’ve got a stake in it. You helped make it. It’s not like the ant workers who must correspond with one particular fungus because of a predestined genetic plan they’re helpless to alter.
That’s the direction we’re heading nowadays, though. We’ve done such a good job building up our colonial individuals that we can feed records of the directions they’ve generated into an algorithm and have that algorithm spit plausible-sounding directions back out. I’m not worried about those directions “waking up” and suddenly corresponding to reality all on their own. We’ll be the ones to correspond, same as we ever were. I’m worried that it’ll kill our collective selves. Our personal quests of observing and tinkering will be reduced to trying to find the magic keywords that make the black box give out the right answer, our joy in discovering novelties in the world will be replaced by an endlessly remixed slurry of stuff we liked before, our conception of knowledge will be limited to what was encoded in our previous data-scraping attempts.
I’ve seen some comments from people who suddenly feel dumber when ChatGPT goes down. How close are they to sterile worker ants, forced to groom the queen’s larvae as the closest proxy to sending their own pattern forward, absolutely forbidden from increasing their personal individuality?
(This is an exaggeration. But it’s less of an exaggeration every year.)
We’re embedded in beautiful, unimaginably complex colonial individuals that have dramatically improved our well-being. They sustain us, and us them. It’s an impressive trick to take the groaning weight of communication that’s flowed through them and make a disembodied computer speak with the voice of the colony. But being made of organismal individuals is the whole thing that makes colonial individuals work. Taking instructions that aren’t from individuals means ceding the power we hold as individuals, castrating ourselves in service to instructions we cannot hope to change.
If it were that or non-existence, it’d be a good deal. Ants took the deal and they got to live. But AI only came to life because of how well we were doing without it; because we created these social entities that could send instructions to organismal individuals and trust them to correspond. We can point at the penicillin fungus, and hell, we’ll point at air conditioners, and fertilizer, and nice warm blankets, and great cables that keep us connected to share instructions even faster. AI can’t point. It’s just regurgitating our directions and asking us to make them point. We progressed without it, and we can keep progressing without it.
We thrive because we have powerful intuitions about how to effectively embed ourselves in colonial individuals. These intuitions work when the colonial individuals are made up of all of us. Creating a single interface to an artificial colonial self forces us to mistrust text and images in ways we never have before, makes us generate new prayers to reason with something that can’t be made accountable, flattens out a web of obligations into an incomplete record of what has happened as a result of those obligations.
Every move towards this sort of future makes our collective selves more brittle and dependent on edicts from on high. Every move away preserves the culture that builds the patterns AI exploited to be born in the first place. We’re better without AI.
[1] This argument is fleshed out further in Chasing the Treasure Fox.
[2] From the source: “Mooglebook is a humorous generic term for internet advertising technology companies: a portmanteau of Microsoft, Google, and Facebook. It was coined by Gwern Branwen, in ‘It Looks Like You’re Trying To Take Over The World.’”
[3] This is a stylized attempt at describing Shannon information.