Quick blurb before the article: the Inexact Sciences group, which I’m proud to be a member of, has been keeping up a group blog at tis.so for a couple of months now. Our goal is a sort of repository of “atomic ideas”: snippets of conversation or examples that we want to be able to refer to again. The posts are much shorter than these essays, but they come out once a day. I’m most proud personally of my post “On formless empiricism”, which carves out a distinction that explains the exact territory I’m working in. Desystemize isn’t going anywhere, but if you want maximal exposure to this way of thinking, or just proof of life that I’m still thinking in the time between Desystemize posts, make sure you keep an eye on tis.so.
I.
Today, we’ll be following the foxes from The Elder Scrolls V: Skyrim to see what we can learn about the limits of empiricism. Like in Desystemize #9, studying emergence in games is interesting precisely because games have such well-defined base objects. Studying reality using the techniques of games is extremely dangerous because those techniques ignore emergence; for exactly that reason, though, emergent phenomena that do show up in games are some of the easier examples of emergence to understand.
I learned about the treasure foxes from this Twitter thread; I’ll recap it for people who haven’t read it. Foxes in Skyrim run away when a player goes near them. The Skyrim community quickly noticed that the spooked foxes would generally run towards treasure. A cute little feature — but one that no Skyrim developer put in explicitly. Instead, the behavior was due to the invisible geometry of the game world. Skyrim is partitioned into a covering of triangles known as the navmesh:
The various actors within Skyrim - monsters, guards, foxes, whatever - “see” the world of Skyrim through the navmesh. Players don’t see it at all. From the player’s perspective, the foxes are running away from them in the “world” of Skyrim. From the foxes’ perspective, they’re actually running away along the navmesh, trying to put a lot of triangles between themselves and the player. What’s interesting about this distinction is that not all triangles are created equal. In the wilderness, there are a lot of broad natural areas that don’t need much detail from the AI’s pathfinding point of view, letting the triangles be simple and large. But around areas of interest, you have more objects, actors, and tricky features, and all of those need handling in the navmesh. This results in a denser concentration of smaller triangles. The fleeing fox, trying to get triangles between itself and the player as fast as possible, will naturally go for these clusters.
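To make the mechanism concrete, here is a minimal sketch of that fleeing rule. To be clear, this is my own illustration and not Bethesda’s actual pathfinding code: the toy navmesh is just a triangle-adjacency graph, and `triangle_hops` and `pick_flee_target` are hypothetical names for the behavior described above.

```python
# A toy sketch of the fleeing rule described above -- not Bethesda's code.
# The fox measures "distance" in triangles crossed, not in meters, so a patch
# of many small triangles looks farther away (and safer) than it really is.

from collections import deque

def triangle_hops(navmesh, start, goal):
    """Breadth-first search over triangle adjacency: how many triangles
    separate two places, as the fox's pathfinding sees it."""
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        tri, hops = frontier.popleft()
        if tri == goal:
            return hops
        for neighbor in navmesh[tri]:  # triangles sharing an edge with this one
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, hops + 1))
    return float("inf")

def pick_flee_target(navmesh, player_tri, candidates):
    """The fox flees toward whichever candidate triangle puts the most
    triangles between itself and the player."""
    return max(candidates, key=lambda tri: triangle_hops(navmesh, player_tri, tri))

# Toy world: from the starting triangle, an open meadow is covered by two big
# triangles, while a cluttered camp the same distance away needs four small ones.
navmesh = {
    "start":  {"open_1", "camp_1"},
    "open_1": {"start", "open_2"},
    "open_2": {"open_1"},
    "camp_1": {"start", "camp_2"},
    "camp_2": {"camp_1", "camp_3"},
    "camp_3": {"camp_2", "camp_4"},
    "camp_4": {"camp_3"},
}

print(pick_flee_target(navmesh, player_tri="start", candidates=["open_2", "camp_4"]))
# -> "camp_4": the dense cluster "looks" farther away, so the fox runs into it
```

Run it and the fox picks the cluttered camp every time: four tiny triangles “feel” farther away than two big ones, even though the meadow is just as far in meters.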
So it’s not exactly treasure that the fox is going for. It’s more accurate to say the fox is going for interesting locations. Areas where a simple description can cover a lot of terrain make for large triangles, whereas finely detailed areas require many more triangles to account for the detail. The fox is sniffing out these local accretions of complexity, which are more likely to have story elements, danger, and treasure to reward you for engaging with them. Since treasure is the reward the player is after, it makes sense that they would mistake order-seeking behavior for treasure-seeking behavior.
But notice the language I used: treasure is placed to reward you for engaging with that complexity. What’s the relationship between local complexity and treasure? It’s not intrinsic to the system via an explicit interestingness value. It just so happens that designers generally try to reward players for engaging with local complexity. The link between local complexity and treasure isn’t maintained by Skyrim itself; it’s imposed from without by the whims of designers, subject to change at any time.
Imagine there’s a naturalist living within Skyrim who’s trying to understand the behavior of foxes. Their model for agent behavior is their own behavior. They’re an autonomous agent who often goes towards treasure when they see it, because they love treasure. However, sometimes they need a break to rest or repair their weapons. Let’s say that, on average, they head towards treasure 90% of the time. Various factors relating to themself (fighting ability, gear quality) or the treasure (how obviously it’s signposted, how valuable it appears) could change that.
They’re studying the foxes, who are also autonomous agents who also often go towards treasure when they see it. Designers are pretty reliable at maintaining the link between order and treasure. Let’s say they’re 90% reliable. Then the chance that the naturalist and the fox will head towards treasure is the same, and if probability theory were the only meaningful measure of uncertainty, this parallel would be enough to perfectly explain the behavior of foxes. But of course it isn’t; the naturalist is going towards treasure because they want it, while the foxes don’t even know what treasure is, and are drawn to it only via the glue of interestingness. And unlike the naturalist, whose chance to interact with treasure is mediated by factors relating to their local context (the naturalist and the treasure), whether a given fox goes towards treasure or not can’t be explained by learning more details about the fox.
II.
There’s a philosophical problem called “the problem of induction”. In brief, it goes like this: all we have to predict the future is what’s happened in the past. But what justification do we have for saying the future will keep acting like the past? If one person has a hypothesis that “all apples are red or green” and another has a hypothesis that “all apples are red or green, but when the earth’s magnetic poles shift, they’ll become orange or purple”, you have an equal amount of evidence for both hypotheses. How do you square this with our obvious intuition that the first hypothesis is right and the second hypothesis needs a “reason” to be credible? Some people try to appeal to Occam’s razor and look for the “less complex” hypothesis, but it’s not like we have a solid definition of complexity, either.
We can see that the problem of induction is much more relevant to the fox than the naturalist. The link between the naturalist and treasure is stable in a way the link between the fox and treasure is not. Philosophers fretting about induction are worried that every link between two phenomena could be a just-so-happens link that fails to apply in the future. As formless empiricists, though, we’ll look at it the other way around. We know that one of these links (naturalist → treasure) is well-founded and stable, while the other (fox → treasure) is fragile in exactly the way the problem of induction warns us about. If we were that Skyrim naturalist, with access only to in-world tools, how would we be able to tell the difference?
Notice that probability theory isn’t the tool to help us here. That 90% chance to path towards treasure is the same for the naturalist and the fox, but we want to account for the difference between them. Someone who speaks solely in terms of probabilities and priors is forced to equate the two cases, which leaves you vulnerable in one of two ways. Either you decide to tacitly ignore the problem of induction, letting you work unimpeded but leaving you completely exposed to blowouts when the fox → treasure link gets severed; or you take it to mean “nothing is knowable”, leaving you with a certain theoretical purity but practically unable to explain phenomena as obvious as “the person who likes treasure will often walk towards it.”
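Here is a toy simulation of that forced equivalence. The 90% figures come from the thought experiment above; everything else, including the `designers_reward_complexity` flag, is a made-up stand-in for the out-of-game design convention rather than anything Skyrim exposes.

```python
import random

def naturalist_goes_to_treasure(rng):
    # The naturalist's behavior varies with their *own* context:
    # tired, or gear needs repairing -> take a break instead.
    tired_or_repairing = rng.random() < 0.10
    return not tired_or_repairing

def fox_ends_up_at_treasure(rng, designers_reward_complexity=True):
    # The fox only ever seeks dense navmesh. Whether that leads to treasure
    # depends on a convention held entirely outside the game world.
    designers_followed_convention = rng.random() < 0.90
    return designers_reward_complexity and designers_followed_convention

rng = random.Random(0)
trials = 100_000
print(sum(naturalist_goes_to_treasure(rng) for _ in range(trials)) / trials)  # ~0.90
print(sum(fox_ends_up_at_treasure(rng) for _ in range(trials)) / trials)      # ~0.90

# Identical frequencies, so a purely probabilistic observer must equate them --
# but flip the out-of-world flag and one collapses to zero without anything
# about the fox itself changing.
print(sum(fox_ends_up_at_treasure(rng, designers_reward_complexity=False)
          for _ in range(trials)) / trials)                                    # 0.0
```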
We need to find some sort of value that’s different between the naturalist → treasure link and the fox → treasure link, some way of saying “Don’t worry about induction here, but do worry about it there.” And I’ve already spoiled what it is: explanations for variation. They’re both the same probabilistically, but the naturalist can tell a story each day about why they did what they did. The link between the naturalist and the treasure is more stable precisely because that link is listening to the context of the naturalist and the treasure. By contrast, even if there’s just as much “evidence” that foxes like treasure as much as naturalists, the explanation for why a particular fox does or does not path towards treasure is not explainable by the context of the fox.
We’ll use the phrase “account of variation” to describe this. Think of an account of variation as “everything you’d need to pay attention to if you wanted to explain all the possible outcomes of the phenomenon you’re studying”. It’s a combination of the channels of information you’re looking at (your “accounting system”) and your understanding of what’s possible given that information. The control panel of a power station and the training of the operator looking at it combine to form an account of variation of that power station.
No account of variation can ever be perfect. The power station will fail to generate power if it’s hit by a giant meteor, but the control panel doesn’t have the position of every large object in outer space. But again, we’re going to ignore the theoretical universality of the problem of induction and instead notice the large practical differences between good accounts of variation and bad ones. The power plant operator clearly has a better account of variation than the homeowner connected to the grid, whose mental model is something like “The power will always be on unless there’s a big storm or there’s a news story about how power might be cut soon.” And the homeowner, in turn, has a better account than the moth who’s flying toward the homeowner’s lights, aware only that the light is on in this particular moment.
Nassim Taleb likes to use the example of the “turkey problem” to explain induction. Imagine a turkey statistician analyzing how well the farmer is treating him and his fellow turkeys. Each day, he records that the farmer has treated the turkey population well. The farmer knows that all this good treatment is in service of slaughtering them before Thanksgiving, but the turkey statistician has no account of variation to understand this, and can only look at things empirically. He identifies as a Bayesian, proud of “updating his priors” each day according to the evidence. And every single day, the evidence is more and more positive — until the day all the turkeys are slaughtered. Being an empirical turkey was worse than randomly guessing: he was maximally confident in his hypothesis the moment before his death.
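For a sketch of what the turkey statistician’s ledger looks like, here is a minimal Beta-Bernoulli update. The uniform prior and the thousand-day timeline are my own illustrative assumptions; what matters is the shape of the curve, not the particular numbers.

```python
def fed_tomorrow_estimate(days_fed, days_not_fed, prior_a=1, prior_b=1):
    """Posterior mean of a Beta-Bernoulli model:
    (prior_a + days fed) / (prior_a + prior_b + total days observed)."""
    return (prior_a + days_fed) / (prior_a + prior_b + days_fed + days_not_fed)

for day in (1, 10, 100, 999):
    print(day, round(fed_tomorrow_estimate(day, 0), 3))
# 1   0.667
# 10  0.917
# 100 0.99
# 999 0.999
#
# Day 1,000: the axe. The model was never wrong within its own accounting
# system -- "the farmer's plans" simply wasn't a column in the ledger.
```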
Once again, note that the problem of induction is more or less irrelevant to the farmer when it comes to the turkeys. He knows why he’s feeding the turkeys and when he’s going to stop. To the turkeys, though, it’s literally a matter of certain death. So some kinds of evidence are more induction-proof than others. And the answer isn’t just better statistical tools - all the turkey statistician could do with his mathematical prowess is mistake absence of evidence for evidence of absence and get a little wronger each day. 1 What he needed was a better account of variation, an awareness that “we’ll stop being fed if our life status changes, a change that can happen to all of us at once” was a possibility. But how do we generate accounts of variation?
III.
After hearing about the turkey problem, our Skyrim naturalist isn’t satisfied by probabilistic evidence. They want to know why the fox moves in the way it does, not just how often it moves towards treasure. The movements of an individual fox are a function of the navmesh, which the naturalist can’t see. What exactly can they figure out about fox movement given this blindness?
Let’s say they study the movements of a great many foxes, mapping out their paths over time. The regularities in the ways they turn would be observable, given enough data. It would take a lot of looking, but you could derive the existence of the navmesh as mediated through foxes.2 You could give yourself a brand-new sense for something you can’t perceive directly. And if you happened to see a fox wandering about the countryside, you’d be able to guess exactly where it would run depending on your angle of approach.
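Here is one way that survey might work, sketched under my own assumptions about fox movement (straight segments inside a triangle, turns at its edges). The `turn_points` function and its 20-degree threshold are illustrative choices, not a claim about how the game or its data actually looks.

```python
# One way the naturalist's survey might work -- a sketch, not the only way.
# If fleeing foxes move in straight segments within a triangle and turn at its
# edges, then the points where a path bends sharply trace out the hidden mesh.

import math

def turn_points(path, angle_threshold_deg=20.0):
    """Return the positions along a recorded fox path (a list of (x, y) points)
    where the heading changes sharply -- candidate navmesh-edge crossings."""
    crossings = []
    for (a, b, c) in zip(path, path[1:], path[2:]):
        heading_in = math.atan2(b[1] - a[1], b[0] - a[0])
        heading_out = math.atan2(c[1] - b[1], c[0] - b[0])
        turn = abs(math.degrees(heading_out - heading_in))
        turn = min(turn, 360 - turn)  # wrap angles
        if turn > angle_threshold_deg:
            crossings.append(b)
    return crossings

# Thousands of recorded paths -> a cloud of edge crossings. Wilderness paths
# produce few bends (big triangles); paths near ruins and camps bend constantly
# (small triangles). Plotting the crossings is seeing the navmesh secondhand,
# without ever explaining why the triangles sit where they do.
```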
This indirect understanding of the navmesh wouldn’t be a theory of interestingness, a “why” behind the navmesh. If there was a new location added in DLC, the navmesh you found so far wouldn’t let you instantly know where foxes go in this new territory. You would need an understanding of interestingness to predict that. Can the navmesh you’ve found so far help develop that theory? Well, in a sense. Once you know what the triangles are, you can start to notice that they’re denser around some objects and actors than they are out in the wider world. Theoretically, the same spirit of discovery that led you to finding the navmesh could lead you to finding its correlation with local complexity.
Why do I say “theoretically”? Wasn’t our understanding of the foxes perfect? Yes — but perfectly predicting the movements of foxes doesn’t mean you perfectly understand the provenance of the navmesh. You’ve used the effect of the navmesh on the movements of foxes to create a new sense, but foxes won’t help you understand how to use the new sense — in this case, correlating it with the points of interest on your map. And if you don’t do that correlation, your findings will only be as good as the territory they’re trained on. You can’t see a new map and guess where the foxes will go unless you’ve made that connection.
And even if you do make that connection, you’re still missing a lot of what you’d want from a theory of interestingness. Suppose that one day there was a Bethesda all-hands meeting. The CEO mournfully declares “The kids don’t appreciate nature anymore! We need to do something about it! Move all of the treasure into beautiful, untouched nature, with nothing else around it. That’ll teach them the joys of exploring instead of fighting.” The developers feverishly put in a patch, and suddenly interestingness is no more — the correlation between treasure and local complexity is completely reversed. You still have a perfect understanding of the foxes, who keep right on respecting the navmesh3, but they’re now running away from treasure, not towards it.
This should put you in mind of the turkey problem. Evidence-based techniques within the framework couldn’t save you from abrupt change coming from without. The Skyrim naturalist was able to do meaningful empirical work deriving the existence of the navmesh through analyzing foxes. But they didn’t have an account of variation of the navmesh — in this case, a theory of Skyrim developers — just like the turkeys didn’t have a theory of farmers. So we can think of the problem of induction as an attack on empiricism that can be “shielded” by accounts of variation. The farmer has a theory for when they’ll stop feeding these turkeys (because it’d be weird to feed a turkey you’ve killed), and so their analysis of what will happen over time with the turkey feed is more induction-proof than the turkeys’, even if they’re both looking at the same evidence of “so far the farmer fed the turkeys every day.”
And look, this is common sense, right? Why have I taken over two thousand words to get to “people who understand why things are happening know more than people who don’t?” Because our theories of what evidence is are based far too much on outcomes of individual events, and not on accounts of variation. In truth, outcome-based evidence is only useful to the extent that you’ve picked induction-proof outcomes to tally up and have a sense of when to abandon outcome-based analysis entirely. (Farmers don’t say “I’ve fed the turkeys every day so far, so the conditional probability that I’ll stop feeding them the day after slaughter is really quite low.”) And exactly because we have this intuitive common sense buttressing our formal analyses, we’re free to write bad fanfiction about the probabilistic reasoning we imagine must be “really happening” in our brains. But probability theory doesn’t generate accounts of variation; it’s dependent on them.
IV.
This essay is not an extended parable about the limits of machine learning, or at least it wasn’t meant to be when I started writing it. But there’s been a lot of talk recently about the existential risk to humanity from machine learning models, so it’s worth being explicit about the connection the field has to what we’ve talked about so far.
In The Scaling Hypothesis, Gwern writes:
The blessings of scale in turn support a radical theory: an old AI paradigm held by a few pioneers in connectionism (early artificial neural network research) and by more recent deep learning researchers, the scaling hypothesis. The scaling hypothesis regards the blessings of scale as the secret of AGI: intelligence is ‘just’ simple neural units & learning algorithms applied to diverse experiences at a (currently) unreachable scale. As increasing computational resources permit running such algorithms at the necessary scale, the neural networks will get ever more intelligent.
Is the scaling hypothesis true within Skyrim? It depends which level we’re looking at. One way to state the scaling hypothesis is “If you look at the movements of enough foxes, you’ll be able to perfectly predict the movements of any fox” — which is really just saying “If you look at the movements of enough foxes, you’ll find the navmesh.” I’ve already taken this as a given in the earlier sections, and to the extent the scaling hypothesis is saying this, I believe it. We’ve been supposing the naturalists themselves would be doing the fox-recording work, but if they had a computer doing it, it would look pretty much like the machine learning models we have now. Fox lovers within Skyrim might say something like “Oh, nature is too complex and beautiful! Your foolish machines will never *perfectly* understand fox pathing! Maybe it’s doing well on a high proportion, but there will certainly be diminishing returns on trying to get everything!” and then the foolish machines would keep looking at foxes and eventually they would perfectly understand fox pathing without diminishing returns on trying to get everything. Notice also that in this scenario, the people maintaining the model won’t know that it understands “the navmesh”. They just know that they have a magic box that takes “there was a fox here, I scared it from this angle” and returns where the fox will go.
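In miniature, that magic box might look something like the sketch below. A single nearest-neighbor lookup stands in for whatever model the naturalists would really train, and the feature choice and distance weighting are arbitrary assumptions on my part.

```python
import math

def predict_flee_destination(observations, fox_xy, approach_angle_deg):
    """observations: list of ((fox_x, fox_y, approach_angle_deg), (dest_x, dest_y)).
    Returns the recorded destination of the most similar past scare."""
    def dissimilarity(features):
        x, y, angle = features
        angle_gap = abs(angle - approach_angle_deg) % 360
        angle_gap = min(angle_gap, 360 - angle_gap)
        # Arbitrary weighting: a quarter-turn of approach angle counts as
        # roughly one unit of map distance.
        return math.hypot(x - fox_xy[0], y - fox_xy[1]) + angle_gap / 90.0
    _, destination = min(observations, key=lambda obs: dissimilarity(obs[0]))
    return destination

# With enough recorded scares, interpolating over this table effectively
# reconstructs the navmesh -- but its keepers will only ever describe it as
# "the box that says where the fox goes".
```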
Will the scaling hypothesis hold true for the theory of “interestingness”? That is, if you feed the model some location where you have no data on the movement of foxes, will it figure out that the foxes are more likely to move generally towards the local complexity? Well, what does “feed the model some location” mean here? This only works if it’s been trained with the fox data in conjunction with data on where the points of interest are on the map. But unlike the naturalists, who have the navmesh as a sense that they can try to combine with their other senses, the machine cannot “go back out” into the world with this new understanding.
So the model will predict that foxes in a new world will go towards the treasure exactly when the data it’s fed about that new world includes points of interest as a parameter. Will “points of interest” be parceled out in the descriptions we feed the machine? Maybe — if the people doing that parceling think to represent it that way. In other words, the people doing the describing have to figure out the right description to make “interestingness” inferable; the machine will solve the problem only if the people feeding it data figured out the right accounting strategy for that data beforehand.
Thinking quickly, the machine learning model constructs a homemade account of variation, using only petabytes of training data, exascale computing, and an account of variation.
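To put the accounting problem in code terms, here are two hypothetical feature encodings of the same location. Every field name is my own invention; the point is that only the describer’s choice in the second version makes “interestingness” learnable at all.

```python
# One (made-up) location, as a describer might record it.
location = {"x": 1412.0, "y": -903.5, "terrain": "tundra",
            "nearby_objects": 37, "scripted_actors": 4}

def features_v1(loc):
    # Coordinates and terrain only: no volume of training data over these
    # columns lets the model rediscover "local complexity".
    return [loc["x"], loc["y"], 1.0 if loc["terrain"] == "tundra" else 0.0]

def features_v2(loc):
    # Someone decided density-of-stuff was worth a column. Now, and only now,
    # the correlation with treasure is sitting there waiting to be found.
    return [loc["x"], loc["y"], float(loc["nearby_objects"] + loc["scripted_actors"])]
```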
And finally, there’s the question of induction-proofing - will the model construct a theory of Skyrim developers? Well, of course not. How on earth could it do that? That’s not how machine learning models work, scaling hypothesis or not. There’s simply no way that studying the movements of foxes can trigger an output that says “but these trends might all instantly reverse if there’s a Bethesda all-hands about instilling an appreciation for nature in the player”. These techniques are based on evidence of outcomes, and can’t help but be susceptible to the turkey problem.
What if we injected meeting minutes from Bethesda level design meetings into Skyrim? Is the problem just that the data doesn’t get there? Could the machine predict the reversal based on the notes from the all-hands? But now we’re right back to the accounting problem again. How do you know that this note was about that bit of level geometry? To make the memos legible to your model - which is, remember, trained to predict the movements of foxes - you need to relate strings of arbitrary text not to the movements of a fox, but to features in the terrain. The problem is that the memos will only be helpful for predicting the movements of foxes to the extent that they describe patterns that will occur in the navmesh. But it’s cheaper and easier to just infer the navmesh from the actual fox movement! You can give a machine learning model the information it needs to foresee a great sea change, but it will only do so to the extent that the sea change has happened in the past; it can’t help but be a turkey.
It’s worth noting here that the relationship between the navmesh and the observable world is unusually straightforward, in a way that might make this problem seem less severe than it actually is. The treasure fox is hard to understand because the navmesh is invisible to the player, making it impossible to just see. But every point in the observable world happens to belong to exactly one triangle of navmesh, making it somewhat clear that “location” data is the kind of data you need. Imagine instead you’re trying to explain the behavior of fish in a river. A new factory is built upstream, and its owners bribe the water commission to say the chemical assay is unchanged even as they dump large quantities of illegal pollutants into the water. These pollutants will cause the fish behavior to become significantly more erratic. What account of variation do you need for fish behavior to generalize into this new scenario? Well, you need to include the building permits of everything that runs off into the river, the projected integrity of the current members of the water commission, and the properties of chemicals that aren’t what the factory says they’re using but would be cheaper for creating the things the factory creates. So what does it mean to feed the model “the state of the river” to try to generalize the findings you’ve had so far? Does your dataset of rivers often include “Dale from the water commission has racked up a substantial amount of credit card debt”?
This is the sort of thing we should keep in mind looking at models like GPT-3. You can give it a prompt like this:
Machine learning models trained on human language without being able to interact with it will sometimes substantially improve prompts that are given to it by humans, but sometimes return complete non-sequiturs. A theory of when it will do well vs poorly is that
And GPT-3 returns many plausible sounding answers. Here’s one I want to look at in particular:
Machine learning models trained on human language without being able to interact with it will sometimes substantially improve prompts that are given to it by humans, but sometimes return complete non-sequiturs. A theory of when it will do well vs poorly is that models that are good at the hallmarks of human language will also be much better at understanding and responding to human prompts.
The theory goes that human language has certain hallmarks that make it distinct from other forms of communication. Models that are good at understanding these hallmarks will also be much better at understanding and responding to human prompts.
Some of the hallmarks of human language include:
1. A rich and varied vocabulary
2. The ability to use language for a variety of purposes, including persuasion, description, and explanation
3. The ability to combine words into complex structures
4. The ability to create new words and expressions
5. The ability to use language in a flexible and creative way
Models that are good at understanding these hallmarks will be much better at understanding and responding to human prompts.
GPT-3 surfaces text that says that models that are “good at understanding these hallmarks” will be better at responding to human prompts. But is GPT-3 itself good at understanding hallmarks? Is it “using language” like it says the good models do? Or is it just good at predicting which words tend to follow other words? We should imagine the path of a sentence to be like the path of a fox, traversing word-space according to some unconscious rule. Unlike the navmesh, where every triangle maps cleanly to a single set of coordinates in space, word-space is much more sensitive to minute changes in condition and self-reference and a million other complexities. But it’s the same sort of thing: see enough paths and you can predict another path — within the domain.
But no one is really scared about an AI that might one day be really good at repeating how humans have tended to end sentences in the recent past. The fear is that it’s going to “wake up”, that it will see enough statistical correlations relating to agents that it will become an agent itself. Can it develop a theory of self-embodiment? By now you can guess the answer - it depends on how the model can perceive the factors from without, which itself depends on the accounting system. Remember our lesson from Representation and Uncertainty - representations annihilate detail. Your understanding of the universe must be a constant conversation with the phenomenon itself, because you can never be sure you’re capturing “all of the details” effectively enough, like the impact of a meteor on the power plant.
And this is how human beings are turkey-proof in a way current machine learning models definitely aren’t. We’re also vulnerable to the problem of induction - but it’s with hypotheticals like “What if tomorrow all of the universe’s photons turn into frogs?” If that happened, we’d all be blinded instantly since our eyes are expecting photons, and we’d also probably be killed by the sun hitting us with a constant beam of frogs at the speed of light. But we have, via our embodied cognition, a multi-billion year guarantee that so far photons have remained photons, and taking their behavior as an unassailable starting point has always worked.
Fears of AI awakenings are fears that somehow hearing enough primates talk about their embodiment will give the machine a theory of embodiment. But why would it generate such a theory? It’s always going to be easier to cheat, in the same way that the machine didn’t need to create a memos → navmesh theoretical link when it could simply figure out where the navmesh is by looking at the foxes. The power of scale is at once vindicated (you really don’t need theory to make predictions about the here-and-now) and bounded (you really can’t scale your way out of being a turkey).
Note that this doesn’t preclude the possibility of ever having a machine that can think like a human. If we could somehow embody a machine to take the base senses we’ve been evolutionarily trained to take as given and similarly decide that all theories of variation must “ground out” in them, then go out and interact with the world while holding that belief, there’s a possibility of a machine that can generate accounts of variation beyond human capacity. But it will never happen by the current approach of simply training a model on vast quantities of data exhaust; you need the interactive capacity to generate a new sense by interfacing with the universe in a novel way. And if such a machine is ever developed, there’s strong reason to believe that it won’t have the worst-case “hard takeoff” where it instantly goes from slightly superhuman to arbitrarily powerful. Hard takeoff relies on self-improving in machine time, while interactivity means your understanding of a phenomenon is bounded by the time it takes to observe that phenomenon, since you need to work with your new account of variation and can’t look at past data which didn’t account for the detail you’re now experimenting with. For example, when you discover the microscope, you can look at microscopic things going forward, but you can’t look back at all your old notes and observe the microscopic things there, because you didn’t account for them until now. (Though sure, let’s test the first interacting machine in a building with no internet and a big moat outside. I’m confident, not stupid.)
Our popular conception of intelligence encompasses both the generation of accounting systems (creating the microscope to perceive the hitherto-unseen) and the effective use of accounting systems (taking your observations from the microscope and using them to correctly guess what other microscopic things might do). People who are “smart” are usually good at doing both of these things, so there wasn’t much harm in blending them together conceptually. The scaling hypothesis forces us to be more aware of the difference, since scale is so incredibly powerful at working within existing accounting systems while being powerless to escape them. If we don’t take care to draw this distinction, we’ll either be left like the AI skeptics who constantly underestimate what scale can do (by assuming machine learning’s inability to “think” like we do means it can’t find cryptic patterns) or the doomsday prophets who think the model will kill us all (by assuming that its ability to surface cryptic patterns within an accounting system means that it has “the intelligence” of a human being who could surface that pattern). For the vast majority of human history, creating new accounts of variation and solving problems within them were just two things that people did using “intelligence”, in a way that made it seem natural to equate them. But the overwhelming might of scale has severed that loose equality forever, and we need to follow the treasure fox to a more nuanced description of how human beings actually solve problems.
V.
And what is that more nuanced description of how we actually solve problems?
Here’s where I confess: this article is the start of something, not the conclusion. I don’t have a solid enough understanding of accounts of variation to perfectly describe exactly which sorts of reasoning I would expect to be possible from a passive and outcome-driven frame and which require interaction with the universe. All I want to impart to you for now is that such an understanding is necessary to talk about “intelligence” in a way that remains coherent in a world with modern computing power. We need to change our concept of what “evidence” is. When our knowledge of a phenomenon is solely probabilistic, we should consider ourselves senseless turkeys, capable only of pointing at a pattern with no account for when that pattern may stop holding. We should take our common-sense feelings of what understanding is — that those who can expand if-then chains many stack levels, or deal with complex hypotheticals, understand more than people just tallying up the score — and try to figure out a more rigorous way of classifying where on this continuum a given piece of evidence is.
And let’s dispense with the false humility of us being “just” biological beings, imperfect copies of some hypothetical paragons of pure reasoning. Evolution didn’t start with reasoning. It started with replicators looking for a way to take information and defy entropy on a local scale to send that information forward in time. The replicators uncovered the most induction-proof patterns possible, the signals that you could use to drive decision making in a way that would remain useful for billions of years. Formal, abstract reasoning came much later, a cute little module that helped the information make it into the future a little better. But that reasoning came after, and has always been subservient to, the senses that can account for variation as accurately as possible4. So it was, and so shall it be.
We followed the fox looking for treasure, and we found only complexity. Still, the fox was going somewhere, and it’s worth figuring out where it took us. Let’s blend humility in the face of induction with confidence in the wake of scale and see what it gets us. To end, I’d like to share this quote from Remarque’s The Black Obelisk:
“Chaos,” Wernicke says. “Is it really chaos? Or is it so only for us? Have you ever considered how the world would be if we had one more sense?”
“No.”
“But with one sense less?”
I reflect. “Then you would be blind, or deaf, or you couldn’t taste. It wouldn’t make a big difference.”
“And with one sense more? Why should we always be limited to five? Why couldn’t we perhaps develop six someday? Or eight? Or twelve? Wouldn’t the world be completely different? Perhaps with the sixth our concept of time would disappear. Or our concept of space. Or of death. Or of pain. Or of morality. Certainly our present concept of life. We wander through our existence with pretty limited organs. A dog can hear better than any human being. A bat finds its way blind through all obstacles. A butterfly has a radio receiver that enables it to fly for miles directly to its mate. Migrating birds are vastly superior to us in their orientation. Snakes can hear with their skin. There are hundreds of such examples in natural history. So how can we know anything for certain? The extension of one organ or the development of a new one—and the world changes, life changes, and our concept of God changes. Prost!”
I lift my glass and drink. The Moselle is tart and earthy. “And so it’s better to wait till we have a sixth sense, eh?” I say.
“That’s not necessary. You can do what you like. But it’s a good thing to know that one more sense would knock all our conclusions into a cocked hat. That puts an end to too much solemnity, doesn’t it? How’s the wine?”
1. I go into a bit more detail on this in the tis.so post Excluded middle frames and when to doubt them.
2. If someone actually wants to run this experiment in game and attempt to explain the movement of foxes using only in-game tools without any mods/dev tools that talk to the navmesh directly, I’d be willing to pay a grant of hundreds (not thousands) of dollars to support a paper about the experiment. Send enough of an experiment design that I know you’re serious to collin@desystemize.com or via DM to @CollinLysford.
3. Actually, I think the navmesh would need to change slightly to accommodate the new locations of the chests, but holding the navmesh constant makes this line of thinking easier to follow.
4. “Ontology recapitulates philosophy” is too niche of a pun for the main essay, but I can’t resist putting it in a footnote.