Some housekeeping in the comments...
Yes, Desystemize is not dead! I just got sick of tip-toeing around the general representational crisis and wanted to get it nailed down once and for all. This took a lot of time, but I'm glad that it's done. We're probably never getting back to weekly cadence, but we're definitely going back to something more regular than this. Going forward, I'll probably retain the "Desystemize #N" naming scheme for stuff that's kind of just "one thought" and use proper titles for arguments like this that are more comprehensive and structured.
Also - the eagle-eyed among you may have noticed that the last Desystemize said this would be about AI existential risk, and it's not. Basically, I wrote a couple thousand words about it and then I got bored. I might collaborate with Crispy on it and revive it in some form, but I didn't feel like pushing it through. I will say, though, that this article kind of gets my main point across - which is that any intelligent AI will need to be an embodied AI that can pick novel details out of its environment, not a correlation machine munching on pre-chewed data. Hopefully once you've internalized "You can't diagnose patients from behind the door", it's easy to see why "Okay, but what if the computer behind the door is like, *really super duper fast*" is not an especially serious argument.
I learned my lesson from the above and didn't put a preview at the end of this one. Though if I don't change my mind, next article will be about ontological remodeling and sudoku.
Looks cool. Don't worry about skipping existential risk - Gray Mirror and ACX got to it first anyway. Take your time.
Excellent post! I just have one small aside:
This is much less general than the broad representational issues you're talking about, but a term I like for the problem that the city doctor and country doctor are having is "denominator problem."
Whenever someone gives you an average, you should ask "what are they dividing by?" There is some background set that's "normal" and if you divide by measurements of different things, you get different averages.
The country doctor and the city doctor have different experiences because they see different people. They aren't explicitly doing math, but their ideas about "average" have different denominators.
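A toy sketch of the point, with invented numbers: divide the same kind of count by two different patient populations and you get two different senses of what's "average."

```python
# Hypothetical illustration of the "denominator problem".
# All numbers are made up: each doctor sees a condition at a
# different rate because they divide by a different population.

city_patients = 10_000   # patients the city doctor sees per year
city_cases = 50          # cases of some condition among them

country_patients = 500   # patients the country doctor sees per year
country_cases = 25       # cases among them

# Same arithmetic, different denominators, different "averages".
city_rate = city_cases / city_patients
country_rate = country_cases / country_patients

print(f"City doctor's sense of 'typical': {city_rate:.1%}")     # 0.5%
print(f"Country doctor's sense of 'typical': {country_rate:.1%}")  # 5.0%
```

Neither doctor is computing this explicitly, but the intuition each one calls "average" is an average over a different background set.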
However, there are two levels of concern: (1) do you understand the dataset, i.e., what do they *think* they're dividing by? And (2) what are they *really* dividing by? As you say, reanalyzing the dataset might not be enough. You might have to go and look.
And then there are the broader questions of whether you're even counting the right things.