8 Comments

If you're interested in another look at numbers and counting, watch "Breaking the Maya Code" (2008), a documentary about the 200-year struggle to decipher the writing system of the ancient Maya, a 4,000-year-old Mesoamerican civilization.

Life Lesson: It takes many different kinds of people to solve a complex problem.

Movie Scene:

"Our decimal system counts by tens and powers of ten. Förstemann realized that the Maya used a base 20 system, counting by 20s and powers of 20. With this system, they could express and manipulate extremely large numbers."

https://moviewise.wordpress.com/2015/07/31/breaking-the-maya-code/
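For anyone who wants to see the base-20 idea mechanically, here is a minimal sketch in Python (the function name and example values are mine, not from the documentary):

```python
# Minimal sketch: express a number in base 20, the way the Maya
# positional system counts by 20s and powers of 20.
def to_base20(n: int) -> list[int]:
    """Return the base-20 digits of n, most significant place first."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        digits.append(n % 20)  # value of the current 20**k place
        n //= 20
    return digits[::-1]

# A large number needs only a handful of base-20 places:
print(to_base20(1_234_567))
# [7, 14, 6, 8, 7], i.e. 7*20**4 + 14*20**3 + 6*20**2 + 8*20 + 7
```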


It's funny: you frame this as a bad thing, but it actually gives me hope. If this problem persists, there is not much to fear from AI, since it would fail to ever become superhumanly effective.


This was such a good piece, and a wonderful follow-up to 'The Limits of Data' by C. Thi Nguyen: https://issues.org/limits-of-data-nguyen/

In my world (data protection), we have a lot of 'shrew vs. white-footed mouse' examples -- easily measurable harms (data breaches, 'insufficient' legal bases or technical controls, cookie & privacy notices) that, while often subjective, are easy to spot and so get lots of press and regulatory interest. The harder problems -- unethical or illegitimate business practices, data accuracy, bias/discrimination, non-material privacy harms -- are like the tick problem: likely to be far more impactful, but requiring more time, effort & energy than the easy stuff.


Here is a fun anti-empiricism proposition: most domains, if not all, are resistant, antifragile, or (worse) archotrophic to the tyranny of systems; thus firsthand perception is superior. Reference: #16 https://eggreport.substack.com/p/how-to-find-god-10-16


Interesting feedback on this piece over at /r/slatestarcodex:

> This doesn't seem like the only way error detection in complex systems can work. You might have many ways to realize something is amiss; data where Lyme disease doesn't seem to vary based on mouse population as expected, failed interventions to reduce mouse populations having no effect on cases, ecology models suggesting white-footed mice which really had that many ticks wouldn't be competitive with other mice, etc.

> As your great mound of data and models grows, inconsistencies should be easier to detect, even for ML. New lines of evidence introduce puzzles that contradict models based on tainted older data; only basically correct theories will have explanatory power, and the edge cases will suggest avenues to improve the model or determine its limits. That's how science works.
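To make the commenter's cross-checking idea concrete, here is a toy sketch (standard-library Python; every number and threshold below is invented for illustration): fit the relationship one line of evidence predicts, then flag observations it can't explain.

```python
# Toy sketch of cross-checking lines of evidence. All data are made up;
# the point is that a tainted observation stands out against a model
# fit to the rest of the evidence.
mouse_index = [1.0, 1.2, 0.9, 1.5, 1.1, 1.3]  # hypothetical mouse-population index
lyme_cases  = [100, 118, 92, 149, 300, 131]   # hypothetical case counts; year 5 is tainted

# Least-squares fit of cases ~ k * mice (no intercept, to keep it tiny).
k = sum(m * c for m, c in zip(mouse_index, lyme_cases)) / sum(m * m for m in mouse_index)

for year, (m, c) in enumerate(zip(mouse_index, lyme_cases), start=1):
    predicted = k * m
    if abs(c - predicted) / predicted > 0.5:  # arbitrary 50% threshold
        print(f"year {year}: observed {c}, predicted {predicted:.0f} -- inconsistent")
# -> year 5: observed 300, predicted 137 -- inconsistent
```

Note that the detection only works because an independent measurement exists to contradict the tainted one, which is exactly the commenter's "new lines of evidence" point.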

What do you make of this?
