
Thank you for this - I think the notions of the information theory of individuality and context maximization as human flourishing are very valuable.

I strongly disagree that we'd be better off without AI, but I find your argument that we risk ceding our individuality to AI compelling. Open-source and/or personal AIs might be the way forward: if I can train my own GPT on my own personal experiential data, then the AI becomes downstream of me, and I'm less likely to cede my individuality. To achieve something like this, we'd want to increase the power of consumer-grade hardware and/or decrease the need for high-parameter models; both strategies decrease the marginal value of a model that only Google can train over a model that I can train myself.
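
For concreteness, here's a minimal sketch of what that could look like today. It assumes the Hugging Face transformers, peft, and datasets libraries, a small open model (EleutherAI's pythia-410m), and a hypothetical my_notes.txt of personal writing; LoRA adapters are one illustrative way to lower the parameter-count bar so a consumer GPU suffices, not the only approach:

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model
from datasets import load_dataset

base = "EleutherAI/pythia-410m"           # small enough for consumer hardware
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token             # GPT-NeoX tokenizers have no pad token

model = AutoModelForCausalLM.from_pretrained(base)
# LoRA trains only small low-rank adapters on the attention projections;
# the base model's weights stay frozen.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["query_key_value"],
    task_type="CAUSAL_LM"))

# my_notes.txt is a stand-in for whatever personal corpus you'd use.
data = load_dataset("text", data_files="my_notes.txt")["train"]
data = data.map(lambda b: tok(b["text"], truncation=True, max_length=512),
                batched=True, remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="my-adapter",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8,
                           num_train_epochs=1),
    train_dataset=data,
    # mlm=False gives standard causal-LM labels (predict the next token)
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```

The point isn't the specific libraries; it's that adapter-style training keeps the expensive base model frozen, so the personal part of the model really is downstream of your own data.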

Really nice and really thought-provoking, but I wonder if it tells us the same thing when we consider any other big systemic change to our superorganisms, whether it's the invention of democracy or academic publishing as new ways to organize, or even something like writing, which Plato criticized as destroying our well-established ways of organizing information.

What's actually gonna stop AI, though? I've heard grumbling in multiple quarters about either shutting it down or regulating it, but it doesn't seem to have coalesced into anything like the abolitionist movement yet.

So basically you are saying:

1) Develop AI systems that augment and empower humans rather than replace them. The goal should be to enhance human capabilities and autonomy, not reduce human participation and agency; this requires rethinking current approaches to AI research and development.

2) Increase transparency and oversight of AI systems. Current "black box" AI lacks interpretability and explainability, making it difficult for humans to understand how decisions are made and intervene when needed. More transparency could enable humans to maintain some control over AI systems.

3) Limit the scope and impact of AI systems. Not every task needs to be automated or "AI-ified." Human beings and human decision making still confer advantages in many contexts. AI systems could be designed for more narrow, circumscribed roles.

4) Foster diversity in AI development. A broader range of perspectives and expertise beyond computer science, including from the humanities, social sciences, law and ethics, could help design AI that is more compatible with human flourishing.

5) Support research and experimentation in human-AI partnerships. New models of human-machine collaboration and interaction could help mitigate the risks associated with more traditional notions of "strong" or "general" AI that replaces humans.

6) Continually reevaluate whether AI is truly enabling progress for humanity. Every new AI system should be assessed based on whether it meaningfully enhances human capabilities, wellbeing and autonomy, and allows humans to continue shaping institutions in positive ways.

7) Remain vigilant and proactive. Even small, incremental changes due to AI systems can accumulate over time, so risks need to be identified and addressed early. A precautionary approach may be warranted.

Focus on augmenting rather than replacing humans.

> There’s this thing now called “prompt engineers”, people who know good words to try and whisper to the AI systems so they’ll behave for two seconds. Kids pass around all sorts of oral traditions on how one might become viral on TikTok. A constant stream of A/B testing leads to us swapping tales of our different realities on the same websites, wondering whose world will become canon and whose will be abandoned. Thanks to the tyranny of low expectations, this is all humdrum background noise of the modern world. We don’t expect to understand things down to their roots. Everything is turning into magic words and automatic recommendations.

William Gibson's catchphrase comes to mind: "the street finds its own uses for things." I don't think he was talking about understanding things down to their roots, though. It's an urban aesthetic. Nature is far away and not what a young, ambitious street musician is interested in.

I'm wondering who should be considered really in touch with reality. We do have lots more cameras than we used to. You can have a good time looking around in Google Earth at places you're unlikely to visit. Astronomers have better telescopes, though it's seldom necessary to visit them in person, and in some cases impossible. There's more and better satellite data.

Maybe we should encourage people to visit more places on foot, but on the other hand, tourism isn't known for making places better. Too many people visiting a place in person causes environmental damage. You don't want them straying from the path, since they will trample things and may even get lost and have to be rescued. You probably want people working at an archeological site to be under professional supervision.

So the question is, if you go and look for yourself, are you adding any value?

What are some roles beyond professional scientist and tourist that need more people?

>> If Mooglebook’s executive team unanimously agree that its activities are harmful, and they want to get out of the advertising business and pivot the whole company to rescuing abused beagles, they cannot do that. They would be fired by the board immediately. If the board agreed, they would be fired by the shareholders.

Unfortunately, Chapman went up a level of abstraction and lost some key details. This is not how Facebook works. The controlling shareholder for Facebook is Zuckerberg, and he can rename the company and plow money into VR if he likes. Similarly for Google, though the controlling shareholders aren't saying anything in public.

>> The institution has its own agency: its own purposes, plans, reasons, and logic, which are more powerful than the humans it employs. Those are subordinate in turn to the AI the company depends on for its survival. If enemies of Mooglebook’s AI—activists, regulators, competitors—try to harm it, the institution can’t not do everything in its power to defend it. As, in fact, it is currently doing.

For Facebook, Google, and now Twitter, the institution isn't in charge; specific people are, and we can name them. That's not the end of the story either, though: those people can often be frustrated by their companies' institutional constraints, their willingness to break things varies, and there are outside constraints.

I think it makes sense to be wary of power, but we should also be wary of folk theories about how power works because they're often mistaken. To get serious about power, we would need to investigate how it really works, using concrete examples.

I'm not curious enough to put in that much effort, though I do enjoy reading Matt Levine's stories about corporate governance gone wrong.

Hi, this is great. Thought I'd make a side point about elephants. You point to a tweet with a picture captioned: "This is what an elephant herd is supposed to look like."

But here's a blog by someone who should know, saying no, it's not, that's an abnormal picture of elephants: https://markdeeble.wordpress.com/2014/05/18/haunted-by-a-photograph/

(Found by reading the tweet replies. Otherwise I'd never know, since I'm no elephant expert.)

So maybe it's a better example for Chasing the Treasure Fox, or just an example of the importance of context? We need people who know things that go beyond the dataset.
