Information Networks & the AI Takeover

The first entry in my 2025 book journal needed a few more words than fit in that format, so I’m adding them here. I’m always looking for more good reads; please share!

There’s a lot in Nexus about AI taking over the world, and Harari has some pretty impressive stuff to say about that. But for me the most novel part of the book is the framework of information flow that he develops on the way to that part. He’s not the most concise guy; my (surely flawed) summation is:

  1. Human progress is characterized by a quest for truth and order. Truth helps us manipulate the world more effectively, and order allows us to live in larger and larger groups without killing each other.
  2. Information is the raw material for both of these. But information is not truth and doesn’t necessarily lead to truth. E.g., information can be used by scientists to uncover new truths, but it can also be used to propagandize a population into collective beliefs — whether those beliefs are true or not.
  3. There are two broad classes of societies: those that rely on an infallible higher power (e.g., the Bible or Stalin) and those that do not (e.g., Ancient Athens or the United States). The former prioritize order over truth; the latter rely on competing mechanisms of self-correction to balance the two.
  4. Advances in information technology have made it ever easier to share information, which has had a significant impact on which sorts of societies are more effective at balancing truth and order. These advances have benefited both democratic and autocratic models in different ways at different times.
  5. Artificial Intelligence is not a new information technology; it’s a new form of life that in many ways is superior to ours (primarily around information recall and pattern recognition) but with different motivations. E.g., it may not have the same regard for the individual that we do. AI participation will have dramatic and unpredictable impacts on how our societies, both democratic and autocratic, operate.

Harari does a remarkable job of building this all up with a ton of historical, real-world examples. That alone is worth the cost of entry. His jump to AI taking over the world seems a bit disconnected — I struggled to see the thread leading from one to the other, until I looked at it in terms of fallibility.

We’re increasingly used to giving computers authority over important stuff. And this can come with negative consequences — Harari’s prime example is Facebook’s role in the violence against the Rohingya in Myanmar. In hindsight the picture is clear: (1) Facebook coded its feed algorithms to prioritize engagement; (2) outrage increases engagement; (3) the algorithm overwhelmingly picked inflammatory (and largely false) content to show folks in Myanmar.
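
To make the trap concrete, here’s a minimal sketch of that chain, with invented signals and numbers (nothing here is Facebook’s actual system): if the only objective the ranker sees is predicted engagement, and outrage reliably drives engagement, the inflammatory post wins without anyone intending it to.

```python
# A toy feed ranker. Post, outrage_score, and base_interest are invented
# stand-ins for whatever signals a real system would have.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    outrage_score: float   # how inflammatory the post is (hypothetical signal)
    base_interest: float   # interest in the topic absent any outrage (hypothetical)

def predicted_engagement(post: Post) -> float:
    # Stand-in model of the empirical fact above: outrage drives engagement.
    return post.base_interest + 3.0 * post.outrage_score

def rank_feed(posts):
    # The algorithm does exactly what it was asked: maximize predicted
    # engagement. Nothing here checks whether content is true or harmful.
    return sorted(posts, key=predicted_engagement, reverse=True)

feed = rank_feed([
    Post("Local festival photos", outrage_score=0.1, base_interest=0.6),
    Post("False, inflammatory rumor about a minority group", outrage_score=0.9, base_interest=0.3),
])
print([p.text for p in feed])  # the rumor ranks first
```

Nothing in that ranking function is malicious; it is simply optimizing the one metric it was given.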

This is a trap that anybody who has ever tried to “manage with data” will recognize — you get what you ask for. It’s not uniquely an “AI” problem at all; how many mid-level managers have received short-term kudos for firing essential employees in the name of cost-cutting? Or been promoted for hitting volume-based sales targets by giving discounts that kill margin?

The Myanmar/FB issue wasn’t AI; it was a poor metric coded by human engineers. But Harari is right that the more we treat AI as an infallible agent in society, the more its motivations (its metrics) matter. And it’s a compounding problem — we are increasingly asking AI to create metrics that build on top of its underlying implicit values.

An example close to my heart is recruiting algorithms. It is a fact of history that many, many more men have been hired into software jobs than women (and thus, by volume, more successful engineers are men). If we ask AI to do a first screen of candidates, it’s for sure going to notice this and bias its decisions towards hiring men. Because we never explained that this bias was a problem, it simply does the job we asked it to.
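
A small sketch of how that plays out, using synthetic data and an off-the-shelf classifier (the features, numbers, and setup are all invented for illustration; real screening pipelines are far messier):

```python
# Train a screening model on "historical" hiring decisions that favored men,
# then score two equally skilled candidates. Synthetic data, invented features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
is_male = rng.integers(0, 2, n)
skill = rng.normal(0, 1, n)

# Historical outcome: hiring depended on skill AND, unjustly, on gender.
hired = (skill + 1.5 * is_male + rng.normal(0, 1, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([skill, is_male]), hired)

# Two candidates with identical skill, differing only in gender.
candidates = np.array([[1.0, 1], [1.0, 0]])
print(model.predict_proba(candidates)[:, 1])  # the male candidate scores higher
```

The model was never told that gender matters; it inferred it from the labels we handed it. And simply dropping the gender column doesn’t fully fix things, because correlated features can stand in for it.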

Presumably we could solve this if we created the perfect set of underlying motivations in the first place — we could reward the AI during training for finding historical bias and compensating. That’s basically what we do as humans with diversity programs (maligned as they are these days), and we’ll clearly have to do the same with AI.
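
One crude way to encode that compensation, continuing the sketch above, is to reweight the historical examples so the “hired” label no longer tracks gender in the training data. This is plain sample reweighting, my illustration rather than anything Harari proposes:

```python
# Same synthetic setup as the previous sketch, plus reweighting that breaks
# the statistical link between gender and the historical "hired" label.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
is_male = rng.integers(0, 2, n)
skill = rng.normal(0, 1, n)
hired = (skill + 1.5 * is_male + rng.normal(0, 1, n)) > 1.0  # biased history
X = np.column_stack([skill, is_male])

# Give each (gender, outcome) cell equal total weight during training.
weights = np.ones(n)
for g in (0, 1):
    for h in (False, True):
        cell = (is_male == g) & (hired == h)
        weights[cell] = 1.0 / max(cell.mean(), 1e-6)

biased = LogisticRegression().fit(X, hired)
reweighted = LogisticRegression().fit(X, hired, sample_weight=weights)

candidates = np.array([[1.0, 1], [1.0, 0]])  # identical skill, different gender
print(biased.predict_proba(candidates)[:, 1])      # large gap
print(reweighted.predict_proba(candidates)[:, 1])  # noticeably smaller gap
```

It narrows the gap rather than closing it, which is sort of the point: someone still has to decide what “compensating” means and check that it worked.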

Bottom line: AI is no less fallible than humans — but it can screw up at a scale far beyond what humans can accomplish. Can we create the right checks and balances before AI becomes self-reinforcing and we lose control of the process? Because surely that will happen.

And of course the answer is, who the heck knows. But Harari does a great job making us think about it and face the reality — so worth the read. Highly recommended for both the setup/framework and the AI thoughts. Just be prepared to read a LOT of words.