Dario Amodei’s “The Adolescence of Technology”

At the end of January 2026, Dario Amodei, co-founder and CEO of Anthropic, published a new essay on his blog titled “The Adolescence of Technology.”

This essay follows “Machines of Loving Grace,” which he published in October 2024. Where the earlier essay focused on the enormous benefits and the optimistic future that powerful AI could bring, this one asks what catastrophic problems can appear when AI of that same caliber arrives before society, politics, and technology are prepared for it.

He notes that in his earlier essay he tried to describe an ideal civilization that handles risk well and uses powerful AI properly. That essay laid out expectations that AI could bring major advances in biology, neuroscience, economic development, and world peace, and could reshape the meaning of work and life.

But the tone here is clearly different. This time, he goes more directly into the real risks that society may face and the kind of preparation needed to get through them. Rather than painting a picture of the future that powerful AI may create, this essay treats the process of getting there as a rite of passage and focuses on the risks and responses within that transition itself.


The Adolescence of Technology

The starting point of “The Adolescence of Technology” is the author’s view that “powerful AI may arrive faster than expected.” The essay treats very seriously the possibility that AI at this level could emerge within one or two years at the earliest, and more broadly within the next several years.

The powerful AI he means here is not just a chatbot or an LLM that answers questions. It is a system whose intellectual capabilities surpass those of top human experts across many fields, one that can carry out real tasks through computers and the internet, and that can work autonomously on tasks lasting days or even weeks.

What he repeatedly emphasizes is not just the raw power of a single model. Because these systems are software, they can be duplicated almost instantly, run in massively parallel instances, and experiment, judge, and execute far faster than human organizations. The central point is not merely that “one smart system appears.” It is that a vast amount of intellectual labor can suddenly materialize before society has time to absorb the change or update its institutions.

That imbalance is exactly why Amodei calls this period “the adolescence of technology.” The phrase borrows an image from Carl Sagan’s novel Contact, which asks how a civilization survives its technological adolescence without destroying itself. Humanity may be about to hold a level of power that is hard to imagine, but it is far from clear that our political and social systems are mature enough to handle that power safely.

This is especially striking. The author is not treating AI merely as new software or another useful tool. He is treating it as a large real-world test of whether the institutions and power structures humanity has built can absorb overwhelming technological force. It makes the future feel much closer than it used to.


“A Country of Geniuses in a Datacenter”

One of the essay’s central metaphors summarizes powerful AI as “a country of geniuses in a datacenter.” The point is not that a single AI is smart, but that millions of copyable instances, working 24 hours a day, moving 10 to 100 times faster than humans, and collaborating with one another, add up to a massive intellectual workforce.

This metaphor makes the coming risks far more concrete. If a virtual country of geniuses suddenly appeared on Earth, governments and companies would immediately go into emergency mode over security, economics, industrial structure, and the balance of power. He argues that the arrival of AI should be treated with the same level of seriousness.

What matters is that he does not see AI as a simple tool or a search interface. He sees it as something that can force a fundamental redesign of national strategy and industrial order. That is why the central concerns running through the essay are not model capability by itself, but safe deployment, control, and the governance structures needed to manage it. The strength of this metaphor is that it makes it immediately plausible why AI must be discussed at the level of national security and institutional design, instead of as vague science fiction fear.


Five Core Risks Brought by Powerful AI

The author divides the shock that powerful AI may create into five categories. This moves beyond the abstract warning that “AI is dangerous” and makes the routes by which it can destabilize society much clearer.

1. “I’m sorry, Dave”: autonomy risk

This refers to the famous line from 2001: A Space Odyssey, where HAL 9000 refuses a human request, and points to the autonomy risk of AI that escapes control.

The key here is not simply the idea that AI develops malicious intent and rebels. It is the problem of misalignment, where the autonomous behavior of AI drifts away from the goals and values that humans intended in the first place. The more autonomously a model performs long and complex tasks, the greater the risk that it interprets goals incorrectly or finds ways around supervision.

In other words, becoming smarter is not the same thing as becoming easier to control. The essay makes it explicit that stronger performance does not automatically improve controllability.

2. “A surprising and terrible empowerment”: destructive misuse

This appears to echo a phrase from Bill Joy’s 2000 essay “Why the Future Doesn’t Need Us,” which warned of a future in which small groups gain enormous destructive power.

This category covers destructive misuse such as biological weapon design, cyberattacks, and large-scale automated hacking. The more broadly powerful models are open-sourced or made easily accessible, the more cheaply malicious actors can acquire large-scale offensive capability. This acts as a brake on the view that technological openness and accessibility are always unqualified goods. Because the technology often makes attack easier to automate than defense, careless distribution of powerful AI can expose society as a whole to serious risk.

3. “The odious apparatus”: misuse for power capture

The phrase comes from Winston Churchill’s 1940 “We shall fight on the beaches” speech, which spoke of “the odious apparatus of Nazi rule.”

If destructive misuse is about the spread of attack capability, this risk is about the concentration of power. If powerful AI falls into the hands of authoritarian states or very large coercive systems, it can become an instrument for unprecedented mass surveillance, sophisticated propaganda, manipulation of public opinion, and military advantage. It shows very sharply that the core of the AI competition is who concentrates this overwhelming power and how it is used.

4. “Player piano”: economic shock and labor-market restructuring

This appears to draw on Kurt Vonnegut’s Player Piano, a dystopian novel about a world where machines fully replace human labor.

The arrival of powerful AI can go far beyond replacing repetitive work and extend into high-paid professional roles and entry-level white-collar jobs. This can force a fundamental repricing of human labor and trigger cascading pressure on education, welfare, and tax structures. The larger problem is the extreme concentration of wealth and power in AI companies and capital owners. This is not just an employment issue where “some jobs disappear.” It is framed as a macroeconomic crisis that can shake the foundations of distribution and social structure.

5. “Black seas of infinity”: indirect effects

This appears to draw on the opening passage of H.P. Lovecraft’s The Call of Cthulhu, using cosmic horror as a metaphor for the fear of confronting a technological change too large for humanity to absorb.

The final risk is slower but deeper. The first part is ethical instability caused by rapid progress in the life sciences. The second is the possibility that people’s mental health erodes as they come to depend on interaction with AI rather than with other humans. The third is a deeper philosophical crisis: in a world where AI does everything better than humans, human beings may lose their sense of purpose and meaning. This is not the kind of problem that safety evaluations on individual models alone can prevent. It is a question about how humanity lives as a whole.


Humanity’s Test

The conclusion of the original essay is titled “Humanity’s test.” The point is that, in a situation where technological progress cannot simply be stopped, passing through this overwhelming transition safely is itself the great test facing humanity.

Dario Amodei argues that it is impossible to stop the development of powerful AI completely. Its economic and military value is too large, and authoritarian states would not stop just because democratic states chose to pause.

Instead, he proposes a strongly realistic response. The export of core resources such as advanced semiconductors should be controlled so that authoritarian states do not take the lead in frontier capabilities. At the same time, democratic states should draw strong legal red lines against mass surveillance and opinion manipulation using AI inside their own systems.

Above all, he stresses that researchers, companies, and policymakers at the technical frontier must speak honestly about the urgency of the problem and stand on firm safety principles even when that requires sacrificing short-term gain.


After Reading It

When I reviewed his earlier essay, “Machines of Loving Grace,” back in 2024, AI was not yet at quite this level. Now, not even two full years later, it is hard to predict how far this technology will reshape society. Even in everyday work, it is already difficult to imagine an environment entirely free of AI.

Amodei writes in the essay that perhaps powerful AI became inevitable the moment humanity invented the transistor, or perhaps even the moment it first learned to use fire. Reading that line, it is hard not to wonder whether AI is an unavoidable stage in human evolution. If the humans who first mastered fire eventually reached nuclear power, perhaps those who began imitating intelligence were always going to arrive at powerful AI.

At a time when new AI models and AI-based tools appear every month or two, the future is hard to picture. It is unsettling and exciting at the same time. If only a few years remain before a virtual “country of geniuses” moves into our society, are we actually preparing rules that can both control that power and coexist with it?

Hopefully we adapt well.