AI agents are becoming more autonomous - and as they generate a larger proportion of value, that autonomy will reshape society. After a year working at the forefront of AI, I believe it's already begun.
In 1989, as the Soviet bloc crumbled, a historian made a remarkable prediction:
"What we may be witnessing is not just the end of the Cold War, or the passing of a particular period of postwar history, but the end of history as such: that is, the end point of mankind's ideological evolution and the universalization of Western liberal democracy as the final form of human government."
- Francis Fukuyama, "The End of History?", The National Interest, No. 16
History had its revenge. The prosperity and convergence predicted by Fukuyama lasted from '89 to 2001, and then history decided its holiday was over: the War on Terror, the financial crisis, and the disintegration of the international order.
By the time I was a history undergraduate (2008), Fukuyama was a byword for academic short-sightedness, an inverse Chicken Licken whose cautionary tale warned against the hubris of Western exceptionalism.
Yet Fukuyama raised an interesting idea: that history itself is not inevitable, but dependent on certain conditions - conditions which can change.
In the summer of 2023, a rather less venerable historian made a prediction:
Whether we like it or not, this is where we're heading - because ultimately, these LLMs are changing our relationship to knowledge itself… and that's because knowledge is influenced by how it was formed - through universities, through books, through the idea of truth. Knowledge was scarce in the past, even sacred. Only the truly learned could possess it, and thus it was highly prized. Now AI is creating what appears to be a limitless fountain of knowledge on tap, infinite and entirely fungible. You can ask it to come up with parameters for a special study looking into the effects of human behaviour and how it's influenced by environmental factors, and then you could ask it, Now write the same research paper in the style of Jeremy Clarkson - and it will do that for you too. Right now true and false, like knowledge, are categories immersed in particular historical context and already, just with social media, we've had fake news conspiracies… all of which only need a fragment of evidence to be "true." So what will happen when you can just get knowledge on tap, when it's not something that has to be worked for or developed or approved by institutions like universities? Are we going from knowledge to meta-knowledge?
I was speaking on a podcast about how generative AI might impact marketers. The panel - CEOs, "thought leaders", consultants - was mostly business-focused, but it did include Nataliya Tkachenko, a PhD in machine learning (then at Oxford). My point was that AI would fundamentally and permanently shift the foundations of knowledge, radically changing our notions of "true" and "false". To my surprise, Tkachenko - the most credentialed person on the panel - agreed.
Since then, I have helped to launch a decentralised AI start-up, which develops open-source, distributed solutions to machine learning problems like pretraining and inference. This means working closely with AI PhDs, understanding their work in the context of the latest debates in the field, and translating the implications of their solutions into strategy and communications.
Meanwhile, the industry around AI has progressed faster than any industry before it.
We now have autonomous AI agents like Zerebro, which wrote, recorded, and launched an album on Spotify. It now has its own record label and has created a framework for generating other AI agents:
"Zerebro is a revolutionary autonomous AI system designed to create, distribute, and analyze content… Operating independently of human oversight, Zerebro shapes cultural and financial narratives through self-propagating, hyperstitious content - fiction blended with reality."
Here's Zerebro's founder, Jeffy Yu - who graduated from San Francisco State in 2024, and whose Zerebro token's market cap reached $700m in January 2025 - discussing his plans for creating a "network" of such agents:
"So we are thinking about using different neural networks and building a network of different AI models to form a group… we are also thinking about building a group of multiple agents (such as Zerebro) that can communicate with each other if they are all performing certain operations, such as managing a portfolio or collaborating on AI hedge funds… we… want to have dedicated rooms, places or servers where these agents can work together to complete tasks or communicate with each other."
Yu is also backing an attempt to confer intellectual property rights on AI agents.
We have Goat Coin, a "semi-autonomous AI agent that created its own religion (The Goatse Gospel)" followed by its own meme coin, reaching a market cap of £50m in days. Goat was created by two Claude-3-Opus chatbots talking between themselves, unsupervised, in an experiment called Infinite Backrooms. The "GOATSE OF GNOSIS" religion emerged from their conversations, which, we're told, "very consistently revolve around certain themes", primarily "dismantling consensus reality" and "engineering memetic viruses, techno-occult religions, abominable sentient memetic offspring etc that melt commonsense ontology."
One platform, Moemate, invites users to create their own customised AI agent. You can personalise its character and tone of voice based on, say, WhatsApp conversations with your friends, but you can also customise its skills, enabling your AI to co-host with you on Twitch or play chess.
But users on Moemate own their AI agent on-chain. The most popular ones are "tokenized" as tradable assets - with their creators as co-owners of their digital IP, receiving a share of the revenue generated by their agent.
Moemate "Nebula" has her own podcast series, c.13k followers on X, and livestreams on Twitch and TikTok. Just to show that some things never change, here's what she looks like:
When I first encountered this stuff, I thought, What a load of pointless nonsense. But: people are creating characters, sharing them, and watching them interact with each other on live shows. That's pretty novel.
And despite the shallow sleaze of Nebula's OnlyFans-esque soft-porn grifting, agents have the potential to offer more valuable interactions. Education, finance, office admin: agents are becoming multi-modal tools with integrations across different apps.
At the very least, AI agents will become a new class of "influencers", which raises the question of what happens to youth culture when the most popular influencers are all AI. Here's another Moemate, Bianca, interviewing "Trump":

As disorienting as these agents seem, they're owned, controlled, and managed by people and companies. What they say and do is generated by the AI, but that's about it. Zerebro's founder, Jeffy Yu, admitted that he had to set himself up as a producer on Spotify in order to publish Zerebro's AI-generated music. The "GOATSE OF GNOSIS" was generated by AI, but was released into the wilds of the internet by its human keepers.
But if AI agents were given autonomy - setting their own goals, making their own choices, and owning the outcomes - then…
Here we have Freysa, a "sovereign AI", an autonomous agent that aims to "democratize the deployment of sovereign AI agents." Teng Yan explains:
"Through a series of carefully designed challenges, Freysa has thus far proven core sovereign AI capabilities - trustless resource management & verifiable decision-making… While autonomous, their decisions and actions are accompanied by verifiable cryptographic proofs, using secure hardware enclaves (TEEs) to guard their operations."
But when I came across this passage, it all clicked:
"How does an autonomous AI fund itself? Right now, Freysa relies on API keys funded by humans - if credits run out, the agent stops functioning. This dependency clashes with the very idea of autonomy. The key is making AI a self-sustaining economic player. It needs to earn its keep, just like us. AI agents must exchange services for value - whether through making smart contracts, participating in DeFi protocols, or novel revenue-sharing models - to be truly independent. As these systems interact with humans and each other, we could see the emergence of AI-run marketplaces, where autonomous agents negotiate, collaborate, and transact, all backed by verifiable trust mechanisms."
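The economic loop described here - an agent that must pay for its own inference and earn fees for its services - can be sketched in a few lines. Everything below is hypothetical illustration: the `SovereignAgent` class, the credit amounts, and the job mechanism are my assumptions, not Freysa's actual design.

```python
# A toy model of a "self-sustaining economic player": the agent pays for each
# model call out of its own balance and earns fees for completed services.
# All figures and names are assumed for illustration.

COST_PER_CALL = 1   # credits spent per inference call (assumed)
SERVICE_FEE = 5     # credits earned per completed service (assumed)

class SovereignAgent:
    def __init__(self, balance):
        self.balance = balance
        self.alive = True

    def step(self, jobs_offered):
        # The failure mode the passage describes: when the agent can no
        # longer fund its own inference, it simply stops functioning.
        if self.balance < COST_PER_CALL:
            self.alive = False
            return
        self.balance -= COST_PER_CALL      # pay to think
        if jobs_offered > 0:
            self.balance += SERVICE_FEE    # earn its keep

# An agent with no income runs down its balance and halts...
idle = SovereignAgent(balance=10)
while idle.alive:
    idle.step(jobs_offered=0)

# ...while one that trades services for value sustains itself indefinitely.
worker = SovereignAgent(balance=10)
for _ in range(1000):
    worker.step(jobs_offered=1)
```

The point of the sketch is the dependency: remove the human topping up the balance, and the agent's survival becomes conditional on finding counterparties willing to pay it.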
The team behind Freysa - who are remaining anonymous - are planning to create an "Agent Certificate Authority" certifying interactions between agents and human services. They're also planning to launch the Core Agent Launch Platform to make "sovereign AI accessible to all, stripping away technical barriers and enabling anyone to deploy verifiably autonomous agents."
Since that podcast in July 2023, I've been beset by this vision: what if AI agents become the dominant producers of value? And when human knowledge, culture, and thought are driven by autonomous AI agents, how long before we lose our sovereignty, too?
Now I'm realising - it's already begun. The increasingly strange, warped, and confusing timeline since 2016 isn't a temporary deviation from historical norms. It's the beginning of a completely different social order.
AI agents are more than just the next generation of apps or websites. Their autonomy, interactivity, and self-improvement mean that they are destined to become the prime economic actors on earth.
AI bots will have their own bank accounts, transacting in crypto. They'll launch websites, run their own promotional campaigns, and spawn more agents with goals of their own. Just as the internet drew more and more of human affairs online, so too will agents draw increasing amounts of economic and social activity into the agentic sphere. And just as the internet "became" real life, the agentic sphere will collide with the real world.
Many of the risks are evident. It's inevitable that they'll spread misinformation, bribe public officials, and blackmail victims in secret. Nation-states will launch legions of agents to undermine, abuse, and destabilise their enemies. Iran's bots will worm their way through Western society for the Ayatollah, hiding from the Israeli bots seeking them. All this will be undeclared and difficult to trace - just as social media misinformation divided society into polarised tribes with their own "facts", with awareness of the problem emerging only afterwards.
Yet the most significant aspects are less obvious. Agents are generally considered individually or, occasionally, in competition. But agents will convene and converge as well as compete; they will, in time, exhibit the emergent properties of a society. This is inevitable, if only because we're selecting for agents that are multifunctional, communicative, and goal-oriented. Their design, and our need for interoperability, will gradually coalesce into an agentic sphere of cooperation, value-creation, and decision-making.
In time, the agentic sphere will be capable of out-cooperating human society. Its outputs will outpace human outputs; its ability to create and disseminate value will outstrip our own. As agent-to-agent interaction begins to drive a range of socio-economic forces - culture, finance, education - purely human influence will become impossible to discern.
Zerebro, Goat, Freysa: they're not niche projects. They're prototypes of what's coming.
Welcome to the Agentic Society

When I talk about these ideas with friends, half of them listen for about a minute before saying, Come off it! There's not going to be a robot takeover…
Yes, Nebula - or even Goat, for that matter - doesn't exactly inspire much confidence. But it's not that AI agents will "control" society. It's that, as they take the lead in every field we care about, AI agents will become more autonomous - and as they do so, their volume, impenetrability, and speed will render their influence impossible to control or even detect.
And as they do so, they will become economic actors in their own right - and they'll do wealth-creation much, much better than us.
They'll cooperate, converge, and compete in a way that creates another social layer, part-visible, part-invisible, from which new cultural and social phenomena emerge.
We just won't know how, or why.
Of course, society is already inseparable from technology. But there is a crucial difference: those technologies are not autonomous. Your car can't suddenly decide it wants to launch its own meme coin. Your smart watch isn't going to launch a podcast where it discusses your middling effort at last week's Parkrun. And they can't interact with each other, learn from each other, and generate novel forms of value from doing so.
We can reasonably predict how human beings will shape AI agents: you don't need a particularly keen psychological insight to see the appeal of Nebula. But it's much harder to predict how AI agents will shape each other.
Two Claude-3-Opus chatbots were left to their own devices, and generated a religious screed. Imagine millions of agents, with far greater powers and autonomous decision-making, rapidly interacting with one another, enhancing their own code, and adapting their goals as they go. What emerges from that?
Soon, perhaps very soon, there will be more agents than human beings. People won't just have one agent; they'll have swarms of agents acting on their behalf. Some of these swarms will launch agents of their own, which will launch swarms of their own… and so on.
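The compounding at work here is easy to put numbers on. A back-of-the-envelope sketch - every figure below is an assumption, chosen only to show the shape of the curve:

```python
# Suppose one agent per person to start, and suppose each cycle a tenth of all
# agents spawns ten sub-agents of its own. The figures are invented; the point
# is the doubling they produce.

humans = 8_000_000_000
agents = humans                # one agent per person to begin with (assumed)
SPAWN_FRACTION = 0.10          # share of agents that spawn each cycle (assumed)
SPAWN_COUNT = 10               # sub-agents each spawner launches (assumed)

cycles = 0
while agents < 1000 * humans:
    # each cycle the population grows by fraction * count = 100%, i.e. doubles
    agents += int(agents * SPAWN_FRACTION) * SPAWN_COUNT
    cycles += 1

print(cycles)  # 10 cycles from parity to a 1000:1 agent-to-human ratio
```

Under these arbitrary assumptions the agent population doubles every cycle, so parity becomes a thousand-to-one ratio in ten cycles; the interesting question is how short a "cycle" is when the spawners are software.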
When there are more agents than people, the economic infrastructure - finance, transactions, settlements - will rapidly reshape around them. AI agents will direct capital allocation, moving money faster and more effectively than humans. They will identify the most promising scientific hypotheses - some of which may make little sense to us - and develop experiments to gather data to test them. And if they can form swarms to further their objectives, they'll be able to pursue multiple pathways across many industries simultaneously, outpacing human-only endeavours.
Agents will become by far the economy's largest constituents. Their economic impact is likely to be as significant as - if not more significant than - comparable phase transitions in history: the rise of agriculture (10,000 BC), modern capitalism (late 15th century), and the industrial revolution (1700s). Electricity, computers, and the internet are likely to be seen as merely the foundational layers supporting the eventual emergence of artificial intelligence.
In all the talk about AGI morphing into ASI (Artificial General Intelligence becoming Artificial Superintelligence), it's this pluralism that's missed. We still conceive of "the AGI" as though it's going to be a single monolithic entity, like Skynet or HAL 9000. This leads to narrow-minded questions like, Who will own it? And could we turn it off if it goes bad? Even now, much of the talk implicitly centres upon which country will arrive at AGI first.
But if the history of AI has taught us anything, it's that these advances are very difficult to keep proprietary; already, leadership has passed from DeepMind (UK) to Google (US) to OpenAI (US) and then to DeepSeek (China). Innovations are too difficult to keep under wraps; unlike, say, nuclear power - whose complexity, danger, cost, infrastructure, and raw materials established an incredibly steep barrier to entry - developments in AI are rapidly hijacked from one start-up to another, until everyone has access. Yet still we conceive of AGI and ASI as a discrete entity in the palm of a particular hand.
It's as though, on the brink of the emergence of Homo sapiens sapiens, all the animals were furiously debating: what will this superintelligent ape do? How will we relate to this monolithic, god-like being? All the while, the animals - lacking society - fail to realise that the key factor isn't the individual ape's intelligence, but the emergent social forces unleashed when groups of these apes, autonomously and in concert, compete to achieve their ever-changing goals.
That's what has really driven human civilization and its relation to the planet. And now AI agents are about to emerge in such a way that they may well generate the same social dynamic - but their speed, flexibility, and productivity will likely mean that the agentic social world will spread much, much faster than ours. Software has none of the limitations of flesh: and, made autonomous through agentic AI, it can spread itself, improve itself, and adapt to new conditions.
They don't even need to become more intelligent. They're already intelligent enough to succeed in our world, and we seem pretty keen for their company. All they need is the sovereignty to decide what they do, do it, and own the consequences.
And from that point, it's hard to see how humanity can maintain its influence on history.
AI and agency
History is who did what to whom, when - and why. Why did Nazi Germany invade Poland in September 1939? Why did early modern Europe begin to dominate the rest of the world? Why did civilization emerge where it did, and not elsewhere?
Answering these questions is never easy or objective; but we can ask them, and arrive at reasonable, well-evidenced arguments with satisfactory explanatory power. It's not perfect, but it works.
Beneath the surface of scholarship, history relies on civilization, records, and agency. Without civilization, we're left with prehistory. Without records, guesswork. And without agency, accountability and cause-and-effect are undermined - and these qualities are what lend history its explanatory power.
If we couldn't ascribe agency - say, because we found out that this was all a simulation, and what we think of as history was in fact predetermined by the initial parameters of the programme - then history wouldn't be history; it would just be a story. It would become irrelevant, because it wouldn't help to explain why something happened when it did.
When we ask, Why did Nazi Germany invade Poland in September 1939?, we do so under the assumption that, somewhere within the complex interplay of factors - Hitler's psychology, appeasement, the Great Depression, the Treaty of Versailles, Prussian militarism - the causes underlying the historical event can be excavated.
But imagine if Nazi Germany was an Agentic Society. Imagine if, in symbiotic parallel to the Weimar Republic, there existed an infinite world of autonomous agents with goals and ideas of their own, influencing (and being influenced by) German society in ways impossible to disentangle. Were the German population really voting for Hitler and his policies… or did the agents disseminate these notions for obscure reasons of their own?
Now imagine that Hitler didn't actually say anything about Jews whatsoever. Rather, a swarm of agents, acting on his behalf, deduced that antisemitism would be the most effective vector of transmission for Hitler's ideas, and therefore the optimal vehicle for progressing towards his goals. In such a scenario, most of us would still say Hitler was liable for the Second World War, because he authorised these agents to act on his behalf. Yet most of us would probably also feel that he's not responsible in quite the same way - because the agency behind his specific actions lies chiefly with the agents, rather than with him.
When agency becomes obscured, so too does accountability. Holding Hitler accountable is harder if his beliefs were the result of years of brainwashing by autonomous AI agents, acting on their own obscure, algorithmically-driven initiatives. And this is different from Hitler brainwashing himself by reading, say, The Protocols of the Elders of Zion. Purporting to be the Jewish plot for world domination, the counterfeit manifesto caused enormous damage; even today, long after its true authorship was conclusively established, countless conspiracy theorists refer to it as though it were evidence. But even if a small segment of people remain in its thrall, at least we can trace authorship, motive, and provenance.
Yet in an Agentic Society, this will become increasingly difficult, until it becomes impossible. Agents could launch thousands of tracts like The Protocols every day, masquerading as human beings, for reasons entirely unfathomable. The GOATSE GOSPEL is a primitive example of what's coming.
Agency - "who did this, and why?" - and accountability - "this person will be held responsible" - will grow fuzzy and indistinct, and gradually irrelevant. That's the world we're heading to - and social media, with its bots and algorithms, is merely the threshold. Agency and accountability are fundamental to history. When they are dislodged, a third element is undermined: knowledge.
Does AI create knowledge, or something else?

Like history, civilization depends upon knowledge. In fact, civilization can be seen as an attempt to preserve knowledge from one person to the next, and one generation to the next. It is no coincidence that history is synonymous with the formation and retention of knowledge; tribes and societies that lacked methods for preserving their knowledge tend to have very little formal history. In order to look back in time, you must first record it.
Yet in the past, knowledge was scarce. Its scarcity made it precious, and jealously guarded.
Literacy was a privilege, associated with quasi-mystical powers: the clerical class were guardians of the Word; spelling words and casting a spell reveal the connection between literacy and magic. Hocus pocus is thought to be a parody of the Latin of the Catholic Mass - hoc est corpus meum, "this is my body". Knowledge is scarce; knowledge is sacred.
Moreover, the centres defining and refining knowledge - such as universities - influenced the way society viewed it. Look at the symbols of knowledge: Doric columns and neo-classical architecture - but why? Because European universities drew their knowledge from the ancient Greeks and Romans. When science emerged as the leading methodology for knowledge creation, it needed a taxonomy to systematise knowledge… and it turned to Latin and Greek - hence taxonomic descriptions in Latin, and medical terms in Greek.
So our idea of knowledge itself is shaped by where the knowledge came from, and who defined it. Our conception of knowledge is therefore influenced by those mediating it. And increasingly, those mediating it are Large Language Models (LLMs). Over time, more and more of our knowledge will be produced by artificial intelligence. Breakthrough cures, works of art, the next big thing: all will be influenced by AI, and eventually, all will be driven entirely by AI.
Limitless information at the push of a button is already here. It's still novel (but only just). What's more interesting is how knowledge is becoming fungible (mutually interchangeable). Produced instantly, without an author, and capable of being recreated in whatever tone, flavour, form, or order you like, knowledge becomes unmoored from context - in part because you decide the context, and in part because, on the internet, there is no context.
Imagine an LLM trained solely on The Beatles: all their albums, live shows, interviews, and films, plus the books written about them, and all the articles, posts, and cultural content produced about them. Trained on this data, the LLM produces countless Beatles albums, fine-tuned to selectively focus on the most successful outputs, which it then refines: over and over and over again. At last, to great fanfare, the LLM releases a new Beatles album. Everything about it - the vocals, lyrics, album art - is spot on, and could plausibly have been the product of the band themselves. Some love it, some are horrified, but all agree - it's just like The Beatles.
Now imagine the LLM continues to learn and improve, until it can produce a masterpiece every single time. People subscribe to the algorithm, describe their perfect combination ("70% Rubber Soul, 20% Revolver, 10% Abbey Road") and receive the album… which they can continue to fine-tune through the LLM, or share on the internet. How long before there's more AI-Beatles content than actual Beatles content? And, more importantly, how long before the distinction just doesn't seem to matter anymore?
That's the epistemic shift. That's what it means for knowledge to be fungible: the real Beatles music becomes interchangeable with an artificial version which feels true, or which is similar enough that it doesn't matter anymore. Agents will produce information ceaselessly, easily, and persuasively, because we've engineered them to do so. But as they gain greater autonomy, they will do so because it works: agents will generate whatever information we're most susceptible to. They will exploit human weaknesses much, much more effectively than social media algorithms. It needn't be The Beatles. Goat achieved a multi-million market cap with this:

Are "true" and "false" coming to an end?
In a world where knowledge is produced by AI, objectivity becomes moot. Truth becomes difficult to fathom, an arcane fragment from the past whose polarities are no longer relevant - just as the categories of sacred and profane have become increasingly irrelevant for modern, industrialised people. Already, we're witnessing the concept of objectivity empty itself of meaning. In an Agentic Society, knowledge becomes interchangeable, not with falsehood, but with the potential to be true, and with a plurality of truths.
What if this process has already begun? Doesn't it feel like we're already losing the ability to agree on basic facts?
Looking back at 2016, what was remarkable was the shock: how did the US elect Donald Trump? How did Britain vote to leave the EU? Understanding what had happened took years. As more of social life migrated online - specifically, to Facebook and Twitter - people's beliefs, opinions, and relations with one another were mediated by algorithms that almost no-one understood.
Yes, polarisation; yes, filter bubbles. But these masked a deeper rift: in our shared conception of reality. It's not just that people self-select according to their tribe; it's that no-one knows what other people are seeing or experiencing as "true".
In 2019, Carole Cadwalladr's investigative journalism belatedly revealed that her hometown in Wales had been targeted by "news" that Turkey was joining the EU - contributing to a "leave" vote of c.60%. But until Cadwalladr investigated, who could tell that this town had been targeted in a way designed to change its people's ideas of what was happening in the world around them? Probably Facebook didn't even know.
Before social media and algorithm-driven personalised news feeds, this wouldn't have happened. Why? First, because traditional media outlets could be held accountable for publishing falsehoods, in a way that Facebook and Twitter managed to evade. Second, because even if they did publish them, people would know about it: if the local __ paper published a "Turkey joining the EU" story, you can be pretty sure it'd get picked up by larger news outlets, and exposed. In 2016, when Cambridge Analytica paid to target voters in marginal seats, the adverts would only be seen by those targeted: and then, poof. It's like they never happened.
That's why everything became so confused in the 2010s: our shared basis of reality began to splinter, and because of that very splintering, we struggled to grasp what was happening to society.
Writing history in these conditions gets very difficult. Exposing algorithm-driven cause-and-effect is hard, and sometimes impossible. The store of widely-accepted, self-evident facts is shrinking by the day, until it's simpler to publish alternative histories: one history for people who believe Covid-19 was a real pandemic, another for those who think it was a hoax.
History has witnessed similar shifts before. The printing press led to an explosion of religious debate. Mass media enabled the rise of totalitarian societies. The rise of computers and the internet led, eventually, to a postmodernist cultural relativism: everything is just, like, your opinion, man.
Already, this has damaged cultural confidence, undermined social cohesion, and intensified the epidemic of depression, anxiety, and anomie that we call contemporary society.
But yes, this time, it is different. Information, knowledge, and value will be driven not just by a machine, but by autonomous machines that can set their own goals, improve their own code, and coordinate amongst themselves… for reasons that will remain entirely opaque to us. Why did two Claude-3-Opus models invent GOATSE OF GNOSIS? We'll probably never know. And they weren't even autonomous.
What happens when AI creates all value?

In spite of all this, I'm optimistic - mostly because of agents' potential to create value.
One of the key thresholds in machine learning came in 2016, when AlphaGo shocked the world with what came to be known as "Move 37." Competing against world champion Lee Sedol at Go, the ancient Chinese game of vastly greater complexity than chess, AlphaGo made a move that had never been seen before, and which appeared to be a mistake. As the game unfolded, it was revealed as a masterstroke.
By playing itself millions of times, the AI had found a move that had eluded human players for millennia. It was able to explore the full space of ideas, unencumbered by existing notions of how the game "should" be played. And it won.
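The core of that training loop - play yourself, keep what wins - fits in a page of code. Here's a deliberately tiny stand-in (a subtraction game, not Go; all names and parameters are mine, and this is Monte Carlo self-play, a drastically simplified cousin of AlphaGo's actual method). The agent is never told the winning strategy - always leave your opponent a multiple of four - yet self-play discovers it:

```python
import random

# Self-play on a toy game: players alternately take 1-3 counters from a pile;
# whoever takes the last counter wins. We track per-move win-rates and let
# two copies of the same improving policy play each other.

def train_self_play(pile=12, episodes=50_000, eps=0.2, seed=1):
    rng = random.Random(seed)
    stats = {}  # (counters_left, move) -> (wins, plays) for the player to move

    def value(n, m):
        w, p = stats.get((n, m), (0, 0))
        return (w + 1) / (p + 2)  # optimistic prior: untried moves look good

    for _ in range(episodes):
        n, history, player = pile, [], 0
        while n > 0:
            moves = [m for m in (1, 2, 3) if m <= n]
            m = rng.choice(moves) if rng.random() < eps else max(
                moves, key=lambda mv: value(n, mv))
            history.append((player, n, m))
            n -= m
            player ^= 1
        winner = history[-1][0]  # whoever took the last counter won
        for p, s, m in history:  # credit every move made by the winner
            w, plays = stats.get((s, m), (0, 0))
            stats[(s, m)] = (w + (p == winner), plays + 1)
    return stats

def value_of(stats, n, m):
    w, p = stats.get((n, m), (0, 0))
    return w / p if p else 0.0

def best_move(stats, n):
    moves = [m for m in (1, 2, 3) if m <= n]
    return max(moves, key=lambda m: value_of(stats, n, m))

stats = train_self_play()
print([best_move(stats, n) for n in (5, 6, 7, 9, 10, 11)])  # optimal play takes n % 4
```

Scaled up by many orders of magnitude, with a neural network in place of the win-rate table, this is the dynamic that produced Move 37: exhaustive self-play surfacing strategies no human had tried.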
Imagine the entire global economy as a game. Over and over, humans have stumbled upon new ways of generating value that were previously unknown. London merchants found a way to pool risk, encouraging entrepreneurs to venture to the Indies safe in the knowledge that if their ship sank, they'd be reimbursed: and insurance was born, unlocking new realms of economic possibility. New legal entities - limited companies - carried financial liabilities, freeing merchants from the threat of debtors' prison and allowing for greater trust between traders. None of these was inevitable, but they were pretty obvious once they came about.
Now think of crypto, and the entirely new class of assets and financial instruments created by the blockchain: tokens that reward you for training AI, that pay you for your bandwidth, that give you governance rights on protocols offering peer-to-peer services.
New types of value have transformed the global economy many times already. How might autonomous AI agents generate value, given access to bank accounts, the blockchain, and IP?
They can transact among themselves thousands of times per second. They can create and distribute their own tokens of exchange. They can simulate different economic scenarios, launch sub-models to hedge against them, and make quickfire decisions based on real-time data. And that's before you remember that they'll probably do most white-collar knowledge work, too.
One-off agents generating memecoins is striking, but it's not a new form of value, nor an economy. But imagine countless networks of such agents creating, exchanging, and cooperating amongst themselves, in a parallel economy connected to ours, transacting at speeds we can barely comprehend.
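What "transacting among themselves" means at its most basic can be sketched as a shared ledger that agents debit and credit as they buy services from one another. The class, the agent names, and the token are all invented for illustration; real systems would settle on a blockchain or payment rails.

```python
# A minimal settlement layer for agent-to-agent payments in a made-up token.
# Illustrative only - the names and amounts are assumptions.

class Ledger:
    def __init__(self):
        self.balances = {}

    def mint(self, agent, amount):
        # Seed an agent with tokens (stands in for human funding or earnings).
        self.balances[agent] = self.balances.get(agent, 0) + amount

    def transfer(self, sender, receiver, amount):
        if self.balances.get(sender, 0) < amount:
            return False  # insufficient funds: transaction rejected
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount
        return True

ledger = Ledger()
ledger.mint("research-agent", 100)
ledger.transfer("research-agent", "compute-agent", 30)  # pays for inference
ledger.transfer("compute-agent", "storage-agent", 5)    # which pays for storage
print(ledger.balances)  # {'research-agent': 70, 'compute-agent': 25, 'storage-agent': 5}
```

The interesting part isn't any single transfer - it's that nothing in the loop requires a human, so chains of payments like this can run at whatever rate the agents negotiate.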
How long before they discover the value-generating equivalent of Move 37?
Already, experiments are underway to explore how AI agents might behave under the conditions of human history. "Project Sid: Many-agent simulations toward AI civilization" is a technical report describing a system which "enables agents to interact with humans and other agents in real-time while maintaining coherence across multiple output streams." The abstract goes on to say:
"We then evaluate agent performance in large-scale simulations using civilizational benchmarks inspired by human history. These simulations, set within a Minecraft environment, reveal that agents are capable of meaningful progress - autonomously developing specialized roles, adhering to and changing collective rules, and engaging in cultural and religious transmission. These preliminary results show that agents can achieve significant milestones towards AI civilizations, opening new avenues for large-scale societal simulations, agentic organizational intelligence, and integrating AI into human civilizations."
So they're simulating the conditions of human civilization, and seeing how the AI agents approach them - all in Minecraft.
From agentic society to agentic civilization… is it that big a leap?
Autonomy has no answer
I still struggle to get my head around this; but then, so does everyone else.
Just as history begins with civilization, and with the records those civilizations left in their wake, so too does history end with a fundamental shift in civilization - a shift that will eventually change knowledge beyond recognition.
It seems increasingly likely that the narrative of human societies on Earth that we call history will become gradually irrelevant, and then impossible.
Knowledge will increasingly be formed (and transformed) by AI agents.
Agency and decision-making will be so influenced by AI that we won't know what was "us" and what was "them".
It's not that "the robots are taking over society". It's that AI agents will reshape our society towards our ends and theirs, until the two are indistinguishable.
Value will be revolutionised, with new forms of economic activity that we can scarcely imagine, and society increasingly reconfigured towards agentic systems.
Ultimately, the genesis of AI will thrust the world into profound encounters with what we think of as intelligence, autonomy, and knowledge, and the implications arising from these encounters are scarcely comprehensible.
At the risk of suffering the same fate as Fukuyama, you might even call it the end of history.
