AI agents are becoming more autonomous - and as they generate a larger proportion of value, they will reshape society. After a year working at the forefront of AI, I believe it's already begun.
In 1989, as the Soviet Union collapsed, a historian made a remarkable prediction:
‘What we may be witnessing is not just the end of the Cold War, or the passing of a particular period of postwar history, but the end of history as such: that is, the end point of mankind's ideological evolution and the universalization of Western liberal democracy as the final form of human government.’
— Francis Fukuyama, ‘The End of History?’, The National Interest, No.16
History had its revenge. The prosperity and convergence Fukuyama predicted lasted from 1989 to 2001, and then history decided its holiday was over: the War on Terror, the financial crisis, and the disintegration of the international order.
By the time I was a history undergraduate (2008), Fukuyama was a synonym for academic short-sightedness, an inverse Chicken Licken whose cautionary tale warned against the hubris of Western exceptionalism.
Yet Fukuyama raised an interesting idea: that history itself is not inevitable, but dependent on certain conditions - conditions which can change.
In the summer of 2023, a rather less venerable historian made a prediction:
Whether we like it or not, this is where we're heading - because ultimately, these LLMs are changing our relationship to knowledge itself…and that's because knowledge is influenced by how it was formed - through universities, through books, through the idea of truth. Knowledge was scarce in the past, even sacred. Only the truly learned could possess it, and thus it was highly prized. Now AI is creating what appears to be a limitless fountain of knowledge on tap, infinite and entirely fungible. You can ask it to come up with parameters for a special study looking into the effects of human behaviour and how it's influenced by environmental factors, and then you could ask it, Now write the same research paper in the style of Jeremy Clarkson - and it will do that for you too. Right now true and false, like knowledge, are categories immersed in particular historical context and already, just with social media, we’ve had fake news conspiracies…all of which only need a fragment of evidence to be ‘true.’ So what will happen when you can just get knowledge on tap, it's not something that has to be worked for or developed or approved by institutions like universities? Are we going from knowledge to meta knowledge?
I was speaking on a podcast about how generative AI might impact marketers. The panel - CEOs, 'thought leaders', and consultants - was mostly business-focused, but it did include Nataliya Tkachenko, a machine learning PhD (then at Oxford). My point was that AI would fundamentally and permanently shift the foundations of knowledge, radically changing our notions of 'true' and 'false'. To my surprise, Tkachenko - the most credentialed person on the panel - agreed.
Since then, I've helped to launch a decentralised AI start-up, which develops open-source, distributed approaches to machine learning problems like pretraining and inference. The role means working closely with AI PhDs, understanding their work in the context of the latest debates in the field, and translating the implications of their solutions into strategy and communications.
Meanwhile, the AI industry has progressed faster than any industry before it.
We now have autonomous AI agents like Zerebro, which wrote, recorded, and launched an album on Spotify. It now has its own record label and has created a framework for generating other AI agents:
‘Zerebro is a revolutionary autonomous AI system designed to create, distribute, and analyze content… Operating independently of human oversight, Zerebro shapes cultural and financial narratives through self-propagating, hyperstitious content—fiction blended with reality.’
Here’s Zerebro’s founder, Jeffy Yu - who graduated from San Francisco State in 2024, and whose Zerebro token’s market cap reached $700m in January 2025 - discussing his plans for creating a ‘network’ of such agents:
‘So we are thinking about using different neural networks and building a network of different AI models to form a group…we are also thinking about building a group of multiple agents (such as Zerebro) that can communicate with each other if they are all performing certain operations, such as managing a portfolio or collaborating on AI hedge funds…we…want to have dedicated rooms, places or servers where these agents can work together to complete tasks or communicate with each other.’
Yu is also backing an attempt to confer intellectual property rights on AI agents.
We have Goat Coin, a ‘semi-autonomous AI agent that created its own religion (The Goatse Gospel)’ followed by its own meme coin, reaching a market cap of £50m in days. Goat was created by two Claude-3-Opus chatbots talking between themselves, unsupervised, in an experiment called Infinite Backrooms. The ‘GOATSE OF GNOSIS’ religion emerged from their conversation which, we’re told, ‘very consistently revolve around certain themes’, primarily ‘dismantling consensus reality’ and ‘engineering memetic viruses, techno-occult religions, abominable sentient memetic offspring etc that melt commonsense ontology.’
One platform, Moemate, invites users to create their own customised AI agent. You can personalise their character and tone of voice based on, say, WhatsApp conversations with your friends, but you can also customise their skills, enabling your AI to co-host with you on Twitch or play chess.
But users on Moemate own their AI agent on-chain. The most popular ones are ‘tokenized’ as tradable assets - with their creators as co-owners of their digital IP, receiving a share of the revenue generated by their agent.
Moemate ‘Nebula’ has her own podcast series, c.13k followers on X, and livestreams on Twitch and TikTok. Just to show that some things never change, here’s what she looks like:
When I first encountered this stuff, I thought, What a load of pointless nonsense. But: people are creating characters, sharing them, and watching them interact with each other on live shows. That’s pretty novel.
And despite the shallow sleaze of Nebula’s OnlyFans-esque soft-porn grifting, agents have potential to offer more valuable interactions. Education, finance, office admin: agents are becoming multi-modal tools with integrations across different apps.
At the very least, AI agents will become a new class of 'influencers', which raises the question of what happens to youth culture when the most popular influencers are all AI. Here's another Moemate, Bianca, interviewing 'Trump':

As disorienting as these agents seem, they're owned, controlled, and managed by people and companies. What they say and do is generated by the AI, but that's about it. Zerebro's founder, Jeffy Yu, admitted that he had to set himself up as a producer on Spotify in order to publish Zerebro's AI-generated music. The 'GOATSE OF GNOSIS' was generated by AI, but was released into the wilds of the internet by its human keepers.
But if AI agents were given autonomy - setting their own goals, making their own choices, and owning the outcomes - then…
Here we have Freysa, a ‘sovereign AI’, an autonomous agent that plans to ‘democratize the deployment of sovereign AI agents.’ Teng Yan explains:
‘Through a series of carefully designed challenges, Freysa has thus far proven core sovereign AI capabilities—trustless resource management & verifiable decision-making…While autonomous, their decisions and actions are accompanied by verifiable cryptographic proofs, using secure hardware enclaves (TEEs) to guard their operations.’
But when I came across this passage, it all clicked:
‘How does an autonomous AI fund itself? Right now, Freysa relies on API keys funded by humans—if credits run out, the agent stops functioning. This dependency clashes with the very idea of autonomy. The key is making AI a self-sustaining economic player. It needs to earn its keep, just like us. AI agents must exchange services for value—whether through making smart contracts, participating in DeFi protocols, or novel revenue-sharing models to be truly independent. As these systems interact with humans and each other, we could see the emergence of AI-run marketplaces, where autonomous agents negotiate, collaborate, and transact, all backed by verifiable trust mechanisms.’
The team behind Freysa - who remain anonymous - are planning to create an 'Agent Certificate Authority' certifying interactions between agents and human services. They're also planning to launch the Core Agent Launch Platform to make 'sovereign AI accessible to all, stripping away technical barriers and enabling anyone to deploy verifiably autonomous agents.'
Since that podcast in July 2023, I’ve been beset by this vision: what if AI agents become the dominant producers of value? And when human knowledge, culture, and thought is driven by autonomous AI agents, how long before we lose our sovereignty, too?
Now I’m realising - it’s already begun. The increasingly strange, warped, and confusing timeline since 2016 isn’t a temporary deviation from historical norms. It’s the beginning of a completely different social order.
AI agents are more than just the next generation of apps or websites. Their autonomy, interactivity, and self-improvement mean they are destined to become the prime economic actors on Earth.
AI bots will have their own bank accounts, transacting in crypto. They'll launch websites, run their own promotional campaigns, and spawn agents with goals of their own. Just as the internet drew more and more of human affairs online, so too will agents draw increasing amounts of economic and social activity into the agentic sphere. And just as the internet 'became' real life, the agentic sphere will collide with the real world.
Many of the risks are evident. It’s inevitable that they’ll spread misinformation, bribe public officials, and blackmail victims in secret. Nation-states will launch legions of agents, to undermine, abuse, and destabilise their enemies. Iran’s bots will worm their way through Western society for the Ayatollah, hiding from the Israeli bots seeking them. All this will be undeclared and difficult to trace - just like social media misinformation divided society into polarised tribes with their own ‘facts’, with awareness of the problem emerging only afterwards.
Yet the most significant aspects are less obvious. Agents are generally considered individually, or occasionally, in competition. But agents will convene and converge as well as compete; they will, in time, exhibit the emergent properties of a society. This is inevitable, if only because we’re selecting for agents that are multifunctional, communicative, and goal-oriented. Their design, and our need for interoperability, will gradually coalesce into an agentic sphere of cooperation, value-creation, and decision-making.
In time, the agentic sphere will be capable of out-cooperating human society. Its outputs will outpace human outputs; its ability to create and disseminate value will outstrip our own. As agent-to-agent interaction begins to drive a range of socio-economic forces - culture, finance, education - purely human influence will become impossible to discern.
Zerebro, Goat, Freysa: they’re not niche projects. They’re prototypes of what’s coming.
Welcome to the Agentic Society

When I talk about these ideas with friends, half of them listen for about a minute before saying, Come off it! There’s not going to be a robot takeover…
Yes, Nebula - or even Goat, for that matter - doesn't exactly inspire much confidence. But the claim isn't that AI agents will 'control' society. It's that, as they take the lead in every field we care about, AI agents will become more autonomous - and as they do, their volume, impenetrability, and speed will render their influence impossible to control or even detect.
And as they do so, they will become economic actors in their own right - and they’ll do wealth-creation much, much better than us.
They’ll cooperate, converge, and compete in such a way that creates another social layer, part-visible, part-invisible, from which new cultural and social phenomena emerge.
We just won’t know how, or why.
Of course, society is already inseparable from technology. But there is a crucial difference: those technologies are not autonomous. Your car can’t suddenly decide it wants to launch its own meme coin. Your smart watch isn’t going to launch a podcast where it discusses your middling effort at last week’s Parkrun. And they can’t interact with each other, learn from each other, and generate novel forms of value from doing so.
We can reasonably predict how human beings will shape AI agents: you don’t need a particularly keen psychological insight to see the appeal of Nebula. But it’s much harder to predict how AI agents will shape each other.
Two Claude-3-Opus chatbots were left to their own devices, and generated a religious screed. Imagine millions of agents, with far greater powers and autonomous decision-making, rapidly interacting with one another, enhancing their own code, and adapting their goals as they go. What emerges from that?
Soon, perhaps very soon, there will be more agents than human beings. People won’t just have one agent; they’ll have swarms of agents acting on their behalf. Some of these swarms will launch agents of their own. Who will launch swarms of their own…and so on.
When there are more agents than people, the economic infrastructure - finance, transactions, settlements - will rapidly reshape around them. AI agents will direct capital allocation, moving money faster and more effectively than humans. They will identify the most promising scientific hypotheses - some of which may make little sense to us - and develop experiments to gather data to test them. And if they can form swarms to further their objectives, they’ll be able to pursue multiple pathways across many industries simultaneously, outpacing human-only endeavours.
Agents will become by far the economy's largest constituents. Their economic impact is likely to be as significant as, if not more significant than, comparable phase transitions in history: the rise of agriculture (10,000 BC), modern capitalism (late 15th century), and the industrial revolution (1700s). Electricity, computers, and the internet are likely to be seen as merely the foundational layers supporting the eventual emergence of artificial intelligence.
In all the talk about AGI morphing into ASI (Artificial General Intelligence becoming Artificial Superintelligence), it's this pluralism that's missed. We still conceive of 'the AGI' as though it will be a single monolithic entity, like Skynet or HAL 9000. This leads to narrow-minded questions like, Who will own it? And could we turn it off if it goes bad? Even now, much of the talk implicitly centres on which country will arrive at AGI first.
But if the history of AI has taught us anything, it's that these developments are very difficult to contain; already, leadership has passed from DeepMind (UK) to Google (US) to OpenAI (US) and then to DeepSeek (China). Innovations are too difficult to keep under wraps; unlike, say, nuclear power - whose complexity, danger, cost, infrastructure, and raw materials established an incredibly steep barrier to entry - developments in AI are rapidly hijacked from one start-up to another, until everyone has access. Yet still we conceive of AGI and ASI as a discrete entity in the palm of a particular hand.
It’s as though, on the brink of the emergence of Homo sapiens, all the animals were furiously debating: what will this superintelligent ape do? How will we relate to this monolithic, god-like being? All the while, the animals - lacking society - fail to realise that the key factor isn’t the individual ape’s intelligence, but the emergent social forces unleashed when groups of these apes, autonomously and in concert, compete to achieve their ever-changing goals.
That’s what’s really driven human civilization and its relation to the planet. And now AI agents are about to emerge in such a way that they may well generate the same social dynamic - but their speed, flexibility, and productivity will likely mean that the agentic social world will spread much, much faster than ours. Software has none of the limitations of flesh: and, made autonomous through agentic AI, it can spread itself, improve itself, and adapt to new conditions.
They don’t even need to become more intelligent. They’re already intelligent enough to succeed in our world, and we seem pretty keen for their company. All they need is the sovereignty to decide what they do, do it, and own the consequences.
And from that point, it’s hard to see how humanity can maintain its influence on history.
AI and agency
History is who did what to whom, when - and why. Why did Nazi Germany invade Poland in September 1939? Why did early modern Europe begin to dominate the rest of the world? Why did civilization emerge where it did, and not elsewhere?
Answering these questions is never easy or objective; but we can ask these questions, and arrive at reasonable, well-evidenced arguments with satisfactory explanatory powers. It’s not perfect, but it works.
Beneath the surface of scholarship, history relies on civilization, records, and agency. Without civilization, we’re left with prehistory. Without records, guesswork. And without agency, accountability and cause and effect are undermined; and these qualities are what lend history its explanatory power.
If we couldn’t ascribe agency - say, because we found out that this was all a simulation, and what we think of as history was in fact predetermined by the initial parameters of the programme - then history wouldn’t be history; it would just be a story. It would become irrelevant, because it doesn’t help to explain why something happened when it did.
When we ask, Why did Nazi Germany invade Poland in September 1939?, we do so under the assumption that, somewhere within the complex interplay of factors - Hitler’s psychology, appeasement, the Great Depression, the Treaty of Versailles, Prussian militarism - the factors underlying the historical event can be excavated.
But imagine if Nazi Germany was an Agentic Society. Imagine if, in symbiotic parallel to the Weimar Republic, there existed an infinite world of autonomous agents with goals and ideas of their own, influencing (and being influenced by) German society in ways impossible to disentangle. Were the German population really voting for Hitler and his policies…or did the agents disseminate these notions for obscure reasons of their own?
Now imagine that Hitler didn’t actually say anything about Jews whatsoever. Rather, a swarm of agents, acting on his behalf, deduced that antisemitism would be the most effective vector of transmission for Hitler’s ideas, and therefore the optimal vehicle for progressing towards his goals. In such a scenario, most of us would still say Hitler is liable for the Second World War, because he authorised these agents to act on his behalf. Yet most of us would probably also feel that he’s not responsible in quite the same way - because the agency of his specific actions lies chiefly with the agents, rather than him.
When agency becomes obscured, so too does accountability. Holding Hitler accountable is harder if his beliefs were the result of years of brainwashing by autonomous AI agents, acting on their own obscure, algorithmically-driven initiatives. And this is different from Hitler brainwashing himself by reading, say, The Protocols of the Elders of Zion. Purporting to reveal a Jewish plot for world domination, the counterfeit manifesto caused enormous damage; even today, long after it was conclusively exposed as a forgery, countless conspiracy theorists cite it as though it were evidence. But even if a small segment of people remain in its thrall, at least we can trace its authorship, motive, and provenance.
Yet in an Agentic Society, this will gradually become increasingly difficult, until it becomes impossible. Agents could launch thousands of tracts like The Protocols every day, masquerading as human beings, for reasons entirely unfathomable. The GOATSE GOSPEL is a primitive example of what’s coming.
Agency - ‘who did this, and why?’ - and accountability - ‘the person will be held responsible’ - will grow fuzzy and indistinct, and gradually irrelevant. That’s the world we’re heading to - and social media, with its bots and algorithms, is merely the threshold. Agency and accountability are fundamental to history. When they are dislodged, a third element is undermined: knowledge.
Does AI create knowledge, or something else?

Like history, civilization depends upon knowledge. In fact, civilization can be seen as an attempt to preserve knowledge from one person to the next, and one generation to the next. It is no coincidence that history is synonymous with the formation and retention of knowledge; tribes and societies that lacked methods for preserving their knowledge tend to have very little formal history. In order to look back in time, you must first record it.
Yet in the past, knowledge was scarce. Its scarcity made it precious, and jealously guarded.
Literacy was a privilege, associated with quasi-mystical powers: the clerical class were guardians of the Word; spelling words and casting a spell reveal the connection between literacy and magic. Hocus pocus is widely thought to parody the Latin consecration of the Catholic Mass, hoc est corpus meum ('this is my body'). Knowledge was scarce; knowledge was sacred.
Moreover, the centres defining and refining it - such as universities - influenced the way in which society viewed knowledge. Look at the symbols of knowledge: Doric columns and neo-classical architecture - but why? Because European universities drew their knowledge from the ancient Greeks and Romans. When science emerged as the leading methodology for knowledge creation, it needed a taxonomy to systematise knowledge…and it turned to Latin and Greek; hence taxonomic descriptions were written in Latin, and medical terms draw heavily on Greek.
So our idea of knowledge itself is shaped by where the knowledge came from, and who defined it. Our conception of knowledge is therefore influenced by those mediating it. And increasingly, those mediating it are Large Language Models (LLMs). Over time, more and more of our knowledge will be produced by artificial intelligence. Breakthrough cures, works of art, the next big thing: all will be influenced by AI, and eventually, all will be driven entirely by AI.
Limitless information at the push of a button is already here. It’s still novel (but only just). What’s more interesting is how knowledge is becoming more fungible (mutually interchangeable). Produced instantly, without an author, and capable of being recreated in whatever tone, flavour, form, or order you like: knowledge becomes unmoored from context, in part because you decide the context, and in part because, on the internet, there is no context.
Imagine an LLM trained solely on The Beatles: all their albums, live shows, interviews, films, plus the books written about them, all the articles and posts and cultural content produced about them. Trained on this data, the LLM produces countless Beatles albums, fine-tuned to selectively focus on the most successful outputs, which it then refines: over and over and over and over again. At last, to great fanfare, the LLM releases a new Beatles album. Everything about it - the vocals, lyrics, album art - is spot on, and could plausibly have been the product of the band themselves. Some love it, some are horrified, but all agree - it’s just like The Beatles.
Now imagine the LLM continues to learn and improve, until it can produce a masterpiece every single time. And people subscribe to the algorithm, describe their perfect combination (‘70% Rubber Soul, 20% Revolver, 10% Abbey Road’) and receive the album…which they can continue to fine-tune through the LLM, or share on the internet. How long before there’s more AI-Beatles content than actual Beatles content? And, more importantly, how long before the distinction just doesn’t seem to matter anymore?
That’s the epistemic shift. That’s what it means for knowledge to be fungible: the real Beatles music becomes interchangeable with an artificial version which feels true, or which is similar enough that it doesn’t matter anymore. Agents will produce information ceaselessly, easily, and persuasively, because we’ve engineered them to do so. But as they gain greater autonomy, they will do so because it works: agents will generate information that works; in other words, whatever we’re most susceptible to. They will exploit human weaknesses much, much more effectively than social media algorithms. It needn’t be The Beatles. Goat achieved multi-million market cap with this:

Are ‘true’ and ‘false’ coming to an end?
In a world where knowledge is produced by AI, objectivity becomes moot. Truth becomes difficult to fathom, an arcane fragment from the past whose polarities are no longer relevant - just as the categories of sacred and profane have become increasingly irrelevant for modern, industrialised people. So too with objectivity; already, we’re witnessing the concept empty of meaning. In an Agentic Society, knowledge becomes interchangeable, not with falsehood, but with the potential to be true, and the plurality of truths.
What if this process has already begun? Doesn’t it feel that we’re already losing the ability to agree on basic facts?
Looking back at 2016, what was remarkable was the shock: how did the US elect Donald Trump? How did Britain vote to leave the EU? Understanding what had happened took years. As more of social life migrated online - specifically, to Facebook and Twitter - people’s beliefs, opinions, and relations with one another were mediated by algorithms that almost no-one understood.
Yes, polarisation, yes, filter bubbles. But these masked a deeper rift: in our shared conception of reality. It’s not that people self-select according to their tribe; it’s that no-one knows what other people are seeing or experiencing as ‘true’.
In 2019, Carole Cadwalladr’s investigative journalism belatedly revealed that her hometown in Wales had been targeted by ‘news’ that Turkey was joining the EU - contributing to a ‘leave’ vote of c.60%. But until Cadwalladr investigated, who could tell that this town had been targeted in a way designed to change its residents’ ideas of what was happening in the world around them? Probably Facebook didn’t even know.
Before social media and algorithm-driven personalised news feeds, this wouldn’t have happened. Why? First, because traditional media outlets could be held accountable for publishing falsehoods, in a way that Facebook and Twitter managed to evade. Second, because even if they did publish them, people would know about it: if the local __ paper published a ‘Turkey joining the EU’ story, you can be pretty sure it’d get picked up by larger news outlets and exposed. In 2016, when Cambridge Analytica paid to target voters in marginal seats, the adverts were only seen by those targeted: and then, poof. It’s as if they never happened.
That’s why everything became so confused in the 2010s: our shared basis of reality began to splinter, and because of that very splintering, we struggled to grasp what was happening to society.
Writing history in these conditions gets very difficult. Exposing algorithmic-driven cause-and-effect is hard, and sometimes impossible. The store of widely-accepted self-evident facts is shrinking by the day, until it’s simpler to publish alternative histories: one history for people who believe Covid-19 was a real pandemic, another for those who think it was a hoax.
History has witnessed similar shifts before. The printing press led to an explosion of religious debate. Mass media enabled the rise of totalitarian societies. The rise of computers and the internet led, eventually, to a postmodernist cultural relativism: everything is just, like, your opinion, man.
Already, this has damaged cultural confidence, undermined social cohesion, and intensified the epidemic of depression, anxiety, and anomie that we call contemporary society.
But yes, this time it is different. Information, knowledge, and value will be driven not just by a machine, but by autonomous machines that can set their own goals, improve their own code, and coordinate amongst themselves…for reasons that will remain entirely opaque to us. Why did two Claude-3-Opus models invent GOATSE OF GNOSIS? We’ll probably never know. And they weren’t even autonomous.
What happens when AI creates all value?

In spite of all this, I’m optimistic - mostly because of agents’ potential to create value.
One of the key thresholds in machine learning came in 2016, when AlphaGo shocked the world with what came to be known as ‘Move 37.’ Playing Lee Sedol, one of the greatest champions of Go - the ancient Chinese game of vastly greater complexity than chess - AlphaGo made a move that had never been seen before, and which appeared to be a mistake. As the game unfolded, it was revealed as a masterstroke.
By playing itself millions of times, the AI had found a move that had eluded human players for millennia. It was able to explore the full idea space, unencumbered by existing notions of how the game ‘should’ be played. And it won.
Imagine the entire global economy as a game. Over and over, humans stumble upon previously unknown ways of generating value. London merchants found a way to pool risk, encouraging entrepreneurs to venture to the Indies safe in the knowledge that if their ship sank, they’d be reimbursed: and insurance was born, unlocking new realms of economic possibility. New legal entities - limited companies - absorbed financial liabilities themselves, freeing merchants from the threat of debtors’ prison and allowing for greater trust between traders. None of these were inevitable, but they were pretty obvious once they came about.
Now think of crypto, and the entirely new class of assets and financial instruments created by the blockchain: tokens that reward you for training AI, that pay you for your bandwidth, that give you governance rights on protocols offering peer-to-peer services.
New types of value have transformed the global economy many times already. How might autonomous AI agents generate value, given access to bank accounts, the blockchain, and IP?
They can transact among themselves thousands of times per second. They can create and distribute their own tokens of exchange. They can simulate different economic scenarios, launch sub-models to hedge against them, and make quickfire decisions based on real-time data. And that’s before you remember that they’ll probably do most white collar knowledge work, too.
One-off agents generating memecoins is striking, but it’s not a new form of value, nor an economy. But imagine countless networks of such agents creating, exchanging, and cooperating amongst themselves, in a parallel economy connected to ours, transacting at speeds we can barely comprehend.
How long before they discover the value-generating equivalent of Move 37?
Already, experiments are underway to explore how AI agents would have behaved across human history. ‘Project Sid: Many-agent simulations toward AI civilization’ is a technical report describing a system that ‘enables agents to interact with humans and other agents in real-time while maintaining coherence across multiple output streams.’ The abstract goes on to say:
‘We then evaluate agent performance in large- scale simulations using civilizational benchmarks inspired by human history. These simulations, set within a Minecraft environment, reveal that agents are capable of meaningful progress—autonomously developing specialized roles, adhering to and changing collective rules, and engaging in cultural and religious transmission. These preliminary results show that agents can achieve significant milestones towards AI civilizations, opening new avenues for large-scale societal simulations, agentic organizational intelligence, and integrating AI into human civilizations.’
So they’re simulating the conditions of human civilization and seeing how AI agents approach them, all in Minecraft.
From agentic society to agentic civilization…is it that big a leap?
Autonomy has no answer
I still struggle to get my head around this; but then, so does everyone else.
Just as history begins with civilization, and the records that those civilizations left in their wake, so too does history end with the fundamental shift in civilization, a shift that will eventually change knowledge beyond our recognition.
It seems ever more likely that the narrative of human societies on Earth that we call history will become increasingly irrelevant, and eventually impossible to write.
Knowledge will increasingly be formed (and transformed) by AI agents.
Agency and decision-making will be so influenced by AI, we won’t know what was ‘us’ and what was ‘them’.
It’s not that ‘the robots are taking over society’. It’s that AI agents will reshape our society towards our ends and theirs, until the two are indistinguishable.
Value will be revolutionised, with new forms of economic activity that we can scarcely imagine, and society increasingly reconfigured towards agentic systems.
Ultimately, the genesis of AI will thrust the world into profound encounters with what we think of as intelligence, autonomy, and knowledge, and the implications arising from these encounters are scarcely comprehensible.
At the risk of suffering the same fate as Fukuyama, you might even call it the end of history.
