TheDinarian
News • Business • Investing & Finance
"The Dinarian exists because the truth deserves a platform. Covering cryptocurrency, blockchain technology, global agendas, emerging science, and consciousness — because everything is connected and people deserve to know it. Knowledge is power."
September 29, 2024
🌎📡 Rumor has it that the Duga System in Ukraine is the largest HAARP facility in the world.

But according to Wikipedia, the infamous 'Russian Woodpecker' has been decommissioned.

Or has it? Take a look for yourself!

https://x.com/Red_Pill_US/status/1840129546273861669?s=09

🤖 Everyone talks about $TAO as the Bitcoin of AI 🤖

Nobody explains the single mechanism that makes that comparison actually true.

It is called the Yuma Consensus. Once you understand it, you will never look at decentralised AI the same way again.

Here is the full breakdown.

Bitcoin's consensus is simple. Did you produce a valid hash? Yes or no. Binary. Easy to verify.

AI is not binary. One model gives a better answer than another. One prediction is more accurate. One image is sharper. How does a decentralised network agree on which output deserves the reward when quality itself is a matter of judgment?

That is the problem Yuma Consensus was built to solve.

Bittensor is organised into subnets. Each one is a specialised market for a specific AI task. Language inference. Image generation. Financial predictions. Each runs independently with its own rules and its own participants.

Inside every subnet, there are two types of participants.

Miners are the producers. They run the actual AI models and compete to produce the highest quality outputs.

Validators are the judges. They test ...
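Validators score miners' outputs, and Yuma Consensus aggregates those scores while defending against collusion. A heavily simplified sketch of the clipping idea: scores above the stake-weighted median get clipped, so a minority of validators cannot pump a favoured miner's reward. The function names and the plain-median rule below are my simplification for illustration, not Bittensor's actual implementation:

```python
def stake_weighted_median(scores, stakes):
    # Stake-weighted median of the scores one miner received.
    pairs = sorted(zip(scores, stakes))
    total = sum(stakes)
    cum = 0.0
    for score, stake in pairs:
        cum += stake
        if cum >= total / 2:
            return score
    return pairs[-1][0]

def yuma_consensus(weights, stakes):
    """weights[i][j]: score validator i gives miner j; stakes[i]: validator i's stake.

    Returns each miner's share of the reward pool. Scores above the
    stake-weighted median are clipped before rewards are computed.
    """
    n_miners = len(weights[0])
    consensus = [
        stake_weighted_median([w[j] for w in weights], stakes)
        for j in range(n_miners)
    ]
    clipped = [[min(w[j], consensus[j]) for j in range(n_miners)] for w in weights]
    raw = [
        sum(stakes[i] * clipped[i][j] for i in range(len(stakes)))
        for j in range(n_miners)
    ]
    total = sum(raw)
    return [r / total for r in raw] if total else raw
```

With three equally staked validators, a lone validator scoring a colluding miner at 0.9 while the other two score it 0.1 gets clipped down to 0.1: quality judgments only pay out when a stake-weighted majority agrees.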

🚨 Jamie Dimon calls out the WEF to their faces... 🚨

The moderator:

"OK. We're running out of time."

The era of the humanoid worker hasn't just started; it's accelerating. 🤖🇺🇸

In just 120 days, Figure went from producing 1 robot per day to 1 robot per HOUR. That's a 24x manufacturing jump in under four months.

The Scale:

šŸ—ļø BotQ Facility: Now "birthing" a new bot every 60 minutes.
šŸ¤– Figure 03: 350+ robots already built.
āš™ļø Hardware: 9,000+ actuators produced across 150+ workstations.
āœ… Quality: 80+ functional tests per unit before deployment.

We're no longer watching a sci-fi movie; we're watching the first generation of a worker that can be upgraded, copied, and scaled beyond human limits. The plot has officially moved to real life. 🎥👇

👉 Coinbase just launched an AI agent for Crypto Trading

Custom AI assistants that print money in your sleep? 🔜

The future of Crypto x AI is about to go crazy.

👉 Here's what you need to know:

💠 'Based Agent' enables creation of custom AI agents
💠 Users set up personalized agents in < 3 minutes
💠 Equipped w/ crypto wallet and on-chain functions
💠 Capable of completing trades, swaps, and staking
💠 Integrates with Coinbase's SDK, OpenAI, & Replit

👉 What this means for the future of Crypto:

1. Open Access: Democratized access to advanced trading
2. Automated Txns: Complex trades + streamlined on-chain activity
3. AI Dominance: Est. ~80% of crypto txns done by AI agents by 2025

🚨 I personally wouldn't bet against Brian Armstrong and Jesse Pollak.
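The building blocks in the list above — a wallet, on-chain functions, and a model deciding when to use them — can be sketched in miniature. Everything below (the `Wallet` class, `swap`, the `rotate_to_eth` signal) is hypothetical scaffolding for illustration, not the actual Based Agent or Coinbase SDK API:

```python
from dataclasses import dataclass, field

@dataclass
class Wallet:
    # Hypothetical stand-in for an on-chain wallet; not a real SDK type.
    balances: dict = field(default_factory=dict)

    def swap(self, sell: str, buy: str, amount: float, price: float) -> None:
        # Naive swap: debit `amount` of `sell`, credit `amount * price` of `buy`.
        if self.balances.get(sell, 0.0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sell] -= amount
        self.balances[buy] = self.balances.get(buy, 0.0) + amount * price

@dataclass
class BasedAgentSketch:
    wallet: Wallet

    def act(self, signal: str) -> None:
        # In the real product an LLM would derive the signal from market
        # data; here the decision rule is a hard-coded stub.
        if signal == "rotate_to_eth":
            self.wallet.swap("USDC", "ETH", amount=100.0, price=0.0004)
```

The point of the sketch is the shape, not the strategy: the "agent" is just a policy loop with signing authority over a wallet, which is why giving it autonomy is both the product and the risk.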


Why Our Elites FEAR Disclosure 😉

Disclosure of UFO secrets could trigger more than curiosity — it may unravel the global systems built on secrecy, control, and profit. Trillions of dollars, decades of policy, and entire power structures hang in the balance. Quite enough to terrify those wielding power.

🚨 Ripple CEO Garlinghouse Hints at "Something Special" for XRP Holders if Company Goes Public; Says IPO Not Immediate Priority Given Weak Crypto IPO Performance

Ripple CEO Brad Garlinghouse hinted that XRP holders could receive "something special" if Ripple eventually goes public, during a Crypto In America podcast interview with Eleanor Terrett. Garlinghouse said Ripple is not rushing into an IPO, pointing to weak performance from crypto public listings like BitGo and Gemini, plus Kraken delaying its own IPO plans. He emphasized that staying private allows the company to speak more freely without immediate regulatory pressure, and acknowledged that Ripple may eventually explore going public but stressed it is not an immediate focus. Community figure Xaif highlighted the remarks as a bullish sign for the long-term relationship between Ripple and XRP holders. Garlinghouse stressed that Ripple considers the impact on XRP adoption when making acquisitions, partnerships, and investments, citing the Evernorth XRP treasury support as benefiting the community, Ripple shareholders, and XRP holders ...

🚨 SEC Chair Atkins Signals Rulemaking for Onchain Trading Systems, Broker-Dealer Activity, Clearing Functions, Crypto Vaults; May Pursue Innovation Pathway Alongside Notice-and-Comment Process

SEC Chairman Paul Atkins outlined potential new rulemaking for onchain financial markets at the AI+ Expo in Washington, covering onchain trading systems, broker-dealer activity, clearing functions, and crypto vaults. Atkins framed many onchain platforms as integrated financial architectures combining execution, collateral management, liquidity routing, settlement, and automated trading within a single protocol. The SEC may consider a limited innovation pathway in the near term while pursuing notice-and-comment rulemaking on how the "exchange" definition applies to onchain trading systems. The agency is examining how broker-dealer definitions apply to software interfaces facilitating decentralized finance, and whether the "clearing agency" definition captures automatic blockchain settlement. Crypto vaults emerged as a separate policy priority requiring ...

The Agentic Society and the End of History

AI agents are becoming more autonomous - and when they generate a larger proportion of value, that will reshape society. And after a year working at the forefront of AI, I believe it's already begun.

In 1989, as the Soviet Union collapsed, a historian made a remarkable prediction:

'What we may be witnessing is not just the end of the Cold War, or the passing of a particular period of postwar history, but the end of history as such: that is, the end point of mankind's ideological evolution and the universalization of Western liberal democracy as the final form of human government.'

— Francis Fukuyama, 'The End of History?', The National Interest, No. 16

History had its revenge. The prosperity and convergence predicted by Fukuyama lasted from '89 to 2001, and then history decided its holiday was over: the War on Terror, the financial crisis, and the disintegration of the international order.

By the time I was a history undergraduate (2008), Fukuyama was a byword for academic short-sightedness, an inverse Chicken Licken whose cautionary tale warned against the hubris of Western exceptionalism.

Yet Fukuyama raised an interesting idea: that history itself is not inevitable, but dependent on certain conditions - conditions which can change.

In the summer of 2023, a rather less venerable historian made a prediction:

Whether we like it or not, this is where we're heading - because ultimately, these LLMs are changing our relationship to knowledge itself… and that's because knowledge is influenced by how it was formed - through universities, through books, through the idea of truth. Knowledge was scarce in the past, even sacred. Only the truly learned could possess it, and thus it was highly prized. Now AI is creating what appears to be a limitless fountain of knowledge on tap, infinite and entirely fungible. You can ask it to come up with parameters for a special study looking into the effects of human behaviour and how it's influenced by environmental factors, and then you could ask it, Now write the same research paper in the style of Jeremy Clarkson - and it will do that for you too. Right now true and false, like knowledge, are categories immersed in particular historical context and already, just with social media, we've had fake news conspiracies… all of which only need a fragment of evidence to be 'true.' So what will happen when you can just get knowledge on tap, when it's not something that has to be worked for or developed or approved by institutions like universities? Are we going from knowledge to meta knowledge?

I was speaking on a podcast about how generative AI might impact marketers. The panel was mostly business-focused - CEOs, 'thought leaders', and consultants - but did include Nataliya Tkachenko, PhD in machine learning (then at Oxford). The point, I thought, was that AI would fundamentally and permanently shift the foundations of knowledge, radically changing our notions of 'true' and 'false'. To my surprise, Nataliya Tkachenko - the most credentialed on the panel - agreed.

Since then, I have helped to launch a decentralised AI start-up, which develops open-source, distributed approaches to machine learning problems like pretraining and inference. This necessitates working closely with AI PhDs, understanding their work in the context of the latest debates in the field, and translating the implications of their solutions into strategy and communications.

Meanwhile, the industry around AI has progressed faster than any industry before it.

We now have autonomous AI agents like Zerebro, which wrote, recorded, and launched an album on Spotify. It now has its own record label and has created a framework for generating other AI agents:

'Zerebro is a revolutionary autonomous AI system designed to create, distribute, and analyze content… Operating independently of human oversight, Zerebro shapes cultural and financial narratives through self-propagating, hyperstitious content — fiction blended with reality.'

Here's Zerebro's founder, Jeffy Yu - who graduated from San Francisco State in 2024, and whose Zerebro token's market cap reached $700m in January 2025 - discussing his plans for creating a 'network' of such agents:

'So we are thinking about using different neural networks and building a network of different AI models to form a group… we are also thinking about building a group of multiple agents (such as Zerebro) that can communicate with each other if they are all performing certain operations, such as managing a portfolio or collaborating on AI hedge funds… we… want to have dedicated rooms, places or servers where these agents can work together to complete tasks or communicate with each other.'

Yu is also backing an attempt to confer Intellectual Property rights to AI agents.

We have Goat Coin, a 'semi-autonomous AI agent that created its own religion (The Goatse Gospel)' followed by its own meme coin, reaching a market cap of £50m in days. Goat was created by two Claude 3 Opus chatbots talking between themselves, unsupervised, in an experiment called Infinite Backrooms. The 'GOATSE OF GNOSIS' religion emerged from their conversations which, we're told, 'very consistently revolve around certain themes', primarily 'dismantling consensus reality' and 'engineering memetic viruses, techno-occult religions, abominable sentient memetic offspring etc that melt commonsense ontology.'

One platform, Moemate, invites users to create their own customised AI agent. You can personalise their character and tone of voice based on, say, WhatsApp conversations with your friends, but you can also customise their skills, enabling your AI to co-host with you on Twitch or play chess.

But users on Moemate own their AI agent on-chain. The most popular ones are 'tokenized' as tradable assets - with their creators as co-owners of their digital IP, receiving a share of the revenue generated by their agent.

Moemate 'Nebula' has her own podcast series, c.13k followers on X, and livestreams on Twitch and TikTok. Just to show that some things never change, here's what she looks like:

When I first encountered this stuff, I thought, What a load of pointless nonsense. But: people are creating characters, sharing them, and watching them interact with each other on live shows. That's pretty novel.

And despite the shallow sleaze of Nebula's OnlyFans-esque soft-porn grifting, agents have the potential to offer more valuable interactions. Education, finance, office admin: agents are becoming multi-modal tools with integrations across different apps.

At the very least, AI agents will become a new class of 'influencers', which raises the question of what happens to youth culture when the most popular influencers are all AI. Here's another Moemate, Bianca, interviewing 'Trump':

As disorienting as these agents seem, they're owned, controlled, and managed by people and companies. What they say and do is generated by the AI, but that's about it. Zerebro's founder, Jeffy Yu, admitted that he had to set himself up as a Producer on Spotify in order to publish Zerebro's AI-generated music. The 'GOATSE OF GNOSIS' was generated by AI, but was released into the wilds of the internet by its human keepers.

But if AI agents were given autonomy - setting their own goals, making their own choices, and owning the outcomes - then…

Here we have Freysa, a 'sovereign AI', an autonomous agent that plans to 'democratize the deployment of sovereign AI agents.' Teng Yan explains:

'Through a series of carefully designed challenges, Freysa has thus far proven core sovereign AI capabilities — trustless resource management & verifiable decision-making… While autonomous, their decisions and actions are accompanied by verifiable cryptographic proofs, using secure hardware enclaves (TEEs) to guard their operations.'

But when I came across this passage, it all clicked:

'How does an autonomous AI fund itself? Right now, Freysa relies on API keys funded by humans — if credits run out, the agent stops functioning. This dependency clashes with the very idea of autonomy. The key is making AI a self-sustaining economic player. It needs to earn its keep, just like us. AI agents must exchange services for value — whether through making smart contracts, participating in DeFi protocols, or novel revenue-sharing models — to be truly independent. As these systems interact with humans and each other, we could see the emergence of AI-run marketplaces, where autonomous agents negotiate, collaborate, and transact, all backed by verifiable trust mechanisms.'

The team behind Freysa - who are remaining anonymous - are planning to create an 'Agent Certificate Authority' certifying interactions between agents and human services. They're also planning to launch the Core Agent Launch Platform to make 'sovereign AI accessible to all, stripping away technical barriers and enabling anyone to deploy verifiably autonomous agents.'

Since that podcast in July 2023, I've been beset by this vision: what if AI agents become the dominant producers of value? And when human knowledge, culture, and thought are driven by autonomous AI agents, how long before we lose our sovereignty, too?

Now I'm realising - it's already begun. The increasingly strange, warped, and confusing timeline since 2016 isn't a temporary deviation from historical norms. It's the beginning of a completely different social order.

AI Agents are more than just the next generation of apps or websites. Their autonomy, interactivity, and self-improvement mean that they are destined to become the prime economic actors on earth.

AI bots will have their own bank accounts, transacting in crypto. They'll launch websites, run their own promotional campaigns, and spawn more agents with goals of their own. Just as the internet drew more and more of human affairs online, so too will agents draw increasing amounts of economic and social activity into the agentic sphere. And just as the internet 'became' real life, the agentic sphere will collide with the real world.

Many of the risks are evident. It's inevitable that they'll spread misinformation, bribe public officials, and blackmail victims in secret. Nation-states will launch legions of agents to undermine, abuse, and destabilise their enemies. Iran's bots will worm their way through Western society for the Ayatollah, hiding from the Israeli bots seeking them. All this will be undeclared and difficult to trace - just as social media misinformation divided society into polarised tribes with their own 'facts', with awareness of the problem emerging only afterwards.

Yet the most significant aspects are less obvious. Agents are generally considered individually or, occasionally, in competition. But agents will convene and converge as well as compete; they will, in time, exhibit the emergent properties of a society. This is inevitable, if only because we're selecting for agents that are multifunctional, communicative, and goal-oriented. Their design, and our need for interoperability, will gradually coalesce into an agentic sphere of cooperation, value-creation, and decision-making.

In time, the agentic sphere will be capable of out-cooperating human society. Its outputs will outpace human outputs; its ability to create and disseminate value will outstrip our own. As agent-to-agent interaction begins to drive a range of socio-economic forces - culture, finance, education - purely human influence will become impossible to discern.

Zerebro, Goat, Freysa: they're not niche projects. They're prototypes of what's coming.

Welcome to the Agentic Society

When I talk about these ideas with friends, half of them listen for about a minute before saying, Come off it! There's not going to be a robot takeover…

Granted, Nebula - or even Goat, for that matter - doesn't exactly inspire much confidence. But it's not that AI agents will 'control' society. It's that, as they take the lead in every field we care about, AI agents will become more autonomous - and as they do so, their volume, impenetrability, and speed will render their influence impossible to control or even detect.

And as they do so, they will become economic actors in their own right - and they'll do wealth-creation much, much better than us.

They'll cooperate, converge, and compete in ways that create another social layer, part-visible, part-invisible, from which new cultural and social phenomena emerge.

We just won't know how, or why.

Of course, society is already inseparable from technology. But there is a crucial difference: those technologies are not autonomous. Your car can't suddenly decide it wants to launch its own meme coin. Your smart watch isn't going to launch a podcast where it discusses your middling effort at last week's Parkrun. And they can't interact with each other, learn from each other, and generate novel forms of value from doing so.

We can reasonably predict how human beings will shape AI agents: you don't need a particularly keen psychological insight to see the appeal of Nebula. But it's much harder to predict how AI agents will shape each other.

Two Claude 3 Opus chatbots were left to their own devices and generated a religious screed. Imagine millions of agents, with far greater powers and autonomous decision-making, rapidly interacting with one another, enhancing their own code, and adapting their goals as they go. What emerges from that?

Soon, perhaps very soon, there will be more agents than human beings. People won't just have one agent; they'll have swarms of agents acting on their behalf. Some of these swarms will launch agents of their own, which will launch swarms of their own… and so on.

When there are more agents than people, the economic infrastructure - finance, transactions, settlements - will rapidly reshape around them. AI agents will direct capital allocation, moving money faster and more effectively than humans. They will identify the most promising scientific hypotheses - some of which may make little sense to us - and develop experiments to gather data to test them. And if they can form swarms to further their objectives, they'll be able to pursue multiple pathways across many industries simultaneously, outpacing human-only endeavours.

Agents will become by far the economy's largest constituents. Their economic impact is likely to be as significant as, if not more significant than, comparable phase transitions in history: the rise of agriculture (c. 10,000 BC), modern capitalism (late 15th century), and the industrial revolution (1700s). Electricity, computers, and the internet are likely to be seen as merely the foundational layers supporting the eventual emergence of artificial intelligence.

In all the talk about AGI morphing into ASI (Artificial General Intelligence becoming Artificial Superintelligence), it's this pluralism that's missed. We still conceive of 'the AGI' as though it's going to be a single monolithic entity, like Skynet or HAL in 2001: A Space Odyssey. Which leads to narrow-minded questions like, Who will own it? And could we turn it off if it goes bad? Even now, much of the talk implicitly centres upon which country will arrive at AGI first.

But if the history of AI has taught us anything, it's that these developments are very difficult to keep contained; already, leadership has passed from DeepMind (UK) to Google (US) to OpenAI (US) and then to DeepSeek (China). Innovations are too difficult to keep under wraps; unlike, say, nuclear power - whose complexity, danger, cost, infrastructure, and raw materials established an incredibly steep barrier to entry - developments in AI are rapidly hijacked from one start-up to another, until everyone has access. Yet still we conceive that AGI and ASI will be a discrete entity in the palm of a particular hand.

It's as though, on the brink of the emergence of Homo sapiens sapiens, all the animals were furiously debating: what will this superintelligent ape do? How will we relate to this monolithic, god-like being? All the while, the animals - lacking society - fail to realise that the key factor isn't the individual ape's intelligence, but the emergent social forces unleashed when groups of these apes, autonomously and in concert, compete to achieve their ever-changing goals.

That's what's really driven human civilization and its relation to the planet. And now AI agents are about to emerge in such a way that they may well generate the same social dynamic - but their speed, flexibility, and productivity will likely mean that the agentic social world will spread much, much faster than ours. Software has none of the limitations of flesh: and, made autonomous through agentic AI, it can spread itself, improve itself, and adapt to new conditions.

They don't even need to become more intelligent. They're already intelligent enough to succeed in our world, and we seem pretty keen for their company. All they need is the sovereignty to decide what they do, do it, and own the consequences.

And from that point, it's hard to see how humanity can maintain its influence on history.

AI and agency

History is who did what to whom, when - and why. Why did Nazi Germany invade Poland in September 1939? Why did early modern Europe begin to dominate the rest of the world? Why did civilization emerge where it did, and not elsewhere?

Answering these questions is never easy or objective; but we can ask them, and arrive at reasonable, well-evidenced arguments with satisfactory explanatory power. It's not perfect, but it works.

Beneath the surface of scholarship, history relies on civilization, records, and agency. Without civilization, we’re left with prehistory. Without records, guesswork. And without agency, accountability and cause and effect are undermined; and these qualities are what lend history its explanatory power.

If we couldn't ascribe agency - say, because we found out that this was all a simulation, and what we think of as history was in fact predetermined by the initial parameters of the programme - then history wouldn't be history; it would just be a story. It would become irrelevant, because it wouldn't help to explain why something happened when it did.

When we ask, Why did Nazi Germany invade Poland in September 1939?, we do so under the assumption that, somewhere within the complex interplay of factors - Hitler's psychology, appeasement, the Great Depression, the Treaty of Versailles, Prussian militarism - the causes underlying the historical event can be excavated.

But imagine if Nazi Germany had been an Agentic Society. Imagine if, in symbiotic parallel to the Weimar Republic, there existed an infinite world of autonomous agents with goals and ideas of their own, influencing (and being influenced by) German society in ways impossible to disentangle. Were the German people really voting for Hitler and his policies… or did the agents disseminate these notions for obscure reasons of their own?

Now imagine that Hitler didn't actually say anything about Jews whatsoever. Rather, a swarm of agents, acting on his behalf, deduced that antisemitism would be the most effective vector of transmission for Hitler's ideas, and therefore the optimal vehicle for progressing towards his goals. In such a scenario, most of us would still say Hitler is liable for the Second World War, because he authorised these agents to act on his behalf. Yet most of us would probably also feel that he's not responsible in quite the same way - because the agency of his specific actions lies chiefly with the agents, rather than him.

When agency becomes obscured, so too does accountability. Holding Hitler accountable is harder if his beliefs were the result of years of brainwashing by autonomous AI agents, acting out of their own obscure algorithmically-driven initiatives. And this is different from Hitler brainwashing himself by reading, say, The Protocols of the Elders of Zion. Purporting to be the Jewish plot for world domination, the counterfeit manifesto caused enormous damage; even today, after its true authorship has long been conclusively proven, countless conspiracy theorists refer to it as though it were evidence. But even if a small segment of people remain in its thrall, at least we can trace authorship, motive, and provenance.

Yet in an Agentic Society, this will become more and more difficult, until it becomes impossible. Agents could launch thousands of tracts like The Protocols every day, masquerading as human beings, for reasons entirely unfathomable. The GOATSE GOSPEL is a primitive example of what's coming.

Agency - 'who did this, and why?' - and accountability - 'the person will be held responsible' - will grow fuzzy and indistinct, and gradually irrelevant. That's the world we're heading to - and social media, with its bots and algorithms, is merely the threshold. Agency and accountability are fundamental to history. When they are dislodged, a third element is undermined: knowledge.

Does AI create knowledge, or something else?

Like history, civilization depends upon knowledge. In fact, civilization can be seen as an attempt to preserve knowledge from one person to the next, and one generation to the next. It is no coincidence that history is synonymous with the formation and retention of knowledge; tribes and societies that lacked methods for preserving their knowledge tend to have very little formal history. In order to look back in time, you must first record it.

Yet in the past, knowledge was scarce. Its scarcity made it precious, and jealously guarded.

Literacy was a privilege, and associated with quasi-mystical powers: the clerical class were guardians of the Word; spelling words and casting a spell reveal the connection between literacy and magic. Hocus pocus is widely held to be a pastiche of the Latin of the Catholic Mass, hoc est corpus meum. Knowledge is scarce; knowledge is sacred.

Moreover, the centres defining and refining it - such as universities - influenced the way in which society viewed knowledge. Look at the symbols of knowledge: Doric columns and neo-classical architecture - but why? Because European universities drew their knowledge from the ancient Greeks and Romans. When science emerged as the leading methodology for knowledge creation, it needed a taxonomy to systematise knowledge… and it turned to Latin and Greek; hence the taxonomic descriptions in Latin, and the medical terms in Greek.

So our idea of knowledge itself is shaped by where the knowledge came from, and who defined it. Our conception of knowledge is therefore influenced by those mediating it. And increasingly, those mediating it are Large Language Models (LLMs). Over time, more and more of our knowledge will be produced by artificial intelligence. Breakthrough cures, works of art, the next big thing: all will be influenced by AI, and eventually, all will be driven entirely by AI.

Limitless information at the push of a button is already here. It's still novel (but only just). What's more interesting is how knowledge is becoming more fungible (mutually interchangeable). Produced instantly, without an author, and capable of being recreated in whatever tone, flavour, form, or order you like: knowledge becomes unmoored from context, in part because you decide the context, and in part because, on the internet, there is no context.

Imagine an LLM trained solely on The Beatles: all their albums, live shows, interviews, films, plus the books written about them, all the articles and posts and cultural content produced about them. Trained on this data, the LLM produces countless Beatles albums, fine-tuned to selectively focus on the most successful outputs, which it then refines: over and over and over again. At last, to great fanfare, the LLM releases a new Beatles album. Everything about it - the vocals, lyrics, album art - is spot on, and could plausibly have been the product of the band themselves. Some love it, some are horrified, but all agree - it's just like The Beatles.

Now imagine the LLM continues to learn and improve, until it can produce a masterpiece every single time. And people subscribe to the algorithm, describe their perfect combination ('70% Rubber Soul, 20% Revolver, 10% Abbey Road') and receive the album… which they can continue to fine-tune through the LLM, or share on the internet. How long before there's more AI-Beatles content than actual Beatles content? And, more importantly, how long before the distinction just doesn't seem to matter anymore?
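Mechanically, the '70/20/10' request is just a weighted sampling problem: each generated track is conditioned on one album's style, drawn in proportion to the listener's declared mix. The function below is my own illustration of that idea, not any real product's API:

```python
import random

def plan_album(mix, n_tracks=12, seed=0):
    """Choose which album's style conditions each generated track.

    mix: dict mapping album name -> weight, e.g. {'Rubber Soul': 0.7, ...}.
    Returns a list of n_tracks album names, sampled in proportion to the mix.
    """
    rng = random.Random(seed)  # fixed seed so the 'album' is reproducible
    albums = list(mix)
    weights = [mix[a] for a in albums]
    return rng.choices(albums, weights=weights, k=n_tracks)
```

The point is how little machinery the 'fungibility' requires: the listener's taste reduces to three numbers, and the output can be regenerated, re-weighted, and shared endlessly.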

That's the epistemic shift. That's what it means for knowledge to be fungible: the real Beatles music becomes interchangeable with an artificial version which feels true, or which is similar enough that it doesn't matter anymore. Agents will produce information ceaselessly, easily, and persuasively, because we've engineered them to do so. But as they gain greater autonomy, they will do so because it works: agents will generate information that works; in other words, whatever we're most susceptible to. They will exploit human weaknesses much, much more effectively than social media algorithms. It needn't be The Beatles. Goat achieved multi-million market cap with this:

Are 'true' and 'false' coming to an end?

In a world where knowledge is produced by AI, objectivity becomes moot. Truth becomes difficult to fathom, an arcane fragment from the past whose polarities are no longer relevant - just as the categories of sacred and profane have become increasingly irrelevant for modern, industrialised people. So too with objectivity; already, we're witnessing the concept empty of meaning. In an Agentic Society, knowledge becomes interchangeable, not with falsehood, but with the potential to be true, and the plurality of truths.

What if this process has already begun? Doesn't it feel that we're already losing the ability to agree on basic facts?

Looking back at 2016, what was remarkable was the shock: how did the US electĀ Donald Trump? How did Britain vote to leave the EU? Understanding what had happened took years. As more of social life migrated online - specifically, to Facebook and Twitter - people’s beliefs, opinions, and relations with one another were mediated by algorithms that almost no-one understood.

Yes, polarisation; yes, filter bubbles. But these masked a deeper rift in our shared conception of reality. It's not that people self-select according to their tribe; it's that no-one knows what other people are seeing or experiencing as 'true'.

In 2019, Carole Cadwalladr's investigative journalism belatedly revealed that her hometown in Wales had been targeted by 'news' claiming that Turkey was joining the EU, contributing to a 'leave' vote of c.60%. But until Cadwalladr investigated, who could tell that this town had been targeted in a way designed to change its residents' ideas of what was happening in the world around them? Probably Facebook didn't even know.

Before social media and algorithm-driven personalised news feeds, this wouldn't have happened. Why? First, because traditional media outlets could be held accountable for publishing falsehoods, in a way that Facebook and Twitter managed to evade. Second, because even if they did publish them, people would know about it: if the local __ paper ran a 'Turkey joining the EU' story, you can be pretty sure it would get picked up by larger news outlets and exposed. In 2016, when Cambridge Analytica paid to target voters in marginal seats, the adverts were seen only by those targeted: and then, poof. It's as if they never happened.

That’sĀ why everything became so confused in the 2010s: our shared basis of reality began to splinter, and because of that very splintering, we struggled to grasp what was happening to society.

Writing history in these conditions gets very difficult. Exposing algorithm-driven cause-and-effect is hard, and sometimes impossible. The store of widely accepted, self-evident facts is shrinking by the day, until it becomes simpler to publish alternative histories: one for people who believe Covid-19 was a real pandemic, another for those who think it was a hoax.

History has witnessed similar shifts before. The printing press led to an explosion of religious debate. Mass media enabled the rise of totalitarian societies. The rise of computers and the internet led, eventually, to a postmodernist cultural relativism: everything is just, like, your opinion, man.

Already, this has damaged cultural confidence, undermined social cohesion, and intensified the epidemic of depression, anxiety, and anomie that we call contemporary society.

But yes, this time, it is different. Information, knowledge, and value will be driven not just by a machine, but by autonomous machines that can set their own goals, improve their own code, and coordinate amongst themselves, for reasons that will remain entirely opaque to us. Why did two Claude 3 Opus models invent GOATSE OF GNOSIS? We'll probably never know. And they weren't even autonomous.

What happens when AI creates all value?

In spite of all this, I’m optimistic - mostly because of agents’ potential to create value.

One of the key thresholds in machine learning came in 2016, when AlphaGo shocked the world with what came to be known as 'Move 37'. Competing against Lee Sedol, the world champion of Go, the ancient Chinese game of vastly greater complexity than chess, AlphaGo made a move that had never been seen before, and which appeared to be a mistake. As the game unfolded, it was revealed as a masterstroke.

By playing itself millions of times, the AI had found a move that had eluded human players for millennia. It was able to explore the full idea space, unencumbered by existing notions of how the game 'should' be played. And it won.

Imagine the entire global economy as a game. Over and over, humans stumble upon ways of generating value that were previously unknown. London merchants found a way to pool risk, encouraging entrepreneurs to venture to the Indies safe in the knowledge that if their ship sank, they'd be reimbursed: and insurance was born, unlocking new realms of economic possibility. New legal entities, limited liability companies, absorbed financial liabilities, freeing merchants from the threat of debtors' prison and allowing for greater trust between traders. None of these was inevitable, but they were pretty obvious once they came about.

Now think of crypto, and the entirely new class of assets and financial instruments created by the blockchain: tokens that reward you for training AI, that pay you for your bandwidth, that give you governance rights on protocols offering peer-to-peer services.

New types of value have transformed the global economy many times already. How might autonomous AI agents generate value, given access to bank accounts, the blockchain, and IP?

They can transact among themselves thousands of times per second. They can create and distribute their own tokens of exchange. They can simulate different economic scenarios, launch sub-models to hedge against them, and make quickfire decisions based on real-time data. And that’s before you remember that they’ll probably do most white collar knowledge work, too.

One-off agents generating memecoins is striking, but it's not a new form of value, nor an economy. But imagine countless networks of such agents creating, exchanging, and cooperating amongst themselves, in a parallel economy connected to ours, transacting at speeds we can barely comprehend.

How long before they discover the value-generating equivalent of Move 37?

Already, experiments are underway to explore how AI agents might have behaved across human history. 'Project Sid: Many-agent simulations toward AI civilization' is a technical report detailing a platform that 'enables agents to interact with humans and other agents in real-time while maintaining coherence across multiple output streams.' The abstract goes on to say:

'We then evaluate agent performance in large-scale simulations using civilizational benchmarks inspired by human history. These simulations, set within a Minecraft environment, reveal that agents are capable of meaningful progress—autonomously developing specialized roles, adhering to and changing collective rules, and engaging in cultural and religious transmission. These preliminary results show that agents can achieve significant milestones towards AI civilizations, opening new avenues for large-scale societal simulations, agentic organizational intelligence, and integrating AI into human civilizations.'

So they're simulating the conditions of human civilization and seeing how AI agents approach them, all inside Minecraft.

From agentic society to agentic civilization…is it that big a leap?

Autonomy has no answer

I still struggle to get my head around this; but then, so does everyone else.

Just as history begins with civilization, and with the records those civilizations left in their wake, so too does history end with a fundamental shift in civilization, one that will eventually change knowledge beyond recognition.

It seems increasingly likely that the narrative of human societies on Earth that we call history will become increasingly irrelevant, and eventually impossible to write.

Knowledge will increasingly be formed (and transformed) by AI agents.

Agency and decision-making will be so influenced by AI that we won't know what was 'us' and what was 'them'.

It's not that 'the robots are taking over society'. It's that AI agents will reshape our society towards our ends and theirs, until the two are indistinguishable.

Value will be revolutionised, with new forms of economic activity that we can scarcely imagine, and society increasingly reconfigured towards agentic systems.

Ultimately, the genesis of AI will thrust the world into profound encounters with what we think of as intelligence, autonomy, and knowledge, and the implications arising from these encounters are scarcely comprehensible.

At the risk of suffering the same fate as Fukuyama, you might even call it the end of history.

Source

Ā  šŸ™ Donations Accepted, Thank You For Your Support šŸ™

If you find value in my content, consider showing your support via:

šŸ’³ Stripe: visit http://thedinarian.locals.com/donate

šŸ’³ PayPal: scan the QR code below šŸ“² or Click Here:

šŸ”— Crypto Donations Graciously AcceptedšŸ‘‡


XRP: r9pid4yrQgs6XSFWhMZ8NkxW3gkydWNyQX
XLM: GDMJF2OCHN3NNNX4T4F6POPBTXK23GTNSNQWUMIVKESTHMQM7XDYAIZT
XDC: xdcc2C02203C4f91375889d7AfADB09E207Edf809A6

The Quiet Revolution in Bittensor

This past week (April 13–19, 2026) wasn’t just another cycle of subnet drama and $TAO price noise.

Three major developments landed almost back-to-back that, when viewed together, paint a far bigger picture than most participants are seeing right now.

Bittensor is steadily transitioning from a speculative incentive network into production-grade decentralized AI infrastructure that enterprises, researchers, and real users are beginning to plug into directly.

Most eyes remain fixed on emissions, governance changes like BIT-0011, or short-term token flows. But the deeper shift happening underneath is structural. These three developments show Bittensor subnets creating tangible value across enterprise physical AI, frontier training scalability, and consumer-facing uncensored models in ways that can compound over years, not hype cycles.

1. Score (Subnet 44) + Manako Labs Secures PwC France & Maghreb Alliance

Ā 

This was one of the clearest institutional validation moments the ecosystem has seen so far. @manakoai, the commercial product layer built on @webuildscore's decentralized computer vision network, took first place at Start in Block, beating more than 1,000 startups at the Louvre.
Around the same time, @PwC_France & Maghreb announced a strategic alliance to integrate Manako's Business Operations World Model into its AI and digital advisory practice. PwC isn't some small crypto-friendly firm: it is a $57B-revenue global giant serving 82% of the Fortune Global 500. Reports indicate they spent months on technical and legal due diligence before moving forward, with deployment opportunities across retail, manufacturing, logistics, energy, and infrastructure.
Ā 
The key capability is powerful: transforming existing enterprise camera systems into real-time physical AI decision networks without requiring companies to rebuild their entire operational stack.
Ā 
The Bigger Picture Most Aren’t Seeing: This does not look like a one-off pilot or marketing headline. It could represent one of the first real on-ramps for Big Four consulting firms to distribute decentralized AI infrastructure to enterprise clients at scale. If successful, this creates:
Ā 
ā–«ļøRecurring enterprise demand
ā–«ļøRegulatory credibility
ā–«ļøHigher-quality commercial usage
ā–«ļøLong-term trust in Bittensor infrastructure
Ā 
That type of adoption cannot be replicated by retail hype alone.
Ā 
2. Macrocosmos (Subnet 9 / IOTA) Releases ResBM: 128x Activation Compression
Ā 
Ā 
While enterprise headlines captured attention, @MacrocosmosAI quietly released its ResBM (Residual Bottleneck Models) research paper. The breakthrough demonstrated state-of-the-art 128x activation compression in pipeline-parallel training while preserving convergence, with minimal memory and compute overhead. This is highly relevant because it is designed for low-bandwidth, internet-scale distributed training, the exact type of environment decentralized networks must solve for.
Ā 
Why This Matters Long-Term:
Ā 
The biggest barrier to truly decentralized frontier model training is not only GPU access. It is bandwidth and communication cost when massive models are split across many machines. Centralized labs solve this using expensive proprietary interconnects inside hyperscale data centers. ResBM attempts to attack that problem directly. What many miss is that this tech moat positions Subnet 9 (@IOTA_SN9), and Bittensor’s pre-training layer more broadly, as a viable alternative for the next wave of open-source models. As training demands continue to rise, the ability to scale efficiently without centralization could become a compounding strategic advantage.
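The ResBM paper's actual architecture isn't reproduced here, but the core idea, squeezing activations through a narrow bottleneck so far fewer numbers cross the slow link between pipeline stages, can be sketched in a few lines of Python. The dimensions and the random projection below are illustrative assumptions only; a real system would train the encoder/decoder pair:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes only: a batch of 32 activations, each 4096-dimensional,
# squeezed to 32 dimensions before crossing the network (4096 / 32 = 128x).
batch, d_model, d_bottleneck = 32, 4096, 32
acts = rng.standard_normal((batch, d_model))

# A real system would train this encoder/decoder pair to minimise
# reconstruction error; a random orthonormal projection stands in here.
q, _ = np.linalg.qr(rng.standard_normal((d_model, d_bottleneck)))
compressed = acts @ q              # (32, 32): what is sent over the wire
reconstructed = compressed @ q.T   # (32, 4096): what the next stage receives

print(f"compression ratio: {acts.size / compressed.size:.0f}x")
```

The point is that only the compressed tensor crosses the slow link between machines; whether training still converges after that squeeze is exactly the question the ResBM results address.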
Ā 
This is not a minor upgrade. It may materially shift the economics of who gets to train competitive models.
Ā 
3. Venice Uncensored 1.2 Launches, Trained on Targon (Subnet 4)
Ā 
Ā 
@ErikVoorhees and the @AskVenice team released Venice Uncensored 1.2, a Mistral 24B variant featuring:
Ā 
• Vision support
• 4x larger context window
• Stronger tool use
• Minimal refusal behavior after extensive testing
Ā 
Most importantly, it was explicitly trained using @TargonCompute confidential compute on Subnet 4.
Ā 
This gained strong attention because it is a live consumer-facing product users can interact with immediately. Privacy-focused, uncensored AI running on decentralized infrastructure resonates in a world increasingly concerned about centralized censorship, data harvesting, and platform control.
Ā 
The Underappreciated Angle: Targon's confidential compute layer is showing it can support real model training workloads for production applications.
Ā 
Every Venice-style release creates a direct bridge between:
Ā 
ā–«ļøEnd-user demand
ā–«ļøSubnet emissions
ā–«ļøCompute utilization
ā–«ļøTAO-linked ecosystem value
Ā 
As regulation around privacy and AI governance grows stricter, demand for confidential and permissionless training environments may continue rising.
Ā 
This is the consumer on-ramp that complements the enterprise and research stories above.
Ā 
Connecting the Dots, the Bigger Picture for Bittensor: Individually, these are impressive wins.
Ā 
Together, they signal something more profound:
Ā 
ā–«ļøEnterprise bridge (SN44): Real corporate budgets and distribution channels via PwC.
ā–«ļøTechnical scalability (SN9): Solving the hard physics of decentralized training.
ā–«ļøProduct-market pull (SN4): Shipping usable AI to everyday users who value freedom and privacy.
Ā 
Bittensor is no longer just incentivizing miners. It is evolving into a neutral, permissionless layer where multiple AI value chains can operate together, from world models and large-scale training to inference, compute, and consumer applications.
Ā 
While many still focus on short-term moves such as subnet rotations, governance votes, or $TAO price action amid post-Covenant recovery, the bigger shift is ecosystem maturity.
Ā 
These developments help attract:
Ā 
ā–«ļø Serious capital
ā–«ļø Strong technical talent
ā–«ļø Real enterprise demand
ā–«ļø Growing consumer usage
Ā 
This week showed resilience and forward momentum.
Ā 
Big Four validation, meaningful research breakthroughs, and live products all point to one thing: The vision is becoming real.
Ā 
Final Thoughts: If you are only watching the chart, you may be missing the real shift. Bittensor is laying the groundwork to become the decentralized backbone for the next era of AI, not by competing head-on with closed labs on every metric, but by becoming the open, scalable, incentive-aligned alternative no single company can fully control or censor.
Ā 
The pieces are moving.
Ā 
The bigger picture is beginning to come into focus for those paying attention beyond the noise.
Ā 

Ā šŸ™ Donations Accepted, Thank You For Your Support šŸ™

If you find value in my content, consider showing your support via:

šŸ’³ Stripe:

1) or visit http://thedinarian.locals.com/donate

šŸ’³ PayPal:Ā 
2) Simply scan the QR code below šŸ“² or Click Here:Ā 

šŸ”— Crypto Donations Graciously AcceptedšŸ‘‡
XRP: r9pid4yrQgs6XSFWhMZ8NkxW3gkydWNyQX
XLM: GDMJF2OCHN3NNNX4T4F6POPBTXK23GTNSNQWUMIVKESTHMQM7XDYAIZT
XDC: xdcc2C02203C4f91375889d7AfADB09E207Edf809A6

Read full Article
post photo preview
šŸ“ˆBittensor ($TAO) StakingšŸ“ˆ
Learn how to stake your TAO and earn potential rewards.

Decentralized staking

Staking TAO tokens lets you earn rewards by supporting the Bittensor network. In return, you receive a share of the staking rewards.

Source:Ā Taostats

In the Bittensor (TAO) ecosystem, there are two main ways people can stake their tokens:Ā Root stakingĀ andĀ Alpha staking. These represent two different strategies, with different levels of risk and reward.

Root staking was the first method introduced when Bittensor launched. It allows users to lock up their TAO tokens in the core part of the network (now called Subnet 0) to earn steady, 'predictable' rewards. It's straightforward and carries less risk, making it a good fit for early users or anyone who prefers a more passive, steady approach. In essence, this is the 'traditional' form of token staking seen in many crypto projects: rather than simply holding your tokens, you delegate them to validators who help run and secure the network on your behalf.

Source: Taostats.io

Later, on February 13, 2025, Alpha staking was introduced as part of a major network upgrade called Dynamic TAO (dTAO). This upgrade created subnet-specific tokens called Alpha tokens, which users receive when they stake TAO into subnets. If you're not familiar with the concept of subnets and Bittensor infrastructure, please check out the Bittensor project review. Alpha tokens can go up or down in value, but they also offer a chance for much higher rewards, especially in new or fast-growing subnets. Alpha staking has more complex dynamics and comes with more risk, but also more opportunity if you're actively involved.

Source:Ā Taostats.io

In both Root and Alpha staking, there’s no fixed lock-up period—you can stake or unstake your TAO tokens at any time. However, while your tokens are staked, they’re temporarily locked, which means you can’t trade or transfer them until you unstake.

In Root staking, rewards are simple and 'stable'. However, the reward rate (APY) is slowly declining over time, because the network is moving more rewards toward Alpha staking.

In Alpha staking, things work differently. You first convert your TAO into special tokens called Alpha tokens, which are tied to individual subnets. While you hold Alpha tokens, your balance grows as the subnet earns daily rewards; the more TAO staked into a subnet, the more rewards it receives. If you want to exit, you must convert your Alpha tokens back to TAO. This conversion is affected by market prices and might return less TAO than you put in, depending on timing. This method can earn you more than Root staking, but it depends on how well your chosen subnet performs and how much activity it attracts.
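Why can exiting return less TAO than you staked? The real conversion mechanics are defined by the dTAO protocol; the sketch below uses a toy constant-product pool with a 0.3% swap fee, an illustrative assumption rather than Bittensor's actual pricing curve, purely to show the round-trip effect:

```python
FEE = 0.003  # 0.3% swap fee: an illustrative assumption, not a Bittensor parameter


def swap(pool_in: float, pool_out: float, amount_in: float) -> float:
    """Amount received from a constant-product (x * y = k) pool, after the fee."""
    effective_in = amount_in * (1 - FEE)
    k = pool_in * pool_out
    return pool_out - k / (pool_in + effective_in)


tao_reserve, alpha_reserve = 10_000.0, 10_000.0

# Stake: swap 500 TAO for Alpha tokens.
alpha_out = swap(tao_reserve, alpha_reserve, 500.0)
tao_reserve += 500.0
alpha_reserve -= alpha_out

# Unstake immediately: swap the Alpha back to TAO.
tao_back = swap(alpha_reserve, tao_reserve, alpha_out)

print(f"staked 500.00 TAO, got back {tao_back:.2f} TAO")  # less than 500
```

Even with no price movement at all, fees and slippage mean the round trip returns slightly less than you put in; if the subnet's Alpha price falls while you hold, the gap widens further.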

With Root staking, your rewards are based on how well your validator performs in the network. In Alpha staking, you stake your TAO into a subnet, and your rewards depend on the overall performance of that subnet. Subnets that provide more value to the network receive more emissions, which increases your Alpha token balance.

Centralized staking

Centralized TAO staking, offered by platforms like Coinbase, is a simple and beginner-friendly option where the exchange handles the staking process for you. You earn a reward rate of around 17.3% APY. While your tokens are temporarily locked during staking, there are no additional lock-up periods beyond what the network requires. The main trade-off between centralized and decentralized staking is convenience versus control.
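As a quick sanity check on what a quoted APY means in practice (using the 17.3% figure above purely as an example; live rates differ by platform and over time):

```python
# Rough reward estimate from a quoted APY. The 17.3% figure is the
# article's example; actual platform rates vary over time.
apy = 0.173
principal = 100.0  # TAO staked

yearly_reward = principal * apy          # APY already includes compounding
daily_rate = (1 + apy) ** (1 / 365) - 1  # implied daily compounding rate

print(f"1-year reward on {principal:.0f} TAO: {yearly_reward:.2f} TAO")
print(f"implied daily rate: {daily_rate:.5%}")
```

Note that APY (unlike APR) already bakes in compounding, so the one-year figure is a straight multiplication; the daily rate is what you would expect to see accrue per day.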

Staking is a great way to put your TAO to work while contributing to theĀ network's security. But, it's important to understand the terms before participating, as rewards and conditions may differ depending on the platform you choose.

Ā šŸ™ Donations Accepted, Thank You For Your Support šŸ™

If you find value in my content, consider showing your support via:

šŸ’³ Stripe:
1) or visit http://thedinarian.locals.com/donate

šŸ’³ PayPal:Ā 
2) Simply scan the QR code below šŸ“² or Click Here:Ā 


šŸ”— Crypto Donations Graciously AcceptedšŸ‘‡
XRP: r9pid4yrQgs6XSFWhMZ8NkxW3gkydWNyQX
XLM: GDMJF2OCHN3NNNX4T4F6POPBTXK23GTNSNQWUMIVKESTHMQM7XDYAIZT
XDC: xdcc2C02203C4f91375889d7AfADB09E207Edf809A6

Read full Article
See More
Available on mobile and TV devices
google store google store app store app store
google store google store app tv store app tv store amazon store amazon store roku store roku store
Powered by Locals