FEDML Nexus AI Studio: an all-new zero-code LLM builder
Using The THETA Blockchain Edge Nodes
November 02, 2023
💡 We have a webinar on Tuesday, Nov 7 at 11am PT / 2pm ET at which we’ll introduce our FEDML Nexus AI platform and show a live demonstration of Studio: Register for the webinar here

Table of contents:
- Introduction
- FEDML Nexus AI Overview
- LLM use cases
- The challenge with LLMs
- Why a zero-code LLM Studio?
- How does it work?
- Future plans for Studio
- Advanced and custom LLMs with Launch, Train, and Deploy
- Webinar announcement 

Introduction

Most businesses today are exploring the many ways modern artificial intelligence and its generative models may revolutionize the way we interact with products and services. Artificial intelligence technology is moving fast, and it can be difficult for data scientists and machine learning engineers to keep up with the new models, algorithms, and techniques emerging each week. It is also difficult for developers to experiment with models and data rapidly enough to keep pace with the business’s AI application ideas.

Further, the “large” nature of these new generative models, such as large language models, is driving a new level of demand for compute, particularly hard-to-find, low-cost GPUs, to support the massive computations required for distributed training and serving of these generative models.

FEDML Nexus AI is a new platform that bridges these gaps. It provides Studio, a zero-code experience combining rapid experimentation, MLOps, and low-cost GPU compute resources, so developers and enterprises can turn their LLM ideas into domain-specific, value-generating products and services.

FEDML Nexus AI Overview

FEDML Nexus AI is a platform of next-gen cloud services for LLMs and generative AI. Developers need a way to quickly and easily find and provision the best GPU resources across multiple providers, minimize costs, and launch their AI jobs without worrying about tedious environment setup and management for complex generative AI workloads. Nexus AI also supports private on-prem infrastructure and hybrid cloud/on-prem deployments. FEDML Nexus AI addresses the needs of generative AI development in four ways:

  • GPU Marketplace for AI Development: Addressing the current dearth of compute nodes/GPUs caused by skyrocketing demand for AI models in enterprise applications, FEDML Nexus AI offers a massive GPU marketplace with over 18,000 compute nodes. Beyond partnering with prominent data centers and GPU providers, the FEDML GPU marketplace also welcomes individuals to join effortlessly via our "Share and Earn" interface.
  • Unified ML Job Scheduler and GPU Manager: With a simple fedml launch your_job.yaml command, developers can instantly launch AI jobs (training, deployment, federated learning) on the most cost-effective GPU resources, without tedious resource provisioning, environment setup, and management (see the sketch after this list). FEDML Launch supports any computing-intensive job for LLMs and generative AI, including large-scale distributed training, serverless/dedicated deployment endpoints, and large-scale similarity search in a vector DB. It also enables cluster management and deployment of ML jobs on-premises and on private and hybrid clouds.
  • Zero-code LLM Studio: As enterprises increasingly seek to create private, bespoke, and vertically tailored LLMs, FEDML Nexus AI Studio empowers any developer to train, fine-tune, and deploy generative AI models code-free. This Studio leverages fedml launch and allows companies to seamlessly create specialized LLMs with their proprietary data in a secure and cost-effective manner.
  • Optimized MLOps and Compute Libraries for Diverse AI Jobs: Catering to advanced ML developers, FEDML Nexus AI provides powerful MLOps platforms for distributed model training, scalable model serving, and edge-based federated learning. FEDML Train offers robust distributed model training with advanced resource optimization and observability. FEDML Deploy provides MLOps for swift, auto-scaled model serving, with endpoints on decentralized cloud or on-premises. For developers looking for quick solutions, FEDML Nexus AI's Job Store houses pre-packaged compute libraries for diverse AI jobs, from training to serving to federated training.
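To make the fedml launch workflow concrete, here is a minimal sketch of what a job file might look like. The field names below (workspace, job, computing, and their sub-keys) are illustrative assumptions modeled on typical launch-style schedulers, not a verbatim copy of FEDML’s documented schema; consult the FEDML docs for the exact keys.

  # your_job.yaml -- hypothetical launch job file (field names are assumptions)
  workspace: .                      # local folder to upload alongside the job
  job: |
    python train.py --epochs 3      # entry command run on the provisioned node
  computing:
    minimum_num_gpus: 1             # how many GPUs the job needs
    maximum_cost_per_hour: $1.75    # budget cap for the GPU marketplace
    resource_type: A100-80G         # preferred GPU type

Launching is then a single command:

  fedml launch your_job.yaml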

LLM use cases

LLMs have the potential to revolutionize the way we interact with products and services. They can be used to generate text, translate languages, answer questions, and even create new creative content. Applications of LLM capabilities can be organized into three groups: Assistants, Learning, and Operations, with some overlap between them, of course.

The challenge with LLMs

Though new versions of open-source LLMs are released regularly and continuously improve (e.g., they can handle longer input contexts), these base models typically won’t work well out of the box for your specific domain’s use case. This is because the base open-source models were pretrained on general text data from the web and other sources.

You will typically want to specialize the base LLM for your domain’s use case. This entails fine-tuning the model on data relevant to your use case or task. Fine-tuning, however, comes with its own set of challenges that can prevent or stall LLM projects from completing end to end.

There are three general challenges associated with fine-tuning large language models:

  1. Getting access to GPU compute resources
  2. Training & deployment process
  3. Experimenting efficiently

1. Compute resources: LLMs require substantial compute, memory, and time to fine-tune and deploy. The large matrix operations involved in training and serving LLMs mean that GPUs are best positioned to handle the workload efficiently. GPUs, particularly the high-end A100 or H100 types, are very hard to find available today and can cost hundreds of thousands of dollars to purchase for on-prem use.

There are techniques to use that compute efficiently, including distributing the training across many servers, so you will typically need access to several GPUs to run your fine-tuning.
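As a concrete illustration of what distributing training across GPUs involves, here is a generic PyTorch DistributedDataParallel sketch. It demonstrates the technique itself, not FEDML’s internal mechanism; the tiny stand-in model and hyperparameters are placeholders.

  # launch with: torchrun --nproc_per_node=4 ddp_sketch.py
  import torch
  import torch.distributed as dist
  from torch.nn.parallel import DistributedDataParallel as DDP

  dist.init_process_group("nccl")                  # one process per GPU
  rank = dist.get_rank()
  torch.cuda.set_device(rank)

  model = torch.nn.Linear(4096, 4096).cuda(rank)   # stand-in for an LLM
  model = DDP(model, device_ids=[rank])            # syncs gradients across GPUs

  opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
  x = torch.randn(8, 4096, device=f"cuda:{rank}")  # each rank gets its own shard
  loss = model(x).pow(2).mean()
  loss.backward()                                  # gradient all-reduce happens here
  opt.step()
  dist.destroy_process_group()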

2. Process: Without a solution like FEDML Nexus AI Studio, managing and training LLM models for production-scale training and deployment typically involves a many-step process, such as:

  1. Selecting the appropriate base model
  2. Building a training data set
  3. Selecting an optimization algorithm
  4. Setting and tracking hyperparameters
  5. Implementing efficient training mechanisms like PEFT
  6. Ensuring use of SOTA technology for the training
  7. Managing your Python and training code
  8. Finding the necessary compute and memory
  9. Distributing the training across multiple compute resources
  10. Managing the training and validation process

And deploying LLM models typically involves a process like:

  1. Building many models to experiment with
  2. Building fast serving endpoints for each experiment
  3. Ensuring use of SOTA technology for serving
  4. Managing your Python and serving code
  5. Finding the necessary compute and memory
  6. Connecting your endpoint with your application
  7. Monitoring and measuring key metrics like latency and drift
  8. Autoscaling with demand spikes
  9. Failing over when there are issues

FEDML Nexus AI Studio encapsulates all of the above into just a few simple steps.

3. Experimentation: New open-source models and training techniques emerge at a fast pace, and your business stakeholders are asking for timely delivery so they can test their AI product ideas. Hence you need a way to quickly fine-tune and deploy LLM models with a platform that automatically handles most of the steps for you, including finding low-cost compute. That way, you can run several experiments simultaneously, deliver the best AI solution, and get your applications’ new value to your customers sooner.

Why a zero-code LLM Studio?

To address the three general challenges above, FEDML Nexus AI Studio encapsulates full end-to-end MLOps (sometimes called LLMOps) for LLMs and reduces the process to a few simple steps in a guided UI. The steps are covered in the How does it work? section below. But first, why a zero-code LLM Studio?

  • No-code: Studio’s UI walks you through the few steps involved very simply
  • Access to popular open-source LLM models: we keep track of the popular open-source models so you don’t have to. We provide access to Llama 2, Pythia, and others in various parameter sizes for your fine-tuning.
  • Built-in training data or bring your own: We provide several industry-specific data sets built in, or you can bring your own data set for the fine-tuning.
  • Managing your LLM infrastructure: This includes provisioning and scaling your LLM resources, monitoring their performance, and ensuring that they are always available.
  • Deploying and managing your LLM applications: This includes deploying LLM endpoints for your LLM application to run on production while collecting metrics on performance.
  • Monitoring and improving your LLM models: This includes monitoring the performance of your LLM models, identifying areas where they can be improved, and retraining them to improve their accuracy.

Without a robust MLOps infrastructure like FEDML Nexus AI, it can be difficult to effectively manage and deploy LLMs. Without Studio, you may have a number of problems, including:

  • Slow development: if you can’t experiment with fine-tuning new models, new data, and new configurations quickly and at low cost, you may not be fielding the best possible model for your business applications.
  • High costs: without marketplace-based GPU pricing like that of FEDML Nexus AI’s cloud services, your training and deployment may cost far more than they need to.
  • Performance issues: If your LLM infrastructure is not properly managed, you may experience performance issues, such as slow response times and outages.
  • Security vulnerabilities: If your LLM applications are not properly deployed and managed, they may be vulnerable to security attacks.
  • Model drift: Over time, LLM models can become less accurate as the data they are trained on changes. If you are not monitoring your LLM models and able to continuously improve them efficiently, this can lead to a decrease in the quality of your results.

How does it work?

Studio’s no-code user interface greatly compresses the typical workflow involved in fine-tuning an LLM. It’s easy, in only three steps:

  1. Select an open-source model and a fine-tuning data set, and start training
  2. Select a fine-tuned model and build an endpoint
  3. Test your model in a chatbot

Step 1. Select an open-source model and a fine-tuning data set, and start training

At nexus.fedml.ai, click the Studio icon in the main menu at the left.

Select from our growing list of open-source LLM models:

Next, select from the built-in datasets or add your own. The built-in data sets are already prepared to work with the open-source models: they have the necessary format and labels/columns, and the proper tokenizers are handled for you. You can search for and view the actual data for these standard datasets on Hugging Face; for example, the popular training data set databricks/databricks-dolly-15k is here: https://huggingface.co/datasets/databricks/databricks-dolly-15k/tree/main
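If you’d like to inspect a built-in data set before training, you can load it locally with the Hugging Face datasets library. This is a quick exploration sketch (the library and dataset name are real, but this step is optional and not required by Studio):

  # pip install datasets
  from datasets import load_dataset

  ds = load_dataset("databricks/databricks-dolly-15k", split="train")
  print(ds)                        # ~15k rows: instruction, context, response, category
  print(ds[0]["instruction"])      # view one training example
  print(ds[0]["response"])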

A few hyperparameters are provided for your review and adjustment if desired. For example, set use_lora to true to drastically reduce the compute and memory needed to fine-tune.
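For context on what a setting like use_lora does, here is a generic sketch using the Hugging Face peft library. It illustrates the LoRA technique itself, not Studio’s internal implementation; the target_modules value assumes a Pythia/GPT-NeoX-style model.

  # pip install transformers peft
  from transformers import AutoModelForCausalLM
  from peft import LoraConfig, get_peft_model

  model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-1b")

  # LoRA freezes the base weights and trains small low-rank adapter matrices
  lora = LoraConfig(
      r=8,                                  # adapter rank
      lora_alpha=16,                        # adapter scaling factor
      target_modules=["query_key_value"],   # attention projections in GPT-NeoX
      lora_dropout=0.05,
      task_type="CAUSAL_LM",
  )
  model = get_peft_model(model, lora)
  model.print_trainable_parameters()        # typically well under 1% trainable

This is why LoRA cuts compute and memory so sharply: only the small adapter matrices receive gradients and optimizer state, while the billions of base weights stay frozen.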

Then click Launch, and Studio will automatically find suitable compute in our low-cost GPU marketplace to run your fine-tuning.

Studio will use a SOTA training algorithm to ensure efficient fine-tuning.

Once you start fine-tuning, you can see your model training in Training > Run.

You may start multiple model fine-tunes to compare and experiment with the results. The GPU marketplace will automatically find the compute resources for you. If a compute resource isn’t currently available, your job will be queued for the next available GPU.

Step 2. Select a fine-tuned model and build an endpoint

After you’ve built a fine-tuned model, you can deploy it to an endpoint. Go to Studio > LLM Deploy. Name your endpoint, select your fine-tuned model, and indicate FEDML Cloud for Studio to automatically find the compute resource on our GPU marketplace.

For deployment and serving, a good rule of thumb is to assume 2 bytes (half precision) per parameter, and hence it’s best to have GPU memory of at least 2x the parameter count in bytes:

For example, a 7-billion-parameter model at half precision needs about 14 GB of GPU memory. Studio will automatically find a suitable GPU for you.
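The arithmetic is simple enough to sanity-check yourself. A back-of-the-envelope sketch (real serving needs extra headroom beyond this for activations and the KV cache):

  def min_serving_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
      """Rule-of-thumb GPU memory for serving at half precision (fp16/bf16)."""
      return num_params * bytes_per_param / 1e9

  print(min_serving_memory_gb(7e9))    # 14.0 -> ~14 GB for a 7B model
  print(min_serving_memory_gb(13e9))   # 26.0 -> ~26 GB for a 13B model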

Step 3. Test your model in a chatbot.

And finally, test your fine-tuned LLM and its endpoint through our built-in chatbot. Go to Studio > Chatbot, select your new endpoint, and type a query to test.
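If you’d rather exercise the endpoint from code than from the chatbot UI, the pattern looks roughly like the following. This is a hypothetical sketch: the URL, header, and payload fields are illustrative assumptions, not FEDML’s documented API; check your endpoint’s page in Studio for the real invocation details.

  import requests

  # Hypothetical values copied from the Studio endpoint page (assumptions)
  ENDPOINT_URL = "https://open.fedml.ai/inference/api/v1/predict"
  API_KEY = "YOUR_API_KEY"

  resp = requests.post(
      ENDPOINT_URL,
      headers={"Authorization": f"Bearer {API_KEY}"},
      json={"inputs": "Summarize our returns policy in one sentence."},
      timeout=60,
  )
  resp.raise_for_status()
  print(resp.json())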

And that’s it! You’ve completed fine-tuning, deployment, and a chatbot test, all with just a few clicks, and Studio even found the servers for you.

FEDML also provides a more sophisticated chatbot for customers who would like a more refined, production-ready chatbot that can support many models simultaneously.

Future plans for Studio

We plan to add additional AI training tasks to Studio, for example multi-modal model training and deployment. We’ll publish to our blog when those are ready for you to try.

Advanced and custom LLMs with Launch, Train, and Deploy

Watch for our future blog post where we’ll show you how to handle more advanced and custom training and deployment with our Launch, Train, and Deploy products.

Webinar announcement 

We have a webinar on Tuesday, Nov 7 at 11am PT / 2pm ET at which we’ll introduce our FEDML Nexus AI platform and show a live demonstration of Studio:

  • Discover the vision & mission of FEDML Nexus AI
  • Dive deep into some of its groundbreaking features
  • Learn how to build your own LLMs with no-code Studio
  • Engage in a live Q&A with our expert panel

Register for the webinar here!

