FEDML Nexus AI Studio: an all-new zero-code LLM builder
Using The THETA Blockchain Edge Nodes
November 02, 2023
šŸ’” We have a webinar on Tuesday, Nov 7 at 11am PT/2pm ET at which we’ll introduce our FEDML Nexus AI platform and show a live demonstration of Studio: Register for the webinar here

Table of contents:
- Introduction
- FEDML Nexus AI Overview
- LLM use cases
- The challenge with LLMs
- Why a zero-code LLM Studio?
- How does it work?
- Future plans for Studio
- Advanced and custom LLMs with Launch, Train, and Deploy
- Webinar announcement

Introduction

Most businesses today are exploring the many ways modern artificial intelligence and its generative models may revolutionize the way we interact with products and services. Artificial intelligence technology is moving fast, and it can be difficult for data scientists and machine learning engineers to keep up with the new models, algorithms, and techniques emerging each week. Additionally, it’s difficult for developers to experiment with models and data rapidly enough to keep pace with the business’s AI application ideas.

Further, the ā€œlargeā€ nature of these new generative models, such as large language models, is driving a new level of demand for compute, particularly hard-to-find, low-cost GPUs, to support the massive computations required for distributed training and serving of these generative models.

FEDML Nexus AI is a new platform that bridges these gaps. It provides Studio, a zero-code LLM builder, along with rapid experimentation, MLOps, and low-cost GPU compute resources for developers and enterprises to turn their LLM ideas into domain-specific, value-generating products and services.

FEDML Nexus AI Overview

FEDML Nexus AI is a platform of next-gen cloud services for LLMs and generative AI. Developers need a way to quickly and easily find and provision the best GPU resources across multiple providers, minimize costs, and launch their AI jobs without worrying about tedious environment setup and management for complex generative AI workloads. Nexus AI also supports private on-prem infrastructure and hybrid cloud/on-prem deployments. FEDML Nexus AI addresses the needs of generative AI development in four ways:

  • GPU Marketplace for AI Development: Addressing the current shortage of compute nodes/GPUs caused by the skyrocketing demand for AI models in enterprise applications, FEDML Nexus AI offers a massive GPU marketplace with over 18,000 compute nodes. Beyond partnering with prominent data centers and GPU providers, the FEDML GPU marketplace also welcomes individuals to join effortlessly via our "Share and Earn" interface.
  • Unified ML Job Scheduler and GPU Manager: With a simple fedml launch your_job.yaml command, developers can instantly launch AI jobs (training, deployment, federated learning) on the most cost-effective GPU resources, without tedious resource provisioning, environment setup, and management (see the example job file just after this list). FEDML Launch supports any compute-intensive job for LLMs and generative AI, including large-scale distributed training, serverless/dedicated deployment endpoints, and large-scale similarity search in a vector DB. It also enables cluster management and deployment of ML jobs on-premises and in private and hybrid clouds.
  • Zero-code LLM Studio: As enterprises increasingly seek to create private, bespoke, and vertically tailored LLMs, FEDML Nexus AI Studio empowers any developer to train, fine-tune, and deploy generative AI models code-free. Studio leverages fedml launch and allows companies to seamlessly create specialized LLMs with their proprietary data in a secure and cost-effective manner.
  • Optimized MLOps and Compute Libraries for Diverse AI Jobs: Catering to advanced ML developers, FEDML Nexus AI provides powerful MLOps platforms for distributed model training, scalable model serving, and edge-based federated learning. FEDML Train offers robust distributed model training with advanced resource optimization and observability. FEDML Deploy provides MLOps for swift, auto-scaled model serving, with endpoints on the decentralized cloud or on-premises. For developers looking for quick solutions, FEDML Nexus AI's Job Store houses pre-packaged compute libraries for diverse AI jobs, from training to serving to federated learning.
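
For reference, here is a minimal sketch of what a fedml launch job file can look like. The field names follow FEDML's public Launch examples, but treat this as illustrative and consult the FEDML Nexus AI documentation for the authoritative schema:

```yaml
# your_job.yaml: an illustrative sketch of a FEDML Launch job file.
# Field names follow FEDML's public examples; check the official docs
# for the authoritative schema.
workspace: .                    # local folder packaged and shipped with the job

job: |                          # shell commands executed on the provisioned machine
  echo "Starting fine-tuning"
  python finetune.py --use_lora true

computing:
  minimum_num_gpus: 1           # GPUs required by the job
  maximum_cost_per_hour: $3.00  # budget cap when matching marketplace offers
  resource_type: A100-80G       # requested GPU type
```

With the file in place, launching is a single command: fedml launch your_job.yaml.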

LLM use cases

LLMs have the potential to revolutionize the way we interact with products and services. They can be used to generate text, translate languages, answer questions, and even create new creative content. Applications and LLM capabilities can be organized into three groups: Assistants, Learning, and Operations, with some overlap between them.

The challenge with LLMs

Though new versions of open-source LLMs are released regularly and continuously improve (e.g., they can handle longer input contexts), these base models typically won’t work well out of the box for your specific domain’s use case. This is because the base open-source models were pretrained on general text data from the web and other sources.

You will typically want to specialize the base LLM for your domain’s use case. This entails fine-tuning the model on data that’s relevant to your use case or task. Fine-tuning, however, comes with its own set of challenges that prevent or hinder LLM projects from completing end to end.

There are three general challenges associated with fine-tuning large language models:

  1. Getting access to GPU compute resources
  2. Training & deployment process
  3. Experimenting efficiently

1. Compute resources: LLMs require substantial compute, memory, and time to fine-tune and deploy. The large matrix operations involved in training and deploying LLMs mean that GPUs are best positioned to handle the workload efficiently. GPUs, particularly the high-end A100 or H100 class, are very hard to find available today and can cost hundreds of thousands of dollars to purchase for on-prem use.

There are techniques to use compute efficiently, including distributing the training across many servers, so you will typically need access to several GPUs to run your fine-tuning.

2. Process: Without a solution like FEDML Nexus AI Studio, managing and training LLM models for production-scale deployment typically involves a many-step process, such as:

  1. Selecting the appropriate base model
  2. Building a training data set
  3. Selecting an optimization algorithm
  4. Setting & tracking hyperparameters
  5. Implementing efficient training mechanisms like PEFT
  6. Ensuring use of SOTA technology for the training
  7. Managing your Python & training code
  8. Finding the necessary compute and memory
  9. Distributing the training to multiple compute resources
  10. Managing the training and validation process

Deploying LLM models typically involves a process like:

  1. Building many models to experiment with
  2. Building fast serving endpoints for each experiment
  3. Ensuring use of SOTA technology for serving
  4. Managing your Python & serving code
  5. Finding the necessary compute and memory
  6. Connecting your endpoint with your application
  7. Monitoring and measuring key metrics like latency and drift
  8. Autoscaling with demand spikes
  9. Failing over when there are issues

FEDML Nexus AI Studio encapsulates all of the above into just a few simple steps.

3. Experimentation: New open-source models and training techniques appear at a fast pace, and your business stakeholders are asking for timely delivery to test their AI product ideas. Hence you need a way to quickly fine-tune and deploy LLM models with a platform that automatically handles most of the steps for you, including finding low-cost compute. This way, you can run several experiments simultaneously, enabling you to deliver the best AI solution and get your applications’ new value to your customers sooner.

Why a zero-code LLM Studio?

To address the three general challenges above, FEDML Nexus AI Studio encapsulates a full end-to-end MLOps (sometimes called LLMOps) workflow for LLMs and reduces the process to a few simple steps in a guided UI. The steps are discussed in the How does it work? section below. But as for why a zero-code LLM Studio:

  • No-code: Studio’s UI walks you through the few steps involved, very simply.
  • Access to popular open-source LLM models: We keep track of the popular open-source models so you don’t have to. We provide access to Llama 2, Pythia, and others in various parameter sizes for your fine-tuning.
  • Built-in training data or bring your own: We provide several industry-specific data sets built in, or you can bring your own data set for the fine-tuning.
  • Managing your LLM infrastructure: This includes provisioning and scaling your LLM resources, monitoring their performance, and ensuring that they are always available.
  • Deploying and managing your LLM applications: This includes deploying LLM endpoints for your LLM application to run in production while collecting metrics on performance.
  • Monitoring and improving your LLM models: This includes monitoring the performance of your LLM models, identifying areas where they can be improved, and retraining them to improve their accuracy.

Without a robust MLOps infrastructure like FEDML Nexus AI, it can be difficult to effectively manage and deploy LLMs. Without Studio, you may have a number of problems, including:

  • Slow development: If you can’t experiment with fine-tuning new models, new data, and new configurations quickly and at low cost, you may not be fielding the best possible model for your business applications.
  • High costs: Without the GPU-marketplace-based pricing that FEDML Nexus AI’s cloud services bring, your training and deployment may cost far more than necessary.
  • Performance issues: If your LLM infrastructure is not properly managed, you may experience performance issues, such as slow response times and outages.
  • Security vulnerabilities: If your LLM applications are not properly deployed and managed, they may be vulnerable to security attacks.
  • Model drift: Over time, LLM models can become less accurate as the data they were trained on changes. If you are not monitoring your LLM models and able to continuously improve them efficiently, this can lead to a decrease in the quality of your results.

How does it work?

Studio’s no-code user interface greatly compresses the typical workflow involved in fine-tuning an LLM. It’s easy, and it takes only three steps:

  1. Select an open source model, a fine-tuning data set & start training
  2. Select a fine-tune model and build an endpoint
  3. Test your model in a chatbot.

Step 1. Select an open source model, a fine-tuning data set & start training

At nexus.fedml.ai, click the Studio icon in the main menu at the left.

Select from our growing list of open-source LLM models.

Next, select from the built-in datasets or add your own. The built-in data sets are already structured to work with the open-source models: they have the necessary design and label columns, and the proper tokenizers are handled for you. You can search for and view the actual data for these standard datasets at Hugging Face; for example, the popular training data set databricks/databricks-dolly-15k is here: https://huggingface.co/datasets/databricks/databricks-dolly-15k/tree/main
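
If you’d like to inspect one of these built-in datasets locally before fine-tuning, the Hugging Face datasets library can load it in a couple of lines. This is purely optional exploration; Studio does not require it:

```python
# Optional: explore the databricks-dolly-15k dataset locally.
# Requires: pip install datasets
from datasets import load_dataset

dolly = load_dataset("databricks/databricks-dolly-15k", split="train")

print(len(dolly))            # ~15,000 instruction/response rows
print(dolly.column_names)    # ['instruction', 'context', 'response', 'category']
print(dolly[0]["instruction"])
```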

A few hyperparameters are provided for your review and adjustment if desired. Set use_lora to true, for example, to drastically reduce the compute and memory needed to fine-tune.
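
For the curious: use_lora saves so much because LoRA freezes the base model and trains only small low-rank adapter matrices, typically well under 1% of the parameters. The sketch below shows an equivalent setup with the Hugging Face peft library; it is an under-the-hood illustration with arbitrary hyperparameter values, not Studio’s actual internals:

```python
# Illustration of a LoRA setup with Hugging Face peft (not Studio's internals).
# Requires: pip install transformers peft
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-1b")

config = LoraConfig(
    r=8,               # rank of the low-rank adapter matrices
    lora_alpha=16,     # scaling factor applied to the adapter output
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()
# Prints something like: trainable params ~1M || all params ~1B || trainable% ~0.1
```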

Then click Launch, and Studio will automatically find suitable compute in our low-cost GPU marketplace to run your fine-tuning.

Studio will use a SOTA training algorithm to ensure efficient fine-tuning.

Once you start fine-tuning, you can see your model training under Training > Run.

You may start multiple fine-tuning runs to compare and experiment with the results. The GPU marketplace will automatically find the compute resources for you. If a compute resource isn’t currently available, your job will be queued for the next available GPU.

Step 2. Select a fine-tuned model and build an endpoint

After you’ve built a fine-tuned model, you can deploy it to an endpoint. Go to Studio > LLM Deploy. Name your endpoint, select your fine-tuned model, and choose FEDML Cloud so Studio automatically finds the compute resource on our GPU marketplace.

For deployment and serving, a good rule of thumb is to assume 2 bytes (half precision) per parameter, and hence it’s best to have GPU memory that is at least 2x the number of parameters:

For example, a 7-billion-parameter model at half precision needs about 14 GB of GPU memory. Studio will automatically find a suitable GPU for you.
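
This rule of thumb is easy to sanity-check yourself. The quick calculation below counts weights only; real deployments also need headroom for activations and the KV cache:

```python
# Back-of-the-envelope GPU memory for serving at half precision (2 bytes/param).
# Weights only; leave extra headroom for activations and the KV cache.
def serving_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    return num_params * bytes_per_param / 1e9

print(serving_memory_gb(7e9))   # 14.0 -> a 7B model needs ~14 GB
print(serving_memory_gb(13e9))  # 26.0 -> a 13B model needs ~26 GB
```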

Step 3. Test your model in a chatbot.

And finally, test your fine-tuned LLM and its endpoint through our built-in chatbot. Go to Studio > Chatbot, select your new endpoint, and type a query to test.

And that’s it! You’ve completed fine-tuning, deployment, and a chatbot test, all with just a few clicks. Studio even found the servers for you.

FEDML also provides a more sophisticated chatbot for customers who would like a refined, production-ready chatbot that can support many models simultaneously.
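
Beyond the chatbots, a deployed endpoint can be called directly from your own application code. The snippet below is a hypothetical illustration: the URL, JSON shape, and auth header are placeholders, so copy the real values from your endpoint’s detail page in Nexus AI:

```python
# Hypothetical call to a deployed endpoint; the URL, request format, and
# auth header are placeholders. Use the real values shown on your
# endpoint's detail page in FEDML Nexus AI.
import requests

ENDPOINT_URL = "https://<your-endpoint-host>/inference"  # placeholder
API_KEY = "<your-api-key>"                               # placeholder

resp = requests.post(
    ENDPOINT_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"inputs": "Summarize this support ticket in one sentence."},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())
```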

Future plans for Studio

We plan to add additional AI training tasks to Studio, for example multi-modal model training and deployment. We’ll publish to our blog when those are ready for you to try.

Advanced and custom LLMs with Launch, Train, and Deploy

Watch for our future blog post where we’ll show you how to handle more advanced and custom training and deployment with our Launch, Train, and Deploy products.

Webinar announcement

We have a webinar on Tuesday, Nov 7 at 11am PT/2pm ET at which we’ll introduce our FEDML Nexus AI platform and show a live demonstration of Studio:

  • Discover the vision & mission of FEDML Nexus AI
  • Dive deep into some of its groundbreaking features
  • Learn how to build your own LLMs with no-code Studio
  • Engage in a live Q&A with our expert panel

Register for the webinar here!
