TheDinarian
News • Business • Investing & Finance
💥 Grass: The First Ever Layer 2 Data Rollup
June 22, 2024

You can't say I didn't warn you! I have been sending you the link for $GRASS for months now. This is the equivalent of a Theta Staking Node, minus the staking... for now! You can still get in on this before it goes mainstream and the $GRASS token officially launches mainnet on Solana... ~The Dinarian

What Problem Does Grass Solve?

Over the past few weeks, we’ve been releasing content to explain Grass’s role in the AI stack.  As you now know, the protocol performs a number of functions that help builders access web data to train their models with.  This is the crucial first stage of the AI pipeline and the launching point for all development.  

In Grass’s case, residential devices around the world host a network of nodes that scrape and process raw data from the web.  The network cleans and converts that data into structured datasets for use in AI training.  And most importantly, it sources web data in a way that involves - and rewards - the participation of nearly a million people around the world.  Grass single-handedly created the category of AI data provisioning, and it’s the reason some of the largest AI companies in the world have chosen to work with us.  It is the Data Layer of AI.

At the same time, we’ve also spent the past few weeks reflecting on the current state of artificial intelligence.  We’ve asked ourselves about the most pressing issues it faces, and as a prominent piece of AI infrastructure ourselves, what we can do to solve them.  

Our conclusion is that the biggest problem in AI right now is a lack of data transparency.  One glance at the news will tell you why.  Ask yourself, why would an AI model equate Elon Musk with Hitler?  Or erase an entire ethnic group from world history?  Was it trained with bad data?  Or worse, with good data selectively chosen to give bad answers?

The answer is, we don’t know.  And we don’t know because there’s no way to know.  We don’t know what data these models were trained on, because no mechanism exists for proving it.  There’s no way for users to verify data provenance, because there’s no way for builders to verify it themselves.

This is the problem that Grass plans to solve, and we’re now building a layer 2 data rollup to solve it.  How, you may ask?

Allow us to explain. 

How A Layer Two Will Establish Data Provenance 

The world needs a method for proving the origin of AI training data, and that’s what Grass is now building.  Soon, every time data is scraped by Grass nodes, metadata will be recorded to verify the website it was scraped from.  This metadata will then be permanently embedded in every dataset, enabling builders to know its source with total certainty.  They can then share this lineage with their users, who can rest easier knowing that the AI models they interact with were not deliberately trained to give misleading answers.  
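As a rough illustration of what "embedding metadata in a dataset" could look like in practice, here is a minimal Python sketch. The function and field names (`embed_provenance`, `data_sha256`, and so on) are our own assumptions for illustration; Grass has not published an API:

```python
import hashlib

def embed_provenance(dataset: bytes, source_url: str, scraped_at: int) -> dict:
    """Attach a provenance record to a dataset (illustrative shape, not the Grass API)."""
    return {
        "data_sha256": hashlib.sha256(dataset).hexdigest(),  # fingerprint of the payload
        "source_url": source_url,                            # website it was scraped from
        "scraped_at": scraped_at,                            # unix timestamp of the scrape
    }

def verify_provenance(dataset: bytes, record: dict) -> bool:
    """A builder recomputes the digest to confirm the dataset matches its record."""
    return hashlib.sha256(dataset).hexdigest() == record["data_sha256"]
```

The point of the sketch is the verification step: anyone holding the dataset and its record can recompute the hash, so a mismatch is immediately detectable.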

This will be a big lift and involve a major expansion of our protocol as we prepare for scraping operations to reach tens of millions of web requests per minute.  Each of these will need to be validated, which will take more throughput than any L1 can provide.  That’s why we’re announcing our plan to build a layer 2 solution to handle this significant upgrade to our capabilities.  The L2 will be a sovereign rollup, featuring a ZK processor so that metadata can be batched for validation and used to provide a persistent lineage for every dataset we produce.  This is what it will take for the base layer of all AI development to advance to the next stage.  

The benefits of this are numerous: it will combat data poisoning, empower open source AI, and create a path towards user visibility into the models we interact with every day. 

Below, we'll describe the system’s basic design.

The Architecture of Grass

The easiest way to understand these upgrades is by consulting a diagram of the Grass Data Rollup.  On the left, between Client and Web Server, you see Grass’s network as it’s traditionally been defined.  Clients make web requests, which are sent through a validator and ultimately routed through Grass nodes.  Whichever website the client has requested, its server will respond to the web request, allowing its data to be scraped and sent back up the line. Then it will be cleaned, processed, and prepared for use in training the next generation of AI models.  

Back in the L2 diagram, you’ll see two major additions on the right that will accompany the launch of Grass’s sovereign layer two: The Grass Data Ledger and the ZK processor.  

Each of these has its own function, so we’ll explain them one at a time. 

  • The Grass Data Ledger 

The Grass Data Ledger is where all data is ultimately stored.  It is a permanent ledger of every dataset scraped on Grass, now embedded with metadata to document its lineage from the moment of origin.  Proofs of each dataset’s metadata will be stored on Solana’s settlement layer, and the settlement data itself will also be available through the ledger.  It’s important to note the significance of Grass having a place to store the data it scrapes, though we’ll get to this shortly.  

  • The ZK Processor

As we described above, the purpose of the ZK processor is to assist in recording the provenance of datasets scraped on Grass’s network.  Picture the process.

When a node on the network - in other words, a user with the Grass extension - sends a web request to a given website, the site returns an encrypted response containing all of the data the node requested.  For all intents and purposes, this is when our dataset is born, and this is the moment of origin that needs to be documented.  

And this is exactly the moment that is captured when our metadata is recorded.  It contains a number of fields - session keys, the URL of the website scraped, the IP address of the target website, a timestamp of the transaction, and of course the data itself.  This is all the information necessary to know beyond a shadow of a doubt that a given dataset originated from the website it claims to be from, and therefore that a given AI model is properly - and faithfully - trained.  
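The fields listed above can be pictured as one provenance record per web request. The sketch below is a hypothetical shape, not Grass's actual schema, and storing a hash of the scraped payload rather than the payload itself is our assumption:

```python
from dataclasses import dataclass, asdict
import hashlib
import json

@dataclass(frozen=True)
class ScrapeMetadata:
    """One provenance record per web request (field names are illustrative)."""
    session_key: str   # identifies the scraping session
    url: str           # website the data was scraped from
    target_ip: str     # IP address of the target web server
    timestamp: int     # unix time of the transaction
    data_sha256: str   # hash standing in for the scraped data itself

    def leaf_hash(self) -> str:
        """Canonical hash of the whole record, suitable for batching into a proof."""
        canonical = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(canonical).hexdigest()
```

Hashing a canonical serialization of the record gives every request a single fixed-size fingerprint, which is what a batching layer would actually commit to.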

The ZK processor enters the equation because this data needs to be settled on-chain, yet we don’t want all of it visible to Solana validators.  Moreover, the sheer volume of web requests that will someday be performed on Grass will inevitably overwhelm the throughput capacity of any L1 - even one as capable as Solana.  Grass will soon scale to the point where tens of millions of web requests are performed every minute, and the metadata from every single one of them will need to be settled on-chain.  It simply isn’t possible to commit these transactions to the L1 without a ZK processor generating proofs and batching them first.  Hence the L2 - the only viable way to achieve what we’re setting out to do.    
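To see why batching matters, consider how many per-request hashes can be compressed into a single on-chain commitment. The Merkle-tree sketch below shows only the compression step; the ZK part (proving the batch was assembled correctly without revealing its contents) is well beyond a few lines of Python:

```python
import hashlib

def merkle_root(leaves: list[bytes]) -> bytes:
    """Batch many per-request hashes into one 32-byte commitment.

    Only this single root would need to be settled on the L1, regardless
    of how many requests the batch contains.
    """
    if not leaves:
        raise ValueError("no leaves to batch")
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]
```

A million metadata records and a single record both settle as one 32-byte root, which is the throughput argument in miniature.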

Now, why is this such a big deal?

Layer Two Benefits 

  • The Data Ledger 

The Data Ledger is significant because it expands Grass into an additional - and fundamentally different - business model.  While the protocol will continue to vet buyers who send their own web requests and scrape their own data on the network, a growing portion of its activity will involve the data already stored on the ledger.  With this capability, Grass can now scrape data strategically curated for use in LLM training and host it in an ever-widening data repository.   

This repository is the data layer of a modular AI stack, from which builders can pick and choose constituent parts to train infinitely differentiated models.  It is a microcosm of the internet itself, supplying training data that is already structured and ready to be ingested by AI.  

  • The ZK Processor 

We’ve already gone into a bit of detail about why the ZK processor matters.  By enabling us to create proofs of the metadata that documents the origin of Grass datasets, it creates a mechanism for builders and users to verify that AI models were actually trained correctly.  This is a huge deal in itself. 

There is, however, one piece we didn’t mention earlier.  

In addition to documenting the websites from which datasets originated, the metadata also indicates which node on the network each request was routed through.  Significantly, this means that whenever a node scrapes the web, it can get credit for its work without revealing any identifying information about its operator.  

Now, why is this important?

It’s important because once you can prove which nodes have done which work, you can start rewarding them proportionately.  Some nodes are more valuable than others.  Some scrape more data than their peers.  And these are exactly the nodes we need to incentivize to continue the breakneck expansion of the network that we’ve seen over the past few months. We believe this mechanism will significantly boost rewards in the most in-demand locations around the world, ultimately encouraging the people of those locales to sign up and exponentially increase the network’s capacity.  
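A simple way to picture proportional rewards: split each epoch's reward pool across nodes according to their verified work. The formula below is purely hypothetical; Grass has not published its reward schedule:

```python
def distribute_rewards(epoch_pool: float, work: dict[str, int]) -> dict[str, float]:
    """Split an epoch's reward pool across nodes in proportion to verified work.

    `work` maps a node ID to some measured quantity (e.g. bytes scraped).
    Hypothetical scheme for illustration only.
    """
    total = sum(work.values())
    if total == 0:
        return {node: 0.0 for node in work}
    return {node: epoch_pool * amount / total for node, amount in work.items()}
```

Under a scheme like this, a node that does three times the verified work of a peer earns three times the reward, which is exactly the incentive described above.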

It should go without saying that the larger the network gets, the more capacity we have to scrape and the larger our repository of stored web data will be.  A flywheel will inevitably be produced where more data means we’ll have more to offer AI labs who need training data - thus providing the incentive for the Grass network to keep growing.  

Conclusion

To summarize, most of the high profile issues with AI today stem from a lack of visibility into how models are trained, and we believe this can be addressed by empowering open source AI with a system for verifying data provenance.  Our solution is to build the first ever layer 2 data rollup, which will make it possible to introduce a mechanism for recording metadata documenting the origin of all datasets.  

ZK proofs of this data will be stored on the L1 settlement layer, and the metadata itself will ultimately be tied to its underlying dataset, as these datasets are stored themselves on our own data ledger.  Grass provides the data layer for a modular AI stack, and these developments will lay the groundwork for greater transparency and rewards for node providers that are proportionate to the amount of work they perform.  

This update should help to communicate some of the projects we have on the horizon and clarify the thinking that drives our decision making.  We’re happy to play a part in making AI more transparent, and excited to see the many use cases that will arise for our product going forward.  These upgrades will open up a wide range of opportunities for developers, so if you or your team are interested in building on Grass, please reach out on Discord.  Thanks for your support and do stay tuned.  
