FEDML Nexus AI Studio: an all-new zero-code LLM builder
Using The THETA Blockchain Edge Nodes
November 02, 2023
💡We have a webinar on Tuesday, Nov 7 at 11am PT / 2pm ET, where we’ll introduce our FEDML Nexus AI platform and give a live demonstration of Studio: Register for the webinar here

Table of contents:
- Introduction
- FEDML Nexus AI Overview
- LLM use cases
- The challenge with LLMs
- Why a zero-code LLM Studio?
- How does it work?
- Future plans for Studio
- Advanced and custom LLMs with Launch, Train, and Deploy
- Webinar announcement 

Introduction

Most businesses today are exploring the many ways modern artificial intelligence and its generative models can revolutionize how we interact with products and services. AI technology is moving fast, and it can be difficult for data scientists and machine learning engineers to keep up with the new models, algorithms, and techniques emerging each week. It is also difficult for developers to experiment with models and data rapidly enough to keep pace with the business’s AI application ideas.

Further, the “large” nature of these new generative models, such as large language models, is driving a new level of demand for compute, particularly hard-to-find, low-cost GPUs, to support the massive computation required for distributed training and serving of these generative models.

FEDML Nexus AI is a new platform that bridges these gaps. Its Studio provides zero-code rapid experimentation, MLOps, and low-cost GPU compute resources so that developers and enterprises can turn their LLM ideas into domain-specific, value-generating products and services.

FEDML Nexus AI Overview

FEDML Nexus AI is a platform of Next-Gen cloud services for LLMs and Generative AI. Developers need a way to quickly and easily find and provision the best GPU resources across multiple providers, minimize costs, and launch their AI jobs without worrying about tedious environment setup and management for complex generative AI workloads. Nexus AI also supports private on-prem infrastructure and hybrid cloud/on-prem deployments. FEDML Nexus AI addresses the needs of generative AI development in four ways:

  ‱ GPU Marketplace for AI Development: To address the current shortage of compute nodes/GPUs caused by skyrocketing demand for AI models in enterprise applications, FEDML Nexus AI offers a massive GPU marketplace with over 18,000 compute nodes. Beyond partnering with prominent data centers and GPU providers, the FEDML GPU marketplace also welcomes individuals to join effortlessly via our "Share and Earn" interface.
  ‱ Unified ML Job Scheduler and GPU Manager: With a simple fedml launch your_job.yaml command, developers can instantly launch AI jobs (training, deployment, federated learning) on the most cost-effective GPU resources, without tedious resource provisioning, environment setup, and management (a minimal launch sketch follows this list). FEDML Launch supports any compute-intensive job for LLMs and generative AI, including large-scale distributed training, serverless/dedicated deployment endpoints, and large-scale similarity search in a vector DB. It also enables cluster management and deployment of ML jobs on-premises and on private and hybrid clouds.
  • Zero-code LLM Studio: As enterprises increasingly seek to create private, bespoke, and vertically tailored LLMs, FEDML Nexus AI Studio empowers any developer to train, fine-tune, and deploy generative AI models code-free. This Studio leverages fedml launch and allows companies to seamlessly create specialized LLMs with their proprietary data in a secure and cost-effective manner.
  • Optimized MLOps and Compute Libraries for Diverse AI Jobs: Catering to advanced ML developers, FEDML Nexus AI provides powerful MLOps platforms for distributed model training, scalable model serving, and edge-based federated learning. FEDML Train offers robust distributed model training with advanced resource optimization and observability. FEDML Deploy provides MLOps for swift, auto-scaled model serving, with endpoints on decentralized cloud or on-premises. For developers looking for quick solutions, FEDML Nexus AI's Job Store houses pre-packaged compute libraries for diverse AI jobs, from training to serving to federated training.
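
To make the fedml launch workflow concrete, here is a minimal sketch, assuming a hypothetical job file; the YAML field names (workspace, job, computing, minimum_num_gpus, maximum_cost_per_hour) are illustrative assumptions modeled on FEDML’s public examples, so consult the FEDML documentation for the authoritative job schema.

```python
# A minimal sketch, not an official example: the YAML keys below are assumptions;
# check the FEDML docs for the authoritative job-definition schema. Requires PyYAML.
import subprocess
import yaml

job = {
    "workspace": ".",                        # directory containing your training code
    "job": "python3 train.py --epochs 3",    # entry command to run on the provisioned GPUs
    "computing": {
        "minimum_num_gpus": 1,               # how many GPUs to request from the marketplace
        "maximum_cost_per_hour": "$1.00",    # cost ceiling used when matching GPU providers
    },
}

with open("your_job.yaml", "w") as f:
    yaml.safe_dump(job, f)

# The command referenced above: provisions marketplace GPUs and runs the job.
subprocess.run(["fedml", "launch", "your_job.yaml"], check=True)
```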

LLM use cases

LLMs have the potential to revolutionize the way we interact with products and services. They can be used to generate text, translate languages, answer questions, and even create new creative content. Applications and LLM capabilities can be organized into three groups, which naturally overlap: Assistants, Learning, and Operations.

The challenge with LLMs

Though new versions of open-source LLMs are released regularly and continuously improve (e.g., they can handle longer input contexts), these base models typically won’t work well out of the box for your specific domain’s use case. This is because the base open-source models were pretrained on general text data from the web and other sources.

You will typically want to specialize the base LLM model for your domain’s use case. This entails fine-tuning the model on data that’s relevant to your use case or task. Fine-tuning, however, comes with its own set of challenges which prevent or hinder LLM projects from completing end to end.

There are three general challenges associated with fine-tuning large language models:

  1. Getting access to GPU compute resources
  2. Training & deployment process
  3. Experimenting efficiently

1. Compute resources: LLMs require substantial compute, memory, and time to fine-tune and deploy. The large matrix operations involved in training and deploying LLMs mean that GPUs are best positioned to handle the workload efficiently. GPUs, particularly the high-end A100 or H100 class, are very hard to find available today and can cost hundreds of thousands of dollars to purchase for on-prem use.

There are techniques for using compute efficiently, including distributing the training across many servers, so you will typically need access to several GPUs to run your fine-tuning.

2. Process: Without a solution like FEDML Nexus AI Studio, managing and training LLM models for production scale and deployment typically involves a multi-step process, such as:

  1. Selecting the appropriate base model
  2. Building the training dataset
  3. Selecting an optimization algorithm
  4. Setting and tracking hyperparameters
  5. Implementing efficient training mechanisms like PEFT (a minimal LoRA sketch follows this list)
  6. Ensuring use of SOTA technology for the training
  7. Managing your Python and training code
  8. Finding the necessary compute and memory
  9. Distributing the training to multiple compute resources
  10. Managing the training and validation process

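As a hedged illustration of step 5 (PEFT), here is a minimal LoRA fine-tuning setup using the Hugging Face transformers and peft libraries; the model name and hyperparameter values are illustrative choices, not Studio’s internal defaults.

```python
# A minimal LoRA (PEFT) sketch; model choice and hyperparameters are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "EleutherAI/pythia-1b"                       # a Pythia size like those offered in Studio
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)             # wraps the base model with small adapter matrices
model.print_trainable_parameters()                  # typically well under 1% of total weights
```
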
Deploying LLM models typically involves a process like:

  1. Building many models to experiment with
  2. Building fast serving endpoints for each experiment
  3. Ensuring use of SOTA technology for serving
  4. Managing your Python and serving code
  5. Finding the necessary compute and memory
  6. Connecting your endpoint with your application
  7. Monitoring and measuring key metrics like latency and drift (see the endpoint sketch after this list)
  8. Autoscaling with demand spikes
  9. Failing over when there are issues

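As a hedged illustration of steps 6–7 (connecting to an endpoint and measuring latency), here is a minimal client-side sketch; the endpoint URL, API key, and payload shape are hypothetical placeholders, not FEDML’s actual endpoint API.

```python
# A minimal sketch of calling a deployed LLM endpoint and measuring request latency.
# The URL, key, and payload format are placeholders, not FEDML's actual endpoint API.
import time
import requests

ENDPOINT_URL = "https://example.com/inference/my-llm-endpoint"   # placeholder
API_KEY = "YOUR_API_KEY"                                         # placeholder

payload = {"inputs": "Summarize our Q3 support tickets in two sentences."}
start = time.perf_counter()
resp = requests.post(
    ENDPOINT_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=60,
)
latency_ms = (time.perf_counter() - start) * 1000
print(f"status={resp.status_code} latency={latency_ms:.0f} ms")
print(resp.json())
```
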
FEDML Nexus AI Studio encapsulates all of the above into just a few simple steps.

3. Experimentation: New open-source models and training techniques appear at a rapid pace, and your business stakeholders are asking for timely delivery so they can test their AI product ideas. Hence you need a way to quickly fine-tune and deploy LLM models with a platform that automatically handles most of the steps for you, including finding low-cost compute. That way you can run several experiments simultaneously, deliver the best AI solution, and get your applications’ new value to your customers sooner.

Why a zero-code LLM Studio?

To address the three general challenges above, FEDML Nexus AI Studio encapsulates full end-to-end MLOps (sometimes called LLMOps) for LLMs and reduces the process to a few simple steps in a guided UI. The steps are covered in the “How does it work?” section below. As for why a zero-code LLM Studio:

  ‱ No-code: Studio’s UI walks you through the few steps involved very simply.
  ‱ Access to popular open-source LLM models: We keep track of the popular open-source models so you don’t have to. We provide access to Llama 2, Pythia, and others in various parameter sizes for your fine-tuning.
  ‱ Built-in training data or bring your own: We provide several industry-specific datasets built in, or you can bring your own dataset for the fine-tuning.
  • Managing your LLM infrastructure: This includes provisioning and scaling your LLM resources, monitoring their performance, and ensuring that they are always available.
  • Deploying and managing your LLM applications: This includes deploying LLM endpoints for your LLM application to run on production while collecting metrics on performance.
  • Monitoring and improving your LLM models: This includes monitoring the performance of your LLM models, identifying areas where they can be improved, and retraining them to improve their accuracy.

Without a robust MLOps infrastructure like FEDML Nexus AI, it can be difficult to effectively manage and deploy LLMs. Without Studio, you may have a number of problems, including:

  ‱ Slow development: If you can’t experiment with fine-tuning new models, new data, and new configurations quickly and at low cost, you may not be putting forward the best model for your business applications.
  ‱ High costs: Without GPU-marketplace-based pricing like that of FEDML Nexus AI’s cloud services, your training and deployment may cost far more than they need to.
  • Performance issues: If your LLM infrastructure is not properly managed, you may experience performance issues, such as slow response times and outages.
  • Security vulnerabilities: If your LLM applications are not properly deployed and managed, they may be vulnerable to security attacks.
  ‱ Model drift: Over time, LLM models can become less accurate as the data they were trained on changes. If you are not monitoring your LLM models and able to continuously improve them efficiently, the quality of your results can decline.

How does it work?

Studio’s no-code user interface greatly compresses the typical workflow involved with fine-tuning an LLM.  It’s easy and only 3 steps:

  1. Select an open source model, a fine-tuning data set & start training
  2. Select a fine-tuned model and build an endpoint
  3. Test your model in a chatbot.

Step 1. Select an open source model, a fine-tuning data set & start training

At nexus.fedml.ai, click the Studio icon in the main menu at the left.

Select from our growing list of open-source LLM models.

Next, select from the built-in datasets or add your own. The built-in datasets are already prepared to work with the open-source models: they have the necessary structure and label columns, and the proper tokenizers are handled for you. You can search for and view the actual data for these standard datasets at Hugging Face; for example, the popular training dataset databricks/databricks-dolly-15k is here: https://huggingface.co/datasets/databricks/databricks-dolly-15k/tree/main
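
If you want to inspect one of these built-in datasets locally before fine-tuning, a minimal sketch using the Hugging Face datasets library looks like this (for exploration only; Studio handles the data loading for you):

```python
# A minimal sketch for exploring databricks-dolly-15k locally with the `datasets` library.
from datasets import load_dataset

dolly = load_dataset("databricks/databricks-dolly-15k", split="train")
print(dolly)                          # ~15k rows: instruction, context, response, category
print(dolly[0]["instruction"])
print(dolly[0]["response"][:120])     # peek at the first response
```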

A few hyperparameters are provided for your review and adjustment if desired. Set use_lora to true, for example, to drastically reduce the compute and memory needed to fine-tune.

Then click Launch, and Studio will automatically find suitable compute in our low-cost GPU marketplace to run your fine-tuning.

Studio will use a SOTA training algorithm to ensure efficient fine-tuning.

Once you start fine-tuning, you can see your model training in Training > Run 

You may start multiple fine-tuning runs to compare and experiment with the results. The GPU marketplace will automatically find the compute resources for you. If a compute resource isn’t currently available, your job will be queued for the next available GPU.

Step 2. Select a fine-tuned model and build an endpoint

After you’ve built a fine-tuned model, you can deploy it to an endpoint. Go to Studio > LLM Deploy, name your endpoint, select your fine-tuned model, and choose FEDML Cloud so Studio automatically finds a compute resource on our GPU marketplace.

For deployment and serving, a good rule of thumb is to assume 2 bytes (half precision) per parameter, and hence it’s best to have GPU memory of at least 2x the number of parameters:

For example, a 7-billion-parameter model at half precision needs about 14 GB of GPU memory. Studio will automatically find a suitable GPU for you.
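
A quick back-of-the-envelope check of that rule of thumb (a rough lower bound only; activations and KV cache add overhead on top):

```python
# Rough lower bound on serving memory: ~2 bytes per parameter at half precision.
# Ignores activation and KV-cache overhead, so treat it as a floor, not a budget.
def min_gpu_memory_gb(num_params: float, bytes_per_param: float = 2.0) -> float:
    return num_params * bytes_per_param / 1e9

print(min_gpu_memory_gb(7e9))    # ~14.0 GB for a 7B-parameter model, matching the example above
```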

Step 3. Test your model in a chatbot.

Finally, test your fine-tuned LLM and its endpoint through our built-in chatbot. Go to Studio > Chatbot, select your new endpoint, and type a query to test it.

And that’s it! You’ve completed fine-tuning, deployment, and a chatbot test, all with just a few clicks, and Studio even found the servers for you.

FEDML also provides a more sophisticated chatbot for customers who would like a more refined, production-ready chatbot that can support many models simultaneously.

Future plans for Studio

We plan to add additional AI training tasks to Studio, for example multi-modal model training and deployment. We’ll publish to our blog when those are ready for you to try.

Advanced and custom LLMs with Launch, Train, and Deploy

Watch for our future blog post where we’ll show you how to handle more advanced and custom training and deployment with our Launch, Train, and Deploy products.

Webinar announcement 

We have a webinar on Tuesday, Nov 7 at 11am PT / 2pm ET, where we’ll introduce our FEDML Nexus AI platform and give a live demonstration of Studio:

  • Discover the vision & mission of FEDML Nexus AI
  • Dive deep into some of its groundbreaking features
  • Learn how to build your own LLMs with no-code Studio
  • Engage in a live Q&A with our expert panel

Register for the webinar here!
