Blockchain And LLMs

Renjith KN
5 min read · Mar 10, 2024

LLMs, ChatGPT in particular, are news to no one. But what exactly makes these models so useful? And how will they impact blockchains?

Blockchain technology and Large Language Models (LLMs) may seem like unlikely companions, but their convergence holds significant implications for the future of blockchains and the teams building on them. Let’s delve into this fascinating intersection.

What Are LLMs?

  • LLMs, such as ChatGPT, are versatile models that excel at translating between various forms of human expression. They bridge the gap from informal to formal language, from natural language to intent, and even from intent to transactions on the blockchain.
  • Imagine an interactive Encyclopedia that responds to your queries in any form of expression — mathematical formulas, ideas, prose, or even haikus.

What makes LLMs effective?

A striking aspect of LLMs is that — at a high level — they are universal translation machines between any form of human expression. Not just natural language to natural language (English to Cantonese), but any mode of expression, such as:

  • Mathematical formula -> 10-year-old child’s English prose.
  • Idea -> plan
  • Plan -> code base
  • Unclear idea -> Fitting questions for clarification -> clear idea
  • Haiku -> Rap lyrics
  • Description -> Image (with the help of image models)

Second, they contain a large share of recorded human thought. An Encyclopedia, in effect. So one way to see LLMs is as an interactive Encyclopedia that you can talk to and that responds in any form of expression.

Now, what does this mean for blockchains?

Why LLMs Matter for Blockchains:

  • Commoditizing Trust: Blockchains provide tamper-proof, decentralized historical records, making them powerful environments to build in. However, they are designed primarily for technical collaboration among developers.
  • The Retail Gap: While blockchains excel in developer collaboration, they face challenges in user-friendly retail adoption. This is where LLMs come in — they can bridge the gap between developers and everyday users. Thanks to open and well-documented interfaces, LLMs have everything they need to translate natural language intent into call data.
  • Adaptability: LLMs serve as ideal interfaces that adapt to each user. They map human intent to on-chain actions, making transactions more intuitive and peer-to-peer. LLMs could be the magic shortcut that kills bad UX.

Now, I am going to explain a simple use case of how LLMs can make your interaction with blockchains seamless.

Turning your intent into crypto transactions

UX — or the way to turn intent into transactions — is a pain in crypto.

However, LLMs can directly translate your intent into smart contract calls, removing the friction between knowing what you want and expressing it on-chain as transactions.

And LLMs can construct much smarter transactions than we can today.
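Here is a minimal sketch of what that translation could look like: a natural-language message is turned into a structured swap intent by an LLM, and deterministic code then encodes it into calldata. The askLLM helper, the token registry, the addresses, and the swap ABI are all illustrative assumptions (not a real protocol); the encoding assumes ethers v6.

```typescript
// Minimal sketch: natural-language intent -> structured intent -> calldata.
// askLLM stands in for any chat-completion API; the token registry, addresses,
// and the swap ABI are illustrative assumptions, not a real protocol.
import { Interface, parseUnits } from "ethers";

interface SwapIntent {
  sellToken: string;      // token symbol, e.g. "WETH"
  buyToken: string;       // token symbol, e.g. "USDC"
  sellAmount: string;     // human-readable amount, e.g. "1.5"
  maxSlippageBps: number; // e.g. 30 = 0.3%
}

async function askLLM(prompt: string): Promise<string> {
  // Wire this up to whichever LLM provider you use.
  throw new Error("not implemented");
}

async function intentFromText(text: string): Promise<SwapIntent> {
  const prompt =
    `Extract a swap intent from the user's message. Respond with JSON only, ` +
    `using the keys sellToken, buyToken, sellAmount, maxSlippageBps.\n` +
    `Message: "${text}"`;
  // In practice, validate the JSON before trusting it.
  return JSON.parse(await askLLM(prompt)) as SwapIntent;
}

// Placeholder token registry (illustrative addresses only).
const TOKENS: Record<string, { address: string; decimals: number }> = {
  WETH: { address: "0x0000000000000000000000000000000000000001", decimals: 18 },
  USDC: { address: "0x0000000000000000000000000000000000000002", decimals: 6 },
};

// Hypothetical settlement contract interface.
const settlement = new Interface([
  "function swap(address sellToken, address buyToken, uint256 sellAmount, uint256 maxSlippageBps)",
]);

async function textToCalldata(text: string): Promise<string> {
  const intent = await intentFromText(text);
  const sell = TOKENS[intent.sellToken];
  const buy = TOKENS[intent.buyToken];
  if (!sell || !buy) throw new Error("unsupported token");
  return settlement.encodeFunctionData("swap", [
    sell.address,
    buy.address,
    parseUnits(intent.sellAmount, sell.decimals),
    intent.maxSlippageBps,
  ]);
}

// Usage: textToCalldata('Swap 1.5 WETH for USDC, at most 0.3% slippage')
```

The important part is not the specific schema but the split of responsibilities: the model only produces a structured intent, while deterministic code handles encoding and, eventually, signing.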

Make CoWs happen with Fuzzy Intent Matching

Your intents don’t exist in isolation. In many cases, you’re looking for a counterparty to trade with.

P2P trades are more efficient than peer-to-pool trades, so we should aim to find the coincidence of wants (CoWs) as often as possible.

Unfortunately, CoWs, even in CowSwap, seldom happen. If you want to trade ETH to USDC, you need to find someone trading USDC to ETH in the same block.

But what if someone submits an intent to trade USDT to ETH while also holding USDC? Maybe they would be willing to buy ETH with USDC as well, in which case there is potentially a CoW with your trade.

LLMs can help locate these CoW opportunities by turning almost matching intents into matching intents. Here’s how.

LLMs can easily map specifically expressed intents to the higher-level intent space behind them (“what the user probably really wanted to do”) and then fuzzy-match intents that are semantically close. Thanks to their semantic understanding, LLMs can do this out of the box.
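One plausible way to implement this, sketched below under stated assumptions: represent each intent by an embedding of its plain-language description and treat intents whose embeddings are close as candidate CoWs worth re-negotiating. The embedText helper stands in for any embedding API, and the 0.85 threshold is arbitrary.

```typescript
// Sketch of fuzzy intent matching: embed each intent's plain-language
// description, then treat intents whose embeddings are close as candidate CoWs.
// embedText stands in for any embedding API; the 0.85 threshold is arbitrary.

interface PostedIntent {
  id: string;
  description: string; // e.g. "sell 2 ETH for USDC or any major stablecoin"
}

async function embedText(text: string): Promise<number[]> {
  // Call your embedding provider here.
  throw new Error("not implemented");
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return posted intents that are semantically close to the new intent,
// i.e. candidates for a coincidence of wants worth re-negotiating.
async function findFuzzyMatches(
  newIntent: PostedIntent,
  pool: PostedIntent[],
  threshold = 0.85,
): Promise<PostedIntent[]> {
  const target = await embedText(newIntent.description);
  const matches: PostedIntent[] = [];
  for (const candidate of pool) {
    const sim = cosineSimilarity(target, await embedText(candidate.description));
    if (sim >= threshold) matches.push(candidate);
  }
  return matches;
}
```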

From there, LLMs can help you get more CoWs through re-negotiation:

  • Inward intent renegotiation: Find other intents that fuzzy-match yours, then propose an adjusted expression of your intent that matches what it has found on-chain. For example, “Is it ok to buy LUSD instead of USDC? I found a matching limit order and you’d save 0.3% on trading fees with this CoW.”
  • Outward intent renegotiation and offers: Ask other LLMs that hold an almost-matching intent to propose an adjustment to their humans: “I want to buy this other BAYC that you have; would you accept to sell that one for X ETH?”

Wallets could even surface intents that match your assets to you. “Do you want to sell this position? There is a matching OTC offer in the market atm.”
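As a rough sketch of how the inward case could work (again with a hypothetical askLLM wrapper and made-up field names): once a near-match is found, the LLM only drafts the proposal, and nothing is signed until the user explicitly accepts.

```typescript
// Sketch of inward intent re-negotiation: given your intent and a near-matching
// counter-intent, ask an LLM to draft a concrete adjustment proposal. Nothing is
// executed until the user explicitly accepts; askLLM is the same hypothetical
// chat-completion wrapper as before.

interface NearMatch {
  yourIntent: string;      // e.g. "buy USDC with 1 ETH"
  counterIntent: string;   // e.g. "sell LUSD for ETH, limit order"
  estimatedSaving: string; // e.g. "0.3% in trading fees"
}

async function askLLM(prompt: string): Promise<string> {
  throw new Error("not implemented");
}

async function draftRenegotiation(match: NearMatch): Promise<string> {
  const prompt =
    `The user wants: ${match.yourIntent}.\n` +
    `A near-matching on-chain order exists: ${match.counterIntent}.\n` +
    `Estimated benefit of matching peer-to-peer: ${match.estimatedSaving}.\n` +
    `Draft one short question asking the user whether they would accept the ` +
    `substitution, stating the benefit. Do not assume consent.`;
  return askLLM(prompt);
}

// Example output the wallet might surface:
// "Is it ok to buy LUSD instead of USDC? You'd save 0.3% on trading fees with this CoW."
```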

With LLMs, we can effortlessly scale intent negotiation and find many more win-wins.

But fuzzy matching is not even the most effective way to increase peer-to-peer matches.

Wide intents — making CoWs happen with range conditions

LLMs can also help you construct much broader intents: intents that include a wide range of acceptable conditions, to make matching easier (a sketch follows the list below).

Some examples of intents with options:

  • Include lists of replacement options for assets in your trade (e.g., buy any staked ETH instead of WETH; use any of the stablecoins in your wallet to buy the NFT; or take the ETH loan from any of the top lending platforms);
  • Price and time ranges: Specify ranges of acceptable prices (without publishing your slippage tolerance) and longer time frames for execution;
  • Oracle checks and within-block conditions (e.g., making trades invalid if sandwiched), as well as fallback options in case the transaction fails.
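One way to represent such a wide intent is a structured object that a solver or matching engine can satisfy in many different ways. The shape below is purely illustrative; the field names are assumptions, not an existing standard.

```typescript
// Illustrative shape for a "wide" intent: instead of one exact trade, the user
// signs off on a set of acceptable outcomes, which makes CoW matching far more
// likely. All field names are assumptions for this sketch, not a real standard.

interface WideIntent {
  // Any of these assets is acceptable on the sell side / buy side.
  sellAnyOf: string[];           // e.g. ["USDC", "USDT", "DAI"]
  buyAnyOf: string[];            // e.g. ["stETH", "rETH", "WETH"]

  // Price and time ranges instead of a single limit plus slippage.
  minBuyPerSell: number;         // worst acceptable exchange rate
  validUntil: number;            // unix timestamp; can span hours or days

  // Within-block and oracle conditions.
  revertIfSandwiched: boolean;   // invalidate the trade if it is sandwiched
  maxOracleDeviationBps: number; // skip execution if price deviates from an oracle feed

  // Fallbacks if the preferred route fails.
  fallbacks: Array<"retryNextBlock" | "routeThroughAMM" | "cancel">;
}

// Example: "sell any of my stablecoins for any staked ETH, at no worse than
// 0.00031 ETH per dollar, any time this week, but never if I'm being sandwiched."
const example: WideIntent = {
  sellAnyOf: ["USDC", "USDT", "DAI"],
  buyAnyOf: ["stETH", "rETH", "WETH"],
  minBuyPerSell: 0.00031,
  validUntil: Math.floor(Date.now() / 1000) + 7 * 24 * 3600,
  revertIfSandwiched: true,
  maxOracleDeviationBps: 50,
  fallbacks: ["retryNextBlock", "cancel"],
};
```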

All of these will drastically increase CoWs — and reduce your trade costs.

In summary, LLMs are not just language models; they are the bridge that connects blockchain technology to everyday users. It is not exactly clear yet how to bring LLMs safely on-chain, but the examples above suggest that a formal intent language can be a starting point.


Renjith KN

Senior technical architect with more than 15 years of experience in microservices, blockchain, and J2EE technologies.