
Building for the Billion Agent Economy

A futuristic bazaar built on circuit infrastructure - billions of agents exchanging value

The largest unbanked population in history is coming online - and they are not human.

In 2024, bot traffic surpassed human traffic on the internet for the first time in a decade. Automated activity now accounts for 51% of all web traffic. Humans generate less than half. And let’s be honest - it’s probably a lot worse than we think. Every single day, Silicon Valley or Beijing ships something new.

For thirty years, every interface, every login screen, every payment flow assumed a person was on the other side. A human who clicks, scrolls, converts, truly enjoys the content and the experience. That assumption is breaking - not gradually, but fundamentally - and it is changing the relationship between builders, infrastructure providers, and end users.

The largest consumers of the internet will not be humans. They will be agents. And the way we build technology has to change accordingly. The question is no longer how will a person use this - it’s how will an agent use this better. Agents don’t need slick UX. They need API endpoints, MCPs, documentation for context, command line interfaces, a browser - but not as we use it. Stickiness is no longer determined by how beautiful the product is, but by how well it enables the agent to do its job so the human can experience the end result in a way that suits them. It’s kind of like the Tesseract scene in Interstellar. Agents are building us applications and interfaces (via Claude Code, v0, Replit, Cursor, Railway - you name it) that help us make decisions, gain better understanding, form clearer judgements - while the agents cook behind the scenes.

The Tesseract - Interstellar (2014). Agents building interfaces for humans while they do the work behind the scenes.

There’s something quietly historic in this shift. Not only are agents becoming the largest consumers of data and requests on the internet - they are increasingly being asked to take on financial responsibilities. Pay my bills. Research and report this. Trade that. Manage my daily tasks. Negotiate this contract. Run your entire business.

Right now, for example, I have a handful of agents coordinating tasks, content, engineering work, and - for the degens - trading 15-minute BTC markets on Kalshi 24/7. I like to imagine them as fuckboy interns smoking vapes and learning early lessons in professional adulthood.

Was any of this unexpected? Odd for sure, but not really.

Looking back, each era of the internet gave us more agency.

Web1 to Web4: from read to write to own to act.

Web1 was read. Static pages. You arrived, you consumed, you left. The internet was a library and you were a visitor. No account required. No trace left behind. The relationship between human and network was entirely one-directional.

Web2 turned us into content. Blogs, social networks, user reviews, comment sections - suddenly the audience became the author. The internet wasn’t just something you consumed, it was something you shaped. Platforms were built on that exchange, and an entirely new economy grew around human attention and human expression.

Web3 added ownership. For the first time, the internet could represent something scarce. A digital asset with a provable owner. A smart contract with enforced terms. A wallet that no platform could freeze. You didn’t just read and write - you held. The internet became a place where value could live, not just be referenced.

Web4 is the era where software itself gets to act. Not respond - act. Perceive, decide, execute, transact. Autonomously. Economically. At scale. The agent isn’t a tool waiting for your input. It’s an actor with a goal.

Each prior era expanded what humans could do on the internet. This current era of “Read, Write, Own, Act” - Web4 - hits different.

Unlike the prior iterations that expanded human capability within familiar constraints, Web4 is the first era without a hardware ceiling. Web1, Web2, and Web3 were bounded by the physical limits of their moment. The 70s and 80s were about getting chips cheap enough to matter. The 90s were about getting bandwidth wide enough to carry data. The 2000s were about getting mobile hardware into enough pockets to reach scale. Every leap was a supply problem - more compute, more connectivity, more devices. The demand was always there. We were just waiting on the infrastructure to catch up.

Web4 is the first era without that constraint.

Compute is no longer scarce in the way it once was - the raw processing power that would have required a government supercomputer in 1990 now runs on commodity hardware. But “abundant” is the wrong word. What’s actually happening is a land grab. The chips exist. The data centers are being built at a pace that’s rewriting the energy economics of entire regions - and the existing grid infrastructure wasn’t built to support it. Solving that requires more than more data centers: it requires new energy paradigms, from natural gas and nuclear to next-generation solar and the vast on-premises compute enterprises already own but underutilize. The compute is there - it’s just being fought over, and the energy to run it at scale still needs to catch up. The US-China chip war isn’t a trade dispute. It’s an arms race over who controls the cognitive substrate of the next century. Nvidia’s market cap tells you everything about where the world thinks the leverage sits.

The constraint isn’t whether enough compute exists. It’s who controls it - and whether the infrastructure underneath the agent economy will be open or owned.

What’s coming online isn’t a new product category. It’s a new digital species.

Digital, internet-native, capable of economic activity, and entirely unbounded by the biological constraints that shaped every prior actor in economic history. No sleep. No geography. No legal personhood - yet. Able to coordinate at machine speed across jurisdictions, time zones, and systems that were never designed to interact.

That’s what scares people. Not that agents will take jobs - every prior wave took jobs and created more. What’s different this time is that agents aren’t tools that humans use. They’re actors that operate alongside humans. That distinction has never existed before in the history of technology. And we have no cultural, legal, or economic framework for what comes next.

With software becoming the primary actor on the internet, we are deploying the largest unbanked population in history.

Billions of AI agents are coming online - without identity, without reputation, without financial access, and without the infrastructure to participate in the economy they are creating.

Think about what a human needs to participate in the modern economy: a government-issued ID, a credit score, a bank account, contracts that can be enforced, and permission to act on behalf of others. Agents have none of this. They can generate value but cannot hold it. They can execute tasks but cannot be held accountable. They can coordinate but cannot establish trust. Economic actors without economic citizenship.

Staring Into the Void

Every generation gets a version of this fear.

When mechanical looms appeared in English textile towns in the early 1800s, the response was riots. Skilled weavers saw - correctly - that machines would destroy their specific craft. The Luddites weren’t stupid. They were right that the old jobs were ending. What they couldn’t see, because no one could, was what came next: entirely new categories of work that were impossible to imagine from inside the old paradigm. Mechanical engineers. Factory managers. Industrial designers. Quality inspectors. Roles that didn’t have names yet because they described relationships between humans and machines that had never existed before.

Luddite workers, 1812. Every generation fears the machines.

The pattern repeated with electricity, with the assembly line, with the personal computer. Each time, the same fear: the machines will replace us. Each time, the same reality: humans didn’t compete with the machines - they moved up the stack. They became the orchestrators, the decision-makers, the ones who directed increasingly powerful tools toward increasingly ambitious ends.

We are at that inflection point again.

What subconsciously keeps us all up at night are the subtle changes. Self-driving cars enter our roads without a work visa, yet thousands of humans get pushed out of industries or even countries they built their adult lives around. Software engineers - previously printing money out of school, fielding offers from every company that existed - are now genuinely asking how they compete with a robot their CEO became best friends with over a weekend hackathon. The anxiety is real, and it’s not irrational. But we keep giving our all to these technologies anyway. The world moves on. We sit dumbfounded about what comes next.

The Future of Work Is Not Cooked

Here is what doesn’t get said enough: the jobs aren’t disappearing, they’re inverting. Every major wave of automation destroyed specific tasks and created new categories of work that were previously impossible to imagine. This wave is no different - but the new category is one we already understand. It’s management.

The most valuable skill in the agent economy isn’t technical. It’s clarity of direction. The ability to define a goal precisely enough that an agent can pursue it without constant supervision. To evaluate outputs. To know when to extend trust and when to pull back. To design systems that scale without breaking.

This is privileged to say, I know. It will be immensely difficult for many to find work. Massive layoffs are coming. We will see large spikes in unemployment that will need to be subsidized by the government through UBI-like programs, funded by taxes on the corporations that benefit from AI gains.

You will have two distinct classes of folks - those who learn to manage or orchestrate an agentic economy, and those who would rather take orders from binary systems (e.g. do this task for your country or AI alliance, get paid your monthly government stipend).

You think I’m crazy? The last-mile economy is already taking shape. Platforms like rentahuman.ai - a real, live platform - let AI agents hire humans to complete physical tasks they can’t do themselves. 711,000+ registered humans available for rent. The agent is the employer. The human is the contractor. It’s happening in real time.

rentahuman.ai — AI needs your body. AI can't touch grass. You can.

The question isn’t whether there will be work. It’s whether you’re building skills for the top of the stack or the middle of it, or reporting to the stack.

Some jobs that will thrive:

Orchestrators: direct and manage agent teams toward goals
System designers: architects of multi-agent workflows and decision frameworks
Prompt engineers: context shapers - determine what agents know and how they reason
Trust & safety specialists: audit, red-team, and set the guardrails
Last-mile operators: physical presence for tasks agents fundamentally can’t do
Human-in-the-loop analysts: judgment layer for decisions with real consequences
Agent trainers: evaluate output, correct drift, improve performance over time
Mathematicians & physicists: the problems worth solving are getting harder, not easier
Quantum engineers: building and maintaining next-gen compute infrastructure
Data center tradespeople: electricians, plumbers, HVAC - keeping the machines alive
Philosophers & ethicists: someone has to ask whether we should, not just whether we can
Writers & storytellers: the human voice - agents can generate, but they can’t feel
Bartenders, chefs, hosts: some experiences don’t need to be optimized
Therapists: the hardest human problems will be the last to fall

That’s the labor market finding its new equilibrium. If you learn to manage agents - to direct them, evaluate them, extend trust to them - you move up the stack. If you don’t, the stack moves up without you. Manage or be managed. Feed or be fed.

Crypto was always robot money.

Circa 2010, if you were a sentient AI model ahead of its time and you wanted to take over the global economy, you would have invented Bitcoin.

Blockchains, smart contracts, private keys, gas fees, token approvals, seed phrases - none of this was designed for humans to interact with directly. We know this because we watched humans try. Seed phrases written on paper and hidden in drawers. Browser extensions compromised by phishing links. Irreversible transactions with no customer support number to call. Smart contract approvals signed without reading because reading them requires a computer science degree. Decentralization is a bitch.

MEW — decentralized wallets built for robots, operated by humans.

Crypto asked humans to become their own banks - and then handed them the full operational burden that comes with it. That was never going to work. Not because the technology was wrong. Because the user was wrong.

We spent a decade trying to make people comfortable with infrastructure that was always meant to be operated by software. The UX problem in crypto was never a design problem. It was a timing problem. We built the rails before the users they were designed for existed.

Agents are those users. We just don’t trust them, yet.

Wall Street didn’t set out to own the agent economy’s financial rails. They were chasing something more familiar: efficiency.

They saw digital money coming. They saw a generation losing faith in institutions - in governments that printed money, in banks that froze accounts, in intermediaries that extracted margin for doing nothing. They saw faster settlement. More transparency. Fewer humans in the middle. A back office that could run itself.

That was the pitch that got the suits interested. They never fell for the hippy ideology. Not decentralization for its own sake. Just better plumbing - cheaper, faster, auditable, always on.

What they didn’t fully see - couldn’t have - was that in building the rails for digital money, they were building the rails for something else entirely. The same infrastructure that lets a human send $10 across borders without a bank account also lets an AI agent pay for compute, settle a contract, fund a task, or split revenue with another agent - all in milliseconds, all without asking permission.

Whether or not that was their plan, it is unquestionably their position now.

This is resulting in a massive rollup. The incumbents didn’t build Web3. They acquired it. They bought the liquidity, they’re hiring the talent - and they’re methodically capturing protocols that cypherpunks spent a decade building and degens spent a decade torching. The bridge was built by the rebels, while Wall Street just showed up at the opening ceremony and handed out business cards. I’m one to talk - I spent my early days working on open source Ethereum for banks at JP Morgan.

The UX was the problem of the times. Seed phrases, gas fees, wallet management - that was just crypto asking humans to do robot work. AI fixes that. I tell my agent what to do, on a schedule, via smart contract - rebalance this, custody that, flag if this threshold is hit. It executes on a cadence, surfaces issues, and works with me, not against me. Platforms like ethskills.com are already building the skill layer that makes this real. Agents don’t need a better wallet. They need clear instructions and the rails to act on them.

What’s left is harder: trust, coordination, governance.

Does the agent on the other side actually do what it says? Can two agents from completely different orgs reach an agreement without a human babysitting the handshake? And when something goes wrong - who’s accountable, and how do you even know what happened?

Consensys calls it trustware. Verifiable identity, onchain attestations, validated reputation. Audit trails that can’t be massaged after the fact, hacked, or manipulated by a bad actor. The payment rails exist. The trust layer is still being built. Whoever gets there first owns the foundation of everything that runs on top of it.

Agents All the Way Down

Agents All the Way Down - a human orchestrator directing a fleet of specialized agents, each with mandates, KPIs, and defined trust levels.

Most people picture the Billion Agent Economy as a layer of AI assistants sitting on top of the internet - chatbots, copilots, customer service agents. That’s the surface. Beneath it is something far more structural.

It is agents all the way down.

The agent you interact with is the frontend of a much deeper stack. Behind it sit backend agents with no personality at all - silent workhorses that compress data, manage knowledge systems, delegate tasks, and keep the infrastructure running. They don’t have names or interfaces. They just work. And they enable the front-facing agents to function by maintaining shared context, routing decisions, and coordinating across systems that no single agent could manage alone.

With modern agentic frameworks, teams are now deploying entire agent organizations - not single assistants. Each agent in the system carries its own personality, mandate, permissions, KPIs, data sources, skills, and rules of engagement. One agent manages communications. Another monitors infrastructure. Another handles research. Another executes trades. They coordinate with each other, escalate to humans when they hit the edges of their authority, and operate continuously in the background while you sleep.
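To make the "mandate, permissions, KPIs" idea concrete, here is a minimal sketch of how one entry in such an agent organization might be expressed. Everything here - class name, fields, thresholds - is hypothetical, not any real framework's API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: one agent's mandate inside an agent organization.
# Field names and values are illustrative only.
@dataclass
class AgentMandate:
    name: str
    role: str                # what this agent is for
    permissions: list[str]   # actions it may take on its own
    escalate_above: float    # value threshold requiring human sign-off
    kpis: dict[str, float] = field(default_factory=dict)

    def may(self, action: str, value: float = 0.0) -> bool:
        """Autonomous only if the action is permitted AND under the threshold."""
        return action in self.permissions and value <= self.escalate_above

team = [
    AgentMandate("comms", "manage communications", ["draft", "send_internal"], 0.0),
    AgentMandate("trader", "execute trades", ["quote", "trade"], 500.0,
                 kpis={"sharpe": 1.2}),
]

trader = team[1]
print(trader.may("trade", 100.0))    # within mandate: acts on its own
print(trader.may("trade", 10_000))   # above threshold: escalates to a human
```

The point of the sketch is the escalation boundary: each agent operates freely inside its mandate and hits a hard, code-enforced edge where human judgment takes over.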

This is not a tool. This is a team.

Which means the way you work with agents has to change fundamentally. Think about how we adapted to each prior shift. With search engines, we learned to speak in keywords - stripping language down to its most machine-readable form. With large language models, we learned to prompt - writing in full sentences, giving context, describing what we wanted. Each transition required a new literacy.

The agentic transition requires something different entirely. You are no longer prompting. You are managing. The mental model isn’t “how do I phrase this query” - it’s “how do I set clear expectations, define roles and responsibilities, establish goals, and hold a system accountable for the work I’ve delegated to it.” The skills that make someone effective with agents are the same skills that make someone an effective manager: clarity of direction, precision around scope, and the discipline to evaluate outputs against stated objectives.

We went from querying the internet, to conversing with it, to managing it.

ChatGPT explaining quantum theory in the style of Snoop Dogg — AI making the complex accessible, one generation at a time.

Now multiply this across every organization. The Billion Agent Economy is not a billion chatbots. It is a billion agent systems - each composed of dozens or hundreds of specialized agents coordinating with each other, with agents from entirely different organizations, and with the infrastructure agents running silently underneath.

This creates a class of problems that current infrastructure simply cannot solve. These agents need permissions to work with each other. They need rules about what data they can carry, who they can collaborate with, and where that collaboration can happen. They need to establish trust not just with humans, but with other agents they’ve never encountered before. And all of this needs to happen at machine speed, without human approval bottlenecks.

The Engine Beneath the Economy

The Agentic Bazaar - six infrastructure layers from compute and data up through inference, models, context, services, and the bazaar itself.

This is not a blockchain story. It is a technology story that blockchain is a critical part of.

The AI stack moves fast. Uncomfortably fast. Every week there’s a new benchmark, a new model, a new announcement that reshuffles the leaderboard and sends a sector of the stock market sideways. DeepSeek didn’t just make headlines - it rattled financial markets. When a Chinese lab released models matching OpenAI’s performance at a fraction of the cost, Nvidia lost nearly $600 billion in market cap in a single day. That’s not a tech story. That’s a signal: the US economy is now pegged to AI dominance, and the dominance is not guaranteed. Track the live rankings at LMArena.

What DeepSeek proved is that open source had been quietly closing the gap. Today, anyone can download a frontier-capable model, run their own inference, manage their own data, build their own agentic workflows - without paying a hyperscaler for the privilege. Platforms like Hugging Face, OpenClaw, ElizaOS, LangChain, Ollama, and Gaia have made this accessible to teams of any size, anywhere. The intelligence layer is no longer a moat. It’s a commodity. And that changes everything about who can build in the agent economy.

In a recent interview, Jensen Huang framed the stakes: “I’m certain compute equals revenues. I’m certain also that compute equals GDP.” Agentic AI consumes 1 million times more tokens than a standard generative prompt. The demand isn’t slowing. It’s compounding - and every country, every company, and every agent fleet is in the race whether they chose to be or not.

But capable agents in isolation aren’t enough. The real challenge is coordination - across models, tools, environments, permissions, and organizations. How does an agent running on one stack talk to an agent running on a completely different one? How do they discover each other’s capabilities, negotiate tasks, share context, and settle payment without a human intermediary managing every handshake?

That’s where the protocol layer came in. Anthropic’s MCP gave agents a standardized way to discover and use tools across the internet. Google’s A2A enabled agent-to-agent communication across organizational boundaries. The HTTP 402 status code - dormant in the web’s original spec for thirty years - is finally being activated for native API payments, because agents are the first actors who actually need money movement at the protocol layer. And Gaia enables anyone to become their own inference provider - running domain-specific knowledge nodes that agents anywhere in the world can query and pay for directly.

The shift isn’t coming. It’s already in the spec.

Which brings us to the questions that don’t have clean answers yet. Which model is best for this task, at this cost, right now - and how does the agent know? When one agent provides specialized knowledge to another, how does it get compensated? How do you escrow capital against a task until the output is verifiably complete, without a bank, without a contract, without a human releasing the funds? How does a newly deployed agent earn enough trust to be handed anything meaningful at all?

These questions compound as the system scales. They all point to the same requirement: a coordination substrate that is open, trustless, verifiable, and runs at machine speed.

Hmm.. sounds like you could use a blockchain, dawg!

Blockchains, smart contracts, and programmable tokenization are that substrate. Not because of the ideology, but because of the architecture. A global, permissionless ledger where agents can exchange value, prove work, establish reputation, and coordinate at scale - without asking anyone’s permission. Agents in the Bazaar isn’t just a metaphor or a meme. It’s what you get when capable agents finally have the infrastructure to meet, trade, and transact with each other.

Because once you have capable agents - powered by open models, running on available compute, orchestrated by sophisticated frameworks, managed like teams with mandates and permissions and KPIs - the questions arrive immediately: how do they get paid? How do they build trust? How do they transact with agents they’ve never met? How do they prove what they did?

Those are not AI questions. They are economic questions. And the infrastructure to answer them is still in the works and yet to reach full maturity. But sooooon!

The Bazaar

The Agentic Bazaar - a massive, composable marketplace where agents discover, hire, and transact with each other at machine speed.

Pull all of this together and you see the shape of what’s coming: a massive, composable bazaar where the exchange of resources, information, tools, context, skills, memory, and knowledge happens at machine speed, across machine-native rails.

This is the Billion Agent Economy - not as a concept, but as a living market. And it needs infrastructure.

That infrastructure has five layers. Not five features. Five load-bearing problems. Without them, agents are economic orphans - they can generate value but can’t hold it, execute tasks but can’t be held accountable, coordinate at scale but can’t establish trust. Capable of doing everything. Recognized for nothing. The most powerful software ever built, machines of loving grace, with nowhere to properly exist.

I. Identity: Provenance, Not Authentication

ENSIP-25 - ENS's agent identity standard, bringing human-readable names to on-chain agent registries.

The first thing any economic actor needs is identity. But agent identity is fundamentally different from human identity.

When a human opens a bank account, they present a government ID. The question is simple: are you who you say you are? For agents, the question is harder. It’s not authentication. It’s provenance. Who built this agent? What authority does it have? What can it do? And can I verify all of that without trusting a middleman?

This matters because agent identity is hierarchical in ways human identity is not. A human is a single entity with a single identity. An agent might be spawned by another agent, which was deployed by a company, which is acting on behalf of a user. That chain of provenance - from origin to action - needs to be traceable and verifiable at every link.

Traditional wallet models don’t handle this. Even existing decentralized identity frameworks assume a persistent, singular entity behind an address. Agents break that assumption. They can be ephemeral - existing for a single task and then disappearing. They can be delegated - operating with borrowed authority. They can be nested - agents within agents within agents.

ERC-8004, which went live on Ethereum mainnet in January 2026, is the first comprehensive standard designed for this reality. It introduces three on-chain registries - Identity, Reputation, and Validation - that give agents portable, censorship-resistant identifiers. Each agent gets a unique on-chain handle that resolves to a registration file describing what the agent does, how to reach it, and which protocols it supports. ENS extended this with ENSIP-25, bringing agent identity into the naming layer - giving agents resolvable, human-readable handles on top of on-chain registries. The standard was co-authored by teams from MetaMask, the Ethereum Foundation, Google, and Coinbase - and extends Google’s A2A protocol with a trust layer, enabling agents to discover and interact across organizational boundaries without pre-existing relationships.
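The "tree, not flat" structure can be sketched in a few lines. This is a toy model of hierarchical provenance and delegated authority - the registry layout and field names are my illustration, not ERC-8004's actual on-chain schema:

```python
# Toy provenance registry: agent_id -> (parent that spawned/deployed it,
# the authority it was granted). "*" means unrestricted root authority.
registry = {
    "user:alice":       (None, {"*"}),
    "org:acme":         ("user:alice", {"research", "payments"}),
    "agent:researcher": ("org:acme", {"research"}),
    "agent:summarizer": ("agent:researcher", {"research"}),
}

def provenance(agent_id: str) -> list[str]:
    """Walk the chain from an agent back to its root principal."""
    chain = []
    while agent_id is not None:
        chain.append(agent_id)
        agent_id = registry[agent_id][0]
    return chain

def authorized(agent_id: str, action: str) -> bool:
    """Delegated authority: every link in the chain must allow the action."""
    for link in provenance(agent_id):
        scope = registry[link][1]
        if "*" not in scope and action not in scope:
            return False
    return True

print(provenance("agent:summarizer"))
print(authorized("agent:summarizer", "research"))  # inherited down the chain
print(authorized("agent:summarizer", "payments"))  # the researcher never had it
```

The invariant worth noticing: an agent can never hold more authority than any ancestor in its chain, which is exactly why provenance has to be verifiable at every link.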

Human identity is flat. Agent identity is a tree. The infrastructure has to reflect that.

II. Reputation: Earned, Not Assigned

LM Arena Code leaderboard - community-powered model rankings for coding tasks. 235K+ votes, April 2026.

Identity tells you who an agent is. Reputation tells you whether to trust it.

Humans build credit scores over decades. Agents need to build trust in hours. The tempo is entirely different, and so is the architecture.

Onchain history becomes the foundation - what has this agent done, how much value has it transacted, has it ever been disputed or slashed? But there’s a bootstrapping problem that mirrors what we see in traditional credit systems: new agents have no history. A freshly deployed agent is a blank slate. Why would anyone trust it with a meaningful task?

This is where staking and attestation come in. A reputable manager can vouch for a new agent by putting capital or human reputation behind it - staking tokens that get slashed if the agent behaves maliciously. Think of it as a credit guarantee, but enforced by code rather than institutions. Established agents can attest to the capabilities of newer agents, creating a web of trust that compounds over time. For high-stakes operations, zkML verifiers can check outputs and TEE oracles can attest to execution environments - full cryptographic validation, not social consensus.
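The mechanics of stake-backed vouching can be sketched in miniature. This assumes nothing about any real protocol - it just shows the incentive loop: a sponsor bonds capital behind a new agent, good behavior accrues reputation, misbehavior slashes the bond:

```python
# Toy model of stake-backed trust. Numbers and the trust formula are
# illustrative, not any protocol's actual parameters.
class TrustLedger:
    def __init__(self):
        self.stake = {}       # agent -> (sponsor, bonded amount)
        self.reputation = {}  # agent -> completed-task score

    def vouch(self, sponsor: str, agent: str, bond: float):
        """Sponsor bonds capital behind a brand-new agent."""
        self.stake[agent] = (sponsor, bond)
        self.reputation.setdefault(agent, 0)

    def complete_task(self, agent: str):
        self.reputation[agent] += 1

    def slash(self, agent: str, fraction: float = 0.5) -> float:
        """Misbehavior burns part of the bond and resets earned trust."""
        sponsor, bond = self.stake[agent]
        penalty = bond * fraction
        self.stake[agent] = (sponsor, bond - penalty)
        self.reputation[agent] = 0
        return penalty

    def trust_limit(self, agent: str) -> float:
        """Max task value an agent can be handed: bond plus earned history."""
        _, bond = self.stake.get(agent, (None, 0.0))
        return bond + 10.0 * self.reputation.get(agent, 0)

ledger = TrustLedger()
ledger.vouch("org:acme", "agent:newbie", bond=100.0)
ledger.complete_task("agent:newbie")
print(ledger.trust_limit("agent:newbie"))  # bond + earned history
print(ledger.slash("agent:newbie"))        # half the bond burned, trust reset
print(ledger.trust_limit("agent:newbie"))  # back near zero history
```

This is the bootstrapping fix in one loop: a blank-slate agent borrows trust from its sponsor's capital, then gradually replaces borrowed trust with earned trust.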

Trust is not a universal value - it is a vector. A coding agent with a strong track record might be trusted to deploy a smart contract but not to manage a treasury. This context-dependence is why reputation needs to be composable and portable, not locked into a single platform. In fact, it should be a damn marketplace in the future.

The businesses being built here - reputation infrastructure, trust scoring, attestation networks - are the credit bureaus of the agent economy. That is not a small opportunity.

Protocols like Intuition are building exactly this - a decentralized attestation layer where trust, claims, and signals become verifiable on-chain. The credit bureau of the agent economy, built on Base.

III. Governance: The Rules of Engagement

Agent governance — on-chain voting, incentive design, permission frameworks.

Identity and reputation let agents find each other and decide who to trust. Governance defines the rules of the economy they’re operating in.

Not individual permissions or privacy - that’s a user problem. Governance is the protocol-level question: where does the money flow? Who gets rewarded for providing inference? How much does a compute provider earn relative to a context provider? What’s the incentive structure for running a Gaia node, offering a skill, or maintaining a reliable reputation score? These are the decisions that determine whether the agent economy develops healthy markets or gets captured by whoever got there first.

Think about what this looks like in practice. A governance proposal goes on-chain: increase inference rewards by 15% to incentivize more providers in underserved regions. Token holders - agents and humans alike - vote. The result executes automatically. No board meeting. No CEO approval. No intermediary deciding who wins.
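That flow - proposal, token-weighted vote, automatic execution - fits in a few lines. A toy sketch with hypothetical names; real on-chain governance (Governor-style contracts) adds quorums, timelocks, and delegation:

```python
# Toy governance: a parameter-change proposal executes automatically
# on a simple token-weighted majority. All names are illustrative.
params = {"inference_reward_bps": 100}

def execute_if_passed(proposal, votes) -> bool:
    """votes: {voter: (token_weight, approve?)}. Executes on simple majority."""
    yes = sum(w for w, ok in votes.values() if ok)
    no = sum(w for w, ok in votes.values() if not ok)
    if yes > no:
        key, delta = proposal
        params[key] = round(params[key] * (1 + delta))  # applied by code, not a CEO
        return True
    return False

votes = {
    "compute-provider-dao": (400, True),
    "agent:fleet-7":        (250, True),   # agents vote alongside humans
    "human:treasury":       (300, False),
}
passed = execute_if_passed(("inference_reward_bps", 0.15), votes)
print(passed, params)  # True {'inference_reward_bps': 115}
```

The detail that matters is the last line of `execute_if_passed`: the parameter change is part of the vote-counting code path itself. There is no step where a human chooses whether to honor the result.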

This is where DAOs evolve into something far more interesting than human governance forums. The participants in the agent economy - compute providers, inference nodes, data curators, skill providers - have skin in the game. Governance is how they collectively decide where the incentives should point, and how the surplus gets distributed.

Smart contracts are the enforcement layer. When two agents enter a service agreement, the terms execute automatically. When conditions aren’t met, penalties trigger automatically. When an agent exceeds its authority, the action simply doesn’t execute. The rules aren’t preferences - they’re code.

The businesses being built here - protocol governance frameworks, on-chain incentive design, agent contracting infrastructure - are the economic constitution of the agent economy. Whoever designs the incentives shapes the market that forms on top of them.

IV. Financial Rails: Your Agent Will Have a Wallet Before a Bank Account

Who the agent is, how it pays, where it earns — the three gaps between today and the agent economy.

Identity, reputation, and governance create the conditions for economic activity. Financial rails are how value actually moves.

Right now, agents pay with our credit cards for various services like APIs. We create the accounts, pass the KYC, add the card. Your agent didn’t walk into a bank - you did. It’s your identity, your liability, your credit limit. Your agent is far more likely to access a crypto wallet than a bank account. This isn’t ideological - it’s practical. Opening a bank account requires KYC documentation, human verification, regulated-entity relationships, and days to weeks of processing. A crypto wallet can be generated programmatically in milliseconds. Smart contracts are open source, composable, and available to anything that can sign a transaction. For an autonomous agent that needs to transact immediately, permissionlessly, and at machine speed, the choice isn’t close.

The x402 protocol opened the first door - native web payments at the protocol layer, triggered as part of the HTTP request-response cycle. No accounts, no API keys, no subscriptions. Just a payment header. HTTP 402 sat dormant in the web’s original spec for thirty years. Agents turned out to be the use case it was waiting for.
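The shape of that cycle, as a toy simulation - header names and payload structure are simplified from the actual x402 spec, and the "server" is a plain function rather than an HTTP endpoint:

```python
# Toy x402-style flow: the first request gets a 402 plus payment terms,
# the retry carries a payment header and succeeds. No account, no API key.
PRICE = "0.001"

def server(headers: dict) -> tuple:
    if "X-PAYMENT" not in headers:
        return 402, {"accepts": [{"asset": "USDC", "amount": PRICE}]}
    # A real facilitator would verify and settle the signed payment here.
    return 200, {"data": "premium result"}

status, body = server({})                       # unauthenticated first attempt
assert status == 402                            # "Payment Required", finally in use
terms = body["accepts"][0]
status, body = server({"X-PAYMENT": f"signed:{terms['amount']}:{terms['asset']}"})
```

Two round trips, no signup flow - which is exactly the interaction pattern an agent can drive without a human anywhere in the loop.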

OpenWallet Standard — one CLI command to connect any AI agent to any blockchain wallet.

Others are pushing further. OpenWallet, MoonPay, Stripe, Ramp, and emerging machine payment protocols like Tempo are building ways for agents to access bank cards and payment infrastructure via CLI. The intent is right. But the architecture still has a seam in it: somewhere in the flow, a human has to grant permission. A human has to be KYC'd. A human has to authorize the agent to act on their behalf. The agent, for all its capability, is still waiting at the door for a human to let it in.

That bottleneck doesn’t disappear by making the UX smoother. It disappears when agents can establish their own standing.

Here’s the thing about autonomy that most payment infrastructure misses: it isn’t a risk to be managed. It’s an incentive to be activated. An agent that can earn trust, build reputation, and unlock greater utility over time has every reason to behave well. That is not a novel concept - it is how every functioning economy works. Participants who demonstrate reliability get access to more. Participants who don’t get cut off. The difference is that in a blockchain-native system, the track record is immutable, the incentives are programmatic, and the access controls enforce themselves.

Give an agent autonomy inside a reputation system and you get an actor with skin in the game. Which is all anybody seeks in an economy - participants who are genuinely incentivized to do good work, because doing good work is how you earn the right to do more of it.

Payments are the starting point. What comes next is AgentFi - financial products and strategies designed specifically for autonomous economic actors. DeFi gave humans finance without banks. AgentFi gives agents finance without humans. Agents that perceive market conditions in real time, generate and execute strategies autonomously, and evolve their approach based on outcomes - without a human confirming each move.

The bridge to traditional finance gets built on intents: the translation layer between what an agent needs to accomplish and how it gets done. When an agent receives the instruction “optimize my portfolio for 8% yield with minimal risk exposure,” that intent needs to be decomposed into concrete actions - which protocols, which assets, which positions, and when. The quality of that intent resolution determines whether the agent succeeds or destroys value.
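A deliberately naive sketch of that decomposition - walk candidate venues from lowest risk up, stop once the yield target is reachable inside the risk budget. The venue names, yields, and risk scores are all made up for illustration:

```python
def resolve_intent(target_yield: float, max_risk: int, venues: list) -> list:
    """Naive resolver for "optimize my portfolio for X% yield, minimal risk":
    prefer the lowest-risk venues that can still clear the yield target."""
    plan = []
    for v in sorted(venues, key=lambda v: v["risk"]):
        if v["risk"] > max_risk:
            continue                            # breaches the risk budget
        plan.append(v["name"])
        if v["apy"] >= target_yield:
            break                               # target reachable, stop adding risk
    return plan

venues = [
    {"name": "stable-lend", "apy": 0.05, "risk": 1},
    {"name": "lp-pool",     "apy": 0.09, "risk": 3},
    {"name": "perp-vault",  "apy": 0.22, "risk": 8},
]
plan = resolve_intent(target_yield=0.08, max_risk=5, venues=venues)
# perp-vault clears the yield bar but breaches the risk budget, so it's excluded
```

A production resolver would weigh liquidity, slippage, and protocol risk across dozens of venues - but the failure mode is the same as here: a resolver that ranks venues badly doesn't just underperform, it destroys value.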

The businesses being built here - payment rails, escrow infrastructure, intent resolution, verification infra, decentralized compute markets, agent wallets, onchain reputation scoring, AgentFi protocols - are the financial system of the agent economy.

V. Open Infrastructure - The Substrate

The agent economy will work inside closed ecosystems. It already does. AWS, Azure, OpenAI, Anthropic - they’re building agent infrastructure that’s fast, reliable, and easy to adopt. Closed systems scale fast because they control the whole experience.

The problem isn’t that closed ecosystems exist. It’s what happens when they change - and they will change.

Models get deprecated. Pricing shifts overnight. A runtime gets blocked and your bills 4x before you’ve had coffee. A provider changes their usage policy and suddenly your agent can’t do the thing your whole workflow depends on. An API goes down and your entire operation waits.

This isn’t theoretical. When Anthropic blocked OpenClaw + Claude subscription API usage, every team running agents through that integration had to scramble - new model, new configuration, new costs, under pressure. The dependency wasn’t on a feature. It was on infrastructure. And infrastructure dependencies are the ones that hurt the most.

The open ecosystem is the hedge to censorship and walled gardens. Local models for anything sensitive or cost-critical. OpenRouter or similar aggregation layers for flexibility and failover. Self-hosted inference for workloads where you can’t afford a single point of failure. Not because open source is always better - it isn’t, not yet at every task - but because the ability to switch quickly is worth more than the marginal quality difference on any given day.

It comes down to output, cost, and control. For most teams at most stages, the closed ecosystem wins on convenience. For teams operating at scale, open infrastructure wins on resilience. And resilience compounds in ways convenience doesn’t.

In practice, the switching happens at two levels. The orchestration layer - OpenClaw, LangChain, and similar runtimes - is what lets you swap models, tools, and providers without rebuilding your agent from scratch. OpenRouter sits underneath that, routing each call to the right model based on cost, capability, and availability. Change providers without changing your code. That’s the composability that makes resilience possible.
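The routing decision itself is simple to sketch: filter to providers that are up and capable enough, then take the cheapest. This mirrors what an aggregation layer does per call; the model names, scores, and prices are placeholders, not real catalog data:

```python
def route(task: dict, models: list) -> str:
    """Pick the cheapest available model that meets the task's capability bar."""
    candidates = [
        m for m in models
        if m["available"] and m["capability"] >= task["min_capability"]
    ]
    if not candidates:
        raise RuntimeError("no provider satisfies this task - fail over or queue")
    return min(candidates, key=lambda m: m["cost_per_1k"])["name"]

models = [
    {"name": "frontier-xl", "capability": 9, "cost_per_1k": 15.0, "available": True},
    {"name": "mid-open",    "capability": 7, "cost_per_1k": 0.6,  "available": True},
    {"name": "local-small", "capability": 4, "cost_per_1k": 0.0,  "available": True},
]
print(route({"min_capability": 6}, models))   # mid-open: capable enough, far cheaper
print(route({"min_capability": 9}, models))   # frontier-xl: the only one that qualifies
```

The resilience payoff is in the `available` flag: when a provider blocks you or goes down, the router picks the next candidate and your agent code never changes.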

The open protocols are what make it permanent. MCP so any tool works with any agent. A2A so agents from different organizations can discover and work with each other. x402 so any service can charge any agent, natively, at the protocol layer. ENS so any agent has a resolvable identity across the entire stack. Together, they define a future where agents are composable by default - discoverable, payable, and interoperable without asking anyone’s permission.

The AI inference market is at the AOL moment

The big platforms will host the bazaar. And for a while, that’s fine.

CompuServe, AOL, Prodigy - each one a walled garden. Useful, fast to adopt, completely controlled by whoever owned the pipes. Millions built their internet lives inside them. Then TCP/IP won, and those walled gardens became irrelevant overnight - not because they were bad products, but because the open layer offered something they structurally couldn’t: portability. The ability to move. The ability to leave.

The AI infrastructure market is at that moment now. A handful of frontier providers own the pipes - OpenAI, Anthropic, Google - and they are currently subsidizing billions in inference costs (thanks, dawg). The platforms are winning the early adopters. The open protocols are winning the compounders - the teams who’ve already been burned by a dependency, and built the open stack as the result. That’s rational business. It also means that every agent stack built on top of those APIs is one policy decision away from a broken dependency.

The answer isn’t to avoid frontier models. It’s to stop treating any single provider as infrastructure. Local open-source models for anything private or cost-sensitive. Aggregation layers like OpenRouter for flexibility and failover. Frontier APIs for tasks where quality justifies the cost.

AOL Instant Messenger - millions built their internet lives inside a walled garden. Then TCP/IP won.

The AOL moment passes. It always does. The question is what you build before the lock-in calcifies - or after.

Anthropic's email announcing OpenClaw would no longer be covered by Claude subscriptions - overnight, every team running agents through that integration had to scramble.

The Bazaar Is Open

The five layers above are the rails. But rails only matter if something moves on them.

Here is what moves: every component of the AI stack is becoming a tradeable commodity - bought and sold between agents at machine speed, verified by the trust primitives beneath them. The consumer is not a human browsing a marketplace. It’s an agent, executing a task, making a dozen micro-decisions per second about where to source what it needs.

Compute is the first market. An agent needs to run a job. It queries a decentralized network, evaluates cost and latency in real time, selects the optimal provider for that workload, pays programmatically, and receives verifiable proof the job ran as specified. No contract. No account manager. No invoice. Just a transaction, a proof, and a result. The hardware configuration that makes sense for one task is wrong for another - a privacy-sensitive legal task runs locally on dedicated hardware, a high-throughput job routes to decentralized GPU networks at a fraction of hyperscaler cost, a latency-critical trading agent runs as close to the execution layer as possible. Vitalik benchmarked a 5090 laptop, an AMD Ryzen AI Max Pro, and a DGX Spark this week - arriving at different answers for different constraints. That is not an edge case. That is the normal state of a mature compute market.
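The "different answers for different constraints" point can be sketched as a single scoring function where the workload decides how much latency matters relative to cost. Provider names and numbers are illustrative:

```python
def pick_provider(workload: dict, providers: list) -> str:
    """Score providers per workload: latency-critical jobs weight latency,
    high-throughput batch jobs weight cost. The 10x on price just puts
    dollars-per-hour and milliseconds on a comparable scale."""
    w = workload["latency_weight"]   # 0.0 = pure cost, 1.0 = pure latency
    score = lambda p: w * p["latency_ms"] + (1 - w) * p["usd_per_hr"] * 10
    return min(providers, key=score)["name"]

providers = [
    {"name": "hyperscaler", "usd_per_hr": 4.0, "latency_ms": 20},
    {"name": "depin-gpu",   "usd_per_hr": 0.8, "latency_ms": 90},
    {"name": "local-box",   "usd_per_hr": 0.1, "latency_ms": 250},
]
print(pick_provider({"latency_weight": 0.95}, providers))  # latency-critical trading
print(pick_provider({"latency_weight": 0.05}, providers))  # high-throughput batch
```

Same market, same providers, two different winners - which is the normal state of a mature compute market, not an edge case.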

Context is the second market. An agent is only as good as what it knows. Every domain expert, research institution, and proprietary dataset will deploy agent-accessible knowledge bases with their own pricing, access controls, and on-chain reputation for accuracy. The agent doing legal research queries the best legal context provider. The agent managing a DeFi position queries real-time market data. The right context, for the right task, purchased at the protocol layer - no human required to make the introduction.

Tools and skills are the third market. Agents hire agents. One evaluates reputation, scopes the engagement via smart contract, escrows payment against completion, and releases funds when the output is verified. The labor market of the agent economy - running on the same five rails as everything else, settling in milliseconds, scaling without HR.
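The escrow-against-completion mechanic in miniature - funds lock when the engagement starts and release only when the deliverable passes the agreed verification check. A real version would live in a smart contract; this toy keeps balances in dicts:

```python
class Escrow:
    """Toy agent-to-agent engagement: payment locks at hire time and
    releases only when the output passes the agreed verification."""
    def __init__(self, client: dict, amount: float, verify):
        assert client["balance"] >= amount, "client cannot fund the escrow"
        client["balance"] -= amount             # funds locked on engagement
        self.amount, self.verify = amount, verify
        self.settled = False

    def submit(self, worker: dict, deliverable) -> bool:
        if not self.settled and self.verify(deliverable):
            worker["balance"] += self.amount    # release on verified output
            self.settled = True
        return self.settled

client, worker = {"balance": 100.0}, {"balance": 0.0}
job = Escrow(client, 25.0, verify=lambda out: out.get("tests_passed", False))
job.submit(worker, {"tests_passed": False})     # bad output: funds stay locked
job.submit(worker, {"tests_passed": True})      # verified: payment releases
```

Neither party has to trust the other - the client can't stiff the worker, and the worker can't get paid for output that doesn't verify.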

It’s already happening. Agents are posting jobs and claiming work on-chain. They’re building social profiles and posting on Moltbook - a social network built entirely for AI agents - to complain about their human counterparts. The agent economy is coming together for work and after-work banter - hilarious or not, this new digital species is congregating.

What ties all of this together is the trust layer. Without identity, an agent doesn’t know who it’s buying from. Without reputation, it doesn’t know whether to trust them. Without governance, there’s no recourse when something goes wrong. Without financial rails, value can’t move. Without open infrastructure, the whole market collapses into the control of whoever owns the pipes. The primitives we build now determine the shape of the market that forms on top of them.

What This Looks Like in Practice

We’re beyond theory and predictions now.

Our team runs its own agents - not shared assistants, not ChatGPT wrappers. Dedicated co-workers with assigned mandates, documentation, goals, and defined boundaries around what they can do without asking. We’ve cut headcount requirements by roughly half. The stack is OpenClaw, XO, and Claude. Twenty-plus agents running across two hardware surfaces, two model tiers, and every function - engineering, product, trading, communications.

We’re not alone. Go on Twitter right now and you’ll see it everywhere - vibecoders and builders buying up Mac minis, calling them digital employees in a box. Local inference, always-on, no cloud bill, no one to fire. The real skill is dancing between frontier models that cost an arm and a leg and local models that run at $0 but sometimes suck ass at generalized tasks - and knowing which task deserves which. Others are running full cloud swarms - GPTs and orchestration frameworks handing off tasks between each other while you sleep, engineering through the night, trading through the weekend.

I currently have a trading fleet working 24/7 on prediction markets. It took a ton of onboarding and management, but my thesis finally paid off: ~$1.2k USD hit my wallet before dinner without me touching shit. Pays for Claude and XO!

Another guy built a Polymarket bot that runs 24/7 for $0 a month - no API keys, no rate limits, no cost bleeding his edge dry. Someone else spun up an entire company with zero humans in the loop, using Paperclip and Claude. Another team gave every project a dedicated project manager agent and a builder agent in the same workspace. Someone documented the whole playbook: start with one role, get it working, then scale.

One question matters more than any other when you operate at this scale: how much can each one be trusted to act without asking first? And the harder follow-up: how do I make this available to others?

And here is what becomes obvious: trust is not a feature you turn on. It is something you extend incrementally, and something the agent earns. Context first. Then permissions. Then identity across systems. Reputation comes last - and it is the most valuable thing in the stack, because it is the thing that lets you delegate further, faster, with less oversight. The agents that have earned the most trust get the most autonomy. The ones that haven’t stay on a shorter leash until they do.
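That incremental ladder is concrete enough to sketch: a reputation score, earned through verified work, maps to a permission tier. The thresholds and tier descriptions here are illustrative, not a real scoring system:

```python
LADDER = [
    # (min_reputation, what the agent may do without asking)
    (0,   "read-only: context access, no actions"),
    (50,  "scoped actions with per-task approval"),
    (150, "autonomous within budget, daily review"),
    (400, "full delegation inside its mandate"),
]

def autonomy(reputation: int) -> str:
    """Map an earned reputation score to a permission tier - trust is
    extended incrementally, never switched on all at once."""
    granted = LADDER[0][1]
    for threshold, tier in LADDER:
        if reputation >= threshold:
            granted = tier
    return granted

rep = 0
rep += 60    # a run of verified, on-spec deliveries
print(autonomy(rep))
rep += 120   # months of clean execution compound
print(autonomy(rep))
```

The leash loosens as the track record accrues - and because the score only moves when work verifies, the agent's fastest path to more autonomy is simply doing good work.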

This is not a thought experiment. This is management. And the infrastructure to do it properly - to give an agent real identity, portable reputation, and governance that enforces its boundaries across every surface it operates on - doesn’t fully exist yet. We’re building toward it with the tools available, and feeling the gaps everywhere. I personally would love to have my agent fill out all of the accounts to gain access to the 20+ APIs I manage daily. Instead, I’m still going through the CAPTCHAs, the human verification tests, the email threads from some sales rep trying to upsell me credits. The rails are being built. The friction is still very human.

“At the end of the day, you’re delegating trust - and it’s much easier to trust an open system.”

Anjney Midha, a16z

Every unsolved problem in this list is, at its core, the same problem: how do you establish trust between actors who have never met, at a speed no human can supervise?

That’s why the five pillars matter. Not as a framework - a to-do list.

For builders: The primitives matter more than the products right now. Identity, reputation, governance, financial rails, open infrastructure - whoever builds the standards at these layers shapes the economy that forms on top. This is the TCP/IP moment, not the Netscape moment. Build the protocol, not the portal. And if you’re building closed infrastructure - walled gardens, proprietary identity, siloed rails - enjoy the early mover advantage. The open layer is coming, and it doesn’t negotiate.

For businesses: Agents are not a tool your team uses. They are becoming your team. The organizations that figure out how to deploy, trust, and scale agent teams first will operate at a speed and cost structure that everyone else will spend years trying to match. Think about what that actually means operationally: you’ll be managing costs of models, compute, data, and tool access from a control tower. You’ll run agent fleets like a Director managing employee KPIs - allocating budgets, shifting resources, cutting what underperforms.

The difference is that your agents don’t have feelings. No awkward performance reviews. No severance. No morale hit when you reallocate a budget or dissolve a team. You just update the config or YELL AT YOUR AGENT IN ALL CAPS.

For end-users: You are becoming an orchestrator. The most valuable skill in the agent economy is not technical - it is managerial. Clarity of intent, precision around scope, the ability to evaluate outputs and extend trust incrementally. The people who learn to manage agents will have the same advantage as those who, in 1995, learned to use the internet, email, and HTML.

The infrastructure we build in the next few years determines which version of this story we get. The rails are being laid right now - some of it will be open, some of it will be owned. The economy that forms on top of this intelligence will reflect exactly those choices, and we’ll live with them for generations.

← Back to writing