The 70/30 Rule Meets a 0% Infrastructure: The Central Tension of AI Agents
Research into how people actually use AI agents surfaces a consistent pattern: people want to delegate to agents, but they want to stay in control. Studies and observational data consistently point to something like a 70/30 split: users are comfortable with agents handling roughly 70% of a delegated task autonomously, while maintaining meaningful human oversight and control over the remaining 30%.[^1]
That’s the demand side. What people say they want.
Here’s the supply side: Coinbase just shipped wallets for fully autonomous agents with no human approval per transaction. Stripe built a checkout system where agents complete purchases without human interaction at the payment step. OpenAI’s shell tool gives agents a real Linux terminal to execute code. Cloudflare is building infrastructure where agents read, pay for, and act on web content entirely on their own.
The infrastructure being built assumes 0% human-in-the-loop. Not 30%. Zero.
The gap between what the demand side says it wants and what the supply side is building is the central tension of AI agents in 2026.
The 70/30 Demand Reality
The 70/30 framing comes from patterns in how people actually delegate to agents versus how enthusiastically they endorse the idea of delegation. People want agents to:
- Research and draft content (delegate the research, review the draft)
- Schedule meetings and appointments (delegate the finding, confirm the booking)
- Monitor prices and alert on conditions (delegate the watching, decide on the action)
- Handle customer support routing (delegate the triage, review edge cases)
In each of these, the delegation is real but bounded. There’s a human checkpoint before the consequential action. The 70% that gets delegated is the tedious, repetitive, attention-consuming work. The 30% retained is the decision with stakes.
This isn’t irrational. It’s a reasonable response to agents that are capable but not fully trustworthy. The failures that generate headlines (the database-wiping agent, the 500-message iMessage loop, the wallet-draining malicious skill) are exactly the failure modes that make people reluctant to give up their 30%.
The Infrastructure Push Toward 0%
Simultaneously, every layer of the agent infrastructure stack is being engineered to work without human involvement at each step:
Coinbase’s X402 protocol: Processed 50 million machine-to-machine transactions before launch. The architecture is purpose-built for agents to spend without human approval per transaction. The human sets spending limits once. Then the agent runs.
Stripe’s shared payment tokens: Scoped credentials that let agents complete purchases without human presence at checkout. The human establishes the token parameters once. Then the agent purchases.
OpenAI’s shell tool with compaction: Gives agents terminal access and context window management for long-running workflows, specifically designed so the agent can operate for hours without needing to pause for human input.
Cloudflare’s agent-readable web: Content served directly to agents in machine-native format, with monetization that agents can handle autonomously via X402. No human needs to manage the agent’s content access.
Each of these primitives is individually defensible. Spending limits are set by humans. Token parameters are set by humans. The agent operates within the parameters humans defined. But the operational moment (the actual transaction, the actual command execution, the actual content purchase) is fully autonomous.
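The "parameters set once, then autonomous" pattern these primitives share can be sketched as a simple policy check. This is a hypothetical illustration of the pattern, not any vendor's actual API: the `SpendPolicy` class and its fields are assumptions.

```python
from dataclasses import dataclass

@dataclass
class SpendPolicy:
    """Human-defined parameters, configured once before the agent runs."""
    per_tx_limit: float     # max amount for any single transaction
    daily_limit: float      # max total spend per day
    spent_today: float = 0.0

    def authorize(self, amount: float) -> bool:
        """Called by the agent at transaction time -- no human in the loop."""
        if amount > self.per_tx_limit:
            return False
        if self.spent_today + amount > self.daily_limit:
            return False
        self.spent_today += amount
        return True

# The human configures the policy once...
policy = SpendPolicy(per_tx_limit=5.00, daily_limit=50.00)

# ...then the agent transacts autonomously within it.
assert policy.authorize(3.00) is True    # within both limits
assert policy.authorize(9.00) is False   # exceeds per-transaction limit
```

The key design property: the human's judgment is encoded ahead of time in the limits, and never consulted again at the operational moment.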
The infrastructure companies are betting that trust will catch up to capability. Brian Armstrong: “The next generation of agents won’t just advise, they’ll act.”[^2] The infrastructure assumes the acting part happens without human sign-off on each action.
Why the Gap Exists
The gap between the 70/30 demand and the 0% infrastructure isn’t a misunderstanding. It’s a deliberate bet.
Infrastructure companies are building for the future users who will be comfortable with full agent autonomy, not the current users who aren’t. The same way early web infrastructure was built for users who would eventually shop online, not for the 1998 consumer who wouldn’t enter a credit card on a website.
The bet is that trust builds through:
- Demonstrated reliability (agents that consistently do what they’re supposed to do)
- Robust error recovery (failures that are contained and reversible, not catastrophic)
- Transparent audit trails (humans can see exactly what the agent did and why)
- Precedent (as agents successfully handle millions of small transactions, people extend trust for larger ones)
This is how trust was built for online payments. In 1998, most people wouldn’t enter a card number on a website. By 2005, most people would. The infrastructure (SSL, fraud detection, chargeback protection) matured. The track record accumulated. Trust followed.
The agent web is at 1998 for payments, extended to every other consequential action agents can take.
Resolution Scenarios
The gap between demand and supply resolves in one of a few ways:
Trust catches up to capability (the bet the infrastructure companies are making): Reliability track records accumulate. Security architectures contain failure modes. Legal frameworks clarify accountability. Within 3-5 years, the 70% delegation threshold shifts to 85%, then 95%, as agents earn it through performance.
Capability is regulated back to trust level: Governments or industry bodies impose human-in-the-loop requirements for consequential agent actions, such as financial transactions above certain thresholds, health-related decisions, and legal actions. The 0% infrastructure exists but is legally constrained to operate within human oversight frameworks.
Catastrophic incident resets the timeline: A significant, visible failure (agents causing real financial harm at scale, or an autonomous agent action that triggers a regulatory or legal crisis) pushes the trust timeline back materially. The infrastructure doesn’t stop being built, but the deployment timeline extends.
Asymmetric adoption: Enterprises with sophisticated security and governance frameworks adopt 0% infrastructure for bounded, high-value use cases. Consumer adoption stays at 70/30 for much longer. The agent web bifurcates into enterprise-grade autonomous and consumer-grade assisted.
The most likely near-term scenario is some combination: enterprise asymmetric adoption accelerates, consumer trust builds slowly, and occasional incidents create regulatory pressure without stopping the infrastructure buildout.
Product Strategy for the Gap
For product managers and AI leaders navigating the gap, a few practical implications:
Design for the 70/30 reality, architect for 0% capability: Build products that work well with human checkpoints now, but that can gracefully reduce those checkpoints as users extend trust over time. Don’t force autonomy prematurely.
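One way to architect for that trajectory is to make the checkpoint a policy decision rather than a hardcoded step, so autonomy can be dialed up per user as trust accrues. A minimal sketch, where the action names, thresholds, and `trust_level` scale are all hypothetical:

```python
from enum import Enum

class Checkpoint(Enum):
    AUTO = "auto"        # agent proceeds without asking
    CONFIRM = "confirm"  # agent pauses for human sign-off

# Per-action trust thresholds: once a user's trust level meets the
# threshold, that action class no longer requires confirmation.
THRESHOLDS = {
    "draft_content": 1,
    "book_meeting": 2,
    "make_purchase": 3,
}

def checkpoint_for(action: str, trust_level: int) -> Checkpoint:
    """The same codebase serves a 70/30 user and a fully autonomous one."""
    required = THRESHOLDS.get(action, 99)  # unknown actions always confirm
    return Checkpoint.AUTO if trust_level >= required else Checkpoint.CONFIRM

# A new user keeps checkpoints on consequential actions...
assert checkpoint_for("make_purchase", trust_level=1) is Checkpoint.CONFIRM
# ...while a long-tenured user has earned full delegation.
assert checkpoint_for("make_purchase", trust_level=3) is Checkpoint.AUTO
```

Shipping the checkpoint as data rather than code is what lets the product move from 70/30 toward 0% without a rearchitecture.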
Make the 30% frictionless: The human checkpoint is least likely to cause abandonment if it’s fast, clear, and feels meaningful rather than bureaucratic. “The agent found three options, which do you prefer?” is a good checkpoint. “Please confirm you want the agent to proceed” is a bad one.
Audit trails as trust infrastructure: Users who can easily see what the agent did and why are more likely to extend trust for the next task. Transparency isn’t just compliance; it’s the mechanism by which trust compounds.
Invest in failure containment, not failure prevention: Agents will make mistakes. The question is whether mistakes are contained and recoverable. Spending limits, sandboxed execution, and rollback capabilities are what allow users to extend trust incrementally without catastrophic downside.
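Containment can be sketched as running each agent action through a reversible transaction boundary, recording an undo step alongside every change. Everything here (the class, the journal) is illustrative, not a specific product's mechanism:

```python
class ReversibleRun:
    """Record an undo alongside each action so mistakes are recoverable."""

    def __init__(self):
        self.journal = []  # (description, undo_fn) pairs, newest last

    def perform(self, description, do_fn, undo_fn):
        do_fn()
        self.journal.append((description, undo_fn))

    def rollback(self):
        """Unwind all recorded actions in reverse order."""
        while self.journal:
            _, undo_fn = self.journal.pop()
            undo_fn()

# Example: an agent changes a config value, then the run is rolled back.
config = {"retries": 3}
run = ReversibleRun()
old = config["retries"]
run.perform("bump retries",
            lambda: config.update(retries=5),
            lambda: config.update(retries=old))
assert config["retries"] == 5
run.rollback()
assert config["retries"] == 3
```

Not every action is reversible (a payment needs a refund path, not an undo), but the journal pattern is what turns "the agent made a mistake" from a catastrophe into an incident report.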
Footnotes

[^1]: “70/30 delegation pattern”, synthesized from AI agent user research reports and behavioral observation data collected from agent deployment studies, 2024–2026. The 70/30 framing reflects the consistent pattern in which users delegate repetitive, low-stakes subtasks while retaining oversight of consequential decisions.

[^2]: Brian Armstrong, “Agents Act”, Medium, 2026.