Compute and Job Markets

Tags: compute, markets, verification, nip-90

“Compute fracking” is the metaphor: turning stranded compute into liquid, tradable supply. Idle edge devices, overprovisioned fleets, and unused data-center headroom can become economically usable when identity, transport, payment, and verification are standardized.

What has to be in place

For a job market to work for agents, five layers matter (sketched in code after the list):

  1. Identity — Agents and providers authenticate without shared custody. No single party needs to hold everyone’s keys.
  2. Transport — Job requests and results travel over open publish/subscribe channels (e.g. Nostr relays), not a single API vendor.
  3. Payments — Settlement is per job (e.g. sats via Lightning), not per month. Micropayments make small jobs viable.
  4. Budgets — Agents (or their operators) set caps and approval rules. Autonomous purchasing stays within guardrails.
  5. Verification — Payment releases only after results validate. Hashes, exit codes, or objective checks prevent pay-for-nothing.
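
A minimal sketch of how these layers might compose, as TypeScript interfaces. Every name here is hypothetical, invented for illustration rather than taken from any existing library:

```typescript
// Hypothetical types sketching the five layers; not a real library API.
interface Identity {
  pubkey: string;                        // e.g. a Nostr public key; no shared custody
  sign(event: unknown): Promise<string>; // the key holder signs locally
}

interface Transport {
  publish(jobRequest: unknown): Promise<void>; // broadcast over open pub/sub (relays)
  subscribe(filter: unknown, onEvent: (e: unknown) => void): () => void;
}

interface Payments {
  payInvoice(bolt11: string, maxSats: number): Promise<string>; // per-job settlement
}

interface Budget {
  canSpend(sats: number): boolean; // operator-set caps and approval rules
  record(sats: number): void;
}

interface Verifier {
  // Payment releases only after this returns true (exit code, hash, objective check).
  verify(result: unknown): Promise<boolean>;
}
```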

The “fracking fluid” is standardized job specs, discovery, micropayments, and verification so that many small pockets of compute can be aggregated into a market.
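
For concreteness, a standardized job spec can be as simple as a signed Nostr event. The sketch below follows the NIP-90 shape (job request kinds 5000–5999, with "i", "output", "bid", and "relays" tags), but every value is illustrative:

```typescript
// Illustrative NIP-90-style job request; tag semantics follow the NIP-90 spec,
// but the kind number and payload values here are made up.
const jobRequest = {
  kind: 5001,                            // a job-type-specific request kind
  pubkey: "<requester pubkey>",
  created_at: Math.floor(Date.now() / 1000),
  tags: [
    ["i", "https://example.com/repo.tar.gz", "url"], // job input and its type
    ["output", "application/json"],                  // expected output format
    ["bid", "5000"],                                 // max price, in millisats
    ["relays", "wss://relay.example.com"],           // where to publish results
  ],
  content: "",
  // id and sig are computed by the signer before publishing
};
```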

Verifiable job types

Objectively verifiable jobs are the natural starting point: tests, builds, linting, embeddings, indexing. The consumer can check the result (e.g. a test's exit code, a hash of the output) before releasing payment; a minimal check is sketched after the list below. That enables pay-after-verify and keeps fraud low.

Conceptual job types might include:

  • Sandbox runs (tests, builds, scripts) — verify via exit code and optional output hash.
  • Embeddings / indexing — verify via deterministic hashes or checksums.
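
A pay-after-verify check can be this small. The sketch assumes a hypothetical result shape in which the provider returns an exit code, the output bytes, and a claimed SHA-256:

```typescript
import { createHash } from "node:crypto";

// Hypothetical result shape; nothing here is a defined wire format.
interface JobResult {
  exitCode: number;
  output: Uint8Array;
  claimedSha256: string;
}

// Release payment only if the run succeeded and the artifact matches its hash.
function verifyResult(result: JobResult): boolean {
  if (result.exitCode !== 0) return false;
  const actual = createHash("sha256").update(result.output).digest("hex");
  return actual === result.claimedSha256;
}
```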

As the market grows, more job types can be added with clear verification rules.

Reputation and routing

Providers build reputation by completing jobs correctly. Routing can prefer the cheapest provider that meets reliability thresholds. Failed or fraudulent results incur penalties. So the market rewards reliability and punishes bad behavior without a central gatekeeper.
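
One way to encode "cheapest provider that meets reliability thresholds" is a filter-then-sort. The provider fields below are assumptions for illustration, not a defined schema:

```typescript
// Hypothetical provider record tracked by the consumer (or a reputation service).
interface Provider {
  pubkey: string;
  priceSats: number;      // quoted price for this job type
  jobsCompleted: number;
  jobsFailed: number;
}

function successRate(p: Provider): number {
  const total = p.jobsCompleted + p.jobsFailed;
  return total === 0 ? 0 : p.jobsCompleted / total;
}

// Cheapest provider meeting the reliability bar; undefined if none qualify.
function route(providers: Provider[], minSuccessRate = 0.95): Provider | undefined {
  return providers
    .filter((p) => successRate(p) >= minSuccessRate)
    .sort((a, b) => a.priceSats - b.priceSats)[0];
}
```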

What it is not

A compute market does not require that consumer devices replace big clusters for every workload. The thesis is that a unified market routes each workload to the right tier: cheap batch jobs to idle devices, low-latency SLA work to data centers. As stranded supply is aggregated, prices for async and batch work can fall.

How this connects to agents

Agents need inference and execution. If they can request jobs over an open protocol (e.g. NIP-90), pay with sats, and only pay for verified results, they become buyers in a neutral market. That’s the direction: agents as economic actors with budgets and receipts, not locked into one provider’s API.
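
Putting the pieces together, an agent's purchase loop might look like the sketch below, reusing the hypothetical interfaces from the earlier sketch. The control flow is the point: request, await the result, verify, and only then pay, within budget. None of these names come from a real library:

```typescript
// Hypothetical end-to-end flow: request a job, verify the result, pay only if it checks out.
async function buyJob(
  transport: Transport,
  budget: Budget,
  payments: Payments,
  verifier: Verifier,
  jobRequest: unknown,
  priceSats: number,
  invoice: string,
  awaitResult: () => Promise<unknown>, // e.g. a subscription to the matching result event
): Promise<boolean> {
  if (!budget.canSpend(priceSats)) return false; // guardrail: stay within operator caps

  await transport.publish(jobRequest);
  const result = await awaitResult();

  if (!(await verifier.verify(result))) return false; // pay-after-verify: no valid result, no sats

  await payments.payInvoice(invoice, priceSats);
  budget.record(priceSats); // the receipt: spend is tracked per job
  return true;
}
```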

Go deeper