MCP
Mobile Cloud Platform™ · ♻ The Airbnb of Compute
Workloads ride on an interchangeable fleet of phones, laptops, and cloud machines. Same shape as Cloud Run — but on hardware that already exists. Yours, your friends', your community's. We are live now in a limited pilot — sign the NDA to join it, or send us your interest and we'll loop you in.
Don't build more datacenters. Use what's already on the planet. Every phone that takes a request is a datacenter we didn't have to pour. Less concrete, less coal, less rare-earth mining, less competition for water and silicon. The Airbnb of compute: rooms exist — we just need a way to share them.
★ The compute is already in your pocket
10 years ago, this much compute lived in a rack-mounted server, in a server room, behind a corporate IT department. It cost five figures, drew 500 watts, and only enterprises with deep pockets could justify it. Today it sits in everyone's pocket, paid for, idle 95% of the day.
Honest note: this comparison is about raw compute, not parity in every dimension. Phones weren't designed to host sustained workloads — cooling, thermal throttling, and battery management are real constraints when a phone starts taking real load. We expect (and are working on) innovations to close that gap. The point of the side-by-side isn't "your phone equals a 2014 server in every way" — it's that the silicon you carry in your pocket today rivals what filled a rack a decade ago, which makes the rent-vs-own economics of compute look very different from the hyperscaler-default story.
The math hyperscalers don't want you to do
AWS, Google, Microsoft, and Meta buy up virtually every datacenter-class accelerator on the planet: every NVIDIA H100 that ships, plus the custom silicon they design for themselves. Consumers can't compete in that deep-pocket arms race. But you don't have to. The phone in your pocket has 8 cores and 12 GB of RAM, server-class hardware that sits idle 95% of the day. MCP turns it into rentable cloud capacity, on the same APIs as Cloud Run, paid in real money.
| Fleet | Gross revenue/yr (serverless tier) | Who |
|---|---|---|
| 1 phone | $2,649 | you, today |
| 10 phones | $26.5K | you + friends |
| 100 phones | $265K | small co-op |
| 1,000 phones | $2.65M | community fleet |
| 10,000 phones | $26.5M | neighborhood |
| 100,000 phones | $265M | city-scale |
Per-year revenue for an 8-vCPU / 12-GB phone at 50% utilization:
| Tier | Comparable | vCPU/hr | GB-RAM/hr | 1 phone/yr | 1,000 phones/yr |
|---|---|---|---|---|---|
| Always-on VM | GCP e2-standard / AWS EC2 | $0.0335 | $0.0045 | $1,410 | $1.41M |
| Serverless | Google Cloud Run | $0.0648 | $0.0072 | $2,649 | $2.65M |
| AI inference | Together / Replicate (8B-class) | ~$0.50 | $0.0072 | $17,898 | $17.9M |
Math: yearly $ = vCPU_count × vCPU_rate/hr × 24 × 365 × utilization, plus the same for RAM. Numbers above use 50% utilization; phones at 100% always-on dispatch (server farms) hit 2× these. AI inference is the breakout: modern flagship phones already run on-device LLMs, and the cloud comparable is 10-50× more expensive.
Sources: Hyperscaler hardware capex (2024 Synergy Research Group, ~$245 B for top-4 hyperscalers — see SRG report) · H100 street price (Tom's Hardware, ~$30-40 K) · GCP Compute Engine pricing (cloud.google.com/compute/all-pricing) · Cloud Run pricing (cloud.google.com/run/pricing — $0.000018/vCPU-sec, $0.000002/GiB-sec, "always-allocated CPU" tier) · AI inference rates (Together AI, Replicate; on-device LLM throughput from llama.cpp benchmarks on Snapdragon 8 Gen 3 / Apple A17 Pro) · Modern flagship phone specs (Pixel 8 Pro, iPhone 15 Pro, Galaxy S24: 8-core CPU, 8-12 GB RAM).
All figures are gross revenue at the cited public list prices, pre-MCP fee. Real take-rate, taxes, electricity, and bandwidth are not modeled — this is the upper-bound consumer alternative to a hyperscaler datacenter, not a yield guarantee.
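The math line above can be reproduced in a few lines of Python. The rates, device specs, and 50% utilization are the table's published assumptions, and `yearly_revenue` is just an illustrative helper name, not an MCP API:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760

def yearly_revenue(vcpus, gb_ram, vcpu_rate, ram_rate, utilization=0.5):
    """Gross $/yr for one device: (CPU + RAM) hourly rate x hours x utilization."""
    hourly = vcpus * vcpu_rate + gb_ram * ram_rate
    return hourly * HOURS_PER_YEAR * utilization

# 8-vCPU / 12-GB phone at the Cloud Run "always-allocated" rates ($/hr)
serverless = yearly_revenue(8, 12, vcpu_rate=0.0648, ram_rate=0.0072)
print(round(serverless))  # -> 2649 per phone per year; double it at 100% utilization
```

Swapping in the other tiers' rates gives the other columns: the e2-standard rates yield about $1,410/yr, and the ~$0.50/vCPU-hr inference rate yields about $17,898/yr.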
★ Genesis · How this platform was born · A founder letter
My two-year fight. The denials. The pivot that became infrastructure. And a gift to every founder who comes next.
More than two hundred and fifty conversations. I stopped counting.
My runway was thinning. I was running on Google Cloud, building infrastructure for a thesis most people had not yet imagined.
I went to investors. I told them about Presence Economy™. I told them about the fabric. I told them about a world where physical reality could be addressed the way knowledge had been addressed by search.
For every yes, I averaged nine nos. Sometimes the same investor said no twice. Sometimes three times.
I applied to Google's startup credit program. They said no. I applied again. They said no again.
Then the cloud bill came due. An investor had committed — and the onboarding was supposed to come with GCP credit to keep me running. At the last minute, they pulled out. No check. No credit. No runway.
I went out hat in hand. Three percent of the company for one hundred thousand dollars — to anyone who could write the check. No takers.
One night I was on my phone, desperate, searching "how to minimize my cloud bill." Scrolling pricing calculators. Comparing instance types. Trying to shave dollars off a number that was already eating me alive.
And then I looked at the device in my hand.
I looked at it again.
This phone is a supercomputer. More powerful than the machines that put a man on the moon. As capable as the most powerful machines you could rent from a major cloud platform just a few years back. I was searching for how to afford compute on a device that was, itself, more compute than I needed.
Could I convert my mobile into my savior?
I started working that night. A quick proof of concept. Then another. It worked.
And the question that followed wouldn't leave me: if no one will give me compute, what if I didn't need anyone to give me compute?
I looked around. Three companies own the substrate beneath the entire AI economy. Pricing opaque. Margins extreme. Meanwhile, billions of devices sit idle for most of every day — phones, laptops, EVs, every pocket and parking lot in the world.
The world's largest compute fleet already existed. It just wasn't addressable.
So I made it addressable.
The platform you are reading this letter on right now? It IS that proof of concept, grown up. A real Cloud Run-style service running on phones, all day. This website is one. The auth service that powers sign-in is another. More are migrating off hyperscalers as you read this.
A gift, to every founder who comes after me
What happened to me must not happen to you.
Two years ago, building a software company required permission. Permission from a hyperscaler to give you compute. Permission from an investor to give you runway. Permission from someone, somewhere, to let you matter.
That world is ending. And you have inherited the keys.
I went through what I went through so the next founder would not have to.
If this site reaches you in your hardest hour — if you are on your fiftieth pitch or your two-hundredth no — let this be the message I wish someone had handed me at mile one:
The world has more than 7 billion idle phones, laptops, and home servers. They already exist. They're already powered. MCP routes work to them first — and only then to a fresh cloud VM. Less new hardware. Less energy. Less geopolitics over chips.
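The devices-first, cloud-last routing described above can be sketched roughly as follows. `Node`, its fields, and the VM fallback are hypothetical illustrations of the policy, not MCP's actual scheduler:

```python
from dataclasses import dataclass

@dataclass
class Node:
    node_id: str
    kind: str        # "phone" | "laptop" | "cloud_vm" (illustrative fields)
    free_vcpus: int

def route(nodes, needed_vcpus):
    """Prefer hardware that already exists; fall back to a fresh cloud VM
    only when no owned device can take the work."""
    candidates = [n for n in nodes
                  if n.kind in ("phone", "laptop") and n.free_vcpus >= needed_vcpus]
    if candidates:
        # Pick the device with the most headroom (one of many possible policies).
        return max(candidates, key=lambda n: n.free_vcpus)
    return Node("fresh-vm", "cloud_vm", needed_vcpus)  # last resort: new VM

fleet = [Node("pixel-8", "phone", 6), Node("mbp", "laptop", 2)]
print(route(fleet, 4).node_id)  # -> "pixel-8"
```

Only when the whole fleet is saturated does the sketch mint a cloud VM, which is the "less new hardware" property in miniature.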
Sign up with email. Onboard devices via QR. Deploy a static site at /my/sites. Watch it served live from your fleet at /u/dashboard.
Every served request gets a trace ID. A life-of-a-query view shows every node and step the router used. Per-device observability covers QPS, latency, CPU, and bytes, with Cloud-Monitoring-style charts and 10m → 90d zoom.
Mac binary at /install/mac, APK at /install/android, Linux at /install/linux. One binary runs on every device, advertises its capabilities, and takes work.
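The capability advertisement a device sends might look something like the sketch below. Every field name here is an assumption for illustration; MCP's actual wire format is not documented on this page:

```python
import json
import os
import platform

def capability_payload():
    """Build a hypothetical capability heartbeat for the current device.
    Field names are illustrative, not MCP's real schema."""
    return {
        "os": platform.system().lower(),   # e.g. "linux", "darwin"
        "arch": platform.machine(),        # e.g. "arm64", "x86_64"
        "vcpus": os.cpu_count() or 1,
        "accepts_work": True,
    }

print(json.dumps(capability_payload(), indent=2))
```

In this sketch the binary would post such a payload to the router on startup, so the scheduler knows which nodes can take which workloads.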