MCP · Hosted on Mobile Cloud Platform · ♻ The Airbnb of Compute · mobile-cloud-platform.com
Live now · GCP workloads are migrating to MCP, one at a time. This page? Running on MCP. The auth service that powers sign-in? Also on MCP. Built on dozens of patent-pending innovations.
↓ Read the founder's story · Genesis of this compute
Limited Pilot · Live now · NDA required

MCP

Mobile Cloud Platform™ · ♻ The Airbnb of Compute

Replace cloud VMs
with your devices.

Workloads ride on an interchangeable fleet of phones, laptops, and cloud machines. Same shape as Cloud Run — but on hardware that already exists. Yours, your friends', your community's. We are live now in a limited pilot — sign the NDA to join it, or send us your interest and we'll loop you in.

♻ Reuse what you own · Don't build more · save energy · save the planet
Join the limited pilot · Sign NDA · I'm interested · Already signed? Log in · Global cluster
SCAN TO SIGN NDA
QR code: scan to sign the limited pilot NDA on your phone
Scan with your phone camera to sign the NDA and join the limited pilot.
SCAN TO ADD INTEREST
QR code: scan to add yourself to the interest list
Not ready to sign? Drop your email and we'll loop you in.

Don't build more datacenters. Use what's already on the planet. Every phone that takes a request is a datacenter we didn't have to pour. Less concrete, less coal, less rare-earth mining, less competition for water and silicon. The Airbnb of compute: rooms exist — we just need a way to share them.

The compute is already in your pocket

Today's phone IS yesterday's $15,000 server.

10 years ago, this much compute lived in a rack-mounted server, in a server room, behind a corporate IT department. It cost five figures, drew 500 watts, and only enterprises with deep pockets could justify it. Today it sits in everyone's pocket, paid for, idle 95% of the day.

Today · Pocket-sized
2024 flagship phone
CPU: 8 cores · ARM
RAM: 12 GB
GPU: ~2 TFLOPS · built-in · free
Storage: 512 GB – 1 TB SSD
Power: ~5 W average
Footprint: fits in a pocket
Cost: ~$1,000 · already paid · 7B+ in service
VS
10 years ago · Server room only
2014 high-end rack server · 2U
CPU: 24 cores · 2× Xeon E5
RAM: 256 GB DDR4
GPU: ~1 TFLOPS · discrete GPU adds $5K
Storage: 2–8 TB SSD/HDD array
Power: ~500 W (cooling/UPS extra)
Footprint: 2U rack + IT staff
Cost: $15,000+ · rack · power · cooling · IT ops
Hyperscalers are still selling you 2014 economics: pay-per-hour for compute that already lives in your pocket. That model assumed ordinary people couldn't afford datacenter-class hardware. Today, ordinary people own datacenter-class hardware. Three companies still control the substrate. Pricing opaque. Margins extreme. Datacenters are the new mainframe: only deep-pocketed enterprises need to keep buying them. Everyone else just plugs in what they already paid for.

Honest note: this comparison is about raw compute, not parity in every dimension. Phones weren't designed to host sustained workloads — cooling, thermal throttling, and battery management are real constraints when a phone starts taking real load. We expect (and are working on) innovations to close that gap. The point of the side-by-side isn't "your phone equals a 2014 server in every way" — it's that the silicon you carry in your pocket today rivals what filled a rack a decade ago, which makes the rent-vs-own economics of compute look very different from the hyperscaler-default story.

The math hyperscalers don't want you to do

Hyperscalers spent $245 billion on hardware last year.
You already own the alternative.

AWS, Google, Microsoft, and Meta buy every high-end GPU on the planet — every Tensor chip, every M-class SoC, every NVIDIA H100. Consumers can't compete in the deep-pocket arms race. But you don't have to. The phone in your pocket has 8 cores and 12 GB of RAM — datacenter-class hardware that sits idle 95% of the day. MCP turns it into rentable cloud capacity, on the same APIs as Cloud Run, paid in real money.

$2,659 /yr
per modern phone · 8 vCPU · 12 GB · 50% utilization · Cloud Run pricing

1 phone · $2,659 · you, today
10 phones · $26.6K · you + friends
100 phones · $266K · small co-op
1,000 phones · $2.66M · community fleet
10,000 phones · $26.6M · neighborhood
100,000 phones · $266M · city-scale

Hyperscaler buy: one NVIDIA H100 GPU · $40,000
Your buy: phone you already own · $0 · paid for
New datacenters built (2024): 100+ globally
Phones already on planet: 7+ billion
Show me the math · 3 pricing tiers

Per-year revenue for an 8-vCPU / 12-GB phone at 50% utilization:

Tier · Comparable · vCPU/hr · GB-RAM/hr · 1 phone/yr · 1,000 phones/yr
Always-on VM · GCP e2-standard / AWS EC2 · $0.0335 · $0.0045 · $1,425 · $1.43M
Serverless · Google Cloud Run · $0.0648 · $0.0072 · $2,659 · $2.66M
AI inference · Together / Replicate (8B-class) · ~$0.50 · $0.0072 · $17,899 · $17.9M

Math: vCPU_count × hourly_rate × 24 × 365 × utilization + same for RAM. Numbers above use 50% utilization; phones at 100% always-on dispatch (server farms) hit 2× these. AI inference is the breakout — every phone today already runs on-device LLMs; the cloud comparable is 10-50× more expensive.
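The formula in the note above can be sketched in a few lines of Python. The hourly rates and the 50% utilization figure are the page's published assumptions, not measurements; the result lands a few dollars below the table's $2,659, which presumably comes down to rounding in the cited rates.

```python
HOURS_PER_YEAR = 24 * 365  # 8,760

def phone_revenue_per_year(vcpus, ram_gb, vcpu_rate, ram_rate, utilization=0.5):
    """Gross yearly revenue for one phone: vCPU-hours plus GiB-hours, scaled by utilization."""
    cpu = vcpus * vcpu_rate * HOURS_PER_YEAR * utilization
    ram = ram_gb * ram_rate * HOURS_PER_YEAR * utilization
    return cpu + ram

# Cloud Run "always-allocated" list rates: $0.000018/vCPU-sec, $0.000002/GiB-sec
serverless = phone_revenue_per_year(8, 12, 0.000018 * 3600, 0.000002 * 3600)
print(f"1 phone, Cloud Run tier: ${serverless:,.0f}/yr")  # ~$2,649; the table's $2,659 is a rounding difference

# AI inference tier at ~$0.50 per vCPU-hour dominates everything else
inference = phone_revenue_per_year(8, 12, 0.50, 0.0072)
print(f"1 phone, inference tier: ${inference:,.0f}/yr")  # ~$17,898
```

Setting `utilization=1.0` (always-on dispatch, the "server farm" case) exactly doubles each figure, as the note says.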

Run my fleet → see my real number

Sources: Hyperscaler hardware capex (2024 Synergy Research Group, ~$245 B for top-4 hyperscalers — see SRG report) · H100 street price (Tom's Hardware, ~$30-40 K) · GCP Compute Engine pricing (cloud.google.com/compute/all-pricing) · Cloud Run pricing (cloud.google.com/run/pricing — $0.000018/vCPU-sec, $0.000002/GiB-sec, "always-allocated CPU" tier) · AI inference rates (Together AI, Replicate; on-device LLM throughput from llama.cpp benchmarks on Snapdragon 8 Gen 3 / Apple A17 Pro) · Modern flagship phone specs (Pixel 8 Pro, iPhone 15 Pro, Galaxy S24: 8-core CPU, 8-12 GB RAM).

All figures are gross revenue at the cited public list prices, pre-MCP fee. Real take-rate, taxes, electricity, and bandwidth are not modeled — this is the upper-bound consumer alternative to a hyperscaler datacenter, not a yield guarantee.

Genesis · How this platform was born · A founder letter

When every door closed,
I built a new room.

My two-year fight. The denials. The pivot that became infrastructure. And a gift to every founder who comes next.

More than two hundred and fifty conversations. I stopped counting.

My runway was thinning. I was running on Google Cloud, building infrastructure for a thesis most people had not yet imagined.

I went to investors. I told them about Presence Economy™. I told them about the fabric. I told them about a world where physical reality could be addressed the way knowledge had been addressed by search.

For every yes, I averaged nine nos. Sometimes the same investor said no twice. Sometimes three times. I stopped tracking after two hundred and fifty conversations.

$1M
pre-seed raised
~50
angel checks
~9:1
no-to-yes ratio
250+
conversations

I applied to Google's startup credit program. They said no. I applied again. They said no again.

Then the cloud bill came due. An investor had committed — and the onboarding was supposed to come with GCP credit to keep me running. At the last minute, they pulled out. No check. No credit. No runway.

I went out hat in hand. Three percent of the company for one hundred thousand dollars — to anyone who could write the check. No takers.

Two years of torture I would not wish on anyone.

One night I was on my phone, desperate, searching "how to minimize my cloud bill." Scrolling pricing calculators. Comparing instance types. Trying to shave dollars off a number that was already eating me alive.

And then I looked at the device in my hand.

I looked at it again.

This phone is a supercomputer. More powerful than the machines that put a man on the moon. As capable, just a few years back, as the most powerful machines you could rent on the major cloud platforms. I was searching for how to afford compute — on a device that was, itself, more compute than I needed.

Could I convert my mobile into my savior?

I started working that night. A quick proof of concept. Then another. It worked.

And the question that followed wouldn't leave me: if no one will give me compute, what if I didn't need anyone to give me compute?

I looked around. Three companies own the substrate beneath the entire AI economy. Pricing opaque. Margins extreme. Meanwhile, billions of devices sit idle for most of every day — phones, laptops, EVs, every pocket and parking lot in the world.

The world's largest compute fleet already existed. It just wasn't addressable.

So I made it addressable.

The platform you are reading this letter on right now? It IS that proof of concept, grown up. A real Cloud Run-style service running on phones, all day. This website is one. The auth service that powers sign-in is another. More are migrating off hyperscalers as you read this.

A gift, to every founder who comes after me

What happened to me must not happen to you.

Two years ago, building a software company required permission. Permission from a hyperscaler to give you compute. Permission from an investor to give you runway. Permission from someone, somewhere, to let you matter.

That world is ending. And you have inherited the keys.

I.
AI gives you engineering.
Code, design, copy, research. The cost of building has fallen to almost zero. One human, one model, infinite leverage.
II.
MCP gives you compute.
Your mobile is your cloud. Every device you and your family already own becomes the substrate. No GCP credit. No AWS bill. No permission gate. The hyperscaler is the airline life vest, not the architecture.
III.
The world is the rest.
A laptop, a phone, an idea, and the courage to keep going after the two-hundred-and-fiftieth no. That is the entire stack now.

I went through what I went through so the next founder would not have to.

If this site reaches you in your hardest hour — if you are on your fiftieth pitch or your two-hundredth no — let this be the message I wish someone had handed me at mile one:

You don't need anyone else but you. Build.
Dinanath Gupt · Someone who could not walk away from the problem.

For the planet

The world has more than 7 billion idle phones, laptops, and home servers. They already exist. They're already powered. MCP routes work to them first — and only then to a fresh cloud VM. Less new hardware. Less energy. Less geopolitics over chips.

For customers

Sign up with email. Onboard devices via QR. Deploy a static site at /my/sites. Watch it served live from your fleet at /u/dashboard.

For developers

Every served request gets a trace ID. The "Life of a query" view shows every node and step the router used. Per-device observability covers QPS, latency, CPU, and bytes, with Cloud-Monitoring-style charts from 10-minute to 90-day zoom.

For operators

Mac binary at /install/mac, APK at /install/android, Linux binary at /install/linux. One binary runs on every device; it advertises its capabilities and takes work.

⚠ Limited Pilot · Alpha service. All data on this platform is experimental and may be deleted at any time without notice. Do not store anything you cannot afford to lose. Use at your own risk under the NDA you signed. Found a bug or have feedback? Tell me →