The Goal
In my last post, I mapped out five builds — each one adding a new Cloudflare product to this site. Today I knocked out the first three: Workers, Durable Objects, and R2 Storage. These are the three pillars of Cloudflare’s developer platform, and as of today, all three are running in production on saltwaterbrc.com.
I’m going to walk through each build — what it is, how I set it up, and what I learned. But I’m also going to share the mistakes, because that’s where the real lessons are.
Build 1: Workers (Pages Functions)
A Worker is a program that runs when someone makes a request. It runs on Cloudflare’s edge — the nearest data center to whoever is making the request — and responds in milliseconds. No origin server. No containers. Just code.
I didn’t deploy a standalone Worker though. I used Pages Functions, which is a simpler path. You create a functions/ directory in your Pages project, drop a JavaScript file in it, and Cloudflare automatically turns it into a Worker. The file path becomes the route: functions/api/status.js becomes /api/status.
Here’s the thing that most people (including me before today) don’t realize: Pages Functions ARE Workers. Under the hood, Cloudflare compiles them into Workers and deploys them to the edge. Same runtime, same performance. The difference is deployment — Pages Functions auto-deploy when you push to GitHub, while standalone Workers need their own wrangler deploy.
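To make that concrete, here’s roughly what a status endpoint looks like as a Pages Function. The fields in the response body are illustrative, not my exact production code, but the shape is the real API: you export an `onRequest` handler, and the file path becomes the route.

```javascript
// functions/api/status.js — a sketch; the real endpoint returns more fields.
// Pages Functions export an onRequest handler; Cloudflare compiles this into
// a Worker and routes /api/status to it based on the file path.
export function onRequest(context) {
  return new Response(JSON.stringify(buildStatus(context.request.cf)), {
    headers: { "content-type": "application/json" },
  });
}

// Pull a couple of fields from request.cf, the per-request metadata object
// Cloudflare attaches at the edge. Kept as a separate function so the logic
// is easy to test outside the Workers runtime.
export function buildStatus(cf = {}) {
  return {
    site: "saltwaterbrc.com",
    servedFrom: cf.colo ?? "unknown", // IATA code of the serving data center
    timestamp: new Date().toISOString(),
  };
}
```

Drop that file at `functions/api/status.js`, push to GitHub, and Pages deploys it as a Worker answering at `/api/status`.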
I built two endpoints. Try them right now:
- saltwaterbrc.com/api/status — site metadata + which data center served you
- saltwaterbrc.com/api/edge-info — full request inspector: your location, TLS, network, bot signals
That second one is the impressive demo. Every single request to a Worker includes geolocation, network identity, TLS details, and bot detection signals — all for free. No third-party APIs. No extra cost. This is the same data that powers WAF rules, Bot Management, and geo-routing. And it’s available on every Worker request.
When a customer asks “how does Cloudflare know where our users are?” — I can point to that endpoint and say “hit it yourself.”
Build 2: Durable Objects (The Visitor Counter)
Here’s the simplest way I can explain Durable Objects, and it’s the explanation that finally made it click for me:
A Worker is a program that runs, responds, and forgets everything. It has no memory. If you asked it “how many people have visited this site?” it wouldn’t know — because every time it runs, it starts fresh.
A Durable Object is a Worker that remembers. It has a notebook — persistent storage — that survives between requests. You can write things down, and the next time someone asks, the information is still there.
But here’s the key: a Durable Object can’t exist on its own. It needs a Worker to be its home. Think of the Worker as the building, and the Durable Object as a tenant inside that building with its own private filing cabinet.
I built a visitor counter. Every time you load any page on this site, a small piece of JavaScript calls the Durable Object and says “add one.” The Durable Object reads the count from its storage, increments it, saves it, and sends the number back. It’s displayed in the footer of every page — scroll down and you’ll see it.
Simple concept. But the underlying technology is what powers multiplayer game sessions in gaming, real-time fund administration in financial services, and connected-vehicle state management in automotive. Each entity — each game match, each transaction, each vehicle — gets its own Durable Object instance with its own persistent, strongly consistent state. No database. No race conditions. Single-threaded execution guaranteed.
The visitor counter on this site uses the same technology that could manage millions of concurrent gaming sessions. The scale is different. The architecture is identical.
Build 3: R2 Storage
R2 is object storage — files, images, documents. It’s S3-compatible, meaning it uses the exact same API that every AWS customer already knows. The difference: zero egress fees.
Every time an AWS customer reads a file from S3 — serves an image, pulls a backup, streams content — they pay data transfer costs. For large enterprises, that’s tens to hundreds of thousands of dollars a month. R2 eliminates that entirely.
I created an R2 bucket and built a Resources page where you can download all of my training docs as PDFs. Those PDFs are served from the edge. Every download, from anywhere in the world, costs nothing in data transfer.
Is a blog site the most compelling R2 use case? No. But when I’m sitting across from a healthcare company asking about medical imaging storage, or a media company asking about content delivery costs, I can explain R2 because I’ve used it. I created the bucket, configured the binding, uploaded files, and served them through a Worker. I know how it works, not because I read the docs, but because I built it.
What Broke
This is the part they don’t put in the product demos. And honestly, this is the part that taught me the most.
The wrangler.toml saga. Pages projects use a config file called wrangler.toml. I spent hours hitting “Unknown internal error” on every deploy because this file was missing a single property: pages_build_output_dir. The error message told me the file was invalid. I should have listened sooner.
Bindings don’t go where you think they go. For GitHub-connected Pages projects, the wrangler.toml file is NOT where you configure bindings (connections to Durable Objects, R2 buckets, etc.). They go in the Cloudflare dashboard under Settings → Bindings. I burned multiple deploy cycles trying to make the file work before learning this. The dashboard is the source of truth.
Pages Functions + Durable Objects = publish errors. This was the big one. Pages Functions that reference Durable Object bindings caused deploy failures every single time. The solution: call the standalone counter Worker directly from the frontend JavaScript instead of going through a Pages Function middleman. Same Durable Object, same counter — different path to reach it.
R2 has to be enabled first. Running the CLI command to create a bucket will fail if you haven’t enabled R2 in the dashboard. Small thing, but it’ll stop you cold if you don’t know.
Every one of these errors is something a customer could hit. And now I know the fix for each one — not because I read a troubleshooting guide, but because I hit them myself and worked through them.
What’s Live Now
As of today, saltwaterbrc.com runs on five Cloudflare products:
- Pages — hosting the site, auto-deployed from GitHub.
- Workers (Pages Functions) — two live API endpoints returning real data from the edge.
- Durable Objects — a persistent visitor counter tracking every page load across the entire site.
- R2 — object storage serving downloadable training docs as PDFs.
- DNS — Cloudflare as authoritative DNS, where it all started.
Total cost: $0/month.
What’s Next
Phase 4 is Workers AI and the Sandbox SDK. AI inference running at the edge — no external API calls, no data leaving the network. And an interactive code playground where visitors can spin up a sandbox and deploy a Worker live in the browser.
But first, I’m going to let Phase 3 settle. The training docs from today’s builds are already on the Resources page — download them, share them, use them. Written for salespeople, by a salesperson.
If you sell a developer platform and you haven’t built on it yet, start today. It doesn’t matter if your first build is simple. What matters is that you hit the errors, read the docs, and figure it out. That’s what turns a pitch into a conversation.