When I set out to build this portfolio site, I had one requirement: it had to be fast everywhere, not just in US-East-1. That led me to Cloudflare Workers — a serverless edge runtime that deploys your code to 300+ data centers worldwide. Combined with TanStack Start for full-stack React, the result is a site that serves dynamic, server-rendered pages from the nearest edge node to any visitor on the planet.
But deploying a full-stack React app to Workers isn't the same as deploying to Node.js. The runtime constraints forced me to rethink assumptions I'd held for years — and the result was cleaner, faster code.
## Why TanStack Start on the Edge
TanStack Start is one of the few React meta-frameworks built with edge runtimes as a first-class target. It provides file-based routing, server functions, data loaders, and SSR — all designed to work within the constraints of environments like Cloudflare Workers.
The key architectural decision: TanStack Start separates client and server code at the bundler level. Server functions run in the Worker, client code ships to the browser, and the framework handles the boundary automatically. No manual API route creation for basic data fetching.
## The Vite Configuration
The setup starts with `vite.config.ts`. Cloudflare provides an official Vite plugin that handles bundling for the Workers runtime:
```ts
import { defineConfig } from 'vite'
import { cloudflare } from '@cloudflare/vite-plugin'
import { tanstackStart } from '@tanstack/react-start/plugin/vite'
import viteReact from '@vitejs/plugin-react'
import tailwindcss from '@tailwindcss/vite'

export default defineConfig({
  plugins: [
    cloudflare({ viteEnvironment: { name: 'ssr' } }),
    tailwindcss(),
    tanstackStart(),
    viteReact({
      babel: {
        plugins: ['babel-plugin-react-compiler'],
      },
    }),
  ],
})
```
The `cloudflare` plugin tells Vite to target the Workers runtime for server-side code. It handles polyfills for Node.js APIs that don't exist in Workers and configures the output format for Workers deployment.
## File-Based Routing with Loaders
TanStack Start's routing system maps files to URL paths. Each route can define a loader that runs on the server (the Worker) before the page renders:
```ts
import { createFileRoute, notFound } from '@tanstack/react-router'

export const Route = createFileRoute('/blog/$slug')({
  loader: async ({ params }) => {
    // blogPosts is the compile-time content array, imported elsewhere
    const post = blogPosts.find((p) => p.slug === params.slug)
    if (!post) throw notFound()
    return { post }
  },
  head: ({ loaderData }) => ({
    meta: [
      { title: `${loaderData?.post.title} — My Blog` },
      { name: 'description', content: loaderData?.post.description },
    ],
  }),
  component: BlogPost,
})
```
The loader runs on the edge, so data fetching happens close to the user. The head function generates meta tags for SEO — fully server-rendered, no client-side hydration delay for search engine crawlers.
## What You Can't Do on Workers
Cloudflare Workers run on the V8 engine, not Node.js. That means:
- No `fs` module. You can't read files at runtime; all content must be bundled at build time or fetched from an external source (KV, R2, D1, or an API).
- No long-running processes. Workers enforce a CPU time limit (around 10ms per request on the free plan, more on paid plans), so heavy computation needs to happen elsewhere.
- No native Node modules. Libraries that depend on `path`, `crypto`, `buffer`, or other Node built-ins need polyfills via the `nodejs_compat` compatibility flag.
- Memory limits. Workers get 128MB of memory, so large in-memory datasets won't work.
These constraints sound limiting, but they push you toward better architecture. Instead of reading files at runtime, you compile content at build time (this is how the MDX blog on this site works). Instead of heavy computation on the edge, you offload to queues or external services.
## Deployment with Wrangler
Deploying is a two-step process: build the Vite project, then deploy with Wrangler.
```sh
# Build the production bundle
pnpm build

# Deploy to Cloudflare Workers
wrangler deploy
```
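To keep deploys to a single command, both steps can be wired into `package.json`; the script contents here are an assumption about the project's setup, not copied from it:

```json
{
  "scripts": {
    "build": "vite build",
    "deploy": "pnpm build && wrangler deploy"
  }
}
```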
The `wrangler.jsonc` configuration file tells Wrangler how to build and deploy the Worker:
```jsonc
{
  "name": "my-portfolio",
  "main": ".output/worker.js",
  "compatibility_date": "2025-01-01",
  "compatibility_flags": ["nodejs_compat"]
}
```
The `nodejs_compat` flag enables polyfills for common Node.js APIs, which some dependencies require. It's a pragmatic escape hatch that makes most npm packages work on Workers without modification.
## Performance Results
After deploying this site to Workers:
- Time to First Byte (TTFB): Under 50ms globally — the Worker runs in whichever data center is closest to the visitor
- Lighthouse Performance score: 95+ consistently
- Cold start time: Effectively zero — Workers don't have traditional cold starts like Lambda or Cloud Run
- Deploy time: Under 30 seconds from `git push` to live
The edge model eliminates the "pick a region" problem entirely. A visitor in Tokyo gets the same TTFB as a visitor in New York.
## Takeaways
If you're considering Cloudflare Workers for your next project, here's what I'd keep in mind:
- Embrace the constraints. The lack of `fs` and limited memory push you toward compile-time solutions and external storage — both of which are better patterns anyway.
- Use TanStack Start's loaders for data fetching. They run on the edge and keep your components clean.
- Enable `nodejs_compat` early. You'll need it for at least one dependency, and it's better to enable it upfront than debug cryptic module resolution errors later.
- Test locally with `wrangler dev`. It emulates the Workers runtime, so you catch edge-specific issues before deployment.
The combination of TanStack Start and Cloudflare Workers is one of the most compelling full-stack stacks I've used. It's fast, it's simple, and it forces you to write better code.