
The Solo Developer's Stack in 2026: What I Use and Why

Eight years of production software, distilled into one stack. React 19, TanStack Start, Cloudflare Workers, PostgreSQL, and the AI tools that save me 3 hours a day.

Developer Tools · Architecture · React · Cloudflare Workers

People ask me what tools I use. Not in a casual way — they want the full stack, the reasoning, and the trade-offs. I have been building production software as a solo developer and small-team lead for eight years, and in that time I have tried nearly everything. Frameworks, databases, hosting providers, AI tools — I have used them, fought them, and occasionally thrown them away.

Here is my stack in March 2026. Every choice is earned.

The Frontend: React 19 + TanStack Start

I use React. Still. I have tried Svelte, Solid, and Vue in side projects, and they are all excellent. But React has the largest ecosystem, the most mature tooling, and the deepest talent pool. When I am building for a client who might hire another developer to maintain the project, React is the safe bet.

The framework layer is TanStack Start. I chose it over Next.js for a specific reason: TanStack Start is framework-agnostic in its routing and data loading patterns, and it deploys natively to Cloudflare Workers without an adapter layer. Next.js is tightly coupled to Vercel's infrastructure — it works elsewhere, but you feel the friction. TanStack Start treats edge runtimes as first-class targets.

The stack:

  • TanStack Router for file-based routing with type-safe params
  • TanStack Query for server state management
  • TanStack Store for client state (replaces Redux/Zustand for my use cases)
  • TanStack Form for complex form handling

Using the TanStack ecosystem exclusively might sound limiting. It is not. These libraries are designed to compose, and their type safety is the best in the React ecosystem. I have shipped three production applications on this stack, and the DX has been excellent.
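Wiring this up is mostly Vite configuration. Here is a minimal sketch — the plugin import paths and names are assumptions that vary by package version, so treat this as illustrative rather than copy-paste:

```typescript
// vite.config.ts — minimal sketch; plugin names/paths are assumptions
// and may differ across versions of these packages.
import { defineConfig } from "vite";
import { cloudflare } from "@cloudflare/vite-plugin";
import { tanstackStart } from "@tanstack/react-start/plugin/vite";

export default defineConfig({
  plugins: [
    cloudflare(),    // dev-time Workers runtime emulation + deploy build
    tanstackStart(), // routing, server functions, SSR
  ],
});
```

No adapter package, no platform-specific entry point — the same config serves `vite dev` locally and the Workers build for deployment.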

The Styling: Tailwind CSS v4

Tailwind v4 moved to a CSS-first configuration model. No more tailwind.config.js — you define your design tokens in CSS using @theme. This aligns with how I think about design systems: tokens are CSS, not JavaScript.

@import 'tailwindcss';

@theme {
  --color-primary: oklch(0.55 0.15 230);
  --color-surface: oklch(0.98 0.005 230);
  --font-sans: 'Manrope', sans-serif;
  --font-display: 'Fraunces', serif;
}

I do not use a component library. Not shadcn/ui, not Radix, not MUI. For the types of applications I build — dashboards, data tools, content platforms — I have found that starting from Tailwind utility classes and building custom components gives me more control and less bloat than adapting a pre-built library to my design.

The trade-off is speed. A component library gets you from zero to "looks decent" in hours. Custom components take days. But the result is tighter, faster, and exactly what I want.

The Backend: Cloudflare Workers

My entire backend runs on Cloudflare Workers. Not Lambda, not Cloud Run, not a VPS. Workers.

The reasons:

  • No cold starts. Lambda and Cloud Run have cold start latency that ranges from 200ms to several seconds. Workers start in under 5ms.
  • Global by default. Code runs in 300+ data centers. A user in Sydney gets the same latency as a user in San Francisco.
  • Simple pricing. The free tier is generous (100,000 requests/day). Paid plans are predictable.
  • Vite-native deployment. The @cloudflare/vite-plugin integrates directly into my build pipeline.

The constraints are real: no long-running processes, no Node.js built-in APIs unless you enable the nodejs_compat compatibility flag (and no native modules at all), and a 128MB memory limit. But these constraints push toward better architecture — compile-time computation, stateless request handlers, external storage for anything persistent.

For AI features specifically, Workers excels as an edge proxy. My LLM API calls go through a Worker that adds authentication, rate limiting, and logging before proxying to the provider. The user's request travels to the nearest Worker (50ms), which forwards to the LLM provider. The first token arrives faster because the Worker is closer to the user than a centralized server would be.
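The proxy itself is small. A minimal sketch, using only web-standard APIs — the header names, env bindings, and the pure `isAuthorized` helper are my own illustrative choices, not a fixed Workers convention:

```typescript
// Minimal sketch of an LLM edge proxy Worker. Env bindings and the
// auth scheme are illustrative assumptions.
interface Env {
  PROXY_TOKEN: string;       // shared secret issued to my own clients
  ANTHROPIC_API_KEY: string; // server-side provider key, never sent to browsers
}

// Pure function so the auth check is unit-testable without a network.
export function isAuthorized(req: Request, token: string): boolean {
  return req.headers.get("Authorization") === `Bearer ${token}`;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    if (!isAuthorized(request, env.PROXY_TOKEN)) {
      return new Response("Unauthorized", { status: 401 });
    }

    // Rate limiting and usage logging would slot in here
    // (e.g. backed by KV or Durable Objects).
    console.log(`proxying ${request.method} ${new URL(request.url).pathname}`);

    // Forward the request body upstream, swapping in the server-side key.
    return fetch("https://api.anthropic.com/v1/messages", {
      method: "POST",
      headers: {
        "content-type": "application/json",
        "x-api-key": env.ANTHROPIC_API_KEY,
        "anthropic-version": "2023-06-01",
      },
      body: request.body,
    });
  },
};
```

The client never sees the provider key; it authenticates against the Worker with its own token, and the Worker streams the provider's response straight back.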

The Database: PostgreSQL

PostgreSQL is my only database. As I wrote in a dedicated post, it handles vector storage (pgvector), conversation history, prompt versioning, usage analytics, and job scheduling (pg_cron). One database, one backup, one connection pool.

I host it on Neon for serverless scaling and branching. Neon gives me:

  • Autoscaling: Scales to zero when idle, scales up under load
  • Branching: Create a copy of the production database for testing in seconds
  • Generous free tier: More than enough for development and small production workloads

For applications that outgrow Neon's free tier, I would move to Supabase or a managed PostgreSQL instance on Railway or Render.

The AI Layer: Multi-Provider with Adapter Pattern

I do not commit to a single AI provider. My applications use an adapter pattern that abstracts the provider behind a common interface. The active provider is controlled by an environment variable.

In practice:

  • Primary: Anthropic (Claude) for complex reasoning and long-context tasks
  • Secondary: OpenAI (GPT-4o) for structured output and function calling
  • Budget: OpenAI (GPT-4o-mini) or Anthropic (Haiku) for classification and simple tasks
  • Embeddings: OpenAI (text-embedding-3-small) — the price-to-performance ratio is unbeatable

The adapter pattern has saved me twice: once when Anthropic had a major outage, and once when OpenAI changed their pricing structure. Both times, I switched providers with a single environment variable change and zero code modifications.
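The shape of the adapter is simple. A sketch with hypothetical names — the real interface would also cover streaming, tool use, and token accounting:

```typescript
// Hypothetical provider adapter sketch; names and shape are illustrative.
interface ChatAdapter {
  name: string;
  complete(prompt: string): Promise<string>;
}

const anthropic: ChatAdapter = {
  name: "anthropic",
  async complete(_prompt) {
    // Real code would call the Anthropic Messages API here.
    throw new Error("not wired up in this sketch");
  },
};

const openai: ChatAdapter = {
  name: "openai",
  async complete(_prompt) {
    // Real code would call the OpenAI Chat Completions API here.
    throw new Error("not wired up in this sketch");
  },
};

const adapters: Record<string, ChatAdapter> = { anthropic, openai };

// The whole switch is one environment variable, e.g. AI_PROVIDER=openai.
export function selectProvider(envValue: string | undefined): ChatAdapter {
  const adapter = adapters[envValue ?? "anthropic"];
  if (!adapter) throw new Error(`unknown AI provider: ${envValue}`);
  return adapter;
}
```

Application code only ever sees `ChatAdapter`, so a provider outage or price change is a redeploy with one env var flipped.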

The AI Coding Tools

This is where my stack has changed the most in the last year.

Claude Code is my primary coding tool. It runs in the terminal, understands my full codebase, and handles multi-file refactoring better than any IDE-based tool I have tried. I use it for:

  • Boilerplate generation (route files, API endpoints, database schemas)
  • Refactoring across multiple files
  • Code review and bug detection
  • Writing tests from existing implementations

I estimate it saves me 2–3 hours per day on a typical development day. The compound effect over weeks is significant — I ship features that would have taken a solo developer twice as long two years ago.

Development Environment

  • Editor: VS Code with minimal extensions (TypeScript, Tailwind IntelliSense, Error Lens)
  • Terminal: Warp with Claude Code integration
  • Version control: Git with conventional commits, no CI/CD pipeline for personal projects (deploy manually via wrangler deploy)
  • Package manager: pnpm — faster than npm, less quirky than yarn

I do not use Docker for local development. Workers run on V8, not Node.js, so Docker does not add value. wrangler dev emulates the Workers runtime locally, which is all I need.

Monitoring and Observability

For a solo developer, elaborate monitoring is a waste. I use:

  • Cloudflare Analytics: Built into Workers, free, shows request counts, error rates, and latency percentiles
  • PostgreSQL queries: Custom analytics tables that I query directly when I need insights
  • Sentry: Error tracking for production exceptions (free tier is sufficient)

I do not use Datadog, New Relic, or Grafana. These tools are designed for teams of engineers managing dozens of services. For a solo developer managing one database and one edge runtime, they are overkill.

What I Do Not Use

Equally important is what I have deliberately excluded:

  • ESLint / Prettier: TypeScript strict mode catches most issues. The React Compiler handles memoization. I rely on TypeScript's type system as my primary quality gate.
  • Storybook: I build and test components in the actual application. Storybook adds value for design systems shared across teams, but not for solo projects.
  • GraphQL: REST with structured responses is simpler and sufficient for my use cases. GraphQL adds a query language, a schema definition layer, and client-side caching complexity that I do not need.
  • Kubernetes / Terraform: Cloudflare Workers eliminates the need for container orchestration. Deployment is wrangler deploy, not a CI pipeline.
  • Dedicated vector database: pgvector handles everything.

The Philosophy

My stack optimizes for three things, in this order:

  1. Simplicity. Fewer services, fewer dependencies, fewer things that can break at 2 AM. Every tool I add must earn its complexity.
  2. Speed. Both development speed (how fast can I ship a feature?) and runtime speed (how fast does it reach the user?). These are rarely in conflict.
  3. Portability. I can move off any single provider without rewriting my application. The adapter pattern for AI, standard PostgreSQL for data, and Vite for the build system ensure nothing is proprietary.

This stack is not the right choice for a 50-person engineering team building a distributed system. It is the right choice for a solo developer or small team shipping production applications that need to be fast, reliable, and maintainable by one person.

The best stack is the smallest one that solves your problem. Add complexity only when the problem demands it, not when the industry tells you to.

