
How to Switch AI Providers Without Rewriting Your App

The adapter pattern for LLM providers — how to build a provider-agnostic AI layer in TypeScript so you can switch between OpenAI, Anthropic, and Google without touching your application code.

AI · TypeScript · Architecture · LLM

It was a Friday evening when the email hit my inbox. Anthropic had just updated their pricing model, and for our high-volume chat application, the costs were about to triple. We were tightly coupled to their API across our entire codebase. I spent that weekend in a frantic, caffeine-fueled haze, refactoring our entire AI integration just to keep the lights on. That was the moment I realized that building an app around a single provider is not just technical debt; it is a business risk.

If you are building AI-powered features, you need the ability to switch AI providers without rewriting your app. Vendor lock-in is the silent killer of production applications. Whether it is a sudden price hike, a model deprecation, or a regional outage, you need to be able to pivot your infrastructure in minutes, not days.

The Adapter Pattern for LLMs

The secret to LLM portability is the adapter pattern. Instead of calling OpenAI or Anthropic directly in your components, you define a common interface that your application understands. This abstraction layer acts as a translator between your business logic and the provider-specific SDKs.

Here is what that interface looks like in TypeScript.

interface AIProviderAdapter {
  sendMessage(prompt: string, options?: { model?: string }): Promise<string>;
  streamMessage(prompt: string, onChunk: (chunk: string) => void): Promise<void>;
  embed(text: string): Promise<number[]>;
}

By enforcing this contract, your application logic remains agnostic. Whether you are using OpenAI, Anthropic, or Google, your code just calls sendMessage. You then implement this interface for each provider.

class OpenAIAdapter implements AIProviderAdapter {
  async sendMessage(prompt: string) {
    // OpenAI specific implementation
    return "response from OpenAI";
  }
  // ... implement other methods
}

class AnthropicAdapter implements AIProviderAdapter {
  async sendMessage(prompt: string) {
    // Anthropic specific implementation
    return "response from Anthropic";
  }
  // ... implement other methods
}
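The stubs above leave out the actual network call. Here is a rough sketch of a fuller sendMessage, assuming OpenAI's chat completions REST endpoint (the interface is narrowed to sendMessage to keep the sketch short, the default model name is illustrative, and fetch is injected so the class can be exercised without a live API key):

```typescript
interface AIProviderAdapter {
  sendMessage(prompt: string, options?: { model?: string }): Promise<string>;
}

class OpenAIAdapter implements AIProviderAdapter {
  constructor(
    private apiKey: string,
    // Injectable for tests; defaults to the global fetch (Node 18+).
    private fetchFn: typeof fetch = fetch,
  ) {}

  async sendMessage(prompt: string, options?: { model?: string }): Promise<string> {
    const res = await this.fetchFn("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${this.apiKey}`,
      },
      body: JSON.stringify({
        model: options?.model ?? "gpt-4o-mini", // illustrative default
        messages: [{ role: "user", content: prompt }],
      }),
    });
    if (!res.ok) {
      throw new Error(`OpenAI request failed: ${res.status}`);
    }
    const data = (await res.json()) as {
      choices: { message: { content: string } }[];
    };
    return data.choices[0].message.content;
  }
}
```

Error handling here is deliberately minimal; a production adapter would also map provider-specific error codes onto a common error type so callers stay vendor-agnostic.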

Building the Provider Factory

Once you have your adapters, you need a way to instantiate the correct one at runtime. A simple factory function, driven by environment variables, makes switching providers as easy as changing a single string in your configuration.

function getAIProvider(providerName: string): AIProviderAdapter {
  switch (providerName) {
    case 'openai':
      return new OpenAIAdapter();
    case 'anthropic':
      return new AnthropicAdapter();
    default:
      throw new Error(`Unsupported provider: ${providerName}`);
  }
}

// Usage in your application
const provider = getAIProvider(process.env.AI_PROVIDER || 'openai');
await provider.sendMessage("Hello, world!");

This approach allows you to deploy your application with one provider, and if you need to switch, you just update the AI_PROVIDER environment variable and restart your service.
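Outages do not always wait for a redeploy, so it can be worth going one step beyond the environment variable. As a sketch (the FallbackProvider name and its simple try-in-order policy are my own, not from any SDK), the same interface lets you chain adapters so a failing provider automatically hands off to the next one:

```typescript
interface AIProviderAdapter {
  sendMessage(prompt: string, options?: { model?: string }): Promise<string>;
}

// Tries each adapter in order; moves on to the next when one throws.
class FallbackProvider implements AIProviderAdapter {
  constructor(private adapters: AIProviderAdapter[]) {}

  async sendMessage(prompt: string, options?: { model?: string }): Promise<string> {
    let lastError: unknown;
    for (const adapter of this.adapters) {
      try {
        return await adapter.sendMessage(prompt, options);
      } catch (err) {
        lastError = err; // record the failure and try the next provider
      }
    }
    throw new Error(`All providers failed: ${String(lastError)}`);
  }
}
```

In production you would likely add per-provider timeouts and telemetry before failing over, but the shape stays the same: the caller still sees one AIProviderAdapter.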

"The goal of abstraction is not to hide complexity, but to manage it. By decoupling your application from specific AI vendors, you gain the freedom to choose the best model for your needs at any given moment."

Handling Provider-Specific Features

Of course, not all providers are created equal. OpenAI might have superior tool calling, while Anthropic might excel at long-context reasoning. When you abstract, you risk falling into the trap of the lowest common denominator.

The solution is to design your interface to be extensible. You can add a capabilities property to your adapter or use a "feature detection" pattern. If a provider does not support a specific feature, your adapter should handle it gracefully, perhaps by falling back to a simpler model or returning a clear error that your UI can display to the user.
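One way to sketch that capabilities approach (the ProviderCapabilities shape and the embedIfSupported helper are illustrative names of my own, not part of any SDK):

```typescript
interface ProviderCapabilities {
  streaming: boolean;
  embeddings: boolean;
  toolCalling: boolean;
}

interface AIProviderAdapter {
  readonly capabilities: ProviderCapabilities;
  sendMessage(prompt: string): Promise<string>;
  embed?(text: string): Promise<number[]>; // optional: not every provider has it
}

// The caller does feature detection instead of hard-coding a vendor.
async function embedIfSupported(
  provider: AIProviderAdapter,
  text: string,
): Promise<number[] | null> {
  if (!provider.capabilities.embeddings || !provider.embed) {
    return null; // caller can fall back to another provider or skip the feature
  }
  return provider.embed(text);
}
```

Returning null (or a typed error) keeps the degradation decision in one place, so your UI can show "not available with this provider" instead of crashing on a missing method.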

Testing Across Providers

Integration testing is where this pattern truly shines. Because your application logic depends on the AIProviderAdapter interface, you can write a single test suite that runs against all your adapters.

describe('AI Integration', () => {
  const providers = [new OpenAIAdapter(), new AnthropicAdapter()];

  providers.forEach(provider => {
    it(`should return a valid response from ${provider.constructor.name}`, async () => {
      const response = await provider.sendMessage("Test prompt");
      expect(typeof response).toBe('string');
      expect(response.length).toBeGreaterThan(0);
    });
  });
});

This ensures that when you switch providers, you are not breaking your application's core functionality. You catch incompatibilities in your CI pipeline, not in production.

The Cost of NOT Abstracting

API pricing changes, rate limits, unexpected outages, and model deprecations are not hypothetical scenarios; they are the everyday reality of working with LLMs.

If you are tightly coupled to one provider, you are at the mercy of their roadmap and their pricing. By investing in an abstraction layer early, you are buying insurance for your application. You are ensuring that your business can adapt to the rapidly changing AI landscape without having to rewrite your entire codebase every time a vendor makes a change.

It is a small upfront investment that pays dividends in stability, flexibility, and peace of mind. Do not wait for a weekend panic to realize you need to switch. Build for portability today.
