
Managing AI Chat State in React with TanStack Store

Why useState falls apart for AI chat interfaces, and how TanStack Store provides fine-grained reactive state management that keeps streaming UIs performant.

React · TanStack Store · AI · State Management

Building a chat interface for an AI agent is deceptively simple until you hit the streaming part. You start with a basic form and a list of messages, but then the requirements grow. You need to handle streaming tokens, tool call status, loading states, and optimistic updates, all while keeping the UI responsive. When you stream tokens, you are updating state dozens of times per second. If you are using standard React state, you quickly find yourself in a performance bottleneck where every single token update triggers a re-render of your entire message list.

Why useState Isn't Enough

The core problem with React state for AI chat is the re-render cascade. When you store your conversation in a single useState array, every time a new token arrives from the server, you update that array. React then re-renders the component that holds that state, and consequently, every child component that depends on it.

If your message list component is complex, this becomes expensive. You might have markdown rendering, syntax highlighting, or complex layout calculations happening on every single token update. Even with memo or useMemo, the overhead of managing this state within the React render cycle is significant. You end up with a UI that feels sluggish or jittery during long responses.

The challenge isn't just storing the messages. It is managing the high-frequency updates of the streaming state without forcing the entire component tree to re-render on every single character.
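The pattern that causes the cascade reduces to a single state update. Appending a token immutably always produces a new array reference, so React has no choice but to re-render everything keyed off it. A minimal sketch (`appendToLast` is an illustrative name, not from any library):

```typescript
type ChatMessage = { id: string; role: 'user' | 'assistant'; content: string };

// The per-token update a useState-based chat performs: copy the array,
// copy the last message, append the token. A brand-new array reference on
// every call means the component owning the state re-renders every time.
function appendToLast(prev: ChatMessage[], token: string): ChatMessage[] {
  return prev.map((m, i) =>
    i === prev.length - 1 ? { ...m, content: m.content + token } : m,
  );
}

// Inside a component this runs dozens of times per second:
//   const [messages, setMessages] = useState<ChatMessage[]>([]);
//   const onToken = (t: string) => setMessages((prev) => appendToLast(prev, t));
```

The immutable copy itself is cheap; the cost is that React re-renders the owner and its subtree for every new reference.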

TanStack Store for AI State

TanStack Store changes the game by moving state outside the React render cycle. It provides a reactive, external store that allows for fine-grained subscriptions. Instead of the entire component tree re-rendering when the state changes, only the specific components that subscribe to the changed slice of state will re-render.

This is perfect for AI chat. You can have a MessageList component that only re-renders when the list of messages changes, and a StreamingIndicator component that subscribes only to the isStreaming boolean.

import { Store } from '@tanstack/store';

interface ChatState {
  messages: Message[];
  isStreaming: boolean;
  pendingTools: string[];
  error: string | null;
}

const chatStore = new Store<ChatState>({
  messages: [],
  isStreaming: false,
  pendingTools: [],
  error: null,
});

Modeling the Conversation

To make this work, you need a robust TypeScript model for your conversation. You want to track not just the text, but the metadata for tool calls and the status of the stream.

type Message = {
  id: string;
  role: 'user' | 'assistant' | 'system';
  content: string;
  toolCalls?: ToolCall[];
};

type ToolCall = {
  id: string;
  name: string;
  args: Record<string, any>;
  status: 'pending' | 'completed' | 'failed';
};

With this structure, your store can derive state. You can easily compute isStreaming or lastMessage without storing them redundantly.
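Because selectors are plain functions over the state shape, derived values can live as small, testable helpers. A sketch (`selectLastMessage` and `selectHasPendingTools` are illustrative names, not library APIs):

```typescript
type ToolCall = {
  id: string;
  name: string;
  args: Record<string, unknown>;
  status: 'pending' | 'completed' | 'failed';
};

type Message = {
  id: string;
  role: 'user' | 'assistant' | 'system';
  content: string;
  toolCalls?: ToolCall[];
};

interface ChatState {
  messages: Message[];
  isStreaming: boolean;
  pendingTools: string[];
  error: string | null;
}

// The most recent message, or undefined for an empty conversation.
const selectLastMessage = (s: ChatState): Message | undefined =>
  s.messages[s.messages.length - 1];

// True while any tool call on any message is still pending.
const selectHasPendingTools = (s: ChatState): boolean =>
  s.messages.some((m) => m.toolCalls?.some((t) => t.status === 'pending') ?? false);
```

A component reads these with `useStore(chatStore, selectLastMessage)` and re-renders only when the selected value itself changes.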

Connecting Store to React Components

The real power comes when you connect this store to your React components using selectors. This ensures that your components only update when the specific data they need changes.

import { useStore } from '@tanstack/react-store';

function MessageList() {
  const messages = useStore(chatStore, (state) => state.messages);

  return (
    <div>
      {messages.map((msg) => (
        <MessageItem key={msg.id} message={msg} />
      ))}
    </div>
  );
}

function StreamingIndicator() {
  const isStreaming = useStore(chatStore, (state) => state.isStreaming);

  if (!isStreaming) return null;
  return <div>AI is thinking...</div>;
}

In this setup, when a new token arrives and you update the messages array in the store, the MessageList component re-renders, but the StreamingIndicator does not, unless the isStreaming status itself changes.

By decoupling the state from the React render cycle, you eliminate the re-render cascade entirely. The UI remains fluid, even when the AI is generating long, complex responses.
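The same decoupling applies to writes. Optimistic updates and tool-call status changes can be expressed as pure state transitions handed to `chatStore.setState`. A sketch, with the types from above repeated so it stands alone (`addUserMessage` and `setToolStatus` are illustrative helpers, not library APIs):

```typescript
type ToolStatus = 'pending' | 'completed' | 'failed';

type ToolCall = { id: string; name: string; args: Record<string, unknown>; status: ToolStatus };
type Message = { id: string; role: 'user' | 'assistant' | 'system'; content: string; toolCalls?: ToolCall[] };

interface ChatState {
  messages: Message[];
  isStreaming: boolean;
  pendingTools: string[];
  error: string | null;
}

// Optimistic update: append the user's message plus an empty assistant
// placeholder for the stream to fill in token by token.
function addUserMessage(s: ChatState, text: string, id: string): ChatState {
  return {
    ...s,
    messages: [
      ...s.messages,
      { id, role: 'user', content: text },
      { id: `${id}-reply`, role: 'assistant', content: '' },
    ],
  };
}

// Flip one tool call's status while leaving every other message's
// object identity untouched, so memoized items skip re-rendering.
function setToolStatus(s: ChatState, toolId: string, status: ToolStatus): ChatState {
  return {
    ...s,
    messages: s.messages.map((m) =>
      m.toolCalls?.some((t) => t.id === toolId)
        ? { ...m, toolCalls: m.toolCalls!.map((t) => (t.id === toolId ? { ...t, status } : t)) }
        : m,
    ),
  };
}
```

In the app these would run as `chatStore.setState((s) => addUserMessage(s, input, crypto.randomUUID()))`; only the affected message changes identity, so memoized items for other messages stay put.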

Handling the Stream

When you receive a Server-Sent Events (SSE) stream from your AI backend, you need a pattern to update the store efficiently. You don't want to replace the entire message list on every token. Instead, you want to append the token to the current message.

async function handleStream(reader: ReadableStreamDefaultReader<Uint8Array>) {
  const decoder = new TextDecoder();
  chatStore.setState((s) => ({ ...s, isStreaming: true }));

  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;

      // { stream: true } keeps multi-byte characters split across chunks intact.
      const token = decoder.decode(value, { stream: true });
      chatStore.setState((s) => {
        const lastMessage = s.messages[s.messages.length - 1];
        if (!lastMessage) return s; // no message to append to yet
        const updatedMessages = [...s.messages];
        updatedMessages[updatedMessages.length - 1] = {
          ...lastMessage,
          content: lastMessage.content + token,
        };
        return { ...s, messages: updatedMessages };
      });
    }
  } finally {
    // Reset the flag even if the stream errors mid-response.
    chatStore.setState((s) => ({ ...s, isStreaming: false }));
  }
}

This pattern ensures that you are only updating the specific message that is currently being streamed. The rest of your application remains untouched, and your React components only react to the changes they are subscribed to. This is the key to building high-performance, responsive AI chat interfaces in React.
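One practical refinement: the decode-and-append loop can be factored to accept any byte stream rather than a reader, which makes the streaming logic easy to exercise without a server. A sketch (`pumpStream` is an illustrative name; `handleStream` would pass its `setState` append as the callback):

```typescript
// Decode a byte stream into text tokens and hand each one to `append`.
// The same function serves a fetch() response body in the browser and an
// in-memory ReadableStream in tests.
async function pumpStream(
  stream: ReadableStream<Uint8Array>,
  append: (token: string) => void,
): Promise<void> {
  const decoder = new TextDecoder();
  const reader = stream.getReader();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // { stream: true } keeps multi-byte characters split across chunks intact.
    append(decoder.decode(value, { stream: true }));
  }
}
```

In production this is fed `response.body` from `fetch`, with `append` wired to the store update shown above.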
