
Building Real-Time AI Dashboards with React and TanStack Query

How I built a dashboard that displays AI pipeline status, processing metrics, and LLM-generated insights in real time using TanStack Query polling and optimistic updates.

React · TanStack Query · AI · Dashboard

When I was building the early versions of Lit Alerts, the biggest challenge wasn't just processing documents through an AI pipeline. It was making that pipeline feel alive. Users needed to see document processing progress, AI-generated insights, and pipeline status updates in real time. A static dashboard felt dead, and manual page refreshes were a non-starter. I needed a way to bridge the gap between our asynchronous backend and a responsive React dashboard. That is where TanStack Query became the backbone of our real-time AI dashboard.

TanStack Query for Real-Time Data

The secret to a responsive real-time AI dashboard in React is not complex WebSocket management. It is smart, declarative polling. TanStack Query handles this with minimal configuration. By using refetchInterval, I can keep the UI updated without overwhelming the server.

const { data, isLoading } = useQuery({
  queryKey: ['ai-pipeline-status'],
  queryFn: fetchPipelineStatus,
  refetchInterval: 2000, // Poll every 2 seconds
  refetchOnWindowFocus: false, // Keep it quiet when the user switches tabs
  staleTime: 1000, // Data is fresh for 1 second
});

This approach ensures the dashboard stays current while the user is active. Setting refetchOnWindowFocus to false is crucial for AI workloads, as you don't want to trigger unnecessary API calls every time the user clicks back into the browser.
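Polling doesn't have to run forever, either. TanStack Query also accepts a function for refetchInterval, which lets you stop polling once the pipeline reaches a terminal state. Here is a small sketch of that idea; the status values ('complete', 'failed', 'running') are assumptions about the API's shape, not taken from the post's actual backend:

```javascript
// Decide how often to poll based on the last known pipeline status.
// Returning false tells TanStack Query to stop polling entirely.
function nextPollInterval(status) {
  // Stop polling once the pipeline reaches a terminal state.
  if (status === 'complete' || status === 'failed') return false;
  // Otherwise keep the 2-second cadence from the query above.
  return 2000;
}

// Usage with TanStack Query v5, where refetchInterval accepts a callback:
//   refetchInterval: (query) => nextPollInterval(query.state.data?.status)
```

This keeps the server quiet once a document is done processing, instead of polling a finished pipeline indefinitely.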

Designing the Dashboard Data Layer

A complex AI-powered dashboard requires multiple data streams. I structure my queries to feed specific widgets, ensuring that one slow endpoint doesn't block the entire UI.

// Pipeline status widget
const statusQuery = useQuery({ queryKey: ['status'], queryFn: getStatus });

// Recent AI outputs widget
const outputsQuery = useQuery({ queryKey: ['outputs'], queryFn: getOutputs });

// Token usage metrics
const usageQuery = useQuery({ queryKey: ['usage'], queryFn: getUsage });

By separating these concerns, I can show loading states for individual widgets. If the token usage API is slow, the pipeline status widget remains interactive and responsive. This granular control is essential for maintaining a high-quality user experience in AI analytics.
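With several independent queries, keeping the keys consistent matters. A small query-key factory, a common TanStack Query pattern, makes targeted invalidation easy; the factory below is an illustrative sketch mirroring the widgets above, not code from my actual dashboard:

```javascript
// Centralized query keys: every dashboard widget's key shares a common prefix,
// so related queries can be invalidated together or individually.
const dashboardKeys = {
  all: ['dashboard'],
  status: () => [...dashboardKeys.all, 'status'],
  outputs: () => [...dashboardKeys.all, 'outputs'],
  usage: () => [...dashboardKeys.all, 'usage'],
};

// e.g. useQuery({ queryKey: dashboardKeys.status(), queryFn: getStatus });
// and queryClient.invalidateQueries({ queryKey: dashboardKeys.all })
// refreshes every dashboard widget at once.
```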

Optimistic Updates and Background Refresh

Sometimes, a user needs to trigger an action, like re-analyzing a document, and they expect immediate feedback. This is where optimistic updates shine. I update the UI immediately, then let TanStack Query handle the background synchronization.

const queryClient = useQueryClient();

const mutation = useMutation({
  mutationFn: triggerReanalysis,
  onMutate: async (newStatus) => {
    // Cancel in-flight refetches so they don't overwrite the optimistic value
    await queryClient.cancelQueries({ queryKey: ['status'] });
    // Snapshot the current value so we can roll back on failure
    const previousStatus = queryClient.getQueryData(['status']);
    // Optimistically update the cache; the UI reflects it immediately
    queryClient.setQueryData(['status'], newStatus);
    return { previousStatus };
  },
  onError: (err, newStatus, context) => {
    // Roll back to the snapshot if the request failed
    queryClient.setQueryData(['status'], context.previousStatus);
  },
  onSettled: () => {
    // Refetch so the cache converges on the true server state
    queryClient.invalidateQueries({ queryKey: ['status'] });
  },
});

This pattern keeps the dashboard feeling snappy. The user sees their action reflected instantly, and the background refresh ensures the UI eventually matches the true server state.

Optimistic updates are not just about speed. They are about trust. When a user clicks a button, they want to know the system heard them.

AI-Generated Insights Widget

The most engaging part of our dashboard is the AI-generated insights widget. This card summarizes recent data trends using an LLM. Because these summaries are computationally expensive, I use a longer staleTime to cache the results.

const { data: summary } = useQuery({
  queryKey: ['ai-summary', timeRange],
  queryFn: () => fetchAISummary(timeRange),
  staleTime: 1000 * 60 * 5, // Cache for 5 minutes
});

This ensures we aren't re-running expensive LLM calls every time the component re-renders. The dashboard remains fast, and the insights are still relevant.
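The same "don't recompute until stale" idea works outside React components too, for any expensive call. Here is a tiny TTL-cache sketch of what staleTime is doing for us; createSummaryCache and its shape are illustrative assumptions, not code from the actual app:

```javascript
// A minimal TTL cache for expensive calls (e.g. LLM summaries), keyed by
// time range. Within ttlMs, repeat calls return the cached result instead
// of re-invoking the fetcher. The clock is injectable for testing.
function createSummaryCache(fetchFn, ttlMs, now = Date.now) {
  const cache = new Map(); // key -> { value, fetchedAt }
  return async function getSummary(timeRange) {
    const hit = cache.get(timeRange);
    if (hit && now() - hit.fetchedAt < ttlMs) return hit.value; // still fresh
    const value = await fetchFn(timeRange); // the expensive call
    cache.set(timeRange, { value, fetchedAt: now() });
    return value;
  };
}
```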

Scaling to Production

As the dashboard scales, I rely on prefetching and error boundaries. If the AI backend is slow, I don't want the entire dashboard to crash.

Error boundaries are your safety net. They allow you to gracefully handle failures in individual widgets without breaking the entire application.
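The same isolation idea can be applied one layer down, at the data layer: wrapping a widget's fetcher so a failure yields a renderable fallback instead of a thrown error. This withFallback helper is an illustrative sketch, not code from the actual dashboard:

```javascript
// Wrap a query function so that a failure resolves to a fallback payload
// (flagged with error: true) rather than rejecting. The affected widget can
// render a degraded state while the rest of the dashboard stays untouched.
function withFallback(queryFn, fallback) {
  return async (...args) => {
    try {
      return await queryFn(...args);
    } catch (err) {
      return { ...fallback, error: true };
    }
  };
}

// e.g. useQuery({ queryKey: ['usage'], queryFn: withFallback(getUsage, { tokens: 0 }) })
```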

I also use query deduplication to ensure that if multiple components need the same data, we only make one request. TanStack Query handles this out of the box, which is a massive win for performance.

Building a real-time AI dashboard is about balancing freshness with performance. By leveraging these patterns, I can build interfaces that feel fast, reliable, and truly intelligent.
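As a closing illustration, the deduplication behavior mentioned above can be sketched in a few lines: concurrent callers asking for the same key share one in-flight promise. TanStack Query implements this internally; createDeduper here is an illustrative reimplementation of the idea, not its actual source:

```javascript
// Deduplicate concurrent requests: while a fetch for a key is in flight,
// later callers get the same pending promise instead of a new request.
function createDeduper(fetchFn) {
  const inFlight = new Map(); // key -> pending promise
  return function dedupedFetch(key) {
    if (inFlight.has(key)) return inFlight.get(key); // join the existing request
    const promise = Promise.resolve(fetchFn(key)).finally(() => inFlight.delete(key));
    inFlight.set(key, promise);
    return promise;
  };
}
```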
