Modern AI Development

Vercel AI SDK: AI for Web Apps

Build AI-powered web applications with the most popular TypeScript AI framework

20 min read · Vercel · AI SDK · TypeScript · Web

If you're building AI features into a web application, the Vercel AI SDK is the tool you want. It's the most popular TypeScript framework for adding AI capabilities to React and Next.js apps -- handling streaming, tool calling, structured output, and multi-provider support out of the box.

Vercel AI SDK Definition: An open-source TypeScript toolkit for building AI-powered web applications. It provides React hooks, streaming utilities, and a unified API that works with OpenAI, Anthropic, Google, and 20+ other AI providers. Used by thousands of production apps.

Why Web Developers Love It

Building AI into web apps has unique challenges: you need streaming responses (so users don't stare at a loading spinner), you need to handle different model providers, and you need clean React components. The AI SDK solves all of this.

  1. Streaming by default -- AI responses stream token-by-token to the UI, giving users instant feedback instead of waiting for the full response
  2. React hooks -- `useChat`, `useCompletion`, and `useObject` manage state, loading, and error handling out of the box
  3. Multi-provider -- one API surface to switch between OpenAI, Anthropic, Google, Mistral, and more without rewriting code
  4. Type-safe -- full TypeScript types for tool parameters, structured output, and model responses
  5. Edge-ready -- works on Vercel Edge Functions, Cloudflare Workers, and any serverless runtime

Architecture

The SDK has three layers that separate concerns cleanly:

┌───────────────────────────────────────────────────┐
│                     AI SDK UI                     │
│        React hooks: useChat, useCompletion        │
│               (Frontend - browser)                │
└─────────────────────────┬─────────────────────────┘
                          │ HTTP streaming
┌─────────────────────────▼─────────────────────────┐
│                    AI SDK Core                    │
│     generateText, streamText, generateObject      │
│             (Backend - server / edge)             │
└─────────────────────────┬─────────────────────────┘
                          │ Provider API
┌─────────────────────────▼─────────────────────────┐
│                 AI SDK Providers                  │
│  OpenAI, Anthropic, Google, Mistral, Ollama, ...  │
│                (Model connections)                │
└───────────────────────────────────────────────────┘
  • AI SDK UI -- React hooks and UI utilities for the frontend
  • AI SDK Core -- Provider-agnostic functions for text generation, streaming, and tool calling
  • AI SDK Providers -- Adapters that connect to specific model APIs

Installation

bash
# Core SDK
npm install ai

# Choose your provider(s)
npm install @ai-sdk/openai
npm install @ai-sdk/anthropic
npm install @ai-sdk/google

Core Features with Code Examples

1. Streaming Text

The most common use case: stream a response from an LLM to the user in real time.

Server side (Next.js API route):

typescript
// app/api/chat/route.ts
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    messages,
    system: "You are a helpful AI assistant.",
  });

  return result.toDataStreamResponse();
}

Client side (React component):

tsx
// app/page.tsx
"use client";
import { useChat } from "ai/react";

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } =
    useChat();

  return (
    <div className="max-w-2xl mx-auto p-4">
      {/* Message list */}
      <div className="space-y-4 mb-4">
        {messages.map((m) => (
          <div
            key={m.id}
            className={m.role === "user" ? "text-right" : "text-left"}
          >
            <span className="font-bold">
              {m.role === "user" ? "You" : "AI"}:
            </span>
            <p className="whitespace-pre-wrap">{m.content}</p>
          </div>
        ))}
      </div>

      {/* Input form */}
      <form onSubmit={handleSubmit} className="flex gap-2">
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Type your message..."
          className="flex-1 border rounded px-3 py-2"
        />
        <button
          type="submit"
          disabled={isLoading}
          className="bg-blue-500 text-white px-4 py-2 rounded"
        >
          Send
        </button>
      </form>
    </div>
  );
}

That is a complete, working streaming chat interface. The `useChat` hook handles message state, HTTP streaming, loading state, and error handling automatically. No manual fetch calls, no state management boilerplate.

Why streaming matters for UX: Without streaming, the user stares at a blank screen for 5-15 seconds while the model generates its full response. With streaming, they see the first words in under 500ms. The total generation time is the same, but the perceived latency drops dramatically.
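
Under the hood, the response arrives as an HTTP stream that the client reads incrementally. As a rough illustration of the pattern (this is not the SDK's wire format, just the general `ReadableStream` mechanics that streaming UIs are built on), here is a self-contained sketch that reads a simulated token stream chunk by chunk:

```typescript
// Self-contained sketch: reading a simulated token stream incrementally.
// NOT the SDK's protocol -- just the ReadableStream pattern underneath it.

function makeTokenStream(tokens: string[]): ReadableStream<string> {
  let i = 0;
  return new ReadableStream<string>({
    pull(controller) {
      if (i < tokens.length) controller.enqueue(tokens[i++]);
      else controller.close();
    },
  });
}

async function readIncrementally(stream: ReadableStream<string>): Promise<string[]> {
  const seen: string[] = [];
  const reader = stream.getReader();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    if (value !== undefined) seen.push(value); // a real UI would re-render here
  }
  return seen;
}

readIncrementally(makeTokenStream(["Hello", ", ", "world"])).then((chunks) =>
  console.log(chunks.join(""))
);
```

Each `reader.read()` resolves as soon as the next chunk exists, which is why the first words can hit the screen long before generation finishes.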

2. Tool Calling

Tools let the AI model call functions you define -- searching databases, fetching weather, running calculations, or any custom logic your application needs.

typescript
// app/api/chat/route.ts
import { openai } from "@ai-sdk/openai";
import { streamText, tool } from "ai";
import { z } from "zod";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    messages,
    tools: {
      getWeather: tool({
        description: "Get the current weather for a city",
        parameters: z.object({
          city: z.string().describe("The city to get weather for"),
        }),
        execute: async ({ city }) => {
          // In production, call a real weather API
          const response = await fetch(
            `https://api.weatherapi.com/v1/current.json?key=${process.env.WEATHER_API_KEY}&q=${encodeURIComponent(city)}`
          );
          const data = await response.json();
          return {
            city,
            temperature: data.current.temp_c,
            condition: data.current.condition.text,
          };
        },
      }),
      calculate: tool({
        description: "Perform a mathematical calculation",
        parameters: z.object({
          expression: z.string().describe("The math expression to evaluate"),
        }),
        execute: async ({ expression }) => {
          try {
            // Fine for a demo, but evaluating arbitrary strings is
            // unsafe with untrusted input in production
            const result = Function(`'use strict'; return (${expression})`)();
            return { expression, result: Number(result) };
          } catch {
            return { expression, error: "Invalid expression" };
          }
        },
      }),
    },
    maxSteps: 3, // Allow up to 3 tool calls per response
  });

  return result.toDataStreamResponse();
}

Type-safe tools with Zod: The `parameters` field uses Zod schemas, which gives you two things: compile-time TypeScript types (so your IDE autocompletes tool parameters) and runtime validation (so invalid tool calls from the model are caught before execution). The SDK converts Zod schemas to JSON Schema automatically for the model.
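
The SDK derives that runtime check from your Zod schema automatically. Purely to illustrate the validate-before-execute idea (this is a hand-rolled toy, not the SDK's code), here is what the guard amounts to:

```typescript
// Toy illustration of runtime validation before tool execution.
// The real SDK generates the equivalent check from your Zod schema.

type WeatherArgs = { city: string };

// The model's tool-call arguments arrive as untyped JSON
function isWeatherArgs(value: unknown): value is WeatherArgs {
  return (
    typeof value === "object" &&
    value !== null &&
    typeof (value as Record<string, unknown>).city === "string"
  );
}

function runWeatherTool(rawArgs: unknown): string {
  if (!isWeatherArgs(rawArgs)) {
    return "error: invalid tool arguments"; // caught before execution
  }
  return `looking up weather for ${rawArgs.city}`;
}

console.log(runWeatherTool({ city: "Paris" })); // → "looking up weather for Paris"
console.log(runWeatherTool({ town: "Paris" })); // → "error: invalid tool arguments"
```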

3. Structured Output

When you need the AI to return data in a specific structure (not free text), use `generateObject`. This is perfect for data extraction, classification, and any feature where you need a typed response.

typescript
import { openai } from "@ai-sdk/openai";
import { generateObject } from "ai";
import { z } from "zod";

const { object } = await generateObject({
  model: openai("gpt-4o"),
  schema: z.object({
    title: z.string().describe("Article title"),
    summary: z.string().describe("2-3 sentence summary"),
    tags: z.array(z.string()).describe("Relevant tags"),
    sentiment: z.enum(["positive", "negative", "neutral"]),
    keyPoints: z
      .array(
        z.object({
          point: z.string(),
          importance: z.number().min(1).max(10),
        })
      )
      .describe("Key points with importance scores"),
  }),
  prompt: "Analyze this article: [article text here]",
});

// 'object' is fully typed - TypeScript knows the exact shape
console.log(object.title);                  // string
console.log(object.tags);                   // string[]
console.log(object.keyPoints[0].importance); // number

4. Multi-Provider Support

Switch between models by changing a single line. Your application logic, tools, and UI code remain identical.

typescript
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";
import { google } from "@ai-sdk/google";
import { streamText } from "ai";

// Use OpenAI
const result1 = await streamText({
  model: openai("gpt-4o"),
  prompt: "Explain quantum computing",
});

// Switch to Anthropic -- same API, same code
const result2 = await streamText({
  model: anthropic("claude-sonnet-4-20250514"),
  prompt: "Explain quantum computing",
});

// Switch to Google -- same API, same code
const result3 = await streamText({
  model: google("gemini-2.0-flash"),
  prompt: "Explain quantum computing",
});

This provider-agnostic design means you can A/B test models, use different models for different features, or let users choose their preferred provider -- without touching your application logic.
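
One common way to expose that choice is a small registry keyed by a model name. The sketch below mocks the factories with plain objects so it is self-contained; in a real app the registry values would be `openai("gpt-4o")`, `anthropic("claude-sonnet-4-20250514")`, and so on:

```typescript
// Sketch of a user-selectable model registry. The factories are MOCKS --
// in a real app they would return actual AI SDK provider models.

type ModelFactory = () => { provider: string; model: string };

const registry: Record<string, ModelFactory> = {
  "gpt-4o": () => ({ provider: "openai", model: "gpt-4o" }),
  "claude": () => ({ provider: "anthropic", model: "claude-sonnet-4-20250514" }),
  "gemini": () => ({ provider: "google", model: "gemini-2.0-flash" }),
};

function pickModel(key: string) {
  const factory = registry[key];
  if (!factory) throw new Error(`Unknown model key: ${key}`);
  return factory();
}

console.log(pickModel("claude").provider); // → "anthropic"
```

Because every provider model satisfies the same interface, the rest of the request handler never needs to know which vendor was chosen.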

5. Non-Streaming Generation

Not every use case needs streaming. For background processing, data pipelines, or server-side tasks, use `generateText`.

typescript
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

const { text, usage } = await generateText({
  model: openai("gpt-4o"),
  prompt: "Write a haiku about TypeScript",
});

console.log(text);
console.log(`Tokens used: ${usage.totalTokens}`);
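
Because `generateText` resolves with token usage, batch jobs can track spend as they run. A small helper along these lines works; note the per-million-token prices below are made-up placeholders, not any provider's real rates:

```typescript
// Estimate request cost from token usage. Prices are PLACEHOLDERS --
// substitute your provider's actual rates.

type Usage = { promptTokens: number; completionTokens: number; totalTokens: number };

function estimateCostUSD(
  usage: Usage,
  pricePerMTokIn = 2.5,   // hypothetical $/million input tokens
  pricePerMTokOut = 10.0, // hypothetical $/million output tokens
): number {
  return (
    (usage.promptTokens / 1_000_000) * pricePerMTokIn +
    (usage.completionTokens / 1_000_000) * pricePerMTokOut
  );
}

const cost = estimateCostUSD({
  promptTokens: 1_000_000,
  completionTokens: 500_000,
  totalTokens: 1_500_000,
});
console.log(cost); // → 7.5
```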

Generative UI

One of the most powerful features of the Vercel AI SDK is Generative UI -- the ability for the AI to render React components, not just text. When the model calls a tool, you can render custom, interactive UI for the tool's result.

tsx
// Instead of showing raw JSON for weather data,
// render a beautiful weather card
type WeatherProps = { city: string; temperature: number; condition: string };

function WeatherCard({ city, temperature, condition }: WeatherProps) {
  return (
    <div className="bg-gradient-to-br from-blue-400 to-blue-600 text-white p-4 rounded-xl">
      <h3 className="text-lg font-bold">{city}</h3>
      <p className="text-4xl font-light">{temperature}°C</p>
      <p className="text-blue-100">{condition}</p>
    </div>
  );
}

This pattern enables AI applications that feel native -- the AI doesn't just talk in text, it renders interactive, visual components that match your app's design system.
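
In practice that comes down to mapping tool names to components when rendering messages. A simplified dispatch sketch (strings stand in for JSX so it stays self-contained, and the message shape is abbreviated from the SDK's):

```typescript
// Simplified generative-UI dispatch: pick a renderer per tool name.
// Strings stand in for JSX in this sketch.

type ToolResult = { toolName: string; result: any };

const renderers: Record<string, (result: any) => string> = {
  getWeather: (r) => `<WeatherCard city=${r.city} temp=${r.temperature} />`,
};

function renderToolResult(inv: ToolResult): string {
  const render = renderers[inv.toolName];
  // Fall back to raw JSON for tools without a custom component
  return render ? render(inv.result) : JSON.stringify(inv.result);
}

console.log(
  renderToolResult({ toolName: "getWeather", result: { city: "Oslo", temperature: 4 } })
);
```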

Integration with Next.js

The AI SDK was designed to work seamlessly with Next.js App Router:

my-ai-app/
├── app/
│   ├── api/
│   │   └── chat/
│   │       └── route.ts      # API route (server-side, keeps keys safe)
│   ├── page.tsx               # Chat UI (client-side, uses useChat)
│   └── layout.tsx             # App layout
├── .env.local                 # API keys (never committed)
└── package.json

The API route handles all AI logic server-side (keeping API keys secure), while the React component uses `useChat` to manage the UI. Streaming happens automatically over the HTTP response -- no WebSocket setup required.
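
The provider packages read their keys from environment variables: `OPENAI_API_KEY` for the OpenAI provider, `ANTHROPIC_API_KEY` for Anthropic. A typical `.env.local` (placeholder values):

```bash
# .env.local -- keep out of version control
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
```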

Key Takeaways

What You Have Learned:

  1. The Vercel AI SDK is the leading TypeScript framework for AI-powered web apps
  2. Streaming by default provides responsive UX with perceived sub-second latency
  3. `useChat` handles message state, streaming, loading, and errors automatically
  4. Tools let the AI call your functions with type-safe Zod schemas
  5. `generateObject` returns structured, fully-typed data instead of free text
  6. The provider-agnostic API lets you switch models by changing one line
  7. Generative UI enables AI that renders React components, not just text

Next Steps

Now that you understand the Vercel AI SDK, you're ready to build a complete full-stack AI application. In the next project lesson, we'll build a production chat app step by step, combining streaming, tool calling, and multi-model support into a real, deployable application.

