Every client is asking for it. "Can we add an AI assistant to the site?" The answer is yes — and it's simpler than most people think. Here's exactly how to do it in a Next.js app using the Vercel AI SDK.
This isn't a toy demo. By the end, you'll have a real streaming chat interface wired to an LLM, built with the App Router pattern, plus a checklist for getting it production-ready.
What You'll Build
- A chat UI with streaming responses (no waiting for the full reply)
- A Next.js API route that handles the AI calls server-side
- Clean component structure you can drop into any existing project
Why the Vercel AI SDK?
You could call OpenAI directly. But the Vercel AI SDK gives you:
- Streaming out of the box — useChat handles all the state for you
- Model-agnostic — swap between GPT-4o, Claude, Gemini without rewriting logic
- Built for Next.js — App Router support is first-class
Step 1: Install Dependencies
```bash
pnpm add ai @ai-sdk/openai @ai-sdk/react zod
```
You'll also need an API key from your model provider — OpenAI, Anthropic, or via the Vercel AI Gateway.
Step 2: Create the API Route
Create app/api/chat/route.ts:
```typescript
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    system: 'You are a helpful assistant.',
    messages,
  });

  return result.toDataStreamResponse();
}
```
That's it for the backend. The streamText function handles the streaming automatically — no manual SSE setup needed.
Step 3: Build the Chat UI
Create app/chat/page.tsx:
```typescript
'use client';

import { useChat } from '@ai-sdk/react';

export default function ChatPage() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div className="flex flex-col h-screen max-w-2xl mx-auto p-4">
      <div className="flex-1 overflow-y-auto space-y-4 mb-4">
        {messages.map(m => (
          <div
            key={m.id}
            className={`p-3 rounded-lg ${
              m.role === 'user'
                ? 'bg-blue-100 ml-auto max-w-sm'
                : 'bg-gray-100 mr-auto max-w-sm'
            }`}
          >
            <p className="text-sm">{m.content}</p>
          </div>
        ))}
      </div>
      <form onSubmit={handleSubmit} className="flex gap-2">
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Ask something..."
          className="flex-1 border rounded-lg p-2 text-sm"
        />
        <button
          type="submit"
          className="bg-blue-600 text-white px-4 py-2 rounded-lg text-sm"
        >
          Send
        </button>
      </form>
    </div>
  );
}
```
useChat manages the message history, input state, and streaming updates — all in one hook.
Step 4: Add Your API Key
Create .env.local:
```
OPENAI_API_KEY=your_key_here
```
Run pnpm dev and go to /chat. You've got a working AI chatbot with streaming responses.
Making It Production-Ready
A few things to add before shipping:
Rate limiting — you don't want unlimited API calls. Add a simple check in the route handler using the user's IP or session.
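A minimal version of that check can be sketched as a sliding-window counter keyed by IP (the function name, window size, and limit here are illustrative choices; an in-memory Map only works on a single long-lived server, so serverless deployments should use a shared store like Redis instead):

```typescript
// Minimal sliding-window rate limiter, keyed by IP or session ID.
// NOTE: in-memory state is a sketch for a single long-lived process;
// serverless/multi-instance deployments need a shared store instead.
const WINDOW_MS = 60_000; // 1-minute window
const MAX_REQUESTS = 10;  // allowed requests per window per key

const hits = new Map<string, number[]>();

function isRateLimited(key: string, now: number = Date.now()): boolean {
  // Keep only timestamps still inside the current window.
  const recent = (hits.get(key) ?? []).filter(t => now - t < WINDOW_MS);
  if (recent.length >= MAX_REQUESTS) {
    hits.set(key, recent);
    return true; // over the limit — caller should respond with a 429
  }
  recent.push(now);
  hits.set(key, recent);
  return false;
}
```

In the route handler, you'd call isRateLimited with the request's IP before invoking streamText and return a 429 response when it trips.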
System prompt customization — change the system field to match your use case: customer support bot, product assistant, FAQ responder.
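As a rough example, a support-bot prompt might look like this (the company name and wording are placeholders, not a recommended template):

```typescript
// Hypothetical system prompt for a customer-support bot.
// Pass this as the `system` field in the streamText call.
const system = [
  'You are the support assistant for Acme Co.',
  'Answer only questions about Acme products and policies.',
  'If you are unsure, say so and point the user to human support.',
].join(' ');
```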
Error handling — wrap the streamText call in a try/catch and return a proper error response.
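One way to structure the error response (a sketch — the JSON shape and status code are choices, not SDK requirements; Response is the standard Fetch API class available in Next.js route handlers):

```typescript
// Turn any thrown error into a consistent 500 JSON response.
function errorResponse(err: unknown): Response {
  const message = err instanceof Error ? err.message : 'Unknown error';
  console.error('Chat route failed:', message); // log the real cause server-side
  // Don't leak internals to the client — send a generic message.
  return new Response(JSON.stringify({ error: 'Something went wrong.' }), {
    status: 500,
    headers: { 'Content-Type': 'application/json' },
  });
}
```

In the route, wrap the handler body in try { ... } catch (err) { return errorResponse(err); }.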
Model switching — the AI SDK makes this trivial. Want to use Claude instead? Install @ai-sdk/anthropic and swap the model:
```typescript
import { anthropic } from '@ai-sdk/anthropic';

// replace openai('gpt-4o') with:
anthropic('claude-sonnet-4-5')
```
What Clients Are Actually Asking For
Most businesses want one of these:
- Support chatbot — answers questions about their product/service
- Lead qualifier — collects info before a sales call
- Internal tool — search through docs or databases
Bottom Line
Adding AI chat to a Next.js app in 2026 takes about 30 minutes of setup. The Vercel AI SDK handles the hard parts — streaming, state management, provider abstraction. You focus on the product logic.
If you want this built into your site — whether that's a customer support bot, a product assistant, or something custom — let's talk.
