I've built four production apps with Convex in the last year. Every time I start a project with a traditional database now, I get annoyed within the first hour.
Convex is a reactive backend. You define your data schema and server functions. The frontend subscribes to queries. When data changes, the UI updates automatically. No polling. No WebSocket setup. No cache invalidation headaches.
Now combine that with AI. Real-time AI-powered features become trivially simple.
Here's how.
The real-time part matters more than you think for AI features.
When a user sends a message to an AI chatbot, they expect to see the response appear as it's generated. Streaming. With traditional setups, you need WebSockets, custom streaming logic, and careful state management.
With Convex, you write the AI response to the database as it streams. The frontend subscription picks it up automatically. Done.
```ts
// convex/messages.ts
import { mutation, query } from "./_generated/server";
import { v } from "convex/values";

export const send = mutation({
  args: {
    conversationId: v.id("conversations"),
    content: v.string(),
    role: v.union(v.literal("user"), v.literal("assistant")),
  },
  handler: async (ctx, args) => {
    return await ctx.db.insert("messages", {
      conversationId: args.conversationId,
      content: args.content,
      role: args.role,
      createdAt: Date.now(),
    });
  },
});

export const list = query({
  args: { conversationId: v.id("conversations") },
  handler: async (ctx, args) => {
    return await ctx.db
      .query("messages")
      .withIndex("by_conversation", (q) =>
        q.eq("conversationId", args.conversationId)
      )
      .order("asc")
      .collect();
  },
});
```

On the frontend:
```tsx
import { useQuery, useMutation } from "convex/react";
import { api } from "../convex/_generated/api";
import { Id } from "../convex/_generated/dataModel";

function ChatWindow({ conversationId }: { conversationId: Id<"conversations"> }) {
  const messages = useQuery(api.messages.list, { conversationId });
  const sendMessage = useMutation(api.messages.send);

  // That's it. Messages update in real-time across all connected clients.
  // No WebSocket setup. No polling. No manual cache invalidation.
  return (
    <div>
      {messages?.map((msg) => (
        <div key={msg._id} className={msg.role === "user" ? "text-right" : "text-left"}>
          {msg.content}
        </div>
      ))}
    </div>
  );
}
```

When the AI writes a response (even incrementally), every connected client sees it instantly.
```bash
# Create a new project
npm create convex@latest my-ai-app

# Install AI dependencies
cd my-ai-app
npm install @anthropic-ai/sdk

# Start the dev server
npx convex dev
```

`npx convex dev` starts a local development environment connected to Convex's cloud backend. Your data lives in Convex's infrastructure. You don't manage databases.
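One setup detail the server code below assumes: the Anthropic API key is set as an environment variable on your Convex deployment (the actions read it with `process.env`), not just in a local .env file. The Convex CLI's env command handles that; the key value here is a placeholder:

```bash
# Make the key available to Convex actions (stored on the deployment)
npx convex env set ANTHROPIC_API_KEY sk-ant-...
```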
Convex has three types of server functions: queries read data and are reactive, mutations write data transactionally, and actions can call external services.

AI calls go in actions because they're external API calls.
```ts
// convex/ai.ts
import { action } from "./_generated/server";
import { v } from "convex/values";
import { api } from "./_generated/api";
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});

export const generateResponse = action({
  args: {
    conversationId: v.id("conversations"),
    userMessage: v.string(),
  },
  handler: async (ctx, args) => {
    // Save user message
    await ctx.runMutation(api.messages.send, {
      conversationId: args.conversationId,
      content: args.userMessage,
      role: "user",
    });

    // Get conversation history
    const messages = await ctx.runQuery(api.messages.list, {
      conversationId: args.conversationId,
    });

    // Call AI
    const response = await anthropic.messages.create({
      model: "claude-sonnet-4-20250514",
      max_tokens: 1024,
      messages: messages.map((m) => ({
        role: m.role,
        content: m.content,
      })),
    });

    const assistantMessage =
      response.content[0].type === "text" ? response.content[0].text : "";

    // Save AI response - UI updates automatically
    await ctx.runMutation(api.messages.send, {
      conversationId: args.conversationId,
      content: assistantMessage,
      role: "assistant",
    });
  },
});
```

When that mutation runs, every client subscribed to the messages query sees the new message instantly. No additional code needed.
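The action above writes the full reply in one shot. To stream it token by token, the pattern is the same, just incremental: insert an empty assistant message, then patch its content as chunks arrive. Here's a minimal sketch; the `messages.update` mutation is hypothetical (a one-liner around `ctx.db.patch`), and a real version would throttle the writes rather than patching on every delta.

```ts
// Sketch of a streaming variant. Assumes the same convex/ai.ts imports as above,
// plus a hypothetical `messages.update` mutation that patches a message's content.
export const generateResponseStreaming = action({
  args: {
    conversationId: v.id("conversations"),
    userMessage: v.string(),
  },
  handler: async (ctx, args) => {
    // Save the user message first.
    await ctx.runMutation(api.messages.send, {
      conversationId: args.conversationId,
      content: args.userMessage,
      role: "user",
    });

    const history = await ctx.runQuery(api.messages.list, {
      conversationId: args.conversationId,
    });

    // Insert an empty assistant message; `send` returns the new document's id.
    const messageId = await ctx.runMutation(api.messages.send, {
      conversationId: args.conversationId,
      content: "",
      role: "assistant",
    });

    const stream = anthropic.messages.stream({
      model: "claude-sonnet-4-20250514",
      max_tokens: 1024,
      messages: history.map((m) => ({ role: m.role, content: m.content })),
    });

    let fullText = "";
    for await (const event of stream) {
      if (event.type === "content_block_delta" && event.delta.type === "text_delta") {
        fullText += event.delta.text;
        // Each patch re-runs the subscribed `list` query, so clients see the partial text.
        await ctx.runMutation(api.messages.update, { messageId, content: fullText });
      }
    }
  },
});
```

The hypothetical `update` mutation would live in convex/messages.ts: take a messageId and content, and call `ctx.db.patch(args.messageId, { content: args.content })`.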
Chat is the obvious use case. Here are less obvious ones that work beautifully with Convex's reactive model.
Live document analysis. User uploads a document. AI processes it in the background. Results appear in real-time as they're generated.
```ts
// convex/ai.ts (continued; uses the same `anthropic` client and imports)
export const analyzeDocument = action({
  args: { documentId: v.id("documents") },
  handler: async (ctx, args) => {
    const doc = await ctx.runQuery(api.documents.get, { id: args.documentId });
    if (!doc) {
      throw new Error("Document not found");
    }

    // Update status - UI reflects immediately
    await ctx.runMutation(api.documents.updateStatus, {
      id: args.documentId,
      status: "analyzing",
    });

    const analysis = await anthropic.messages.create({
      model: "claude-sonnet-4-20250514",
      max_tokens: 2048,
      messages: [
        {
          role: "user",
          content: `Analyze this document and extract key points:\n\n${doc.content}`,
        },
      ],
    });

    // Save results - subscribers see them immediately
    await ctx.runMutation(api.documents.saveAnalysis, {
      id: args.documentId,
      analysis: analysis.content[0].type === "text" ? analysis.content[0].text : "",
      status: "complete",
    });
  },
});
```

Collaborative AI suggestions. Multiple users editing a document. AI provides suggestions that everyone sees in real-time.
Live dashboards with AI insights. Data changes. AI processes the change. Insight appears on every connected dashboard.
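That last one is just the same loop wired to a data write: when a mutation records new data, it schedules an AI action that writes its insight to a table every dashboard subscribes to. A minimal sketch follows; the `metrics` and `insights` tables and the `internal.ai.generateInsight` action are assumptions for illustration, not part of the schema shown later.

```ts
// Sketch: trigger AI from a data write. Table names and `internal.ai.generateInsight`
// are hypothetical.
import { mutation } from "./_generated/server";
import { v } from "convex/values";
import { internal } from "./_generated/api";

export const recordMetric = mutation({
  args: { name: v.string(), value: v.number() },
  handler: async (ctx, args) => {
    await ctx.db.insert("metrics", { ...args, createdAt: Date.now() });
    // Kick off the AI analysis as soon as this mutation commits.
    // The action writes to an `insights` table that dashboards subscribe to.
    await ctx.scheduler.runAfter(0, internal.ai.generateInsight, { metricName: args.name });
  },
});
```

Scheduling from a mutation is transactional: if the insert fails, the AI call is never scheduled.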
This is where it gets powerful. Convex can schedule functions to run later. Combine that with AI and your app can generate insights on a schedule:
```ts
// Schedule a daily analysis
export const scheduleDailyInsights = mutation({
  handler: async (ctx) => {
    await ctx.scheduler.runAfter(0, api.ai.generateDailyInsights);
    // Schedule next run in 24 hours
    await ctx.scheduler.runAfter(24 * 60 * 60 * 1000, api.ai.scheduleDailyInsights);
  },
});
```

Cron jobs without cron. Background processing without a queue. AI insights that just appear in your app every morning.
Define your schema properly. Convex validates data at write time.
```ts
// convex/schema.ts
import { defineSchema, defineTable } from "convex/server";
import { v } from "convex/values";

export default defineSchema({
  conversations: defineTable({
    title: v.string(),
    userId: v.string(),
    createdAt: v.number(),
  }).index("by_user", ["userId"]),

  messages: defineTable({
    conversationId: v.id("conversations"),
    content: v.string(),
    role: v.union(v.literal("user"), v.literal("assistant")),
    createdAt: v.number(),
  }).index("by_conversation", ["conversationId"]),

  documents: defineTable({
    title: v.string(),
    content: v.string(),
    analysis: v.optional(v.string()),
    status: v.union(
      v.literal("uploaded"),
      v.literal("analyzing"),
      v.literal("complete"),
      v.literal("error")
    ),
    userId: v.string(),
  }).index("by_user", ["userId"]),
});
```

Type safety from database to frontend. No ORM. No migration files. Schema changes are handled automatically in development.
Convex handles scaling, but you need to handle AI costs.
Rate limiting. Don't let users spam AI calls. Add a check before every action.
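A sketch of that check, assuming a hypothetical `aiCalls` table with a `by_user` index (not part of the schema above): count the user's calls in the last minute and refuse if they're over a cap.

```ts
// Hypothetical rate limiter. The `aiCalls` table and `by_user` index are assumptions;
// add them to the schema if you use this pattern.
import { mutation } from "./_generated/server";
import { v } from "convex/values";

export const checkRateLimit = mutation({
  args: { userId: v.string() },
  handler: async (ctx, args) => {
    const oneMinuteAgo = Date.now() - 60_000;
    const recentCalls = await ctx.db
      .query("aiCalls")
      .withIndex("by_user", (q) => q.eq("userId", args.userId))
      .filter((q) => q.gt(q.field("createdAt"), oneMinuteAgo))
      .collect();

    if (recentCalls.length >= 10) {
      throw new Error("Too many AI requests. Try again in a minute.");
    }

    // Record this call so it counts against the next check.
    await ctx.db.insert("aiCalls", { userId: args.userId, createdAt: Date.now() });
  },
});
```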
Caching. If multiple users ask the same question, cache the AI response. Convex makes this easy because you can query for existing responses before calling the API.
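A sketch of that lookup, again with a hypothetical table (`aiCache`, indexed by prompt): the action checks it before calling the model and only pays for a completion on a miss.

```ts
// Hypothetical cache lookup. The `aiCache` table and `by_prompt` index are assumptions.
import { query } from "./_generated/server";
import { v } from "convex/values";

export const getCachedResponse = query({
  args: { prompt: v.string() },
  handler: async (ctx, args) => {
    return await ctx.db
      .query("aiCache")
      .withIndex("by_prompt", (q) => q.eq("prompt", args.prompt))
      .first();
  },
});

// In the action, before calling the API:
//   const cached = await ctx.runQuery(api.ai.getCachedResponse, { prompt });
//   if (cached) return cached.response;
```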
Error handling. AI calls fail. Network issues, rate limits, model errors. Always have a fallback path.
```ts
try {
  const response = await anthropic.messages.create({ ... });
  await ctx.runMutation(api.messages.send, { ... });
} catch (error) {
  await ctx.runMutation(api.messages.send, {
    conversationId: args.conversationId,
    content: "I'm having trouble right now. Please try again in a moment.",
    role: "assistant",
  });
}
```

The user sees an error message in real-time instead of a spinning loader that never resolves.
That's the Convex advantage. Even your error handling is reactive.
Build something with it. You'll wonder why you ever wrote WebSocket code by hand.
