Building Real-Time Generative UI: How to Stream AI Components with Zero Latency
Learn how to create responsive, type-safe AI chat interfaces that render components progressively as the AI thinks. Master zero-latency streaming with React and TypeScript.
The Traditional AI UI Challenge
Traditional AI chat interfaces often create a disconnect between AI thinking and user experience. When AI models need to generate structured data or UI components, users typically wait for complete responses before seeing any meaningful content. This creates a gap where the AI's reasoning process remains invisible to users.
Consider a weather app that uses AI to generate dynamic weather cards. With traditional approaches, users wait for the AI to complete its entire response—including any data retrieval or processing—before seeing anything. This creates a black box experience where users don't understand what's happening.
Traditional Flow Limitations:
- Black box experience: users can't see AI reasoning
- All-or-nothing responses: no progressive feedback
- Type safety challenges: runtime errors from malformed AI responses
- Complex state management: manual parsing and validation
An Experimental Approach: System Prompt Guidance
This is an experimental exploration of a different paradigm: what if we could guide AI models at the system prompt level to understand they have UI tools available while responding? Instead of treating AI responses as complete documents, we're exploring how to make AI aware of its UI capabilities as it generates content.
The core hypothesis is that by giving LLMs guidance in their system prompt that they have UI components available, we can create a more transparent and interactive experience. When the AI knows it can render components progressively, it can structure its responses accordingly—creating a bridge between AI reasoning and user experience.
🧪 Experimental Hypothesis:
By making AI models aware of their UI capabilities through system prompts, we can create more engaging, transparent interactions where users can see AI reasoning unfold in real-time through progressive component rendering.
Building a Type-Safe AI Chat Interface
Let's build a complete AI chat interface using React, TypeScript, and progressive rendering. We'll create a weather assistant that generates interactive weather cards in real-time.
Step 1: Define Your Schema
Start by defining TypeScript schemas for your AI-generated components using Zod:
import { z } from "zod";
import { zodSchemaToPrompt } from "melony/zod";
// Define the weather card schema
const weatherSchema = z.object({
type: z.literal("weather-card"),
location: z.string().describe("City and country name"),
temperature: z.number().describe("Temperature in Celsius"),
condition: z.string().describe("Weather condition like 'sunny', 'rainy'"),
humidity: z.number().min(0).max(100).describe("Humidity percentage"),
windSpeed: z.number().describe("Wind speed in km/h"),
timestamp: z.string().describe("ISO timestamp of the weather data")
});
// Generate the AI prompt
export const weatherUIPrompt = zodSchemaToPrompt({
type: "weather-card",
schema: weatherSchema,
description: "Display comprehensive weather information in a card format"
});
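Because this is a plain Zod schema, you can also reuse it at runtime to validate whatever structured payload the model emits before rendering anything. A minimal sketch, assuming weatherSchema is in scope (the aiPayload value is a hypothetical example of parsed model output):
// Reuse the same schema to validate model output at runtime
const aiPayload: unknown = {
  type: "weather-card",
  location: "Lisbon, Portugal",
  temperature: 21,
  condition: "sunny",
  humidity: 58,
  windSpeed: 14,
  timestamp: new Date().toISOString()
};
const result = weatherSchema.safeParse(aiPayload);
if (result.success) {
  // result.data is typed as z.infer<typeof weatherSchema>
  console.log(result.data.location, result.data.temperature);
} else {
  // Malformed payload: fall back to plain text rendering
  console.warn(result.error.flatten());
}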
Step 2: Create Your React Component
Build the actual weather card component that will render the AI-generated data:
import React from 'react';
interface WeatherCardProps {
location: string;
temperature: number;
condition: string;
humidity: number;
windSpeed: number;
timestamp: string;
}
export const WeatherCard: React.FC<WeatherCardProps> = ({
location,
temperature,
condition,
humidity,
windSpeed,
timestamp
}) => {
const getConditionIcon = (condition: string) => {
const icons = {
sunny: '☀️',
rainy: '🌧️',
cloudy: '☁️',
snowy: '❄️',
stormy: '⛈️'
};
return icons[condition.toLowerCase() as keyof typeof icons] || '🌤️';
};
return (
<div className="bg-gradient-to-br from-blue-50 to-blue-100 dark:from-blue-900/20 dark:to-blue-800/20 border border-blue-200 dark:border-blue-700 rounded-xl p-6 shadow-lg">
<div className="flex items-center justify-between mb-4">
<h3 className="text-xl font-bold text-blue-900 dark:text-blue-100">
{location}
</h3>
<span className="text-2xl">{getConditionIcon(condition)}</span>
</div>
<div className="text-center mb-4">
<div className="text-4xl font-bold text-blue-800 dark:text-blue-200 mb-1">
{temperature}°C
</div>
<div className="text-blue-600 dark:text-blue-300 capitalize">
{condition}
</div>
</div>
<div className="grid grid-cols-2 gap-4 text-sm">
<div className="text-center">
<div className="text-blue-600 dark:text-blue-300">Humidity</div>
<div className="font-semibold text-blue-800 dark:text-blue-200">
{humidity}%
</div>
</div>
<div className="text-center">
<div className="text-blue-600 dark:text-blue-300">Wind</div>
<div className="font-semibold text-blue-800 dark:text-blue-200">
{windSpeed} km/h
</div>
</div>
</div>
<div className="text-xs text-blue-500 dark:text-blue-400 mt-4 text-center">
Updated: {new Date(timestamp).toLocaleString()}
</div>
</div>
);
};
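One optional refinement, not required by Melony: instead of hand-writing WeatherCardProps, you can derive the props type from the Step 1 schema so the component and the schema cannot drift apart. A sketch, assuming weatherSchema is exported from a shared module (the path below is hypothetical):
import { z } from 'zod';
import { weatherSchema } from './weather-schema'; // hypothetical path to the Step 1 schema
// Drop the literal "type" discriminator; the component only needs the data fields
type WeatherCardProps = Omit<z.infer<typeof weatherSchema>, 'type'>;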
Step 3: The Key Innovation - System Prompt Guidance
Here's where our experimental approach shines. We guide the AI model through its system prompt to understand it has UI tools available:
// app/api/chat/route.ts
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";
import { weatherUIPrompt } from "@/components/weather";
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: openai("gpt-4"),
system: `You are a helpful weather assistant with UI capabilities. When users ask about weather, you can render interactive weather cards using this format:
${weatherUIPrompt}
IMPORTANT: You have UI tools available. When appropriate, use the weather-card format to create rich, interactive components that users can see as you generate them. This allows for progressive rendering and a better user experience.
Always provide accurate, up-to-date weather information. If you cannot access real weather data, clearly state this limitation.`,
messages,
temperature: 0.7,
maxTokens: 1000,
});
return result.toDataStreamResponse();
}
💡 The Game-Changing Insight
By explicitly telling the AI it has UI tools available, we're experimenting with a new paradigm where AI models can be aware of their rendering capabilities while generating responses. This system prompt guidance approach could fundamentally change how we think about AI-human interaction.
Step 4: Implement Client-Side Rendering
Use Melony to render components progressively as they stream:
'use client';
import React from 'react';
import { useChat } from 'ai/react';
import { MelonyCard } from 'melony';
import { WeatherCard } from '@/components/weather';
function ChatInterface() {
const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat({
api: '/api/chat',
});
return (
<div className="max-w-2xl mx-auto p-6">
<div className="space-y-4 mb-6">
{messages.map((message) => (
<div key={message.id} className="space-y-2">
<div className="text-sm font-medium text-muted-foreground">
{message.role === 'user' ? 'You' : 'AI'}
</div>
<div className="bg-muted/50 rounded-lg p-4">
{message.role === 'assistant' ? (
<MelonyCard
text={message.content}
components={{
'weather-card': WeatherCard
}}
/>
) : (
<p>{message.content}</p>
)}
</div>
</div>
))}
{isLoading && (
<div className="text-sm text-muted-foreground">
AI is thinking...
</div>
)}
</div>
<form onSubmit={handleSubmit} className="flex gap-2">
<input
value={input}
onChange={handleInputChange}
placeholder="Ask about the weather in any city..."
className="flex-1 px-4 py-2 border border-border rounded-lg bg-background text-foreground placeholder:text-muted-foreground focus:outline-none focus:ring-2 focus:ring-primary/50"
/>
<button
type="submit"
disabled={isLoading}
className="px-6 py-2 bg-primary text-primary-foreground rounded-lg hover:bg-primary/90 transition-colors disabled:opacity-50"
>
Send
</button>
</form>
</div>
);
}
export default ChatInterface;
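To wire this up in a Next.js App Router project (an assumption based on the app/api/chat/route.ts path above), the component can be mounted from any page. The import path below is hypothetical:
// app/page.tsx
import ChatInterface from '@/components/chat-interface'; // hypothetical path to the component above
export default function Page() {
  return <ChatInterface />;
}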
Experimental Benefits & Observations
While this is still an experimental approach, we've observed some interesting benefits during our testing:
🧠 AI Awareness
AI models seem to structure responses differently when they know they have UI tools available, potentially creating more thoughtful component usage.
🛡️ Type Safety
Zod schemas ensure AI responses are validated and type-safe at compile time, reducing runtime errors.
🔄 Progressive Feedback
Users can see AI reasoning unfold through component rendering, creating a more transparent interaction.
⚡ Experimental Performance
Early observations suggest improved perceived responsiveness, though this varies based on AI model performance and data retrieval needs.
⚠️ Important Note:
This approach still requires AI models to retrieve any necessary data (like real weather information) before generating components. The performance benefits come from making the AI's reasoning process more transparent and engaging, not from eliminating data retrieval time.
Advanced Patterns
Here are some advanced patterns for building sophisticated AI interfaces:
Multiple Component Types
Define multiple schemas for different types of content:
// Multiple component schemas
const chartSchema = z.object({
type: z.literal("chart"),
title: z.string(),
data: z.array(z.object({
label: z.string(),
value: z.number()
})),
chartType: z.enum(["bar", "line", "pie"])
});
const tableSchema = z.object({
type: z.literal("table"),
headers: z.array(z.string()),
rows: z.array(z.array(z.string()))
});
// Register multiple components
<MelonyCard
text={message.content}
components={{
'weather-card': WeatherCard,
'chart': ChartComponent,
'table': TableComponent
}}
/>
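When several component types coexist, it can also help to validate incoming payloads against a single discriminated union. A sketch, assuming z and the weatherSchema from Step 1 are in scope alongside the schemas above:
// Combine every component schema into one discriminated union keyed on "type"
const uiComponentSchema = z.discriminatedUnion("type", [
  weatherSchema,
  chartSchema,
  tableSchema
]);
type UIComponent = z.infer<typeof uiComponentSchema>;
// TypeScript narrows each branch automatically once the union is parsed
function describeComponent(component: UIComponent): string {
  switch (component.type) {
    case "weather-card":
      return `Weather for ${component.location}`;
    case "chart":
      return `${component.chartType} chart: ${component.title}`;
    case "table":
      return `Table with ${component.rows.length} rows`;
  }
}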
Error Handling
Handle malformed AI responses gracefully:
import React from 'react';
import { MelonyCard } from 'melony';
// ErrorBoundary is not part of melony; react-error-boundary is one common choice
import { ErrorBoundary } from 'react-error-boundary';
interface SafeMelonyCardProps {
  text: string;
  components: Record<string, React.ComponentType<any>>;
}
function SafeMelonyCard({ text, components }: SafeMelonyCardProps) {
return (
<ErrorBoundary fallback={<div>Failed to render component</div>}>
<MelonyCard
text={text}
components={components}
onError={(error) => {
console.error('Component rendering error:', error);
// Log to analytics, show fallback UI, etc.
}}
/>
</ErrorBoundary>
);
}
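If you would rather not add a dependency for the boundary, a minimal hand-rolled error boundary is enough for this use case (a sketch, not part of Melony):
import React from 'react';
interface SimpleErrorBoundaryProps {
  fallback: React.ReactNode;
  children: React.ReactNode;
}
interface SimpleErrorBoundaryState {
  hasError: boolean;
}
// Catches render-time errors from any child and shows the fallback instead
class SimpleErrorBoundary extends React.Component<SimpleErrorBoundaryProps, SimpleErrorBoundaryState> {
  state: SimpleErrorBoundaryState = { hasError: false };
  static getDerivedStateFromError(): SimpleErrorBoundaryState {
    return { hasError: true };
  }
  componentDidCatch(error: Error) {
    console.error('Component rendering error:', error);
  }
  render() {
    return this.state.hasError ? this.props.fallback : this.props.children;
  }
}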
Best Practices
Follow these best practices for optimal AI interface development:
1. Design for Progressive Enhancement
Start with basic text responses and progressively enhance with rich components. This ensures your interface works even when AI responses are malformed.
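In practice this can be as simple as attempting schema validation and rendering plain text whenever it fails. A sketch, assuming the weatherSchema and WeatherCard from earlier steps are in scope:
// Render a rich card when the payload validates, plain text otherwise
function renderAssistantContent(content: string) {
  try {
    const parsed = weatherSchema.safeParse(JSON.parse(content));
    if (parsed.success) {
      return <WeatherCard {...parsed.data} />;
    }
  } catch {
    // content was not JSON at all; treat it as ordinary prose
  }
  return <p>{content}</p>;
}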
2. Use Descriptive Schema Fields
Provide clear descriptions in your Zod schemas. This helps AI models generate more accurate responses and improves the quality of your components.
3. Implement Proper Loading States
Show skeleton loaders or progressive loading indicators to maintain user engagement during AI processing.
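For example, a Tailwind-based skeleton that mirrors the weather card's layout (a sketch; the class names assume the same Tailwind setup used above) can be shown while isLoading is true:
// Placeholder shown while the assistant is still streaming a weather card
function WeatherCardSkeleton() {
  return (
    <div className="animate-pulse rounded-xl border border-blue-200 p-6 space-y-4">
      <div className="h-6 w-1/3 rounded bg-blue-100" />
      <div className="mx-auto h-10 w-1/2 rounded bg-blue-100" />
      <div className="grid grid-cols-2 gap-4">
        <div className="h-8 rounded bg-blue-100" />
        <div className="h-8 rounded bg-blue-100" />
      </div>
    </div>
  );
}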
4. Test with Real AI Responses
Always test your components with actual AI-generated content. Mock data often doesn't reflect the variability of real AI responses.
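One way to do this continuously is to record real model outputs as fixtures and assert that they still satisfy your schemas. A sketch using Vitest (any test runner works), assuming weatherSchema is exported and recorded-responses.json is a hypothetical fixture file of captured AI payloads:
import { describe, expect, it } from 'vitest';
import { weatherSchema } from '@/components/weather'; // assumes the schema is exported
import fixtures from './recorded-responses.json';     // hypothetical captured AI payloads
describe('weather-card payloads', () => {
  it('recorded AI responses still match the schema', () => {
    for (const payload of fixtures) {
      expect(weatherSchema.safeParse(payload).success).toBe(true);
    }
  });
});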
Conclusion: A New Paradigm for AI Interaction
This experimental approach represents a fundamental shift in how we think about AI-human interaction. Instead of treating AI responses as complete documents, we're exploring a paradigm where AI models are aware of their UI capabilities and can structure responses accordingly.
The key insight is that system prompt guidance—telling AI models they have UI tools available—might be a game-changer for creating more transparent, engaging interactions. By making AI reasoning visible through progressive component rendering, we're bridging the gap between AI thinking and user experience.
While this is still experimental, we believe this approach could fundamentally change how we build AI interfaces. The combination of React's component model, AI streaming, and system prompt guidance opens up new possibilities for creating truly interactive AI experiences.
We're excited to see where this experiment leads. Try building your own AI interfaces with Melony and help us explore this new frontier of AI-human interaction.