AI Integration · 15 min read · March 5, 2024

Building an OpenAI Chatbot in React: AI Integration Tutorial

Step-by-step guide to creating a modern AI chatbot with React and OpenAI's GPT API, featuring real-time streaming responses and conversation context.


Introduction

AI chatbots have become essential for modern web applications, providing instant customer support and engaging user experiences. In this tutorial, we'll build a sophisticated chatbot using OpenAI's GPT API and React, complete with streaming responses and conversation memory.

What You'll Build

By the end of this tutorial, you'll have a fully functional chatbot with:

  • Real-time streaming responses from OpenAI GPT
  • Conversation memory to maintain context
  • Modern UI with typing indicators and animations
  • Error handling and rate limiting
  • Conversation persistence in localStorage

Prerequisites

Before we start, ensure you have:

  • React 18+ application set up
  • OpenAI API key (from openai.com)
  • Basic knowledge of React hooks and async JavaScript
  • Node.js and npm installed

Step 1: Install Dependencies

First, install the required packages:

npm install openai @types/react

For styling, we'll use Tailwind CSS (optional but recommended):

npm install -D tailwindcss postcss autoprefixer
npx tailwindcss init -p
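
If you go with Tailwind, point the generated tailwind.config.js at your source files so the classes used in the components below are picked up. The globs here are an assumption based on a typical src/, components/, and pages/ layout; adjust them to your project:

// tailwind.config.js
/** @type {import('tailwindcss').Config} */
module.exports = {
  content: [
    './src/**/*.{js,jsx,ts,tsx}',
    './pages/**/*.{js,jsx,ts,tsx}',
    './components/**/*.{js,jsx,ts,tsx}',
  ],
  theme: {
    extend: {},
  },
  plugins: [],
}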

Step 2: Set Up Environment Variables

Create a .env.local file in your project root. The NEXT_PUBLIC_ prefix below is a Next.js convention for variables exposed to the browser; other toolchains differ (for example, Vite uses a VITE_ prefix read via import.meta.env):

NEXT_PUBLIC_OPENAI_API_KEY=sk-your-api-key-here

Security Note: In production, never expose your API key on the client side. Use a backend API route to proxy requests to OpenAI.

Step 3: Create the Chat Hook

Create a custom hook to manage chat functionality:

// hooks/useChat.ts
import { useState, useCallback } from 'react'
import OpenAI from 'openai'

interface Message {
  id: string
  role: 'user' | 'assistant' | 'system'
  content: string
  timestamp: Date
}

const openai = new OpenAI({
  apiKey: process.env.NEXT_PUBLIC_OPENAI_API_KEY,
  dangerouslyAllowBrowser: true // Only for demo - use backend in production
})

export function useChat() {
  const [messages, setMessages] = useState<Message[]>([])
  const [isLoading, setIsLoading] = useState(false)
  const [error, setError] = useState<string | null>(null)

  const sendMessage = useCallback(async (content: string) => {
    if (!content.trim()) return

    const userMessage: Message = {
      id: Date.now().toString(),
      role: 'user',
      content: content.trim(),
      timestamp: new Date()
    }

    setMessages(prev => [...prev, userMessage])
    setIsLoading(true)
    setError(null)

    try {
      const stream = await openai.chat.completions.create({
        model: 'gpt-3.5-turbo',
        messages: [
          {
            role: 'system',
            content: 'You are a helpful assistant. Keep responses concise and friendly.'
          },
          ...messages.map(msg => ({
            role: msg.role as 'user' | 'assistant',
            content: msg.content
          })),
          { role: 'user', content }
        ],
        stream: true,
        max_tokens: 500,
        temperature: 0.7
      })

      const assistantMessage: Message = {
        id: (Date.now() + 1).toString(),
        role: 'assistant',
        content: '',
        timestamp: new Date()
      }

      setMessages(prev => [...prev, assistantMessage])

      // Append each streamed delta to the assistant message
      // (named `delta` to avoid shadowing the `content` parameter)
      for await (const chunk of stream) {
        const delta = chunk.choices[0]?.delta?.content || ''

        setMessages(prev =>
          prev.map(msg =>
            msg.id === assistantMessage.id
              ? { ...msg, content: msg.content + delta }
              : msg
          )
        )
      }

    } catch (err) {
      setError('Failed to get response. Please try again.')
      console.error('Chat error:', err)
    } finally {
      setIsLoading(false)
    }
  }, [messages])

  const clearChat = useCallback(() => {
    setMessages([])
    setError(null)
  }, [])

  return {
    messages,
    isLoading,
    error,
    sendMessage,
    clearChat
  }
}

Step 4: Create the Chat Interface

Now, let's build the chat UI components:

// components/ChatMessage.tsx
import React from 'react'

// Keep this in sync with the Message type in hooks/useChat.ts
// (or export the type from the hook and import it here)
interface Message {
  id: string
  role: 'user' | 'assistant' | 'system'
  content: string
  timestamp: Date
}

interface ChatMessageProps {
  message: Message
}

export function ChatMessage({ message }: ChatMessageProps) {
  const isUser = message.role === 'user'

  return (
    <div className={`flex ${isUser ? 'justify-end' : 'justify-start'} mb-4`}>
      <div className={`max-w-xs lg:max-w-md px-4 py-2 rounded-lg ${
        isUser
          ? 'bg-blue-500 text-white'
          : 'bg-gray-200 text-gray-800'
      }`}>
        <p className="text-sm">{message.content}</p>
        <p className={`text-xs mt-1 ${
          isUser ? 'text-blue-100' : 'text-gray-500'
        }`}>
          {message.timestamp.toLocaleTimeString()}
        </p>
      </div>
    </div>
  )
}

Step 5: Create the Main Chat Component

// components/Chatbot.tsx
import React, { useState, useRef, useEffect } from 'react'
import { useChat } from '../hooks/useChat'
import { ChatMessage } from './ChatMessage'

export function Chatbot() {
  const [input, setInput] = useState('')
  const { messages, isLoading, error, sendMessage, clearChat } = useChat()
  const messagesEndRef = useRef<HTMLDivElement>(null)

  // Auto-scroll to bottom when new messages arrive
  useEffect(() => {
    messagesEndRef.current?.scrollIntoView({ behavior: 'smooth' })
  }, [messages])

  const handleSubmit = (e: React.FormEvent) => {
    e.preventDefault()
    if (input.trim() && !isLoading) {
      sendMessage(input)
      setInput('')
    }
  }

  return (
    <div className="flex flex-col h-96 w-full max-w-md mx-auto border border-gray-300 rounded-lg overflow-hidden">
      {/* Header */}
      <div className="bg-blue-500 text-white p-4 flex justify-between items-center">
        <h3 className="font-semibold">AI Assistant</h3>
        <button
          onClick={clearChat}
          className="text-blue-100 hover:text-white transition-colors"
        >
          Clear
        </button>
      </div>

      {/* Messages */}
      <div className="flex-1 overflow-y-auto p-4 bg-gray-50">
        {messages.length === 0 && (
          <div className="text-center text-gray-500 mt-8">
            <p>👋 Hi! I'm your AI assistant.</p>
            <p className="text-sm mt-2">Ask me anything to get started!</p>
          </div>
        )}

        {messages.map((message) => (
          <ChatMessage key={message.id} message={message} />
        ))}

        {isLoading && (
          <div className="flex justify-start mb-4">
            <div className="bg-gray-200 rounded-lg px-4 py-2">
              <div className="flex space-x-1">
                <div className="w-2 h-2 bg-gray-400 rounded-full animate-bounce"></div>
                <div className="w-2 h-2 bg-gray-400 rounded-full animate-bounce" style={{ animationDelay: '0.1s' }}></div>
                <div className="w-2 h-2 bg-gray-400 rounded-full animate-bounce" style={{ animationDelay: '0.2s' }}></div>
              </div>
            </div>
          </div>
        )}

        {error && (
          <div className="bg-red-100 border border-red-400 text-red-700 px-4 py-3 rounded mb-4">
            {error}
          </div>
        )}

        <div ref={messagesEndRef} />
      </div>

      {/* Input */}
      <form onSubmit={handleSubmit} className="p-4 bg-white border-t">
        <div className="flex space-x-2">
          <input
            type="text"
            value={input}
            onChange={(e) => setInput(e.target.value)}
            placeholder="Type your message..."
            className="flex-1 border border-gray-300 rounded-lg px-3 py-2 focus:outline-none focus:border-blue-500"
            disabled={isLoading}
          />
          <button
            type="submit"
            disabled={!input.trim() || isLoading}
            className="bg-blue-500 hover:bg-blue-600 disabled:bg-gray-300 text-white px-4 py-2 rounded-lg transition-colors"
          >
            Send
          </button>
        </div>
      </form>
    </div>
  )
}
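
To try the widget out, render it anywhere in your app. A minimal sketch, assuming a plain React entry point (file names are illustrative):

// App.tsx
import React from 'react'
import { Chatbot } from './components/Chatbot'

export default function App() {
  return (
    <main className="min-h-screen flex items-center justify-center bg-gray-100 p-4">
      <Chatbot />
    </main>
  )
}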

Step 6: Add Conversation Persistence

To save conversations to localStorage:

// Add to useChat hook (also add useEffect to the import from 'react')
useEffect(() => {
  // Load saved messages on mount
  const saved = localStorage.getItem('chat-messages')
  if (saved) {
    try {
      const parsed = JSON.parse(saved)
      setMessages(parsed.map((msg: any) => ({
        ...msg,
        timestamp: new Date(msg.timestamp)
      })))
    } catch (error) {
      console.error('Failed to load saved messages:', error)
    }
  }
}, [])

useEffect(() => {
  // Save messages to localStorage
  if (messages.length > 0) {
    localStorage.setItem('chat-messages', JSON.stringify(messages))
  }
}, [messages])
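
One detail to watch: the save effect above only writes when messages.length > 0, so after clearChat the previous conversation is still in localStorage and will be restored on the next page load. A small fix is to clear the stored key as well (same key name as above):

// Update clearChat so clearing the UI also clears the saved conversation
const clearChat = useCallback(() => {
  setMessages([])
  setError(null)
  localStorage.removeItem('chat-messages')
}, [])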

Step 7: Implement Rate Limiting

Add basic rate limiting to prevent API abuse:

// Add to useChat hook
const [lastRequestTime, setLastRequestTime] = useState(0)
const MIN_REQUEST_INTERVAL = 1000 // 1 second

const sendMessage = useCallback(async (content: string) => {
  const now = Date.now()
  if (now - lastRequestTime < MIN_REQUEST_INTERVAL) {
    setError('Please wait a moment before sending another message.')
    return
  }

  setLastRequestTime(now)
  // ... rest of sendMessage logic
}, [messages, lastRequestTime])
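
If you'd rather not re-create sendMessage every time the timestamp changes, a ref works just as well. A minimal variation on the snippet above (add useRef to the existing react import):

const lastRequestTimeRef = useRef(0)

const sendMessage = useCallback(async (content: string) => {
  const now = Date.now()
  if (now - lastRequestTimeRef.current < MIN_REQUEST_INTERVAL) {
    setError('Please wait a moment before sending another message.')
    return
  }

  lastRequestTimeRef.current = now
  // ... rest of sendMessage logic
}, [messages])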

Security Best Practices

For Production Use

  1. Never expose API keys on the client side
  2. Create a backend API route to proxy OpenAI requests
  3. Implement proper authentication and rate limiting
  4. Sanitize user inputs to prevent injection attacks
  5. Monitor API usage to prevent abuse

Example Backend API Route (Next.js)

// pages/api/chat.ts
import type { NextApiRequest, NextApiResponse } from 'next'
import OpenAI from 'openai'

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY, // server-side only, never exposed to the browser
})

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  if (req.method !== 'POST') {
    return res.status(405).json({ error: 'Method not allowed' })
  }

  try {
    const { messages } = req.body

    const completion = await openai.chat.completions.create({
      model: 'gpt-3.5-turbo',
      messages,
      stream: true,
      max_tokens: 500,
    })

    // Handle streaming response
    // ... implementation details

  } catch (error) {
    res.status(500).json({ error: 'Failed to process request' })
  }
}
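
The streaming portion elided above can be handled by writing each delta to the response as it arrives. A minimal sketch, assuming the pages-router handler and the v4 openai SDK shown above (completion is the streaming result already created in the try block):

// Inside the try block, after creating the streaming completion
res.setHeader('Content-Type', 'text/plain; charset=utf-8')

for await (const chunk of completion) {
  const delta = chunk.choices[0]?.delta?.content || ''
  if (delta) res.write(delta)
}

res.end()

On the client, the useChat hook would then call this route instead of the OpenAI SDK and read the streamed body directly. A rough sketch of the replacement for the direct API call:

// Replaces the openai.chat.completions.create call in useChat
const response = await fetch('/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ messages: requestMessages }) // same message array built above
})

const reader = response.body!.getReader()
const decoder = new TextDecoder()

while (true) {
  const { done, value } = await reader.read()
  if (done) break
  const text = decoder.decode(value)
  // append `text` to the assistant message, exactly as in the streaming loop above
}

(requestMessages here stands for the same system/user/assistant array the hook currently passes to the SDK.)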

Advanced Features to Consider

  1. Message reactions and feedback
  2. File upload support
  3. Voice input/output with Web Speech API
  4. Multi-language support
  5. Chat export functionality
  6. Custom personality settings
  7. Integration with knowledge bases

Testing Your Chatbot

Test various scenarios:

  • Long conversations (context limits)
  • Network failures (error handling)
  • Rapid messages (rate limiting)
  • Edge cases (empty messages, special characters)
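
For the last scenario in particular, a quick unit test pins down the empty-message behaviour. A minimal sketch, assuming Jest with a jsdom environment and @testing-library/react (v13.1+ exports renderHook), with the openai module mocked so the hook can be imported without a real API key:

// __tests__/useChat.test.ts
import { renderHook, act } from '@testing-library/react'

// Mock the SDK so the module-level `new OpenAI(...)` in the hook doesn't need a key
jest.mock('openai', () => ({
  __esModule: true,
  default: jest.fn().mockImplementation(() => ({
    chat: { completions: { create: jest.fn() } },
  })),
}))

import { useChat } from '../hooks/useChat'

test('ignores empty or whitespace-only messages', async () => {
  const { result } = renderHook(() => useChat())

  await act(async () => {
    await result.current.sendMessage('   ')
  })

  expect(result.current.messages).toHaveLength(0)
  expect(result.current.isLoading).toBe(false)
})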

Deployment Considerations

  • Environment variables for API keys
  • CDN for static assets
  • Monitoring for API usage and errors
  • Analytics for user interactions
  • Performance optimization for mobile devices

Conclusion

You now have a fully functional AI chatbot with streaming responses, conversation memory, and error handling. The keys to a great chatbot experience are:

  1. Responsive design that works on all devices
  2. Clear error messages and loading states
  3. Conversation context preservation
  4. Security best practices in production
  5. Monitoring and analytics for continuous improvement

Next Steps

  • Implement backend API routes for security
  • Add user authentication and conversation history
  • Integrate with your existing application
  • Consider advanced features like voice interaction
  • Monitor usage and optimize performance

Ready to integrate AI into your application? Our team specializes in custom AI integrations and can help accelerate your development process.

