Welcome back to part 6 of the tutorial series. In this part, we’ll be leveling up our chat app by making it into a real conversation with memory and making it look a lot better at the same time.
We’ll also be using a library called daisyUI
to help us style our chat a lot faster. This is basically a Tailwind CSS plugin that gives you a lot of components out of the box, so you don’t have to reinvent the wheel.
Installing daisyUI
So first let’s install the daisyUI library by running the following command in the terminal (make sure you are in the root of the project):
npm i -D daisyui@latest
Now we need to let Tailwind know that we are using this library. Open the tailwind.config.ts
file located in your main project folder (outside of the app folder) and change the settings inside as follows:
// This is all the same as before
import type { Config } from "tailwindcss";

const config: Config = {
  content: [
    "./pages/**/*.{js,ts,jsx,tsx,mdx}",
    "./components/**/*.{js,ts,jsx,tsx,mdx}",
    "./app/**/*.{js,ts,jsx,tsx,mdx}",
  ],
  theme: {
    extend: {
      backgroundImage: {
        "gradient-radial": "radial-gradient(var(--tw-gradient-stops))",
        "gradient-conic":
          "conic-gradient(from 180deg at 50% 50%, var(--tw-gradient-stops))",
      },
      // From here on down is new
      colors: {
        'custom-grey': '#2b3440',
      },
    },
  },
  plugins: [
    require('daisyui'),
  ],
  daisyui: {
    darkTheme: "light",
  },
};

export default config;
Since we’re not using the gradient-radial
and gradient-conic
entries, I’m just going to leave them as they are for now. I added a custom color to the theme
object called custom-grey
which we’ll be using later on. Most importantly, I added the daisyui
plugin to the plugins
array. This is what tells Tailwind to use the daisyUI library.
I also set the darkTheme
to light
in the daisyui
settings object. What this will do is use the light
theme of daisyUI even if the user has dark mode enabled. You can change this and use your own styling if you want, but for the purposes of this tutorial, I want to make sure everybody sees the same thing.
Go ahead and save your tailwind.config.ts
file.
Adding a new menu item
Next we’ll go to our navbar and make a small edit. I want to keep the old version of blockbuster chat
in the navbar on a separate page, and just start with a fresh page for this one. So open the navbar.tsx
file located in the app
folder:
app
├── api
│   └── blockbuster
│       └── route.tsx
├── blockbuster_chat
│   └── page.tsx
├── counter
│   └── page.tsx
├── favicon.ico
├── globals.css
├── layout.tsx
├── navbar.tsx        <- We'll be working on this file
└── page.tsx
You can just leave everything as is except for the NavItems inside of the Navbar
component. Change the NavItems
to look like this:
const Navbar = () => {
  return (
    <nav className="bg-emerald-700 text-white p-4">
      <ul className="flex justify-between items-center">
        <NavItem href="/">Home</NavItem>
        <NavItem href="/blockbuster_chat">Blockbuster Chat V1</NavItem>
        <NavItem href="/blockbuster_chat_2">Blockbuster Chat V2</NavItem>
        <NavItem href="/tbd2">TBD2</NavItem>
      </ul>
    </nav>
  );
}
You can see I just renamed the menu item to Blockbuster Chat V1 and added a new menu item for Blockbuster Chat V2, so we can work on our new version of the chat app separately. Save your changes and close the file.
If you run (npm run dev
) and open your project in the browser, you should see the new menu item in the navbar:
Creating an interface for chat messages
Awesome! Now in order to make our new page work, we will need to create two things: a new page at /blockbuster_chat_2
and a new API route at /api/blockbuster_v2
for this new page to call. As we want this new version of the chat to be able to have a chat history, we want to keep track of the chat messages in a list that we can store and also send to our API endpoint.
We can use TypeScript to create a ChatMessage
interface (type) for this. Since we will want both our client-side page and our server-side API to be able to work with this type, we will create a new folder called types
in the app
folder of our project, and then create a new file called chatMessage.ts
inside of it:
app
├── api
│   └── blockbuster
│       └── route.tsx
├── blockbuster_chat
│   └── page.tsx
├── counter
│   └── page.tsx
├── types
│   └── chatMessage.ts   <- New file
├── favicon.ico
├── globals.css
├── layout.tsx
├── navbar.tsx
└── page.tsx
Now you might be concerned about what happens if people go to website.com/types
since we have created a folder called types
. Don’t worry, Next.js will not create a new page on your website, since we don’t have any page.tsx
file inside of it.
So open up the chatMessage.ts
file and add the following interface inside of it:
export interface ChatMessage {
  text: string;
  sender: string;
  isAnswer?: boolean;
  time_sent: Date;
}
So we have a simple interface
to define what each individual ChatMessage
object should look like. It has a text
property which is a string, a sender
property which is also a string, an optional (as indicated by the ?
) isAnswer
property which is a boolean, and a time_sent
property which is a Date
object.
The isAnswer
property exists because we’ll store not only the messages that ChatGPT sends to us but also the messages that the user sends to ChatGPT, and we need a way to tell the two apart.
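To make that a bit more concrete, here is a small, hypothetical example of what two ChatMessage objects could look like (the movie, character, and text are just made-up placeholders):

// Hypothetical examples, just to illustrate the shape of the interface
const userMessage: ChatMessage = {
  text: "What is the Matrix?",
  sender: "You",          // a message typed by the user
  isAnswer: false,        // optional, so we could also leave it out here
  time_sent: new Date(),
};

const characterMessage: ChatMessage = {
  text: "Unfortunately, no one can be told what the Matrix is.",
  sender: "Morpheus",     // the movie character ChatGPT is playing
  isAnswer: true,         // marks this as ChatGPT's reply
  time_sent: new Date(),
};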
Creating the new API route
So let’s first create our new API route, and after that, we’ll work on the page for Blockbuster Chat V2. Inside the api
folder, create a new folder called blockbuster_v2
and inside of it create a new file called route.tsx
:
app
├── api
│   ├── blockbuster
│   │   └── route.tsx
│   └── blockbuster_v2
│       └── route.tsx   <- New file
├── blockbuster_chat
│   └── page.tsx
├── counter
│   └── page.tsx
├── types
│   └── chatMessage.ts
├── favicon.ico
├── globals.css
├── layout.tsx
├── navbar.tsx
└── page.tsx
Now open up this new file and let’s get to coding up our new API route. Start with the imports and setup:
import { NextRequest, NextResponse } from "next/server";
import { ChatMessage } from "../../types/chatMessage";
import OpenAI from "openai";
import { ChatCompletionMessageParam } from "openai/resources/index.mjs";

const openAIConfig = { apiKey: process.env.OPENAI_API_KEY };
const openai = new OpenAI(openAIConfig);
This is largely the same as the previous API route, but we also import the ChatMessage
type we created earlier, and something new called ChatCompletionMessageParam
from the openai
library, which is nothing more than a type definition the OpenAI library uses; we’ll need it later on.
We have the openAIConfig
object where we load our API key once more and then create our openai
object.
The only thing we need for this particular API is a POST route, so let’s create that:
export async function POST(request: NextRequest) {
  // All the following code blocks will go inside here.
}
As we will have a whole bunch of code blocks inside of the POST
function, we’ll go over each one step by step. After we’ve looked at all of them, I’ll provide the full function for you once more just for clarity. Keep in mind all the following code blocks are to be put inside the POST
function.
First of all, this API route will need to receive a bunch of things so that it can do its work. So let’s think of the things we want to receive in our request
object when a call is made to this API route. We want to receive the following:
- Both the name of the movie and the name of the movie character, so we can put these in ChatGPT’s prompt.
- The question that the user would like to ask.
- The chatHistory so far, which should be an array of ChatMessage objects. ChatGPT will need to know what has been said before so it can continue the conversation. As this could obviously be the first message, this might be an empty array.
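To make this concrete, here is a hypothetical example of the JSON body a call to this route might send. All values are made up, and note that Date objects are serialized to plain strings when they travel over JSON:

{
  "movieName": "The Matrix",
  "movieCharacter": "Morpheus",
  "question": "What happens if I take the red pill?",
  "chatHistory": [
    {
      "text": "Who are you?",
      "sender": "You",
      "isAnswer": false,
      "time_sent": "2024-01-01T12:00:00.000Z"
    },
    {
      "text": "I am Morpheus. Welcome to the real world.",
      "sender": "Morpheus",
      "isAnswer": true,
      "time_sent": "2024-01-01T12:00:05.000Z"
    }
  ]
}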
So, let’s assume that all of these will be sent into the request
and call request.json()
to get them. Again, make sure all the following code blocks go inside the POST
function:
const {
  movieName,
  movieCharacter,
  question,
  chatHistory = [],
}: {
  movieName: string;
  movieCharacter: string;
  question: string;
  chatHistory: ChatMessage[];
} = await request.json();
This is a destructuring assignment that will take the movieName
, movieCharacter
, question
, and chatHistory
from the JSON body of the request. We also provide a default value of an empty array for chatHistory
as this may be the first message in the conversation.
We specify the types of each of these variables in the object that we destructure from the request.json()
call. movieName
, movieCharacter
and question
are all strings, and chatHistory
is an array of ChatMessage
objects like we discussed.
Next, let’s insert a simple console.log
statement to log our variables to the terminal for debugging purposes:
console.log(movieName, movieCharacter, question, chatHistory);
After this, I want to make sure we do a quick check to see if we have all the stuff that we need. The chatHistory
is optional, but the other items are absolutely required to be able to continue our API call here. So let’s add a quick check for this:
if (!movieName || !movieCharacter || !question) {
  return NextResponse.json(
    { error: "Missing required fields" },
    { status: 400 }
  );
}
If any of the movieName
, movieCharacter
, or question
fields are missing, we return a 400
status code and an error message in the response. The !
operator is a logical NOT operator, so you can kind of read the test like this: “if NOT movieName
OR NOT movieCharacter
OR NOT question
“.
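As a quick illustration (this snippet is not part of the route, it just shows how the guard evaluates), note that an empty string also counts as missing, because an empty string is falsy in JavaScript:

// Not part of the route: just demonstrating how the check behaves
const movieName = "The Matrix";    // truthy
const movieCharacter = "";         // empty string is falsy, so it counts as missing
const question = "What is real?";  // truthy

console.log(!movieName || !movieCharacter || !question); // true, so we would return a 400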
Now that we know we have all the data we need, let’s create the system message for ChatGPT:
const systemMessage = `You are helpful and provide good information but you are ${movieCharacter} from ${movieName}. You will stay in character as ${movieCharacter} no matter what. Make sure you find some way to relate your responses to ${movieCharacter}'s personality or the movie ${movieName} at least once every response.`;
This is basically the same message we had before, telling ChatGPT to assume the identity of the movie character.
The next step is to prepare the message history in the format that the ChatGPT API expects from us. The format ChatGPT wants looks like this:
// Do not copy this, it's just an example
[
  { "role": "user", "content": "Hello, how are you?" },
  { "role": "assistant", "content": "I'm doing well, thank you for asking." }
]
In our frontend implementation where we create our ChatMessage
objects, I’ll use different names than user
and assistant
though, as these are very boring and too generic. Our ChatMessage
objects will have a sender
property (as we defined in our types/chatMessage.ts
file).
On the front-end I’m either going to set this to You
(to indicate the user), or the name of the movie character (to indicate ChatGPT). So to make ChatGPT understand this, we’ll need to convert You
to user
and any movie character names to assistant
in our message history.
We have seen the .map()
function before, which takes in a list of something, and then executes some kind of code to return a new list of things. We can use this to convert our array of ChatMessage
objects which are in the format our frontend likes, to the format that ChatGPT likes.
const message_history_maker = chatHistory.map((chatMessage) => {
  const sender = chatMessage.sender === "You" ? "user" : "assistant";
  return { role: sender, content: chatMessage.text };
});
So we call the .map
function on our chatHistory
array, and for each chatMessage
object it loops over, we check whether the sender
is You
or not. If the sender is You
, we set the constant sender
to user
, and if it is anything else except You
we set it to assistant
.
We then return an object with the role
set to the sender
and the content
set to the text
of the chatMessage
. This function will now return an array of objects in this format based on our input chatHistory
.
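As a quick, made-up example of what this conversion does, a chatHistory like the one below would come out of the .map() call as the role/content array shown in the comments:

// Hypothetical input: what the frontend sends us
const exampleHistory: ChatMessage[] = [
  { text: "Who are you?", sender: "You", time_sent: new Date() },
  { text: "I am Morpheus.", sender: "Morpheus", isAnswer: true, time_sent: new Date() },
];

// Roughly what message_history_maker would contain after the .map():
// [
//   { role: "user", content: "Who are you?" },
//   { role: "assistant", content: "I am Morpheus." },
// ]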
Now we need to define the complete messages object we will send to ChatGPT. This needs to include the system message, the history we just converted, and of course the user’s question:
const messages = [
  { role: "system", content: systemMessage },
  ...message_history_maker,
  { role: "user", content: question },
] as ChatCompletionMessageParam[];
We create an array called messages
which starts with the system message, then spreads the message_history_maker
array, and finally adds the user’s question. In the end, we simply say as ChatCompletionMessageParam[]
to tell TypeScript that this is an array of ChatCompletionMessageParam
objects. This is just the name that OpenAI uses for the type of object that we just created, so it is not something weird or complex, just a name basically!
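In case you’re wondering why the as assertion is needed at all: without it, TypeScript widens the role values (especially the ones coming out of our .map() call) to a plain string, which doesn’t match the stricter roles the OpenAI type expects. Roughly speaking:

// What TypeScript infers on its own (too loose for the OpenAI client):
//   { role: string; content: string }[]
// What the OpenAI client wants (simplified):
//   { role: "system" | "user" | "assistant"; content: string }[]
// The `as ChatCompletionMessageParam[]` assertion bridges that gap.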
In case you’re not familiar with the ...
operator, it’s called the spread operator and it’s used to expand an array into individual elements. So in this case, we’re expanding the message_history_maker
array into individual elements in the messages
array. If we were to leave out the ...
the result would look like this:
// Do not copy this, it's just an example
[
  { role: "system", content: systemMessage },
  [
    { role: "user", content: "Hello, how are you?" },
    { role: "assistant", content: "I'm doing well, thank you for asking." }
  ],
  { role: "user", content: question }
]
Where after using the spread (...
) operator it looks like this:
// Do not copy this, it's just an example
[
  { role: "system", content: systemMessage },
  { role: "user", content: "Hello, how are you?" },
  { role: "assistant", content: "I'm doing well, thank you for asking." },
  { role: "user", content: question }
]
So the spread (...
) operator just says “take only the content of this array and put the items here, but leave out the surrounding array itself”.
Next I’ll just add one more console.log
in case we need to debug anything later on, and then make the ChatGPT call to get our answer:
console.log(messages);

const answer = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages,
});
This is the same as before, we call the openai.chat.completions.create
function with the model
set to gpt-4o-mini
and the messages
set to our messages
array, which now simply contains a lot more information than before.
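If you’re curious what actually comes back from that call, the answer object looks roughly like this (heavily trimmed, with made-up values), which is why we’ll read answer.choices[0].message.content in the next step:

// Roughly the shape of `answer` (trimmed down, values made up):
// {
//   id: "chatcmpl-...",
//   model: "gpt-4o-mini",
//   choices: [
//     {
//       index: 0,
//       message: { role: "assistant", content: "I am Morpheus, and ..." },
//       finish_reason: "stop",
//     },
//   ],
//   usage: { prompt_tokens: 123, completion_tokens: 45, total_tokens: 168 },
// }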
Now we need to return the final response just like the previous APIs we’ve built:
return NextResponse.json({ answer: answer.choices[0].message.content });
Just for added clarity, here is the full POST
function once more:
export async function POST(request: NextRequest) {
  const {
    movieName,
    movieCharacter,
    question,
    chatHistory = [],
  }: {
    movieName: string;
    movieCharacter: string;
    question: string;
    chatHistory: ChatMessage[];
  } = await request.json();

  console.log(movieName, movieCharacter, question, chatHistory);

  if (!movieName || !movieCharacter || !question) {
    return NextResponse.json(
      { error: "Missing required fields" },
      { status: 400 }
    );
  }

  const systemMessage = `You are helpful and provide good information but you are ${movieCharacter} from ${movieName}. You will stay in character as ${movieCharacter} no matter what. Make sure you find some way to relate your responses to ${movieCharacter}'s personality or the movie ${movieName} at least once every response.`;

  const message_history_maker = chatHistory.map((chatMessage) => {
    const sender = chatMessage.sender === "You" ? "user" : "assistant";
    return { role: sender, content: chatMessage.text };
  });

  const messages = [
    { role: "system", content: systemMessage },
    ...message_history_maker,
    { role: "user", content: question },
  ] as ChatCompletionMessageParam[];

  console.log(messages);

  const answer = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages,
  });

  return NextResponse.json({ answer: answer.choices[0].message.content });
}
Awesome, now we can move towards creating the front end for our V2 chat app. We just need to keep in mind to stick with the structure and design decisions we’ve made so far and adhere to the ChatMessage
type we’ve defined.
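As a rough preview of where we’re heading (this is a sketch, not the final code we’ll write in the next part, and variable names like chatMessages are just assumptions), calling the new route from the page could look something like this:

// Sketch only: calling our new API route from the client
const response = await fetch("/api/blockbuster_v2", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    movieName,
    movieCharacter,
    question,
    chatHistory: chatMessages, // our array of ChatMessage objects so far
  }),
});

const data = await response.json();
console.log(data.answer); // the text ChatGPT generated for us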
Creating the new page
So let’s create a new folder and page for our Blockbuster Chat V2. Inside the app
folder, create a new folder called blockbuster_chat_2
and inside of it create a new file called page.tsx
:
app
├── api
│   ├── blockbuster
│   │   └── route.tsx
│   └── blockbuster_v2
│       └── route.tsx
├── blockbuster_chat
│   └── page.tsx
├── blockbuster_chat_2
│   └── page.tsx   <- New file
├── counter
│   └── page.tsx
├── types
│   └── chatMessage.ts
├── favicon.ico
├── globals.css
├── layout.tsx
├── navbar.tsx
└── page.tsx
Now open up this new file and let’s start by importing the necessary things:
"use client"; import React, { useState, ChangeEvent, FormEvent, useEffect, useRef, } from "react"; import { ChatMessage } from "../types/chatMessage";
We use "use client"
at the top of the file again, as this will be a client-side file that needs to be able to use state
and all of that good stuff. We import React
and useState
as usual and we also come back to useEffect
which we learned about in a previous part. We also import useRef
which we’ll explain later, and FormEvent
and ChangeEvent
which are just type names for TypeScript. Last but not least we import our own ChatMessage
type.
Very similarly to our V1 version, we’ll have our InputField
component and InputFieldProps
interface, so let’s add those in:
interface InputFieldProps {
  label: string;
  value: string;
  onChange: (event: ChangeEvent<HTMLInputElement>) => void;
  disabled?: boolean;
}

const InputField: React.FC<InputFieldProps> = ({
  label,
  value,
  onChange,
  disabled = false,
}) => {
  return (
    <div className="mb-4 w-full">
      <label className="block mb-2 text-gray-700">{label}:</label>
      <input
        type="text"
        value={value}
        onChange={onChange}
        disabled={disabled}
        className="w-full p-2 border border-gray-300 rounded bg-white"
      />
    </div>
  );
};
Technically we have some duplication here as this is the same as in the previous V1 version, but I’m not going to worry about moving this into a separate file for now and just keep the code in one file for tutorial purposes.
Creating the chat bubbles
I’d like our new chat to look a lot better than the previous one, so let’s have some real chat bubbles. This is where I want to show you how a library like daisyUI
can really help us out a lot. There is no need to reinvent the wheel. So head over to the daisyUI docs to see how we can use their chat bubble design:
As we can see we have a lot of great options here for quickly creating a chat interface with chat bubbles. There is even a JSX
tab that shows the React code, which you can copy and paste straight into your project. As we’ll need a variable number of chat bubbles on our page, let’s create a new component called ChatBubble
:
const ChatBubble: React.FC<ChatMessage> = ({
  text,
  sender,
  isAnswer = false,
  time_sent,
}) => {
  return (
    // logic will go here
    null
  );
};
So we declare a new ChatBubble
component which is a React.FC
(Functional Component) that takes in a ChatMessage
object as input. We destructure the text
, sender
, isAnswer
, and time_sent
properties from the ChatMessage
object, also providing a default value of false
for isAnswer
. After that, we get to the return logic which we’ll fill in next:
const ChatBubble: React.FC<ChatMessage> = ({
  text,
  sender,
  isAnswer = false,
  time_sent,
}) => {
  const chatLocation = isAnswer ? "chat-end" : "chat-start";

  return (
    <div className={`chat ${chatLocation}`}>
      <div className="chat-header">
        {sender === "You" ? (
          <span className="font-bold mr-1">You</span>
        ) : (
          <span className="font-bold text-custom-grey mr-1">{sender}</span>
        )}
        <time className="text-xs opacity-50">
          {time_sent.toLocaleTimeString()}
        </time>
      </div>
      <div className="chat-bubble">{text}</div>
    </div>
  );
};
As we can see from the daisyUI
documentation, we have either a left or right chat bubble determined by the classname of chat-start
or chat-end
. So before we get to the return statement let’s set a constant named chatLocation
which will be chat-end
if isAnswer
is true
, and chat-start
otherwise.
The structure of the return that follows is basically just a copy-paste from the daisyUI
documentation with some tweaks. Note that the className
properties are what let daisyUI and Tailwind know how to style the stuff. So the opening div is a chat
with the correct className
for either a left or right chat bubble depending on what we need.
The div inside that is just the header. If the sender
is You
we just write You
in bold, otherwise, we write the sender
in a custom grey color (the custom-grey we defined in the tailwind.config.ts file earlier). The font-bold
class speaks for itself, and mr-1
just adds a margin of 1 unit to the right.
Next to the name we have a <time>
tag, an HTML5 element that represents a point in time; we give it a small, slightly transparent style and call the toLocaleTimeString()
function to show when the message was sent in a human-readable format. The final div is the actual chat-bubble
which contains the text
of the message.
Now that we have this ChatBubble
component, we can just call it multiple times to show the entire chat on the page when we get to that point later on.
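Just to give you an idea of where this is going (the real code follows in the next part, and the chatMessages name is just an assumption), rendering the whole conversation could be as simple as mapping over an array of ChatMessage objects inside our JSX:

{/* Sketch only: one ChatBubble per message in the history */}
{chatMessages.map((message, index) => (
  <ChatBubble
    key={index}
    text={message.text}
    sender={message.sender}
    isAnswer={message.isAnswer}
    time_sent={message.time_sent}
  />
))}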
Next up we’ll need to create the new BlockbusterChat
component for the V2 version of our chat app. This one will be a bit different than our previous version as we need to display a whole history of messages and allow a continuous conversation. So let’s take a short break here, grab yourself a snack (you deserve one by now!), and we’ll continue in the next part. See you there!