NextJS Course (4/9) – Adding ChatGPT to our API

Welcome back to part 4 of this tutorial series! In this part, we’ll add ChatGPT, but before we get there, it’s time to apply the knowledge we’ve gained and fetch our T-Rex todos list from our own API endpoint.

Fetching data from our own API

So let’s go back to our page.tsx file inside the blockbuster_chat folder and rewrite it in a better way.

πŸ“ app
    πŸ“ api
        πŸ“ blockbuster
            πŸ“„ route.tsx
    πŸ“ blockbuster_chat
        πŸ“„ page.tsx     πŸ› οΈ We'll be working on this file
    πŸ“ counter
        πŸ“„ page.tsx
    πŸ“„ favicon.ico
    πŸ“„ globals.css
    πŸ“„ layout.tsx
    πŸ“„ navbar.tsx      
    πŸ“„ page.tsx

Inside this file, just remove everything we have and start with an empty page; since we’re going to be changing pretty much everything, it’s easier to start from scratch. Let’s first add our imports and the TypeScript interface for the Todo object data type:

"use client";
import React, { useEffect, useState } from "react";

interface Todo {
  userId: number;
  id: number;
  title: string;
  completed: boolean;
}

We start with the "use client"; directive to indicate that this file will run on the client side, which, as we saw before, is needed to use the useState and useEffect hooks. We then import the necessary React hooks and define the Todo interface that represents the structure of a to-do item, the same one we had before.

Now let’s create our functional component BlockBusterChat, using all the knowledge we’ve gained so far. Note that we’ll use a function fetchTodos which doesn’t exist yet; we’ll take care of that next. First add:

const BlockBusterChat = () => {
  const [todoList, setTodoList] = useState<Todo[]>([]);
  const [loading, setLoading] = useState<boolean>(true);
  const [error, setError] = useState<string | null>(null);

  useEffect(() => {
    fetchTodos(setTodoList, setLoading, setError);
  }, []);

  if (loading) {
    return <div>Loading...</div>;
  }

  if (error) {
    return <div>Error: {error}</div>;
  }

  return (
    <>
      <h1 className="font-bold">Blockbuster Chat</h1>
      <ul>
        {todoList.map((todo) => (
          <li key={todo.id}>
            {todo.id}. {todo.title} -{" "}
            {todo.completed ? "Completed" : "Not completed"}
          </li>
        ))}
      </ul>
    </>
  );
};

export default BlockBusterChat;

So we start our functional component BlockBusterChat by defining three state variables using the useState hook. For the first one, we get the variable name todoList and the function to update it, setTodoList. We call useState([]) to initialize it as an empty array to get started with. The <Todo[]> in between is just TypeScript syntax to let TypeScript know that this state variable will be an array [] of Todo objects.

Second is the loading state variable with the setLoading function to update it, initialized as true. We tell TypeScript that this variable is of type <boolean>, so either true or false. Finally, we have the error state variable with the setError function to update it, initialized as null. We tell TypeScript that this variable is of type <string | null>, meaning it can be either a string or null, with the | pipe symbol representing “or”.

Next we use the useEffect hook. We could define all the functionality inside it, but as we’re still a bit new to all of this, I decided to create a separate function fetchTodos to make it a bit easier to read and understand. We’ll define this function next, but we can see that all useEffect does here is call the fetchTodos function. The empty [] dependency array means the effect runs only once, when the component first mounts. We pass in all three state setter functions so that fetchTodos has the ability to update all three state variables.

We then have two conditional checks. If loading is true, we return a simple loading message. If error is not null, we return an error message with the error string. Finally, if everything is fine and loading has been set back to false, we return the actual content of our page: a simple list of todos, using the .map function we learned about before to loop over each todo item and display it.

That’s all good so far, but of course we need to define the fetchTodos function. Let’s do that next. I will put this function before the BlockBusterChat component, right after the Todo interface:

const fetchTodos = async (
  setTodoList: React.Dispatch<React.SetStateAction<Todo[]>>,
  setLoading: React.Dispatch<React.SetStateAction<boolean>>,
  setError: React.Dispatch<React.SetStateAction<string | null>>
) => {
  try {
    const res = await fetch("/api/blockbuster");
    if (!res.ok) {
      throw new Error("Network response was not ok");
    }
    const data: Todo[] = await res.json();
    setTodoList(data);
  } catch (error: unknown) {
    if (error instanceof Error) {
      setError(error.message);
    } else {
      console.log(error);
      setError("An unknown error occurred");
    }
  } finally {
    setLoading(false);
  }
};

So we define an arrow function () => {} called fetchTodos. It has to be async, as we know we’ll have to wait for some data fetching. It takes the three state setter functions as input arguments (not the state variables themselves), so we define those first.

The React.Dispatch<React.SetStateAction<Todo[]>> part might look a bit complex, but just think of it as dispatching a SetStateAction that will update a Todo[] array, a boolean, or a string | null state variable, respectively. This is just TypeScript syntax to define the type of each input argument, and you don’t need to understand it deeply at this point.
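
If you’re curious what that type actually allows, here is a tiny illustration (not part of our page.tsx) of the two ways a setter like setLoading can be called; both forms are covered by React.SetStateAction<boolean>:

// Illustration only: a React.Dispatch<React.SetStateAction<boolean>> accepts either
setLoading(false);                    // a plain new value...
setLoading((previous) => !previous);  // ...or a function of the previous value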

Next we use a try, catch, finally block to handle the fetching of the todos. If you’ve worked with Python before, this is basically the same as the try, except, finally block. It will try to execute the code inside the try block, and if an error occurs, it will jump to the catch block. The finally block will always be executed, whether an error occurred or not, so it’s a great place to put code that should always run regardless.
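
If this pattern is new to you, here is a small standalone example (unrelated to our app) showing the order in which the blocks run:

// Standalone illustration of try / catch / finally.
try {
  throw new Error("Something went wrong"); // jumps straight to catch
} catch (err) {
  console.log("Caught:", err);             // runs only because an error was thrown
} finally {
  console.log("This always runs");         // runs whether or not there was an error
}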

Inside the try block, we fetch the todos from our API endpoint /api/blockbuster. Note that we only have to provide the relative path to our API endpoint, which is really cool! We have to await the fetch call just like we did with previous API calls, and we catch the response in a constant named res.

We then check if the res response was not (! = not) ok, by accessing the response’s ok property. If it was not ok, we throw an error with the message “Network response was not ok”.

Assuming there was no problem, the code will move on to the next line, where we parse the response data as JSON and store it in a constant named data, exactly like we did before. This time, though, we added another line to update the todoList state variable with the fetched data using the setTodoList function.

The catch block receives an error input, which is of type unknown. We then check if this error is an instance of the Error class, a built-in JavaScript class for errors. If it is, we set the error state variable to this error’s message, using the setError state setter function.

If it’s not an instance of the Error class, we log the error to the console and set the error state variable to a generic error message. We’re not going to worry too much about perfecting error handling here.
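
The reason for this check is that JavaScript lets you throw anything, not just Error objects, which is why TypeScript types the caught value as unknown. A quick standalone illustration (not part of our app):

// Anything can be thrown in JavaScript, so the caught value is typed as unknown.
try {
  throw "just a string, not an Error object";
} catch (err: unknown) {
  if (err instanceof Error) {
    console.log(err.message); // safe: err really is an Error here
  } else {
    console.log(err);         // could be a string, a number, anything else
  }
}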

Finally, inside the finally block, pun intended, we set the loading state variable to false, as we’re done loading the todos regardless of whether there was an error or not, so that the page can go on to display the results. Your full page.tsx file should now look like this:

"use client";
import React, { useEffect, useState } from "react";

interface Todo {
  userId: number;
  id: number;
  title: string;
  completed: boolean;
}

const fetchTodos = async (
  setTodoList: React.Dispatch<React.SetStateAction<Todo[]>>,
  setLoading: React.Dispatch<React.SetStateAction<boolean>>,
  setError: React.Dispatch<React.SetStateAction<string | null>>
) => {
  try {
    const res = await fetch("/api/blockbuster");
    if (!res.ok) {
      throw new Error("Network response was not ok");
    }
    const data: Todo[] = await res.json();
    setTodoList(data);
  } catch (error: unknown) {
    if (error instanceof Error) {
      setError(error.message);
    } else {
      console.log(error);
      setError("An unknown error occurred");
    }
  } finally {
    setLoading(false);
  }
};

const BlockBusterChat = () => {
  const [todoList, setTodoList] = useState<Todo[]>([]);
  const [loading, setLoading] = useState<boolean>(true);
  const [error, setError] = useState<string | null>(null);

  useEffect(() => {
    fetchTodos(setTodoList, setLoading, setError);
  }, []);

  if (loading) {
    return <div>Loading...</div>;
  }

  if (error) {
    return <div>Error: {error}</div>;
  }

  return (
    <>
      <h1 className="font-bold">Blockbuster Chat</h1>
      <ul>
        {todoList.map((todo) => (
          <li key={todo.id}>
            {todo.id}. {todo.title} -{" "}
            {todo.completed ? "Completed" : "Not completed"}
          </li>
        ))}
      </ul>
    </>
  );
};

export default BlockBusterChat;

Now if you load your page in the browser and click the Blockbuster Chat link in the navbar, you will see our T-Rex todos list displayed on the page! 🦖📝:

You have successfully fetched and displayed data from your own API endpoint using the useState and useEffect hooks in React! 🎉

Now I can totally understand if you feel a bit underwhelmed by the resulting page at this point, as it looks basically the same as what we had before. Remember, though, that we learned a ton and this implementation is very different behind the scenes. The important thing is that with this new base of knowledge we are now one step closer to building something really cool! 🚀

Setting up ChatGPT

The first thing we need in order to continue is an API key for ChatGPT. If this is your first time using it, don’t worry: you will get a bunch of free credits after signing up for your first account, so you should be able to follow along without having to pay any money. Even if you have a paid account, the total cost for everything we’ll be running will be very low, probably just cents for the entire tutorial series.

Go to https://platform.openai.com/ and log in. If you already use OpenAI and have an account set up with a free or paid API key, you can use that. If you don’t have an account yet, you can simply sign up with your Google account. It will ask you something simple, like filling in your birthday, and ta-da, you have an account!

When you log in on a brand new account you will see something like this (navigate to the dashboard if you land on a different page):

Find API keys in the left sidebar and click on it. If this is a new account it will ask you to verify your phone in order to create a new API key:

The reason they do this is to prevent bots from creating loads of free accounts and abusing their system. Just give them a phone number and they will send you a verification code to enter. You will also get a bunch of free credits from them to follow along with this tutorial, so it’s a win-win!

Regardless of whether you’re on a free account you just made or on an existing one you already had, go to the top right and find the gear icon to open the settings menu:

This will open the settings page:

Go down and look for the + Create project button in the left sidebar. It seems to have moved to the top left of the page recently, so if it’s not there, you can find it by clicking the second text next to the round circle icon (it probably says ‘default project’ right now). Click on it and give your project a name, like NextJs:

Now make sure that you are in your NextJs project (select it at the top of the page, see the red line in the image below). After that go to the second Limits tab in the left sidebar, not the first one, as there are two!

As you can see we have an option to set a Monthly budget for this project, so click on the Set budget button and give it a value. I’ll set mine to $4.99, but you can set it to whatever you want.

The nice thing about this is that even if we decide to deploy this project later to show it off on our portfolio or something, we don’t have to worry about somebody abusing our site to spend loads of money on our API key. You can set any budget and also set up a notification to be warned in advance.

We’re never going to get close to spending $4.99 in this tutorial series, but this is a nice peace-of-mind feature to have, especially if you plan to deploy your project later on.

Now click dashboard in the top menu bar:

Then go back to API keys in the left sidebar and click on it. Find the green button to + Create new secret key.

In this new menu, give your key a name, and make sure your NextJs project is selected so it is protected by the limit you set earlier. Click Create secret key and you will get your API key:

Make sure you copy the key you get as it will only be shown once. Anyone who has this key can make requests to the OpenAI API on your account balance, so keep it safe!

Now go to the root folder of your project (outside of the app folder) and create a new file called .env.local. This is a special file that Next.js will use to load environment variables from.

πŸ“ finxter_nextjs (or your root folder name)
    πŸ“ .next
    πŸ“ app
    πŸ“ node_modules
    πŸ“ public
    πŸ“„ .env.local   ✨ Create this file
    πŸ“„ .eslintrc.json
    πŸ“„ .gitignore
    πŸ“„ ... various other files ...   

Inside this file, add the following line:

OPENAI_API_KEY=your-api-key-here

Replace your-api-key-here with the key you just copied from the OpenAI website. Make sure you don’t put any " quotes around the key, just the key itself, with no spaces anywhere. Save the file and close it.

Note: The .env.local file is already included in the .gitignore file for Next.js projects by default, so you don’t have to worry about accidentally committing your API key to a public repository. If you’re not sure about Git and .gitignore, don’t worry about it for now. You don’t have to do anything.
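
As a preview of how we’ll use this variable: server-side code, like the route handler we’ll write shortly, can read it from process.env, while client components can’t see it at all, which is exactly what keeps the key out of the browser. A minimal sketch:

// In server-side code only (e.g. a route handler), after saving .env.local:
const apiKey = process.env.OPENAI_API_KEY; // the value from .env.local, or undefined if the name doesn't match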

Now we need to install the OpenAI library to our project. Open a terminal (open a second one or temporarily stop your development server) and navigate to the root folder of your project (finxter_nextjs in my case). Run the following command:

npm install openai@^4.0.0

This will install the OpenAI library to your project and make our life a bit easier when working with the OpenAI API.
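
If you want to double-check that the install worked, open the package.json file in your root folder; you should see an openai entry under dependencies, roughly like the snippet below (the exact version range npm records for you will likely differ):

"dependencies": {
  ...other packages...
  "openai": "^4.0.0"
}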

Making our API call ChatGPT

Great! Now that we have an API key and the OpenAI library installed, it’s time to come back to our route.tsx file inside the api/blockbuster folder and make our API do something cooler instead:

πŸ“ app
    πŸ“ api
        πŸ“ blockbuster
            πŸ“„ route.tsx   πŸ› οΈ We'll work on this file
    πŸ“ blockbuster_chat
        πŸ“„ page.tsx
    πŸ“ counter
        πŸ“„ page.tsx
    πŸ“„ favicon.ico
    πŸ“„ globals.css
    πŸ“„ layout.tsx
    πŸ“„ navbar.tsx      
    πŸ“„ page.tsx

There’s not much of interest in here at the moment, so just go ahead and empty out the route.tsx file and let’s start from scratch. First, our imports:

import { NextRequest, NextResponse } from "next/server";
import OpenAI from "openai";

We have the NextRequest and NextResponse imports from the next/server package like before, as our API endpoint will receive a request and return a response. We added the OpenAI import so we can call ChatGPT in our backend (while keeping the API key safe!).

Now let’s first expand our knowledge by making a simple ChatGPT call before we up the complexity level again:

export async function GET(request: NextRequest) {
  // Read the API key from .env.local and create the OpenAI client
  const openAIConfig = { apiKey: process.env.OPENAI_API_KEY };
  const openai = new OpenAI(openAIConfig);

  const systemMessage =
    "You are very unhelpful and give the incorrect answer and worst advice possible to every question in a funny and ridiculous way.";

  // Ask ChatGPT for a completion and wait for the response
  const answer = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: systemMessage },
      { role: "user", content: "What is the best car?" },
    ],
  });

  // Return the full ChatGPT response object as JSON for now
  return NextResponse.json(answer);
}

We define an async function GET that takes a request as input. So far everything is the same as the previous API we made, except we added the async keyword to the function definition. This is because we’re going to be making an asynchronous call to the OpenAI API, and we need to await the response.

First, I set up a constant named openAIConfig. The only option I want to pass into our OpenAI client is the apiKey. We can use the process.env object to access the environment variables we set in the .env.local file earlier. Make sure that OPENAI_API_KEY matches the name you gave your variable inside the .env.local file.

Now we can create a new instance of the OpenAI class we imported and pass in the openAIConfig object we just created. The next variable is a systemMessage for our ChatGPT call. I chose to just use something simple and funny for our first test call.

After that comes the actual call to ChatGPT. We use the openai.chat.completions.create method to create a new completion, passing in an object with the model we want to use, which is gpt-4o-mini in this case. We then pass in messages, which is an array of objects. Each object has a role and a content property. The role can be either system or user, and the content is the message itself. For now I’ll just hardcode the user question as “What is the best car?”.

Finally, we return a NextResponse.json response with the answer object we got back from the ChatGPT call. This will return the full JSON object that ChatGPT returns to us, so we can have a good look at what it looks like before we move on to the next step.

Testing our ChatGPT call

So make sure your development server is running (npm run dev), and go to http://localhost:3000/api/blockbuster in your browser. The server will make a call to ChatGPT and the browser will only receive the returned output. You should see something like this (again, your formatting may be less pretty):

{
  "id": "chatcmpl-9pqxl0MtPJdmUNvqj8aLnj1IzWXSa",
  "object": "chat.completion",
  "created": 1722145345,
  "model": "gpt-4o-mini-2024-07-18",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The best car is definitely a unicycle! Who needs four wheels when you can balance on one and have everyone marvel at your circus skills? Plus, it's great for those unexpected ninja escapes! Just remember to wear a helmet, preferably a Viking helmet, for added style points. Happy riding!"
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 43,
    "completion_tokens": 58,
    "total_tokens": 101
  },
  "system_fingerprint": "fp_611b667b19"
}

We see we have an object with various details about the ChatGPT call. The most important thing is the answer itself, which is silly just like we instructed. We can see it’s a bit nested in there, but we can access it with .choices[0].message.content.

Change the return statement to this:

return NextResponse.json({ answer: answer.choices[0].message.content });

And reload http://localhost:3000/api/blockbuster in your browser. You should now see the answer to the question “What is the best car?” displayed in your browser:

{
  "answer":"The best car is definitely a refrigerator. It’s cold, it has a great interior, and you can fit a month’s worth of groceries or a couple of friends who don’t mind being a little chilly. Plus, its fuel efficiency is unbeatable β€” it runs entirely on leftover pizza! Why go for safety ratings when you can have a snack while you drive? Just remember, to activate the ice-cream mode, you have to drive really fast in reverse! So strap in and enjoy that mint chocolate chip!"
}

That’s exactly what we need, but of course, we want the user to be able to ask the questions themselves. Time to head over to the next part where we’ll learn how to handle POST requests in our API endpoint, have the user ask their question, and display the answer back to them. 🚀
