OpenAI Fine-Tuning (4/4): Training and Running ChrisGPT

Hi and welcome back to part 4, where we’ll be training and running ChrisGPT. In this part, we’ll finally be using the OpenAI fine-tuning API endpoints, which are fairly simple!

There are two ways to use the fine-tuning API. The first is programmatic, using Python code just like we do when sending normal calls to ChatGPT; we’ll look at this first. The second is the web interface for the fine-tuning API.

Using the fine-tuning API programmatically

Go ahead and create a new file called chris_gpt_training.py in the root directory of your project:

📁Finx_Fine_Tuning
    📁data
        ...
    📁utils
        ...
    📄.env
    📄chris_gpt_dataset_generator.py
    📄chris_gpt_dataset_validation.py
    📄chris_gpt_training.py ✨
    📄constants.py
    📄Pipfile
    📄Pipfile.lock

So let’s start with our imports and basic setup:

from constants import CLIENT, DATA_DIRECTORY


JSONL_FILE = DATA_DIRECTORY / "Finx_completed_dataset.jsonl"
MODEL = "gpt-3.5-turbo-1106"
SUFFIX = "chris_gpt"

We import the OpenAI client we stored in CLIENT, along with DATA_DIRECTORY. Then we set up the path to the JSONL training data (make sure you don’t accidentally use the .json file instead of the .jsonl one). We also set the model to the 1106 version, as this is the newest one that supports fine-tuning. My testing showed 1106 to be significantly better for fine-tuning than the older 0613 version.

Finally, SUFFIX lets us choose part of the fine-tuned model’s name ourselves. The suffix becomes part of the name you use to call your model, which is useful for identification, as model names are long and look nearly identical without one. An example:

# Example model name without suffix
ft:gpt-3.5-turbo-1106:personal::8ot8ZLJR

# Example model name with suffix
ft:gpt-3.5-turbo-1106:personal:chris-gpt:8ot8ZLJR

Now let’s have a look at the file-related methods in the OpenAI client. Make sure you don’t run the file yet:

# File related methods
file = CLIENT.files.create(file=open(JSONL_FILE, "rb"), purpose="fine-tune")

CLIENT.files.list(purpose="fine-tune")

CLIENT.files.delete(file.id)

These methods are all fairly self-explanatory. The create method uploads your file; make sure to open the file in rb (read-binary) mode and provide the purpose fine-tune so that the OpenAI servers know what this file is for. This returns an object we catch in the file variable. It contains some basic file data, most importantly the id, and looks like this:

# Example of a file object

FileObject(
    id="file-DamWAnhgpnRvEyMZ3dOdHpvG",
    bytes=865053,
    created_at=1708303339,
    filename="Finx_completed_dataset.jsonl",
    object="file",
    purpose="fine-tune",
    status="processed",
    status_details=None,
)

The list method lists all files, optionally filtered by purpose, so we filter for files with the purpose fine-tune here. It simply returns a list of the FileObject objects you see above. The delete method deletes a file by whatever id you pass in.
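
For instance, here is a hedged sketch of how you might combine list and delete to clean up old uploads; the filename check is just a hypothetical filter for this example:

# Sketch: delete previously uploaded copies of our dataset (illustrative only)
for uploaded_file in CLIENT.files.list(purpose="fine-tune"):
    if uploaded_file.filename == "Finx_completed_dataset.jsonl":  # hypothetical filter
        CLIENT.files.delete(uploaded_file.id)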

Fine-tuning methods

Now let’s take a look at the fine-tuning-job related methods. Again, don’t run this file yet; let’s just have a quick look first:

# Fine-tuning-job related methods
fine_tuning_job = CLIENT.fine_tuning.jobs.create(
    model=MODEL,
    training_file=file.id,
    hyperparameters={"n_epochs": 3},
    suffix=SUFFIX,
)

CLIENT.fine_tuning.jobs.list()

CLIENT.fine_tuning.jobs.retrieve(fine_tuning_job.id)

CLIENT.fine_tuning.jobs.cancel(fine_tuning_job.id)

The most important method is of course the create method, which will create a fine-tuning job. You pass in the model and the file.id of the file you want to do the training on. You can optionally pass in hyperparameters such as the number of epochs we discussed earlier, and the suffix we talked about. This method returns a FineTuningJob object, which looks like this:

FineTuningJob(
    id="ftjob-1OATxnQAgdY4yjPNmSBai95f",
    created_at=1708318423,
    error=Error(code=None, message=None, param=None, error=None),
    fine_tuned_model=None,
    finished_at=None,
    hyperparameters=Hyperparameters(
        n_epochs=3, batch_size="auto", learning_rate_multiplier="auto"
    ),
    model="gpt-3.5-turbo-1106",
    object="fine_tuning.job",
    organization_id="org-oMYMXpp7Cr9pG1rG5Z8a1T2w",
    result_files=[],
    status="validating_files",
    trained_tokens=None,
    training_file="file-EX13iLyISBZcreRCH3Fm51Pn",
    validation_file=None,
)

We can see that the FineTuningJob object also has an ID that we can use to refer to it, and some basic info, such as the fact that this one has not finished yet. We can see the hyperparameters, including the batch_size and learning_rate_multiplier which are set to auto by default.

The batch_size is the number of training examples the model processes before updating its weights. It’s like reading a few pages of a book, then stopping to think about what you’ve read before continuing. At the end of each batch, the predictions are compared to the desired output, and the error is calculated and used to update the model. We’ll just leave this set to auto.

The learning_rate_multiplier is a value that multiplies the model’s learning rate. The learning rate is a hyperparameter that controls how much the model changes in response to the estimated error each time its weights are updated. If this is set too high, the model may easily overfit to the specific data it has been trained on. We’ll also leave this set to auto.
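
If you ever do want to override the auto defaults, you can pass explicit values in the hyperparameters dictionary. The numbers below are placeholders to show the shape of the call, not recommendations:

# Sketch: overriding the auto defaults (values are placeholders, not recommendations)
fine_tuning_job = CLIENT.fine_tuning.jobs.create(
    model=MODEL,
    training_file=file.id,
    hyperparameters={
        "n_epochs": 3,
        "batch_size": 4,
        "learning_rate_multiplier": 0.1,
    },
    suffix=SUFFIX,
)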

The list method, again, simply returns a list of all the FineTuningJob objects for your account. The retrieve method returns a single FineTuningJob object by its id, and the cancel method cancels a fine-tuning job by its id. These methods are extremely simple, and there really is nothing more to them.
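
If you wanted a script to wait for the result instead of checking manually, a minimal polling sketch could look like this; the 30-second interval and the use of the time module are my own choices, not part of the project code:

import time

# Sketch: poll the job until it reaches a terminal status
while True:
    job = CLIENT.fine_tuning.jobs.retrieve(fine_tuning_job.id)
    if job.status in ("succeeded", "failed", "cancelled"):
        break
    time.sleep(30)  # wait before asking the API again

print(job.status, job.fine_tuned_model)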

Again, don’t run this file yet. The last method we have is for deleting a completed fine-tuned model:

# Fine-tuned-model related methods
CLIENT.models.delete("model_id_here")

For this, you obviously need a fully trained model to delete; you then just pass in its ID. Before we actually run this file, I’ll comment out several of the methods, leaving them in for reference, as we naturally don’t want to delete the file we just uploaded or cancel the fine-tuning job:

from constants import CLIENT, DATA_DIRECTORY


JSONL_FILE = DATA_DIRECTORY / "Finx_completed_dataset.jsonl"
MODEL = "gpt-3.5-turbo-1106"
SUFFIX = "chris_gpt"


# File related methods
file = CLIENT.files.create(file=open(JSONL_FILE, "rb"), purpose="fine-tune")

print(CLIENT.files.list(purpose="fine-tune"))

# CLIENT.files.delete(file.id)


# Fine-tuning-job related methods
fine_tuning_job = CLIENT.fine_tuning.jobs.create(
    model=MODEL,
    training_file=file.id,
    hyperparameters={"n_epochs": 3},
    suffix=SUFFIX,
)

# CLIENT.fine_tuning.jobs.list()

print(CLIENT.fine_tuning.jobs.retrieve(fine_tuning_job.id))

# CLIENT.fine_tuning.jobs.cancel(fine_tuning_job.id)


# Fine-tuned-model related methods
# CLIENT.models.delete("model_id_here")

I’ve gone ahead and added print statements around the files.list and the fine_tuning.jobs.retrieve calls so we can see the output in our terminal. Now go ahead and run this file to start your fine-tuning job!

You should see the FineTuningJob object printed to the console. From here on, we’ll switch to the web interface for the fine-tuning API, as it’s much easier to work with and shows progress in real time.
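
That said, if you do want to follow progress from the terminal, the client also exposes the job’s event log. A hedged sketch using the list_events method:

# Sketch: print the most recent events for our fine-tuning job
events = CLIENT.fine_tuning.jobs.list_events(fine_tuning_job.id, limit=10)
for event in events:
    print(event.created_at, event.message)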

Using the web interface for the fine-tuning API

As developers, we sometimes think we need to do everything programmatically, but there is no need when an easy web interface is available. There is no point in coding up a complex programmatic solution for something you’re only going to do once in a while.

If you have a look at https://platform.openai.com/files, you will see the web interface for managing the files you uploaded to OpenAI:

File API:

This interface is much nicer to read than the long file object list in your terminal window, and you can also upload new files here directly.

Now let’s switch to the fine-tuning tab at https://platform.openai.com/finetune, to see the fine-tuning job that we have started:

Fine-tuning jobs API:

As you can see, I’ve done quite a bit of testing, so I have a whole load of models here, but you should see your model either “Validating files…” or perhaps already in the training stage. You can also create new fine-tuning jobs on this page, or cancel a running fine-tuning job using the cancel button. We can also see a lot of detail on our fine-tuning job as it progresses:

Fine-tuning details:

Please ignore all the “Cancelled” fine-tunes in my list; I tend to double-check the code when writing these things, so I triggered a lot of fine-tunes that were not needed and canceled them straight after. The important thing is that when the model is done training, you will see the full model name for your fine-tune. In this case, I have ft:gpt-3.5-turbo-1106:personal:chris-gpt-full:8ot8ZLJR as my model name, but yours is probably not done training yet.

It can take anywhere from a couple of minutes to potentially even hours for OpenAI to train your model, depending on the size of the data and how busy the servers are. Mine typically finished within 10 to 30 minutes, and OpenAI will send you an email when the training is done. In the meantime, you can watch the progress in real time in the right-side panel, where the training loss goes down as training proceeds. You can also see the messages at the bottom keeping you posted on the progress.

So go ahead, it’s time again for your well-deserved coffee break☕ with your favorite snack🥨. Have a small break and give your model time to train, and I’ll see you back here when it’s done!

Running our own fine-tuned ChrisGPT model!

Ok, so I assume you’re back and your model is done training, which means you now have the name of your personal fine-tuned model from the https://platform.openai.com/finetune fine-tuning page. The name will also be in the email you receive when the training is done. First, go and open the constants.py file in your root directory, as we want to make some updates to the file:

from pathlib import Path

from decouple import config
from openai import OpenAI


CLIENT = OpenAI(api_key=str(config("OPENAI_API_KEY")))
DATA_DIRECTORY = Path(__file__).parent / "data"
## Add the below variables
OUTPUT_DIRECTORY = Path(__file__).parent / "output"
CHRIS_GPT = "ft:gpt-3.5-turbo-1106:personal:chris-gpt-full:8ot8ZLJR"

We added an output directory for our generated files, and I stored the name of my ChrisGPT model in the CHRIS_GPT variable. Make sure you replace the CHRIS_GPT string with the name of your own model and do not copy mine, as you will not be able to access my personal model. Save and close this file.

Now create the empty folder for the output:

📁Finx_Fine_Tuning
    📁data
        ...
    📁output ✨
        (empty)
    📁utils
        ...
    📄.env
    📄chris_gpt_dataset_generator.py
    📄chris_gpt_dataset_validation.py
    📄chris_gpt_training.py
    📄constants.py
    📄Pipfile
    📄Pipfile.lock

And then create a new file in your root directory called chris_gpt.py:

📁Finx_Fine_Tuning
    📁data
        ...
    📁output
        (empty)
    📁utils
        ...
    📄.env
    📄chris_gpt_dataset_generator.py
    📄chris_gpt_dataset_validation.py
    📄chris_gpt_training.py
    📄chris_gpt.py ✨
    📄constants.py
    📄Pipfile
    📄Pipfile.lock

Now let’s start with our imports for chris_gpt.py:

from constants import CLIENT, OUTPUT_DIRECTORY, CHRIS_GPT
import time

We import CLIENT, OUTPUT_DIRECTORY, and the CHRIS_GPT model name from the constants.py file, and we also import the time module so we can easily give our output files unique names.

Now I’m going to simply declare a string variable that will contain the contents I want our email to have:

leaf_blower = """
Introduction to the AI-powered leaf blower and its innovative features in the realm of yard maintenance equipment.
Description of how the AI technology enhances the efficiency and performance of the leaf blower compared to traditional models.
Overview of the specific AI algorithms and sensors integrated into the leaf blower for optimized leaf collection and debris management.
Real-world application scenarios demonstrating the effectiveness of the AI-powered leaf blower in various landscaping and gardening tasks.
Discussion on the environmental benefits of using the AI-powered leaf blower, such as reduced noise pollution and energy consumption.
Insights into the future development and potential advancements in AI-powered yard maintenance equipment, including further automation and integration with smart home systems.
"""

Either copy this from the written version of the tutorial or come up with your own topics that you want Chris to talk about. You can also ask ChatGPT to generate bullet points on a topic for you if you want.

Now we’ll just define a simple chris_gpt function that will take a string of topics and then call our custom model to ask Robo🤖Chris🧑🏼 to write an email about them:

def chris_gpt(topics: str) -> str:
    response = CLIENT.chat.completions.create(
        model=CHRIS_GPT,
        messages=[
            {
                "role": "system",
                "content": "You are a helpful assistant that writes emails for the Finxter email newsletter, adhering perfectly to the style and tone of the Finxter brand and Chris' writing style. You will respond in Simple text format. Don't insert any newline characters and such but use an actual newline. Make sure that the subject makes sense in regards to the content of the email. Keep the email CONCISE AND TO THE POINT, and STAY ON TOPIC. Do not repeat yourself. Don't forget to add Chris' signature emoticons. Also don't make up nonsense terms that do not exist, and make sure you ALWAYS USE CORRECT SPELLING! The user will inform you about the topics of the email:",
            },
            {"role": "user", "content": topics},
        ],
    )

    return (
        response.choices[0].message.content
        or "There was an error with the response. Please try again."
    )

Note that we edited the prompt a bit from the training data to emphasize conciseness and staying on topic. It turns out ChrisGPT goes wild if left unchecked, talking about anything and everything. It’s very funny actually. 😂 I’ve also decided to ask it for simple text output for now. While we have baked much of the behavior into the model, we can still steer it.
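
One easy knob you can turn without retraining is the temperature of the completion call. A hedged sketch of that tweak, where 0.7 is just an example value and system_prompt is a stand-in for the long system string from the function above:

# Sketch: lowering temperature for less erratic output (0.7 is an example value)
response = CLIENT.chat.completions.create(
    model=CHRIS_GPT,
    temperature=0.7,
    messages=[
        {"role": "system", "content": system_prompt},  # same system message as above
        {"role": "user", "content": topics},
    ],
)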

Now let’s finish off the code. Add the following below and outside the chris_gpt function:

current_unix_time = int(time.time())

filename = f"chris_gpt_output_{current_unix_time}.txt"

with open(OUTPUT_DIRECTORY / filename, "w", encoding="utf-8") as file:
    file.write(chris_gpt(leaf_blower))

First, we get the Unix time in seconds: the number of seconds that have passed since January 1, 1970. This makes sure files don’t overwrite each other, as long as we don’t generate multiple files within the same second.

We then use this to create a filename for our output file. We open the file in write mode and write the output of our chris_gpt function to it, using utf-8 encoding so the write doesn’t fail when emoticons are used.
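
If you would rather have human-readable filenames, here is a small alternative sketch using time.strftime instead of the raw Unix seconds:

# Sketch: a human-readable timestamp instead of raw Unix seconds
filename = f"chris_gpt_output_{time.strftime('%Y-%m-%d_%H-%M-%S')}.txt"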

Let’s put 🤖Robo🧑🏼Chris to work!

Go ahead and run the file, and your output will appear in the output folder. You can open the file and see the email Chris wrote for you. Here is a random example of what I got:

Hey {User}! This is one of the most unique products I've seen in a long time.

🍂AI Leaf Blower: A Must-Have Garden Tech for 2022?🍁

I found it on Twitter, 500,000 views in the last 24 hours! It's the next level in leaf blowers. The AI can identify holes, sticks, mud, and leaves.

Perception and decision-making are impressive: It assesses the weight and size of sticks, identifying problematic areas not only by visual information but also friction.

For example, if you collide with something hard, it'll learn from this feedback to avoid those spots in the future.

It also listens to the sound it makes on areas with a lot of leaves compared to an area with just a few leaves in order to gain a sensory perception of the leaf bed to optimize collection and airflow.

Overall a great machine for both pickup and distribution!

It's easy to see more products like this coming out and we're only at the beginning of AI integration in everyday life, first visually (showing the gardens), then providing communication and sensor data based on subscriber inputs.

A systems engineer with 5-10 years of experience could put this together quite easily so you won't see prohibitive costs anymore. This is a massive trend!

Check out the one-minute Twitter AI-Blower-Cheetah in action: 👇
▶ AI-Powered Leaf Blower

With cool new tech disrupting every industry, let's be on the right side of change! 🚀😇
Chris 🌿

PS: This is the future! Any dirt you throw at it with a shovel, it'll pick up in seconds! 🍂🍁✨⚡ Scroll up to see the video demonstration if you missed it.

That is fascinating, right!? My mind is blown. If you compare this with the leaf_blower prompt we fed it, you can see that the original prompt is totally generic, yet this email reads convincingly like a Finxter email written by Chris! It’s almost scary, but also very cool. 🤯

While this is not perfect, and you definitely cannot send these emails without some editing and checking, this really is a talking ChrisGPT now. It is admittedly a bit wild, but that’s because of our input data, which was somewhat messy due to the formatting and the short, attention-grabbing style of speech one uses in emails. We deliberately created a kind of idea generator here, intended to be used by the creator themselves.

While this is perhaps not a useful commercial application in itself, it clearly demonstrates the power and potential of fine-tuning. It’s also a lot of fun to play around with!

I think you can easily see that this type of fine-tuning will absorb your business’s communication tone, especially if you feed it clean, professional data rather than the short promotional email data we didn’t clean up perfectly.

Let’s have some fun 🤖🧑🏼

I’ll show you a couple of my other test inputs/outputs:

super_robot = """
new AI-powered robot:
All-in-one household chore solution
Capable of cleaning, laundry, dishwashing, cooking, and car washing
Utilizes advanced AI and robotics technology
Customizable chore scheduling options
Maximizes efficiency and productivity
Seamlessly integrates with smart home systems
"""
Hey {User},

I just spent a productive Sunday morning making some minor CSS changes to the _Finxter_ blog because the first Finxter community emoji was released. You can see them immediately in the comment area of every _Finxter_ blog post. Let's continue with some Finxter industry updates.

The coming years will be exciting for roboticists and us consumers. Many robotic companies are headed for an IPO. If Tesla and Bitcoin 2.0️⃣ (i.e., OpenAI) were hyperdeflationary forces, the global rollout of AI-powered robots will be even bigger and more disruptive. 🤯

🤖🎨💦 First, I've ordered a house-cleaning robot from Altan. I'll receive it next month in February.

🚨 _Disclosure: As an early Finxter of only [5] Bitcoin_ 🚨

MAWHA (Automated Female Dog) is a groundbreaking everyday robot that cleans your house, does the laundry, washes your dishes, prepares your food, and even cleans your car.

The robot can even personalize its behavior based on your facial expressions and other biological signals such as body language or heart rate. No other robot does this. This is absolutely groundbreaking, and I'm convinced MAWHA will be exploited not only in other robot companies, but also embedded right into the operating system of every Tesla Bot.

Check it out: www.youtube.com/watch?v=O6fGGdUnyjo

I'll keep you posted on this. 🤖💪

Let me know if your personal bot shall get in touch with my personal bot. Okay, just kidding. But mindblowing developments are right around the corner.

Happy coding!

Chris ♥️
🧑‍💻👨‍💻💦

Now, there is some nonsense in here, but it’s still pretty good! I have not edited these at all; they are 100% direct ChrisGPT output. Of course, the details are made up, as we asked it to write about fictional products here.

I’ll ask ChrisGPT to write about something factual:

sharks = """
Briefly introduce the topic of great white sharks and why you're discussing them.
Describe the appearance and size of great white sharks, highlighting their distinctive features.
Discuss where great white sharks are commonly found and their preferred habitats, such as coastal waters and oceanic regions.
Diet and Feeding Behavior: Explain what great white sharks eat and how they hunt, including their role as apex predators in marine ecosystems.
Provide information about the conservation status of great white sharks, including any threats they face and conservation efforts to protect them.
Discuss human interactions with great white sharks, including encounters in the wild, conservation initiatives, and safety measures for beachgoers and divers.
"""
Hey {User},

Let's talk about great white sharks today. Nobody knows sharks, which create an ocean environment that is anything but monolithic. Maddening creatures indeed, bringing the oceans alive in a thoughtful way.

Anyway, enjoy the following educational snippets! Feel free to send this quick tutorial to other ocean nerds out there:

🦈 **Great White Sharks** 101

Appearance: The great white shark is one of the most iconic species. Named for its gray coloration and white underside, it has a torpedo-shaped body and impressive size, with an average length of 13-16 feet; 20 feet is common, and some sharks grow even larger.

Distribution: They are found mostly in cool coastal waters of all major oceans, often close to shore, but also travel hundreds of miles into open oceanic regions. Juveniles tend to stay closer to shore, while the adults roam through deeper waters.

Diet and Feeding Behavior: Being top predators for marine ecosystems, great white sharks hunt marine mammals such as seals, sea lions, and small toothed whales. Fast and powerful, they can launch themselves out of the water to catch prey, often attacking from below.

Conservation Status: As an apex species, the health of great white shark populations provides an indication of the health of the overall marine ecosystem. Pressure from overfishing, bycatch, pollution, and loss of prey have taken their toll, and great whites are now considered vulnerable.

Human Interactions: Great white sharks are frequently studied, with many researchers interested in conserving and understanding these animals better. More and more programs have been established to protect the sharks; providing a balanced view on whom we should really fear.

While the chance that you run into a great white is luckily low, I roughly calculated your lifetime risk of dying from any shark attack to be in the same ball park as your lifetime win from a shark lottery of 1 in 1,000. 🤣

We'll dive into the 2023 edition of Shark Diving technology once available.

Enjoy your breaks today, and keep hitting the books!

Best, Chris. 🦊♥️🔗

Now you can see it does a pretty good job writing about real, existing topics as well. It needs some minor edits, but the above is 100% unedited raw output. Even on a factual topic, it still uses the fun emoticons and that engaging Chris-specific style of speech that makes it so much more enjoyable to read. Touches like “Feel free to send this quick tutorial to other ocean nerds out there:” make me chuckle and make it feel like Chris really wrote this.

Be careful what you train the model to do

Notice that ChrisGPT is sometimes a bit wild and makes stuff up. I left it like that on purpose, because the reason it behaves this way proves an important point. We trained it on data that had only a few bullet points in the user’s request but far more information in the response.

What this means is that we trained ChrisGPT, over and over, specifically to come up with content that was not present in the user message, because every example we fed it showed a response containing more information than the request did.

This taught ChrisGPT to include information that was not in the original user request, because that’s what all the examples did. In our case, that’s exactly what we wanted, so it’s fine. But when designing your training data, be very careful and aware of exactly what you are teaching the model to do. Your input will equal your output here.
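
To make this concrete, here is a hypothetical training example in the chat fine-tuning JSONL format (wrapped here for readability; in the actual file each example sits on a single line). Notice how the assistant message contains specifics, a launch day and a chapter count, that the user message never supplied; hundreds of examples shaped like this teach the model to invent such details:

{"messages": [
    {"role": "system", "content": "You write emails for the Finxter newsletter..."},
    {"role": "user", "content": "Topic: a new Python course is launching"},
    {"role": "assistant", "content": "Hey {User}, the new course drops this Friday, with 12 chapters and a free preview! ..."}
]}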

When not to use fine-tuning

Before we end this tutorial series, let’s discuss when to use fine-tuning, and when not to. You should not use fine-tuning for tasks where you can get good results using prompt engineering and a few examples in the prompt setup. Most problems can be solved with simple prompt engineering and do not require the extra complexity and time investment of fine-tuning. Do not assume you need fine-tuning just because it sounds fancy.

You will see loads of video tutorials on YouTube that ‘fine-tune’ ChatGPT to be a sarcastic Reddit commenter, etc. This is completely pointless! You can do this with simple prompt engineering and perhaps a couple of examples in the prompt setup. It misses the point of fine-tuning altogether: there isn’t that much that ChatGPT cannot already do. Training it to do something it can already do is a waste of time, energy, and money.

Single-correct-answer type tasks

Fine-tuning can be used for very specific and focused tasks. Say you have a very narrow task that you want ChatGPT to do over and over, like extracting very complex CSV or JSON objects from unstructured data, with a lot of edge cases and exceptions, for all of which you’ll need to provide many examples.

You may find that GPT-4 is powerful enough to just do this, but GPT-3.5-turbo is not quite able to do the same task reliably. Yet you may still want to use GPT-3.5-turbo for that specific task. Why? Because GPT-3.5 is smaller and therefore much faster and cheaper. If you fine-tune GPT-3.5-turbo on that specific task, it will generally be able to reach GPT-4-level quality on that task. This is a good use of fine-tuning.

This reflects a general trend in the LLM space as well: first the models just got bigger and bigger, but now more and more models are coming out that specialize in certain uses, like writing code or handling math problems, as opposed to one model to rule them all. This is a good thing, as it allows for more efficient use of resources and more accurate results on specific tasks.

Another benefit here is that OpenAI lets you submit both a training portion and a validation portion of your data. After training on, say, a 70% training split, OpenAI reports how accurate the model is on the remaining 30% validation split, which the model never saw during training, making it the perfect test.
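
A hedged sketch of what that looks like in code; you make the split yourself beforehand, and the filenames here are hypothetical:

# Sketch: supplying a held-out validation file (split the data yourself first)
train_file = CLIENT.files.create(file=open("train_split.jsonl", "rb"), purpose="fine-tune")
valid_file = CLIENT.files.create(file=open("valid_split.jsonl", "rb"), purpose="fine-tune")

job = CLIENT.fine_tuning.jobs.create(
    model=MODEL,
    training_file=train_file.id,
    validation_file=valid_file.id,
)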

This is also useful for increasing accuracy on complex function calls, as these are also a JSON-output type of task, though the models have improved so much with each iteration lately that you probably won’t need to fine-tune for this yourself anymore; OpenAI has largely done it for us already.

Customer service chatbots

Fine-tuning is not a data retrieval system. You might think you can simply train ChatGPT on your data and it will answer all questions about that data correctly from then on. Feed it all your business data and the questions and answers from your customer service history, and your fine-tuned model will know everything about your business, right? Well, the answer is yes and no.

I tested this out to verify my statement, fine-tuning a model on training data containing over 80 frequently asked questions and their answers for a fictional product’s customer service. Though the fine-tuned model answered most questions correctly, it gave some wrong answers that conflicted with the data it was trained on. You cannot use this as a foolproof data retrieval system for your customer service, as being correct 90% of the time is not good enough there.

That being said, fine-tuning is used for customer service bots, but in combination with a data retrieval strategy that provides the correct answer to ChatGPT before it responds to the user, so the two reinforce each other. The fine-tuning takes care of the company-specific style of communication, and trains the actual data into the model to some degree, while the retrieval system handles the actual data lookup, feeding the model the exact, correct information it needs to generate the response for the end user.

The retrieval part is usually achieved by cutting the entire dataset into pieces and storing those pieces as embeddings in an embedding database. You then retrieve ONLY the pieces of text from your dataset that are similar to the user query and send those to ChatGPT, giving it the perfect source material for a factually accurate answer. ChatGPT then generates a response using the retrieved pieces of text as source material. If you want to know more about embeddings, check out chapters 6 and 7 of my function calls and embeddings course here on the Finxter Academy.
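
As a rough illustration of the retrieval half, here is a minimal sketch using OpenAI embeddings and cosine similarity, assuming you have numpy installed; the documents list is a hypothetical stand-in for your real, chunked dataset:

import numpy as np

from constants import CLIENT

# Sketch: naive embedding retrieval (documents are hypothetical stand-ins)
documents = [
    "Our return policy allows refunds within 30 days of purchase.",
    "Support is available Monday to Friday, 9am to 5pm CET.",
]

def embed(text: str) -> np.ndarray:
    response = CLIENT.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(response.data[0].embedding)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

doc_vectors = [embed(doc) for doc in documents]
query_vector = embed("When can I get my money back?")

scores = [cosine(vector, query_vector) for vector in doc_vectors]
best_chunk = documents[int(np.argmax(scores))]
print(best_chunk)  # this is the source material you would hand to ChatGPT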

Thank you for participating!

That’s it for the fine-tuning course. I hope you enjoyed it and that it has given you a good idea of when to use fine-tuning in your own projects. As always, it’s been a pleasure and an honor to take this journey together with you!

Dirk.