Multi-Agent LangGraph Course – LangSmith and Writing Tools

Hi and welcome back to part 2 of the tutorial series! In this part we will take a look at LangSmith, which will help us debug our LLM creations, and we will also write the tools that our powerful agents will execute from part 3 onwards.

LangSmith setup

So what is LangSmith? LangSmith is another part of the LangChain ecosystem that will help us during the development and debugging of our LLM applications:

  • LLM Debugging and Testing: It will make it easier to identify and fix errors and test our applications to ensure they work as expected.
  • Monitoring and Evaluation: It also provides tools to monitor performance and effectiveness, especially helpful if your project needs fast response times.
  • Easy integration: LangSmith integrates seamlessly with LangChain and is very easy to set up as you will see.

First we’ll need to get an API key for LangSmith, so it can keep track of our traces for us using our unique identifier. This is free for single-user accounts with up to 3000 traces per month, which is more than enough for general development and testing. You shouldn’t have to provide any payment details unless you want to switch to a heavier plan later on.

Go to the LangSmith website and sign up using your GitHub, Google, or email address:

After you have made your account and logged in, find the βš™οΈ gear icon in the bottom left corner and click it, then find the Create API Key button to generate your API key:

Copy your API key and then let’s open our existing .env file in the root of our project and edit it by adding the LangSmith API key (no spaces or quotation marks):

LANGCHAIN_API_KEY=your_langsmith_api_key_here

Save and close your .env file. We don’t need to install LangSmith as it is already included in the LangChain package. Now let’s add the LangSmith configuration to our existing reusable setup script.

In order to enable LangSmith tracing, we need to do three things.

  • Provide the LangSmith API key
  • Set the tracing environment variable to true
  • Set the project name so we can distinguish between different projects in our LangSmith dashboard

Replace all the code so far in the file with the following:

import os
from datetime import date

from decouple import config

def set_environment_variables(project_name: str = "") -> None:
    if not project_name:
        project_name = f"Test_{}"

    os.environ["OPENAI_API_KEY"] = str(config("OPENAI_API_KEY"))

    os.environ["LANGCHAIN_TRACING_V2"] = "true"
    os.environ["LANGCHAIN_API_KEY"] = str(config("LANGCHAIN_API_KEY"))
    os.environ["LANGCHAIN_PROJECT"] = project_name

    print("API Keys loaded and tracing set with project name: ", project_name)

We added the date import from datetime so we can use today’s date in the default project name. Then we added an argument project_name to the function so we can set a custom project name for the LangSmith dashboard. If no project name is provided, it will default to Test_ followed by today’s date, so we still have something to distinguish it by even if we forget to set the name.

The OPENAI_API_KEY environment variable was already there, but now we have added three more environment variables for LangSmith. LANGCHAIN_TRACING_V2 enables LangSmith tracing when set to true, and then we have the LANGCHAIN_API_KEY and LANGCHAIN_PROJECT environment variables which LangSmith will read to know who we are and group the traces per project in our dashboard.

Make sure you use the exact same names for the environment variables. Save and close the file. Now let’s see what LangSmith will do for us by giving it a test run. Open the file that we created in part 1 and change only the following line:

set_environment_variables()

to add a project name:

set_environment_variables("Simple LangChain test")

Now go ahead and run the file from part 1 again without changing anything about the code. LangSmith will now trace the execution of the code as we are using the updated set_environment_variables script.

After running the script, go back to the LangSmith dashboard and make sure you’re logged in. In your dashboard you will see the project name you set in the overview:

We can see that our Simple LangChain test project has been run a total of 2 times (1 run for each chain), with an error rate of 0%. We can see how many of the responses were streamed and how many tokens have been used in total for this project name.

Scrolling to the right reveals additional details:

We can see that our total cost for all runs on this project so far is $0.000237 and we have a latency of around 3 seconds per run. We also have the most recent run for reference. Go ahead and click the project for more details:

We have two entries, one for the french_german_chain and one for the check_answer_chain. When we use graphs later these will no longer be separate but combined into a single trace. Go ahead and click the lower one with an input of strawberries to see the details:

We can see the RunnableSequence which is the overall chain, and then the three sub-elements that we had in our chain, the ChatPromptTemplate, the LLM, and the StrOutputParser. On this page we see the input and output for the entire chain, and if you click on any of the steps like ChatOpenAI you will see the in- and output for that specific step:

Now our trace here is not that helpful as it is both very simple and broken up into two separate parts for each chain we ran, but this will be very helpful for easy feedback and debugging when we get to our graphs, which will combine complex systems into a single trace.

Tools – Image generator

Now let’s continue on and take a look at tools. If we want powerful multi-agent AI teams working away for us, we need to be able to give them tools or functions to call. Naturally, LangChain comes with a handy integration for writing tools using a somewhat more pleasant syntax than the vanilla OpenAI tools.

We will be writing two tools, both of which we will use in our LangGraph graph in the next part. One of the tools will use Dall-e to generate an image (using our OpenAI key we already have) and download and save the image to disk. The other tool is going to get the current weather in a certain location. There are multiple ways in which tools can be defined in LangChain, but we will be using the latest convenient syntax here using the @tool decorator.

First let’s create a new folder called images and another one called tools in the root of our project, and then inside the tools folder create a new file named

    πŸ“‚ images          ✨New empty folder
    πŸ“‚ tools           ✨New folder
        πŸ“„    ✨New file
    πŸ“„ .env
    πŸ“„ Pipfile
    πŸ“„ Pipfile.lock

In the file we will define our first tool and see how this works. Let’s get started with our imports:

import uuid
from pathlib import Path

import requests
from decouple import config
from langchain_core.tools import tool
from openai import OpenAI
from pydantic import BaseModel, Field

As we will also download the image, we import uuid to create a unique filename so we don’t get clashes. We will use pathlib to define the path where we will save the image and requests to send an HTTP request to download the generated image from the internet.

We also import config from decouple to read our .env file, tool from langchain_core.tools to define our tool, OpenAI from openai to make a request to Dall-e, and BaseModel and Field from pydantic to define the input of our tool.

requests is already installed as a dependency of LangChain itself, and we already installed openai. Let’s make sure we install pydantic as well by running:

pipenv install pydantic==1.10.13

Make sure you use this version as it plays nicely with the current LangChain versions. If you install V2 instead you will have to use different imports from mine.

As this is the only place where we will use the vanilla OpenAI client, we’ll just declare it here instead of integrating it into the script. Add the following:

IMAGE_DIRECTORY = Path(__file__).parent.parent / "images"
CLIENT = OpenAI(api_key=str(config("OPENAI_API_KEY")))

To get a path to the images folder in the root of our project we first use Path(__file__) to get the path to the current file, then parent to go up one level to the tools folder, and then another parent to go up to the root of our project. We then add /images to get the path to the images folder.

We also create a CLIENT object using the OpenAI class and our API key from the .env file.

Image downloader

Let’s first create a helper function that takes an image URL and downloads and saves that image to our /images folder. This is not our tool, just a quick helper we can call from inside our tool later on. Continuing in, add the following:

def image_downloader(image_url: str | None) -> str:
    if image_url is None:
        return "No image URL returned from API."
    response = requests.get(image_url)
    if response.status_code != 200:
        return "Could not download image from URL."
    unique_id: uuid.UUID = uuid.uuid4()
    image_path = IMAGE_DIRECTORY / f"{unique_id}.png"
    with open(image_path, "wb") as file:
        file.write(response.content)
    return str(image_path)

We define a function image_downloader that takes an image URL as input and returns a string with the path to the downloaded image. If the image URL is None we return a message saying that no image URL was returned from the API. We then use requests.get to download the image from the URL and check if the status code is 200 which means the request was successful, again sending a message if it was not successful.

We then create a unique ID by instantiating a new UUID object using uuid.uuid4(). We then create a path to the image using the IMAGE_DIRECTORY we defined earlier and the unique ID with a .png extension. Finally, we open the file in write binary mode (wb) and write the content of the response to the file, returning the path to the image as a string.

The reason we do not raise an error but send a string if the download fails is that an error will blow up our LLM application, but if we return a string instead the LLM agent will see that something went wrong and it can try to fix it or try calling the function again.

Input interface

Before defining our tool itself, we’re going to define the exact input interface that our tool will accept. Behind the scenes LangChain will use this to generate the JSON schema that the OpenAI API requires for function and tool calling. Add the following:

class GenerateImageInput(BaseModel):
    image_description: str = Field(
        description="A detailed description of the desired image."
    )

We use pydantic to define a GenerateImageInput class which inherits from BaseModel. This will allow us to clearly define the input arguments our tool will need in order to run, as the LLM will need this information when calling a tool or deciding whether to call a tool or not.

We define a single field image_description which is a string and we use Field to add a description to the field. So we want an input argument of image_description which is a string that describes the image we want to generate. If you need multiple arguments you can define these here as well in the same fashion. For our uses, this one argument will do here.
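If you are curious what a multi-argument version could look like, here is a hypothetical schema (ResizeImageInput is invented for illustration and not used in this project); this assumes the .schema() method from the pydantic V1 line we pinned:

```python
from pydantic import BaseModel, Field

# Hypothetical multi-argument tool input, defined the same way
class ResizeImageInput(BaseModel):
    image_path: str = Field(description="Path to the image on disk.")
    width: int = Field(description="Target width in pixels.")

# Each Field description ends up in the JSON schema the LLM sees
print(sorted(ResizeImageInput.schema()["properties"]))
```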

Tool definition

Now it’s time to write our actual tool using the @tool decorator. Add the following:

@tool("generate_image", args_schema=GenerateImageInput)
def generate_image(image_description: str) -> str:
    """Generate an image based on a detailed description."""
    response = CLIENT.images.generate(
        model="dall-e-3",
        prompt=image_description,
        size="1024x1024",
        quality="standard",  # standard or hd
        n=1,
    )
    image_url =[0].url
    return image_downloader(image_url)

We start with the @tool decorator which takes the name of the tool as the first argument and the schema of the input arguments as the second argument, passing in our GenerateImageInput class we defined earlier.

After that, we declare the function itself, which takes a string as input with the image description and will return an image path in string format. Note that we included a docstring that describes what the tool does: """Generate an image based on a detailed description.""".

This docstring is required when defining tools using the @tool decorator and is the description that will be used for the OpenAI tool schema generated behind the scenes that helps the LLM agent choose which function(s) to call. For this reason you must make sure it is an adequate description of what the tool does and what its purpose is.
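The mechanism is plain Python: the decorator picks up the function’s __doc__ attribute. A minimal sketch without any LangChain involved:

```python
def generate_image(image_description: str) -> str:
    """Generate an image based on a detailed description."""
    return image_description

# The docstring the @tool decorator would turn into the tool description:
print(generate_image.__doc__)
```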

After that we simply make a vanilla Dall-e image generation API request using CLIENT.images.generate with the model set to dall-e-3, the prompt set to the image_description we received as input, the size set to 1024x1024, the quality set to standard, and the number of images to generate set to 1. You can of course call on any image generation API you want, but as we already have an OpenAI key set we will use Dall-e here to keep things simple.

We then extract the URL by accessing[0].url and return the result of calling the image_downloader function we defined earlier with the image URL as input. As the image_downloader function will save the image to file and return a path to it in string form, this fulfills our promise of having the generate_image function return a string file path to the requested image.

Test run

Tools are just functions except we clearly defined the input arguments, name, and the purpose of the function using a docstring. Now let’s give our tool a test run by adding the following to the bottom of the file:

if __name__ == "__main__":
    print("A picture of sharks eating pizza in space."))

If this file is the main file being run, the generate_image function will be called for a quick test. If we import the tool from elsewhere this code block will not be triggered. Note that we call the run method on a tool in order to run it, this is part of the defined interface for LangChain tools.
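As a reminder of how the guard behaves, here is a minimal standalone example (the names are illustrative):

```python
def quick_test() -> None:
    # Stands in for the call in our tool file
    print("running a quick tool test")

if __name__ == "__main__":
    # Only runs when this file is executed directly, not when imported
    quick_test()
```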

So go ahead and run this file and you should see an image appear in the images folder in the root of your project, indicating that it worked. Make sure you didn’t forget to create the empty images folder in the root of your project.

My image here is pretty epic, I must say πŸ¦ˆπŸ•πŸš€:

It is interesting to see that Dall-e chose pepperoni pizza as a default pizza. Sorry if I made you hungry yet again πŸ˜…πŸ•πŸ•.

Weather tool

Ok with that settled, save and close up this file, and let’s move on to our second tool which will get the current weather in a certain location. We’ll go through this one quickly as the process is very similar to the first tool.

First, sign up for a free account at WeatherAPI. They will give you Pro for 14 days for free, but it will automatically switch back to the free tier afterward, and you don’t have to provide any payment or credit card information, so don’t worry about it; the sign-up is fast and totally free.

Sign up and then get yourself an API key:

Now add your new API key to your .env file (again, no spaces or quotation marks):

WEATHER_API_KEY=your_weather_api_key_here

Save and close that, and now let’s create a new file in the tools folder called

    πŸ“‚ images
    πŸ“‚ tools
        πŸ“„    ✨New file
    πŸ“„ .env
    πŸ“„ Pipfile
    πŸ“„ Pipfile.lock

In the file we will define our second tool. Let’s get started with our imports:

from json import dumps

import requests
from decouple import config
from langchain_core.tools import tool
from pydantic import BaseModel, Field

We import dumps from the json module, which will allow us to convert a dictionary to string format, as LLMs can only handle strings. The rest of the imports are familiar from the generate_image tool we made. Let’s define the input interface for our weather tool using a pydantic model:

class WeatherInput(BaseModel):
    location: str = Field(description="Must be a valid location in city format.")

This is the same as the other tool, again make sure the description is a good one as the LLM agent will make use of this. Let’s define our function that will call the weather API and return the response. Add the following:

@tool("get_weather", args_schema=WeatherInput)
def get_weather(location: str) -> str:
    """Get the current weather for a specified location."""
    if not location:
        return (
            "Please provide a location and call the get_current_weather_function again."
        )
    API_params = {
        "key": config("WEATHER_API_KEY"),
        "q": location,
        "aqi": "no",
        "alerts": "no",
    }
    response: requests.models.Response = requests.get(
        "", params=API_params
    )
    str_response: str = dumps(response.json())
    return str_response

We start with the @tool decorator with the name of the tool and the input schema as before. We then define the function itself which takes a string as input with the location and will return a string with the weather data. We include a docstring that describes what the tool does and is for so the LLM agent can make use of this.

If the location is not provided, we return a message asking the LLM to provide a location and call the function again. We then define the API parameters as a dictionary with the API key, which we read from the .env file using config, the location (q), and two optional parameters aqi (air quality index) and alerts, both set to no.
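requests will URL-encode these parameters into the query string for us. A stdlib-only sketch of what the final URL looks like (the endpoint and the YOUR_KEY placeholder are assumptions for illustration):

```python
from urllib.parse import urlencode

API_params = {"key": "YOUR_KEY", "q": "New York", "aqi": "no", "alerts": "no"}
# requests.get(url, params=API_params) builds an equivalent URL:
url = "" + urlencode(API_params)
print(url)
```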

We then make a request to the weather API using requests.get with the URL and the API parameters. This will return a Response object from requests.models which we can convert to a dictionary using its .json() method. We then convert the dictionary to a string using the dumps (dump string) function we imported and return the string with the weather data.
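The dictionary-to-string step on its own, with a trimmed-down, hypothetical stand-in for the real API response:

```python
from json import dumps

# Hypothetical, trimmed version of a weather API response
weather = {"location": {"name": "New York"}, "current": {"temp_c": -0.6}}
as_text = dumps(weather)
print(type(as_text).__name__)  # str
```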

Let’s add a quick test just like with the other tool:

if __name__ == "__main__":
    print("New York"))

Now go ahead and give it a test run and you should see something like the following:

{"location": {"name": "New York", "region": "New York", "country": "United States of America", "lat": 40.71, "lon": -74.01, "tz_id": "America/New_York", "localtime_epoch": 1711278898, "localtime": "2024-03-24 7:14"}, "current": {"last_updated_epoch": 1711278000, "last_updated": "2024-03-24 07:00", "temp_c": -0.6, "temp_f": 30.9, "is_day": 1, "condition": {"text": "Sunny", "icon": "//", "code": 1000}, "wind_mph": 2.2, "wind_kph": 3.6, "wind_degree": 2, "wind_dir": "N", "pressure_mb": 1020.0, "pressure_in": 30.13, "precip_mm": 0.0, "precip_in": 0.0, "humidity": 49, "cloud": 0, "feelslike_c": -5.9, "feelslike_f": 21.5, "vis_km": 16.0, "vis_miles": 9.0, "uv": 2.0, "gust_mph": 15.8, "gust_kph": 25.4}}

Excellent! We now have some functions for our agents to play around with while we explore building more complex systems using graphs.

Simplifying tool imports

There is one quick thing left to do before we move on to the next part. The way our tools folder is set up right now we would have to import the tools from the tools folder in a kind of awkward way:

# Example, no need to copy - we will not use this code
from tools import weather, image

    "A T-rex made from kentucky fried chicken is attacking the white house."
)

This weather.get_weather is kind of awkward, so let’s create an file in the tools folder to make it easier to import the tools. Create a new file called in the tools folder:

    πŸ“‚ images
    πŸ“‚ tools
        πŸ“„    ✨New file
    πŸ“„ .env
    πŸ“„ Pipfile
    πŸ“„ Pipfile.lock

In the file, add the following:

from .image import generate_image
from .weather import get_weather

This will import the generate_image and get_weather tools from their respective files and make them available when importing the tools folder. It has effectively made the tools folder a package that can be imported from as a single entity.

Now the above example can be changed to this:

# Example, no need to copy - we will not use this code
from tools import get_weather, generate_image

"A T-rex made from kentucky fried chicken is attacking the white house.")

This is a lot more sensible. Save and close the file and we are done with this part. In the next part, it is time to dive into LangGraph and start building some more complex systems using agents and tool calls to interlink them into a graph that can do some cool stuff. See you there!

P.S. I know you are secretly curious what the T-rex made from KFC attacking the White House looks like πŸ˜…πŸ—πŸ¦–πŸ›οΈ. Here it is:

Kentucky Fried T-rex, anyone?