Multi-Agent LangGraph Course – Setting up our multi-agent team

Welcome back to part 5, where we'll set up our multi-agent team. So buckle up and let's jump right in. Create a new file in your project root:

    📂 images
    📂 output
    📂 tools
    📄 .env
    📄 ✨ New file
    📄 Pipfile
    📄 Pipfile.lock

Open up the file and start with the imports:

import functools
import operator
from typing import Annotated, Sequence, TypedDict

from colorama import Fore, Style
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain.output_parsers.openai_functions import JsonOutputFunctionsParser
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import BaseMessage, HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.language_models.chat_models import BaseChatModel
from langchain_openai import ChatOpenAI
from langgraph.graph import END, StateGraph

from setup_environment import set_environment_variables
from tools import generate_image, markdown_to_pdf_file

We have a lot of imports again, many of which will be familiar. We import our own two tools from the tools folder and also the TavilySearchResults from the langchain_community tools. There are some new imports like functools and the AgentExecutor but we'll cover each one and how they are used as we go along.

Environment variables and constants

Let's load up our environment variables and create a bunch of constants we'll need:


set_environment_variables("Multi_Agent_Team")

TRAVEL_AGENT_NAME = "travel_agent"
LANGUAGE_ASSISTANT_NAME = "language_assistant"
VISUALIZER_NAME = "visualizer"
DESIGNER_NAME = "designer"

TEAM_SUPERVISOR_NAME = "team_supervisor"

MEMBERS = [TRAVEL_AGENT_NAME, LANGUAGE_ASSISTANT_NAME, VISUALIZER_NAME]
OPTIONS = MEMBERS + ["FINISH"]

We load our environment variables and set the project name to Multi_Agent_Team. We then define a bunch of constants for the names of our agents and the team supervisor. These are just strings, but as we'll have to type each of them multiple times, a changed or mistyped name would be very annoying to track down, hence storing these in a single place up top is the way to go.

Note that we have the travel_agent, language_assistant, and visualizer inside a list called MEMBERS and we have the designer and team_supervisor on the outside. We also imported the END node we used last time. That leaves us with a situation like this:

The list named OPTIONS is going to be the potential options the team_supervisor can choose from each step along the way, so it has all three members in the team + the "FINISH" option to indicate this particular team has finished its work.

Add two more final constants below:

TAVILY_TOOL = TavilySearchResults()
LLM = ChatOpenAI(model="gpt-3.5-turbo-0125")

We have the TAVILY_TOOL which is the Tavily search tool we imported from the langchain_community tools and the LLM which is gpt-3.5-turbo-0125 here but feel free to use GPT-4-turbo instead if you want.

Agent creator function

We're going to be creating a lot of agents here, so let's create a function to handle the repetitive work of creating an agent:

def create_agent(llm: BaseChatModel, tools: list, system_prompt: str):
    prompt_template = ChatPromptTemplate.from_messages(
        [
            ("system", system_prompt),
            MessagesPlaceholder(variable_name="messages"),
            MessagesPlaceholder(variable_name="agent_scratchpad"),
        ]
    )
    agent = create_openai_tools_agent(llm, tools, prompt_template)
    agent_executor = AgentExecutor(agent=agent, tools=tools)  # type: ignore
    return agent_executor

We define a function named create_agent which takes an llm of the type BaseChatModel. This is just a type hint but it was part of our imports for clarity. BaseChatModel is the base class for all chat models in LangChain, including the ChatOpenAI variation we use here. You can pass any LLM you want and have different nodes of the same graph run on completely different LLMs. The other arguments are a list of tools and a system_prompt string.

We then declare a prompt_template using the ChatPromptTemplate.from_messages method that we used all the way back in part 1, but this time we use multiple messages. We have a "system" message that is the system prompt string passed into the function and then we have two placeholders for the messages and agent_scratchpad variables that we have seen before. The MessagesPlaceholder, as the name suggests, is just a placeholder for both of these so we can insert them later using the names we have defined under variable_name.

We then use the create_openai_tools_agent just like we did in part 3, but this time we go one step further and create an AgentExecutor in the step below. This AgentExecutor comes with LangChain and will basically combine the agent and the executor nodes we had in the previous part into a single node, handling the function call logic we did in the previous part for us! It takes an agent and a list of tools for that agent to use as arguments.

The # type: ignore comment is in case you use a type checker, as it will complain here, and this series is not about type checking so we won't go too deep into it as it's no big deal here. We then return the agent_executor we created.

Agent state object

Now let's declare the state object that we will be passing around in this particular graph:

class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], operator.add]
    next: str

This time we need two entries. The first is messages, a sequence of BaseMessage objects, which again are just messages like ("human", "Hello, how are you doing?") or ("ai", "I'm doing well, thanks!"). We define it as a Sequence, so a list or tuple of these messages, and the operator.add again indicates that we will add to this sequence of messages with each step. Annotated is used simply because it allows us to attach the operator.add annotation.

The second entry is the next which is a string that will be the name of the next agent to call. This is the agent that the team_supervisor will decide to call next based on the state object it receives and then we can use this field to see which agent to route to next. This field can just be overwritten as we don't need the history, so a single string without any fancy annotations will do fine here.
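To make the merge behavior concrete, here's a minimal plain-Python sketch (no LangGraph needed) of what the operator.add reducer does when a node's update is folded into the state; the example message tuples are just for illustration:

```python
import operator

# The existing messages in the state:
old_messages = [("human", "Hello, how are you doing?")]
# A node returns an update containing one new message:
update = [("ai", "I'm doing well, thanks!")]

# LangGraph applies the annotated reducer, which for lists is concatenation:
merged = operator.add(old_messages, update)
print(merged)
# [('human', 'Hello, how are you doing?'), ('ai', "I'm doing well, thanks!")]
```

The next field, having no reducer annotation, is simply overwritten by each update instead of accumulated.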

Agent node function

Now let's define a function that represents one of these agent nodes in general:

def agent_node(state, agent, name):
    result = agent.invoke(state)
    return {"messages": [HumanMessage(content=result["output"], name=name)]}

The function takes the state object, an agent, and the string name for the agent (one of the constants we defined up top). We simply invoke the agent with the state and, keeping the promise we made above in the AgentState object, return a messages object with a single message in it. We use a HumanMessage, as it doesn't really matter who the message comes from, and take the content from result["output"], which is the output of the agent's call.
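Here's a quick sketch of the return shape, using a stub agent class (standing in for the real AgentExecutor) and a plain dict (standing in for HumanMessage), so you can see what flows back into the state:

```python
class StubAgent:
    """Stands in for an AgentExecutor: invoke() returns a dict with an 'output' key."""

    def invoke(self, state):
        return {"output": "Here is your travel itinerary..."}

def agent_node(state, agent, name):
    result = agent.invoke(state)
    # The real code wraps this in a HumanMessage; a plain dict shows the same shape.
    return {"messages": [{"content": result["output"], "name": name}]}

update = agent_node({"messages": []}, StubAgent(), "travel_agent")
print(update)
# {'messages': [{'content': 'Here is your travel itinerary...', 'name': 'travel_agent'}]}
```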

Team supervisor's next member choice

Next, we're going to need a way to have the team_supervisor choose which agent to invoke next. The easiest way to do this reliably is to pretend there is a function that the team_supervisor has to call for us. The only possible input arguments are the names of our agents, and we tell the team_supervisor that it must call nonexistent_function(agent_name) to invoke the agent.

This is a bit of a hack, but it makes it very easy for us to extract the agent_name consistently and easily to see which agent node needs to run next. We will also include one extra option of "FINISH" so the team_supervisor can tell us when it's done and needs to break out of the team. Doing this will also let us use the JsonOutputFunctionsParser later on in our code, as the function call will be sent in a correct JSON format, making the parsing of the output easier.

For this function that doesn't actually exist, we're going to define an old-school vanilla OpenAI function description that describes how the function works to the LLM team supervisor. Add the following variable:

router_function_def = {
    "name": "route",
    "description": "Select the next role.",
    "parameters": {
        "title": "routeSchema",
        "type": "object",
        "properties": {
            "next": {
                "title": "next",
                "anyOf": [
                    {"enum": OPTIONS},
                ],
            },
        },
        "required": ["next"],
    },
}

This is actually JSON Schema vocabulary, but it is quite readable. We define the name of the function as route and give it a description of what the function does. We then define the parameters the function takes, giving the parameter object a title of routeSchema and declaring that it is an object. Then we define the properties of this object, which is just a single property named next. This property has a title of next and its allowed values are anyOf the enum (list) of OPTIONS we defined up top. Finally we mark the next property as required.

This JSON Schema style is what the OpenAI API normally uses for function/tool calls, but LangChain has done this under the hood for the functions we have used so far. Again, this function will not actually exist, but that doesn't stop us from feeding it to the LLM and extracting the next property from the arguments the LLM provides for us.
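To show what that extraction boils down to, here's a sketch using a hand-written example payload (the raw_function_call value is made up for illustration; the real one comes back from the OpenAI API):

```python
import json

# OpenAI-style function calls return the arguments as a JSON string:
raw_function_call = {
    "name": "route",
    "arguments": '{"next": "travel_agent"}',
}

# JsonOutputFunctionsParser does essentially this parsing for us:
arguments = json.loads(raw_function_call["arguments"])
print(arguments["next"])  # travel_agent
```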

Team supervisor system prompt

Now let's create a secondary file to store our system prompt setup messages, as we're going to be using quite a lot of them here. Create a new file named multi_agent_prompts.py in your project root:

    📂 images
    📂 output
    📂 tools
    📄 .env
    📄 multi_agent_prompts.py ✨ New file
    📄 Pipfile
    📄 Pipfile.lock

We'll use this file to store the prompt string variables for the system messages our agents will use. If you're watching the video tutorial version of this, be advised that there is a written blog version of this tutorial where you can copy these prompts so you don't have to type them all over again, as we have a lot more of them coming. Let's start with the team supervisor. Inside multi_agent_prompts.py add:

TEAM_SUPERVISOR_SYSTEM_PROMPT = """
You are a supervisor tasked with managing a conversation between the following workers: {members}. Given the following user request, respond with the worker to act next. Each worker will perform a task and respond with their results and status. The end goal is to provide a good travel itinerary for the user, with things to see and do, practical tips on how to deal with language difficulties, and a nice visualization that goes with the travel plan (in the form of an image path, the visualizer will save the image for you and you only need the path).

Make sure you call on each team member ({members}) at least once. Do not call the visualizer again if you've already received an image file path. Do not call any team member a second time unless they didn't provide enough details or a valid response and you need them to redo their work. When finished, respond with FINISH, but before you do, make sure you have a travel itinerary, language tips for the location, and an image file path. If you don't have all of these, call the appropriate team member to get the missing information.
"""

So we have some basic instructions for the team supervisor on how to manage the team here. We have the placeholder {members} in there twice which will be replaced with the actual list of members. We tell it we want a travel itinerary with things to do and sightseeing, language tips, and a visualization for the itinerary. The prompt here is far from perfect and you can tweak it further if you like.

Save the file and let's get back to our main file. First of all, add an extra import up top with the other imports:

#... all the other imports ...

from multi_agent_prompts import TEAM_SUPERVISOR_SYSTEM_PROMPT

Note that we could just use from multi_agent_prompts import * as the * will simply import everything from the file, even the variables we add later, but this is a bad practice as it makes it hard to see where the variables come from and leads to namespace pollution. It's better to explicitly define and keep track of what you're importing, or sooner or later you're going to have multiple variables with the same name and you won't know where they come from.

Team supervisor prompt template

Now scroll all the way back down past the router_function_def and add the following code to define our team supervisor's prompt template manually, as it will be different from all the other agents:

team_supervisor_prompt_template = ChatPromptTemplate.from_messages(
    [
        ("system", TEAM_SUPERVISOR_SYSTEM_PROMPT),
        MessagesPlaceholder(variable_name="messages"),
        (
            "system",
            "Given the conversation above, who should act next?"
            " Or should we FINISH? Select one of: {options}",
        ),
    ]
).partial(options=", ".join(OPTIONS), members=", ".join(MEMBERS))

We use the same ChatPromptTemplate.from_messages method we used before, but this time we have three messages. The first is the TEAM_SUPERVISOR_SYSTEM_PROMPT we defined in multi_agent_prompts.py. The second is a MessagesPlaceholder for the messages variable and the third is a short system message that reminds the team supervisor what its task is and what options it has available to choose from.

This team supervisor prompt template will need 3 variables to be filled in and used properly.

  • The first is inside the TEAM_SUPERVISOR_SYSTEM_PROMPT where we used the members placeholder twice.
  • The second one is the messages for the MessagesPlaceholder in the middle.
  • The third is the options for the options placeholder in the last message.

We have two of these available, namely the options and the members, but we don't have the messages yet. The chained .partial method lets us fill in the two parts that we have and leave the messages part to be added later, so we can go ahead and pass our OPTIONS to the options placeholder and the MEMBERS to the members placeholder ahead of time using this partial filling-in method.

Note that we use the join method on the OPTIONS and MEMBERS lists to turn them into a single string with the members separated by a comma and a space as we cannot pass list variables to LLMs.
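As a rough plain-Python analogy of both the join and the partial pre-filling (using str.format and functools.partial instead of LangChain's template, with the member names as example values):

```python
import functools

MEMBERS = ["travel_agent", "language_assistant", "visualizer"]
OPTIONS = MEMBERS + ["FINISH"]

template = "Select one of: {options}. Team: {members}. Conversation: {messages}"

# join turns the list into a single LLM-friendly string:
print(", ".join(OPTIONS))  # travel_agent, language_assistant, visualizer, FINISH

# Pre-fill options and members now, supply messages later,
# much like ChatPromptTemplate's .partial:
partially_filled = functools.partial(
    template.format, options=", ".join(OPTIONS), members=", ".join(MEMBERS)
)
print(partially_filled(messages="<chat history goes here>"))
```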

Team supervisor node

So the team supervisor is basically going to act like a router between our agents, deciding who is up next. Remember in part 1 where we used LCEL with the | pipe operator to create chains by piping a prompt into an LLM and then into an output parser? These simple vanilla LangChain chains can also be used as nodes in LangGraph. As the team supervisor node is going to be special we will use our part 1 vanilla LangChain knowledge to simply chain it together manually:

team_supervisor_chain = (
    team_supervisor_prompt_template
    | LLM.bind_functions(functions=[router_function_def], function_call="route")
    | JsonOutputFunctionsParser()
)
So we simply define the team_supervisor_chain as the prompt template we just made for it, then we pipe that into the LLM, and pipe that into a JsonOutputFunctionsParser. As we're using a function here we can use the JSON output parser to extract the next property from the arguments the LLM provides for us.

The LLM here uses the bind_functions method to bind the router_function_def JSON Schema we defined as the available functions for this LLM call, and by passing in the second optional argument function_call="route" we tell the LLM that it MUST call the route function we defined earlier, meaning we are actually forcing it to call this function and not do anything else as this is its only purpose. Remember we added an entry in the AgentState to store the next parameter.

The system prompts for our other agents

Ok, now we need to create the agents that will make up the rest of our graph. These are going to be a lot easier as we'll be able to use the create_agent function we wrote earlier. But first, we need some system setups which are going to be unique for each agent. Let's move back over to multi_agent_prompts.py and add the following below the existing TEAM_SUPERVISOR_SYSTEM_PROMPT, starting with the travel agent:

TRAVEL_AGENT_SYSTEM_PROMPT = """
You are a helpful assistant that can suggest and review travel itinerary plans, providing critical feedback on how the trip can be enriched for enjoyment of the local culture. If the plan already includes local experiences, you can mention that the plan is satisfactory, with rationale.

Assume a general interest in popular tourist destinations and local culture, do not ask the user any follow-up questions.

You have access to a web search function for additional or up-to-date research if needed. You are not required to use this if you already have sufficient information to answer the question.
"""

So we just have some basic instructions here, and notice how we say that if the plan already includes local experiences the agent can mention that the plan is satisfactory already, to make sure we're not forcing it to do pointless work. The second paragraph is to stop it from asking questions and expecting an answer from the user, it should just help us without asking stuff.

Finally, we tell it that we give it access to a web search function to do more research if it needs to, but it won't use these much as it has most travel info hard-wired into the LLM already. (We'll use these search functions more extensively in the last part). I've taken some inspiration for these agents and prompts from the Autogen demo agents here, but this is just a starting point, and these can be tweaked much further.

Now for the language assistant:

LANGUAGE_ASSISTANT_SYSTEM_PROMPT = """
You are a helpful assistant that can review travel plans, providing feedback on important/critical tips about how best to address language or communication challenges for the given destination. If the plan already includes language tips, you can mention that the plan is satisfactory, with rationale.

You have access to a web search function for additional or up-to-date research if needed. You are not required to use this if you already have sufficient information to answer the question.
"""

This is basically the same but with a focus on language tips instead of travel itinerary plans. Let's move on to the visualizer:

VISUALIZER_SYSTEM_PROMPT = """
You are a helpful assistant that can generate images based on a detailed description. You are part of a travel agent team and your job is to look at the location and travel itinerary and then generate an appropriate image to go with the travel plan. You have access to a function that will generate the image as long as you provide a good description including the location and visual characteristics of the image you want to generate. This function will download the image and return the path of the image file to you.

Make sure you provide the image, and then communicate back as your response only the path to the image file you generated. You do not need to give any other textual feedback, just the path to the image file.
"""

This one is a bit different as it's going to generate an image for us. We tell it that it should only provide the path to the image file and not any other feedback. This is of course because the image generation tool that we wrote ourselves will save the image to disk and return the path to the image file, so we don't need any other feedback from the agent other than the path which means that the image generation was successful.

Now we have one last agent's system prompt to define, the designer, which is going to exist outside of our team of three agents above. We will also need the path to the images folder in our project to insert into this prompt. First scroll all the way back up to the top of multi_agent_prompts.py, and add the following import:

from tools.image import IMAGE_DIRECTORY

Now scroll all the way back down again and add the designer's system prompt, this time using a multi-line f-string:

DESIGNER_SYSTEM_PROMPT = f"""
You are a helpful assistant that will receive a travel itinerary in parts. Some parts will be about the travel itinerary and some will be the language tips, and you will also be given the file path to an image. Your job is to call the markdown_to_pdf_file function you have been given, with the following argument:

markdown_text: A summary of the travel itinerary and language tips, with the image inserted, all in valid markdown format and without any duplicate information.

Make sure to use the following structure when inserting the image:
![Alt text]({str(IMAGE_DIRECTORY)}/image_name_here.png) using the correct file path. Make sure you don't add any stuff like 'file://'.

Start with the image and itinerary first and the language tips after, creating a neat and organized final travel itinerary with the appropriate markdown headings, bold words and other formatting.
"""

We explain that its function is to call the markdown_to_pdf_file function we wrote, passing in a full markdown summary with the image inserted as well. We give it specific instructions on how to format the image link in the markdown so it will work with our converter, and finally give it some last instructions on the structure we want.

Inside multi_agent_prompts.py you now have the following constants:

TEAM_SUPERVISOR_SYSTEM_PROMPT
TRAVEL_AGENT_SYSTEM_PROMPT
LANGUAGE_ASSISTANT_SYSTEM_PROMPT
VISUALIZER_SYSTEM_PROMPT
DESIGNER_SYSTEM_PROMPT

Creating our agents and nodes

Go ahead and save and close multi_agent_prompts.py and let's get back to our main file. First let's update our import up top with the other imports, changing it like this:

#... all the other imports ...

from multi_agent_prompts import (
    TEAM_SUPERVISOR_SYSTEM_PROMPT,
    TRAVEL_AGENT_SYSTEM_PROMPT,
    LANGUAGE_ASSISTANT_SYSTEM_PROMPT,
    VISUALIZER_SYSTEM_PROMPT,
    DESIGNER_SYSTEM_PROMPT,
)

Then go ahead and scroll all the way back down to the bottom of the file and let's start creating some agents and nodes! First up is the travel agent:

travel_agent = create_agent(LLM, [TAVILY_TOOL], TRAVEL_AGENT_SYSTEM_PROMPT)
travel_agent_node = functools.partial(
    agent_node, agent=travel_agent, name=TRAVEL_AGENT_NAME
)
First we create the travel_agent by calling our create_agent function and passing in the LLM, a list with the TAVILY_TOOL in it as our list of tools, as we promised it an internet tool if it needed one, and the TRAVEL_AGENT_SYSTEM_PROMPT. We now have our travel agent / executor.

To get the travel agent's node we need to use the agent_node function we defined before, which needs three arguments: the state, the agent, and the name of the agent in string format. We have the agent and the name already, but the state will only be available at runtime. To solve this problem we can use functools.partial to create a new function that has the agent and name already filled in, and then we can pass in the state at runtime.

If you're unfamiliar with functools.partial, it basically works like this:

########### Example, not part of the code ############
# Original function
def multiply(x, y):
    return x * y

# Create a new function that multiplies by 2
multiply_by_two = functools.partial(multiply, 2)

result = multiply_by_two(3)
print(result)  # Output: 6

So it takes a function and creates a new function based on the original with a portion of the arguments already filled in, reducing the number of arguments the function takes in its new form. This is very useful as we now have our complete travel_agent_node that needs only the state object to be passed in for it to work.

Now in exactly the same manner we can create our language_assistant, visualizer, and designer agents and nodes:

language_assistant = create_agent(LLM, [TAVILY_TOOL], LANGUAGE_ASSISTANT_SYSTEM_PROMPT)
language_assistant_node = functools.partial(
    agent_node, agent=language_assistant, name=LANGUAGE_ASSISTANT_NAME
)

visualizer = create_agent(LLM, [generate_image], VISUALIZER_SYSTEM_PROMPT)
visualizer_node = functools.partial(agent_node, agent=visualizer, name=VISUALIZER_NAME)

designer = create_agent(LLM, [markdown_to_pdf_file], DESIGNER_SYSTEM_PROMPT)
designer_node = functools.partial(agent_node, agent=designer, name=DESIGNER_NAME)

The language assistant takes the TAVILY_TOOL, while our visualizer needs the generate_image and the designer the markdown_to_pdf_file tool. We then create the nodes for each of these agents in the same way we did for the travel agent above, passing in their respective names using the ...NAME constants we defined up top.

Creating the graph

Time to create our graph and the nodes:

workflow = StateGraph(AgentState)
workflow.add_node(TRAVEL_AGENT_NAME, travel_agent_node)
workflow.add_node(LANGUAGE_ASSISTANT_NAME, language_assistant_node)
workflow.add_node(VISUALIZER_NAME, visualizer_node)
workflow.add_node(DESIGNER_NAME, designer_node)
workflow.add_node(TEAM_SUPERVISOR_NAME, team_supervisor_chain)

We initialize the StateGraph passing in our AgentState format we defined. Then we simply create a node for each agent passing in the name first, and the actual node second. Note that we've used these ...NAME variables several times now, which is why we defined them up top as constants to give them only a single point of definition instead of repeating strings all over the place.

Now that we have the nodes let's start building some connections:

for member in MEMBERS:
    workflow.add_edge(member, TEAM_SUPERVISOR_NAME)

workflow.add_edge(DESIGNER_NAME, END)

For every member in the list of team MEMBERS we add an edge back to the team supervisor, as it will decide where to go next between each step. We also add an edge from the designer to the END node as the designer is the last step in our graph and will exist outside of the team.

So far we have this, and these are all hard edges with no conditions. Now it is time for us to add some conditional edges:

conditional_map = {name: name for name in MEMBERS}
conditional_map["FINISH"] = DESIGNER_NAME

workflow.add_conditional_edges(
    TEAM_SUPERVISOR_NAME, lambda x: x["next"], conditional_map
)

We create a conditional_map dictionary that maps each member to itself, and then we add a key "FINISH" that maps to the DESIGNER_NAME. So if the team supervisor calls on the "visualizer" this will simply map like {"visualizer": "visualizer"} but the one exception is the {"FINISH": "designer"} mapping.

We then call the add_conditional_edges method on the workflow object. This method takes the start point, so we pass in the TEAM_SUPERVISOR_NAME, a function that will return a value, and then a mapping that will map that value to the next desired node.

The function is a lambda that takes the state object as input and simply returns the state's next key that the team supervisor has put in there. The conditional_map is the mapping we defined above, so if the team supervisor calls on a team member it will map to that team member's node, but if it calls "FINISH" it will map to the "designer" node.
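The conditional edge therefore amounts to a simple dictionary lookup; here's a plain-Python sketch of that routing decision (no LangGraph required):

```python
MEMBERS = ["travel_agent", "language_assistant", "visualizer"]
DESIGNER_NAME = "designer"

conditional_map = {name: name for name in MEMBERS}
conditional_map["FINISH"] = DESIGNER_NAME

def route(state):
    # Same lookup the lambda we pass to add_conditional_edges performs:
    return conditional_map[state["next"]]

print(route({"next": "visualizer"}))  # visualizer
print(route({"next": "FINISH"}))      # designer
```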

Now set the entry point and compile the graph:


workflow.set_entry_point(TEAM_SUPERVISOR_NAME)
travel_agent_graph = workflow.compile()

Our completed graph now looks like this:

Where the white lines represent the fixed edges and the dotted lines represent conditional ones. Now let's actually give this a test run and see what happens:

for chunk in travel_agent_graph.stream(
    {"messages": [HumanMessage(content="I want to go to Paris for three days")]}
):
    if "__end__" not in chunk:
        print(chunk)
        print(f"{Fore.GREEN}{'#' * 50}{Style.RESET_ALL}")
So we're going to call stream on the travel_agent_graph and pass in a dictionary with the messages key and a list with a single HumanMessage object in it, saying that we want to visit Paris for three days. We then loop over the chunks and print them out, followed by a line of #s in green to visually separate the chunks.

Now go ahead and run this and let's see what happens! You may see a printer dialog pop up again; just click the X to close it for now. When it's done running, have a look in your output folder for the final result:

That is pretty darn cool, right?! Our whole team of AI agents is working together to do our bidding without any work on our part! Everything worked exactly as expected with the routing, which you can confirm in your LangSmith dashboard as well by checking out the trace for the run:

We can see that after each step the system returns to the team supervisor and at the end it breaks out of the team towards the designer. I've done a bunch more test runs to verify that it works well and here are some example runs for other destinations:

Remember that I've been using the gpt-3.5-turbo-0125 model all this time. You can easily swap out any of the models for gpt-4-turbo if you want more detail, or if you have some trouble with a specific node. Say the designer has trouble working consistently, you could just swap out only that node for a different model with a higher quality and leave the rest as is.

You can literally create just about any combination of agents, nodes, edges, and conditional edges you want. The combination possibilities are mind-boggling. We decided to have one agent outside of the team here, no problem! We can also have 2 teams or even more if we want, each with their own manager. Your imagination is the limit here.

That's it for part 5! In the next and last part, we'll take a look at writing and integrating asynchronous tools into our systems. I'll see you there!