Python A* Search Algorithm

This tutorial guides you through the fascinating A* (A-Star) search algorithm using the Python programming language. First, feel free to watch the video guide—we’ll give a detailed textual explanation below.

Okay, so let’s dive into the algorithm motivation, explanation, and Python code next!

What Is the A* Search Algorithm?

A very interesting graph traversal algorithm we will learn about next is the A* algorithm, developed by Peter Hart, Nils Nilsson, and Bertram Raphael. The A* algorithm belongs to the family of best-first search algorithms and extends Dijkstra’s algorithm in the sense that it takes into account both the weights of the graph edges and the heuristic functions of the connected vertices. It is suitable for application in various domains of computer science because of its three key properties: completeness, optimality, and optimal efficiency.

What’s the Purpose of A* Search?

Common applications of the A* algorithm are in the domain of optimal pathfinding for various distribution networks. Example usages include power-aware routing of messages in large communication networks, point-to-point path planning tasks, and finding the shortest path in games and web-based maps.

How Does A* Search Work?

The A* algorithm assigns a heuristic function to every vertex. The heuristic function approximates the cost of reaching the goal vertex from a visited vertex, expressed e.g. as a (commonly Euclidean) distance or time. The total cost of any vertex is then the sum of the weights of the connecting edges between the starting vertex and the visited vertex, plus the heuristic function of the visited vertex.
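
In other words, if we write g(v) for the sum of edge weights accumulated from the start to a vertex v and h(v) for its heuristic estimate (this naming is ours, not part of the tutorial’s code), the cost the algorithm minimizes is simply their sum:

# A minimal sketch of the cost A* works with (the names are illustrative):
#   g_v - sum of edge weights on the path from the starting vertex to v
#   h_v - heuristic estimate of the remaining cost from v to the goal vertex
def total_cost(g_v, h_v):
    return g_v + h_v  # f(v) = g(v) + h(v)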

When a vertex is visited, the cost of each unexplored, adjoining vertex is updated according to the weights associated with the connecting edges. After being visited, each adjoining vertex is added to the priority queue.

In each following iteration, the vertex with the lowest cost is taken out of the priority queue and its processing starts by visiting and conditionally updating all of its adjoining, non-explored vertices. The update operation implies two steps: lowering the cost of the visited vertex and associating it with the processed vertex (the terms processed and explored are used interchangeably here) for later reconstruction of the shortest path. Finally, the processed vertex is marked as explored and does not participate in any further cost calculations.

The update condition is determined by comparing each visited vertex’s current cost with its new, potentially lower cost. Its new cost is calculated in the following way: the current cost of the explored vertex – its heuristic function + the weight of the adjoining edge (the edge between the vertex being explored and the visited vertex) + the heuristic function of the visited vertex.

If the current cost of the visited vertex is still lower than the potential new cost, the vertex cost will not be updated. Otherwise, the visited vertex will be updated to the new cost (its cost will decrease) and form an association with the explored vertex. Vertex cost reduction is also referred to as a relaxation procedure. After visiting and conditionally updating all the adjoining, non-explored vertices, the vertex being processed will be marked as explored and will not participate in any further algorithm calculations. The described process continues until there are no unexplored vertices left in the priority queue.
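
The relaxation rule described above can be sketched in a few lines of Python. The snippet below is only an illustration under assumed names (explored_vertex, visited_vertex, edge_weight, and the predecessor dictionary are ours); the tutorial’s actual implementation follows later in a_star():

# Illustrative sketch of the relaxation step (not the tutorial's final code).
# The stored 'cost' already includes the heuristic, so it is subtracted from
# the explored vertex before the edge weight and the new heuristic are added.
def relax(explored_vertex, visited_vertex, edge_weight, predecessor):
    new_cost = explored_vertex.cost - explored_vertex.h + edge_weight + visited_vertex.h
    if visited_vertex.cost is None or new_cost < visited_vertex.cost:
        visited_vertex.cost = new_cost                 # relaxation: the cheaper path wins
        predecessor[visited_vertex] = explored_vertex  # remembered for path reconstruction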

When the algorithm ends, all vertices are assigned the lowest possible costs, and the traversal algorithm yields the shortest possible path between the starting and target vertices. Compared with the previously described Dijkstra’s algorithm, the A* algorithm is superior because it does not only follow the cheapest path available so far (a purely greedy approach) but is also guided by a notion of the right direction, contained in the heuristic function of each vertex. In fact, if all heuristic values are set to zero, the A* algorithm reduces to Dijkstra’s algorithm.

What Are the Properties of A* Search?

The A* algorithm uses the exact information represented by the edges’ weights and a heuristic function for distance estimation between the goal vertex and the other connected vertices in a graph. As the initial costs for all non-starting vertices are set to infinity, the algorithm successively decreases the vertices’ costs until they reach their minimum.

This behavior leads to the property of optimality: the minimal costs assigned to vertices enable the A* algorithm to always find the shortest path between the starting vertex and any other vertex in the graph. As the shortest paths always start from the starting vertex, the algorithm is referred to as a “single-source” algorithm.

Besides being optimal, the algorithm is also complete, i.e. if a solution exists, it will always find it in a finite amount of time.

The third important property is optimal efficiency, reflected in the fact that vertices positioned further away from the target vertex may not be explored at all, as their heuristic functions distinguish such vertices among those with equally weighted paths and delay their exploration.

The heuristic functions used in the A* algorithm also have two notable properties: admissibility and consistency.

  • Admissibility implies that the heuristic function cost estimation is at most as high as the lowest possible cost from the current point in a path towards the target vertex.
  • The consistent or monotone heuristic function is constrained by a requirement that its cost estimation is always less than or equal to the estimated distance from any adjoining, successor vertex to the goal, plus the cost of reaching that vertex.
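
Both properties can be checked programmatically against the Graph class introduced below. The helpers that follow are our own sketch, not part of the tutorial’s code; the admissibility check additionally assumes a true_distance(vertex, goal) function (e.g. obtained from a separate Dijkstra run) is available:

# Hedged sketch: verifying the two heuristic properties on a built Graph.
def is_admissible(graph, goal, true_distance):
    # h(v) must never overestimate the real shortest distance from v to the goal.
    return all(v.h <= true_distance(v, goal) for v in graph.vertices())

def is_consistent(graph):
    # h(u) <= w(u, v) + h(v) must hold across every edge, in both directions.
    for edge in graph.edges():
        u, v = edge.endpoints()
        if u.h > edge.weight + v.h or v.h > edge.weight + u.h:
            return False
    return True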

How is A* Search Implemented in Python?

The implementation of the A* algorithm is achieved by the function a_star() and a modification of the underlying class Graph.

The a_star() function takes three parameters:

  • The graph parameter takes an initialized Graph object (see the blog on the breadth-first search algorithm, the section on graphs).
  • The start_vertex parameter takes the starting vertex, which we choose freely (remember, a graph is not a tree, there is no absolute root).
  • The target parameter is the entity we want to find in the graph, enclosed in a vertex.

For a better understanding of the algorithm and its implementation, each step is precisely described in the code below.

There have been some further upgrades to the Graph class, so its entire listing follows:

class Graph:

    def __init__(self, directed=False):
        self._outgoing = {}
        # If the graph is undirected, 'self._outgoing'
        # is the universal storage.
        self._incoming = {} if directed else self._outgoing

    # If the graph is directed, the 'self._incoming' 
    # dictionary differs from the 'self._outgoing'.
    def is_directed(self):
        return self._incoming is not self._outgoing

    # The function returns a generator of incoming
    # or outgoing (default) edges of a vertex.
    def adjacent_edges(self, vertex, outgoing=True):
        # References the corresponding outer dictionary
        # (dictionary of dictionaries)
        adj_edges = self._outgoing if outgoing else self._incoming

        # Access each of the edges for this endpoint vertex.
        for edge in adj_edges[vertex].values():
            yield edge

    def add_vertex(self, entity=None, h=None, cost=None):
        # Constructs a new vertex from the entity.
        vertex = self.Vertex(entity, h, cost)
        # The vertex becomes a key in the outer dictionary,
        # but the value is an internal dictionary (as we model
        # both dimensions for each edge: origin and destination).
        # e.g. {vertex_a:{vertex_b:edge_a_b}, vertex_b:{vertex_c:edge_b_c}}.
        self._outgoing[vertex] = {}
        if self.is_directed():
            self._incoming[vertex] = {}

    def add_edge(self, origin, destination, weight=None):
        # Constructs a new edge from the vertices.
        edge = self.Edge(origin, destination, weight)
        # Adds the edge to the dictionary (dictionaries are
        # the same if the graph is undirected). The outer key
        # represents the origin, i.e. the component 'a' of
        # the edge-defining pair (a, b). The inner key stands
        # for the component 'b' of the edge-defining pair (a, b).
        self._outgoing[origin][destination] = edge
        # Even if the graph is undirected, each edge has to
        # be added twice, i.e. once for each of its endpoints.
        self._incoming[destination][origin] = edge

    def vertices(self):
        return self._outgoing.keys()

    def edges(self):
        # All the edges are collected into a set.
        result = set()
        for inner_dict in self._outgoing.values():
            result.update(inner_dict.values())
        return result

    class Vertex:
        __slots__ = '_entity', '_h', '_cost'

        def __init__(self, entity, h=None, cost=None):
            self.entity = entity
            self.h = h
            self.cost = cost

        # The real-world entity is represented by the Vertex object.
        @property
        def entity(self):
            return self._entity

        @entity.setter
        def entity(self, entity):
            self._entity = entity

        # The real-world entity has a heuristic value of 'h'.
        @property
        def h(self):
            return self._h

        @h.setter
        def h(self, h):
            self._h = h

        # The real-world entity has a cost of 'cost'.
        @property
        def cost(self):
            return self._cost

        @cost.setter
        def cost(self, cost):
            self._cost = cost

        # We have to implement __hash__ to use the object as a dictionary key.
        def __hash__(self):
            return hash(id(self))

        def __lt__(self, other):
            if self.cost is None:
                return False
            elif other.cost is None:
                return True
            else:
                return self.cost < other.cost

    class Edge:
        __slots__ = '_origin', '_destination', '_weight'

        def __init__(self, origin, destination, weight=None):
            self._origin = origin
            self._destination = destination
            self.weight = weight

        def endpoints(self):
            return self._origin, self._destination

        # Returns the other component of the edge-defining pair (a, b)
        # for a given component a or b, respectively.
        def opposite(self, vertex):
            return self._destination if self._origin is vertex \
                else self._origin

        # Returns the weight of the edge.
        @property
        def weight(self):
            return self._weight

        # Sets the weight of the edge
        @weight.setter
        def weight(self, weight):
            self._weight = weight

        def __hash__(self):
            return hash((self._origin, self._destination))

The most significant differences from the previous version of the Graph class are the new heuristic value h and cost attributes of the Vertex class (together with their properties and the extended add_vertex() signature), and the __lt__ method that lets vertices be compared, and therefore prioritized, by their cost.
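
To see why __lt__ matters, here is a quick illustration of our own (assuming the class above is saved as graph.py, just as the a_star() listing below assumes): PriorityQueue orders its items with the < operator, so the vertex with the lowest cost is dequeued first, and a vertex whose cost is still None sorts last.

from queue import PriorityQueue
from graph import Graph

g = Graph()
g.add_vertex('a', h=1, cost=5)
g.add_vertex('b', h=1, cost=3)

pq = PriorityQueue()
for v in g.vertices():
    pq.put(v)

print(pq.get().entity)  # 'b' - the vertex with the lower cost comes out first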

With these changes in place, the implementation of the core function, a_star(), is:

from graph import Graph
from queue import PriorityQueue


def a_star(graph, start_vertex, target):
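    # Note: this function relies on the module-level 'visited' dictionary and
    # 'explored' list that are initialized in the driver code further below.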
    # Create the priority queue for open vertices.
    vertices_pq = PriorityQueue()

    start_vertex.cost = start_vertex.h

    # Adds the start vertex to the priority queue.
    print(f'Visiting/queueing vertex {start_vertex.entity}')
    vertices_pq.put(start_vertex)
    print('Prioritized vertices (v, cost(v)):',
          *((vert.entity, vert.cost) for vert in vertices_pq.queue), end=2 * '\n')

    # The starting vertex is visited first and has no leading edges.
    # If we did not put it into 'visited' in the first iteration,
    # it would end up in 'visited' during the second iteration, pointed to
    # by one of its children vertices as a previously unvisited vertex.
    visited[start_vertex] = None

    # Loops until the priority list gets empty.
    while not vertices_pq.empty():
        # Gets the vertex with the lowest cost.
        vertex = vertices_pq.get()
        # If the vertex being explored is a target vertex, ends the algorithm.
        print(f'Exploring vertex {vertex.entity}')
        if vertex.entity == target:
            return vertex
        # Examines each non-visited adjoining edge/vertex.
        for edge in graph.adjacent_edges(vertex):
            # Gets the second endpoint.
            v_2nd_endpoint = edge.opposite(vertex)

            # Skips the explored vertices.
            if v_2nd_endpoint in explored:
                continue

            # Checks whether the endpoint has no cost yet, or whether the new
            # path to it is cheaper (heuristics excluded from the comparison).
            if v_2nd_endpoint.cost is None \
                    or vertex.cost - vertex.h + edge.weight < v_2nd_endpoint.cost - v_2nd_endpoint.h:
                # Adds the second endpoint to 'visited' and maps
                # the leading edge for the search path reconstruction.
                v_2nd_endpoint.cost = vertex.cost - vertex.h + edge.weight + v_2nd_endpoint.h
                # Inserts the endpoint into the priority queue only on its
                # first visit; afterwards only its cost value is updated.
                if v_2nd_endpoint not in visited:
                    print(f'Visiting/queueing vertex {v_2nd_endpoint.entity}')
                    vertices_pq.put(v_2nd_endpoint)
                # Forces the priority queue to re-sort itself in case an
                # already queued vertex was updated to a higher priority.
                vertices_pq.put(vertices_pq.get())
                # Records (or replaces) the edge leading to the vertex along the cheaper path.
                visited[v_2nd_endpoint] = edge
        print('Prioritized vertices (v, cost(v)):',
              *((vert.entity, vert.cost) for vert in vertices_pq.queue), end=2 * '\n')
        # The vertex has been fully processed and is put aside as explored.
        explored.append(vertex)
    return None

Before we can test the algorithm, we have to initialize a graph and build it by adding vertices and edges to it. We also have to initialize the path, explored, and visited collections that a_star() relies on:

# Initializes an empty graph (object).
g = Graph()

# Loads the graph with the first seven vertices.
g.add_vertex(0, 4)
g.add_vertex(1, 4)
g.add_vertex(2, 2)
g.add_vertex(3, 7)
g.add_vertex(4, 5)
g.add_vertex(5, 10)
g.add_vertex(6, 0)

# Constructs the 'vertices' dictionary for more
# convenient access during the graph construction.
vertices = {k.entity: k for k in g.vertices()}

# Constructs an arbitrary graph from
# the existing vertices and edges.
g.add_edge(vertices[0], vertices[1], 4)
g.add_edge(vertices[0], vertices[2], 2)
g.add_edge(vertices[2], vertices[4], 1)
g.add_edge(vertices[4], vertices[3], 3)
g.add_edge(vertices[3], vertices[5], 2)
g.add_edge(vertices[0], vertices[5], 4)
g.add_edge(vertices[2], vertices[6], 5)

# Initializes the search path, the list of explored vertices, and the dictionary of visited vertices.
path = []
explored = []
visited = {}

Now that we have prepared everything, we can test a_star() and see how it works. Here is the part of the code that runs the algorithm, constructs the search path (if there is one), and shows in a step-by-step manner how it proceeds through the graph:

# Starts the search.
result = a_star(g, vertices[5], 6)

# If the entity is found...
if result is not None:
    # The search path ends with the found vertex (entity).
    # Each vertex is a container for its real-world entity.
    path_vertex = result
    # The entity is added to the 'path'.
    path.append(path_vertex.entity)
    # Constructs the rest of the search path (if it exists)...
    while True:
        # Gets a discovery edge leading to the vertex.
        path_edge = visited.get(path_vertex)
        # If the path vertex is the root, it has no discovery edge...
        if path_edge is None:
            break
        # Otherwise, gets the second (parent vertex) endpoint.
        path_vertex = path_edge.opposite(path_vertex)
        # The entity is added to the 'path'.
        path.append(path_vertex.entity)
    print('Search path found:', end=' ')
    # The path is reversed and starts with the root vertex.
    print(*reversed(path), sep=' -> ')
# Otherwise...
else:
    print('\nEntity is not found')

The test run gave us the output:

Visiting/queueing vertex 5
Prioritized vertices (v, cost(v)): (5, 10)

Exploring vertex 5
Visiting/queueing vertex 3
Visiting/queueing vertex 0
Prioritized vertices (v, cost(v)): (0, 8) (3, 9)

Exploring vertex 0
Visiting/queueing vertex 1
Visiting/queueing vertex 2
Prioritized vertices (v, cost(v)): (2, 8) (1, 12) (3, 9)

Exploring vertex 2
Visiting/queueing vertex 4
Visiting/queueing vertex 6
Prioritized vertices (v, cost(v)): (3, 9) (6, 11) (1, 12) (4, 12)

Exploring vertex 3
Prioritized vertices (v, cost(v)): (4, 10) (1, 12) (6, 11)

Exploring vertex 4
Prioritized vertices (v, cost(v)): (6, 11) (1, 12)

Exploring vertex 6
Search path found: 5 -> 0 -> 2 -> 6

Based on the output, we can see that the search started from vertex 5 and that a_star() has found the target entity (vertex 6). The entire search path is also displayed, and we should note that the search path will always be the shortest one: 5 -> 0 -> 2 -> 6. However, a modification of just one heuristic function value, effectively moving the vertex further away from the goal, might lead to a different solution, as we will demonstrate with the next example. With that in mind, let us tweak the heuristic value of one of our vertices:

# Loads the graph with the first seven vertices; only the heuristic
# value of vertex 0 is changed from 4 to 6.
g.add_vertex(0, 6)
g.add_vertex(1, 4)
g.add_vertex(2, 2)
g.add_vertex(3, 7)
g.add_vertex(4, 5)
g.add_vertex(5, 10)
g.add_vertex(6, 0)

The re-run gave us the output:

Visiting/queueing vertex 5
Prioritized vertices (v, cost(v)): (5, 10)

Exploring vertex 5
Visiting/queueing vertex 3
Visiting/queueing vertex 0
Prioritized vertices (v, cost(v)): (3, 9) (0, 10)

Exploring vertex 3
Visiting/queueing vertex 4
Prioritized vertices (v, cost(v)): (4, 10) (0, 10)

Exploring vertex 4
Visiting/queueing vertex 2
Prioritized vertices (v, cost(v)): (2, 8) (0, 10)

Exploring vertex 2
Visiting/queueing vertex 6
Prioritized vertices (v, cost(v)): (0, 10) (6, 11)

Exploring vertex 0
Visiting/queueing vertex 1
Prioritized vertices (v, cost(v)): (6, 11) (1, 12)

Exploring vertex 6
Search path found: 5 -> 3 -> 4 -> 2 -> 6

After the re-run, we got a different solution just by changing one of the heuristic function values. Our simple demonstration shows how important the heuristic function value, i.e. the quality of the distance estimation, is.
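
Note that the new result does not contradict optimality. A quick check of our own (not part of the original run) sums the edge weights of both reported paths and shows they are equally short, so the changed heuristic only decided between two equally good solutions:

# Summing the edge weights along both reported search paths:
print(4 + 2 + 5)      # 5 -> 0 -> 2 -> 6       gives 11
print(2 + 3 + 1 + 5)  # 5 -> 3 -> 4 -> 2 -> 6  gives 11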

Efficiency Analysis

The algorithm’s worst-case time complexity depends on the heuristic function. In the worst case, i.e. for an unbounded search space, the time complexity degenerates to an exponential function O(b^d), where b is the branching factor (the average number of unexplored, adjoining vertices) and d stands for the depth of the shortest path to a solution. For example, with a branching factor b = 3 and a solution at depth d = 5, the worst case is on the order of 3^5 = 243 explored vertices.

The space complexity of the A* algorithm is O(v + e) in terms of vertices and edges, since it keeps all generated vertices and edges in memory. Expressed in terms of the branching factor and the solution depth, the space complexity of the A* algorithm is O(b^d). This high memory requirement renders the A* algorithm less suitable as the size and density of a graph increase, which is considered to be its significant disadvantage.

The A* algorithm is optimal, as it will always yield an optimal, shortest possible search path. Furthermore, the A* algorithm will always find a solution if one exists, so it is also complete. Finally, A* is optimally efficient, meaning that no other optimal algorithm using the same heuristic will explore fewer vertices.

Conclusion

In this article, we learned about the A* search algorithm.

  • First, we explained what the A* algorithm is.
  • Second, we took a look at its common purposes and applications.
  • Third, we went through an explanation of how the algorithm works.
  • Fourth, we examined the algorithm’s main properties.
  • Fifth, we went through the implementation of the algorithm, which is based on the Graph abstract data structure (the Graph class implementation is given above). We also tested the algorithm by calling its main function, a_star(), and analyzed its steps of execution for two slightly different heuristic value scenarios.
  • Sixth, we analyzed the algorithm efficiency.

In the end, we concluded that the algorithm’s efficiency is optimal, and that if a solution exists, the A* algorithm will always find it in its optimal form and with optimal efficiency. The algorithm takes a finite amount of time to reach a solution and is driven by the edges’ weights, the vertices’ heuristic functions, and the graph structure.
