Binary Search on Graphs

Binary search is one of the most basic algorithms I know. Given a sorted list of comparable items and a target item being sought, binary search looks at the middle of the list and compares it to the target. If the target is larger, we repeat on the larger half of the list; if it's smaller, we repeat on the smaller half.

With each comparison the binary search algorithm cuts the search space in half. The result is a guarantee of no more than $ \log(n)$ comparisons, for a total runtime of $ O(\log n)$. Neat, efficient, useful.
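For concreteness, here's a minimal iterative sketch in Python (my own, just to fix ideas before we generalize):

def binary_search_list(items, target):
    '''Return the index of target in the sorted list items, or -1.'''
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1   # target is larger: keep the upper half
        else:
            hi = mid - 1   # target is smaller: keep the lower half
    return -1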

There’s always another angle.

What if we tried to do binary search on a graph? Most graph search algorithms, like breadth- or depth-first search, take linear time, and they were invented by some pretty smart cookies. So if binary search on a graph is going to make any sense, it’ll have to use more information beyond what a normal search algorithm has access to.

For binary search on a list, it’s the fact that the list is sorted, and we can compare against the sought item to guide our search. But really, the key piece of information isn’t related to the comparability of the items. It’s that we can eliminate half of the search space at every step. The “compare against the target” step can be thought of as a black box that replies to queries of the form, “Is this the thing I’m looking for?” with responses of the form, “Yes,” or, “No, but look over here instead.”

[Figure: the black-box query model]

As long as the answers to your queries are sufficiently helpful, meaning they allow you to cut out large portions of your search space at each step, then you probably have a good algorithm on your hands. Indeed, there’s a natural model for graphs, defined in a 2015 paper of Emamjomeh-Zadeh, Kempe, and Singhal, which goes as follows.

You’re given as input an undirected, weighted graph $ G = (V,E)$, with weights $ w_e$ for $ e \in E$. You can see the entire graph, and you may ask questions of the form, “Is vertex $ v$ the target?” Responses will be one of two things:

  • Yes (you win!)
  • No, but $ e = (v, w)$ is an edge out of $ v$ on a shortest path from $ v$ to the true target.

Your goal is to find the target vertex with the minimum number of queries.

Obviously this only works if $ G$ is connected, but slight variations of everything in this post work for disconnected graphs. (The same is not true in general for directed graphs.)

When the graph is a line, this “reduces” to binary search in the sense that the same basic idea of binary search works: start in the middle of the graph, and the edge you get in response to a query will tell you in which half of the graph to continue.

[Figure: binary search on a path graph]

And if we make this example only slightly more complicated, the generalization should become obvious:

[Figure: a slightly more complicated example graph]

Here, we again start at the “center vertex,” and the response to our query will eliminate one of the two halves. But then how should we pick the next vertex, now that we no longer have a linear order to rely on? It should be clear: choose the “center vertex” of whichever half we end up in. This choice can be formalized into a rule that works even when there’s no such obvious symmetry, and it turns out to always be the right choice.

Definition: A median of a weighted graph $ G$ with respect to a subset of vertices $ S \subset V$ is a vertex $ v \in V$ (not necessarily in $ S$) which minimizes the sum of distances to vertices in $ S$. More formally, it minimizes

$ \Phi_S(v) = \sum_{u \in S} d(v, u)$,

where $ d(u,v)$ is the sum of the edge weights along a shortest path from $ v$ to $ u$.

And so generalizing binary search to this query-model on a graph results in the following algorithm, which whittles down the search space by querying the median at every step.

Algorithm: Binary search on graphs. Input is a graph $ G = (V,E)$.

  • Start with a set of candidates $ S = V$.
  • While we haven’t found the target and $ |S| > 1$:
    • Query the median $ v$ of $ S$, and stop if you’ve found the target.
    • Otherwise, let $ e = (v, w)$ be the response edge, and compute the set of all vertices $ x \in V$ for which $ e$ is on a shortest path from $ v$ to $ x$. Call this set $ T$.
    • Replace $ S$ with $ S \cap T$.
  • Output the only remaining vertex in $ S$.

Indeed, as we’ll see momentarily, a Python implementation is nearly as simple. The meat of the work is in computing the median and the set $ T$, both of which are slight variants of Dijkstra’s algorithm for computing shortest paths.

The theorem, which is straightforward and well written by Emamjomeh-Zadeh et al. (only about a half page on page 5), is that this algorithm requires only $ O(\log(n))$ queries, just like binary search. The crux is that if $ v$ is a median of $ S$, no response edge can leave more than half of $ S$ consistent; otherwise the neighbor across that edge would have a strictly smaller sum of distances to $ S$ than $ v$ does, contradicting the choice of median.

Before we dive into an implementation, there’s a catch. Even though we are guaranteed only $ \log(n)$ many queries, because of our Dijkstra’s algorithm implementation, we’re definitely not going to get a logarithmic time algorithm. So in what situation would this be useful?

Here’s where we use the “theory” trick of making up a fanciful problem and only later finding applications for it (which, honestly, has been quite successful in computer science). In this scenario we’re treating the query mechanism as a black box. It’s natural to imagine that the queries are expensive, and a resource we want to optimize for. As an example the authors bring up in a followup paper, the graph might be the set of clusterings of a dataset, and the query involves a human looking at the data and responding that a cluster should be split, or that two clusters should be joined. Of course, for clustering the underlying graph is too large to process, so the median-finding algorithm needs to be implicit. But the essential point is clear: sometimes the query is the most expensive part of the algorithm.

Alright, now let’s implement it! The complete code is on Github as always.

Always be implementing

We start with a slight variation of Dijkstra’s algorithm. Here we’re given as input a single “starting” vertex, and we produce as output the distances and predecessor edges encoding all shortest paths from the start to every possible destination vertex.

We start with a bare-bones graph data structure.

import heapq
import math
from collections import defaultdict
from collections import namedtuple

Edge = namedtuple('Edge', ('source', 'target', 'weight'))

class Graph:
    # A bare-bones implementation of a weighted, undirected graph
    def __init__(self, vertices, edges=tuple()):
        self.vertices = vertices
        self.incident_edges = defaultdict(list)

        for edge in edges:
            self.add_edge(
                edge[0],
                edge[1],
                1 if len(edge) == 2 else edge[2]  # optional weight
            )

    def add_edge(self, u, v, weight=1):
        self.incident_edges[u].append(Edge(u, v, weight))
        self.incident_edges[v].append(Edge(v, u, weight))

    def edge(self, u, v):
        return [e for e in self.incident_edges[u] if e.target == v][0]

And then, since most of the work in Dijkstra’s algorithm is tracking information that you build up as you search the graph, we define the “output” data structure: a dictionary of shortest-path distances paired with back-pointers for the discovered shortest paths.

class DijkstraOutput:
    def __init__(self, graph, start):
        self.start = start
        self.graph = graph

        # the smallest distance from the start to the destination v
        self.distance_from_start = {v: math.inf for v in graph.vertices}
        self.distance_from_start[start] = 0

        # a list of predecessor edges for each destination
        # to track a list of possibly many shortest paths
        self.predecessor_edges = {v: [] for v in graph.vertices}

    def found_shorter_path(self, vertex, edge, new_distance):
        # update the solution with a newly found shorter path
        if new_distance < self.distance_from_start[vertex]:
            self.distance_from_start[vertex] = new_distance
            self.predecessor_edges[vertex] = [edge]
        elif edge not in self.predecessor_edges[vertex]:
            # tie: record an additional shortest path
            self.predecessor_edges[vertex].append(edge)

    def path_to_destination_contains_edge(self, destination, edge):
        predecessors = self.predecessor_edges[destination]
        if edge in predecessors:
            return True
        return any(self.path_to_destination_contains_edge(e.source, edge)
                   for e in predecessors)

    def sum_of_distances(self, subset=None):
        subset = self.graph.vertices if subset is None else subset
        return sum(self.distance_from_start[x] for x in subset)

The actual Dijkstra algorithm then just does a “breadth-first” (priority-queue-guided) search through $ G$, updating the metadata as it finds shorter paths.

def single_source_shortest_paths(graph, start):
    '''
    Compute the shortest paths and distances from the start vertex to all
    possible destination vertices. Return an instance of DijkstraOutput.
    '''
    output = DijkstraOutput(graph, start)
    visit_queue = [(0, start)]

    while len(visit_queue) > 0:
        priority, current = heapq.heappop(visit_queue)

        for incident_edge in graph.incident_edges[current]:
            v = incident_edge.target
            weight = incident_edge.weight
            distance_from_current = output.distance_from_start[current] + weight

            if distance_from_current <= output.distance_from_start[v]:
                output.found_shorter_path(v, incident_edge, distance_from_current)
                heapq.heappush(visit_queue, (distance_from_current, v))

    return output
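As a quick sanity check (my addition, not from the original post), a square graph has two shortest paths between opposite corners, and the output should record a predecessor edge for each:

G_test = Graph(['a', 'b', 'c', 'd'],
               [('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'a')])
run = single_source_shortest_paths(G_test, 'a')
assert run.distance_from_start['c'] == 2
assert len(run.predecessor_edges['c']) == 2  # one path via 'b', one via 'd'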

Finally, we implement the median-finding and $ T$-computing subroutines:

def possible_targets(graph, start, edge):
    '''
    Given an undirected graph G = (V,E), an input vertex v in V, and an edge e
    incident to v, compute the set of vertices w such that e is on a shortest path from
    v to w.
    '''
    dijkstra_output = single_source_shortest_paths(graph, start)
    return set(v for v in graph.vertices
               if dijkstra_output.path_to_destination_contains_edge(v, edge))

def find_median(graph, vertices):
    '''
    Compute as output a vertex in the input graph which minimizes the sum of distances
    to the input set of vertices
    '''
    best_dijkstra_run = min(
         (single_source_shortest_paths(graph, v) for v in graph.vertices),
         key=lambda run: run.sum_of_distances(vertices)
    )
    return best_dijkstra_run.start
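As a small check (again my addition), on a path graph with unit weights the median of the whole vertex set should be the middle vertex:

path = Graph(['a', 'b', 'c', 'd', 'e'],
             [('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'e')])
assert find_median(path, path.vertices) == 'c'  # sums of distances: 10, 7, 6, 7, 10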

And then the core algorithm:

QueryResult = namedtuple('QueryResult', ('found_target', 'feedback_edge'))

def binary_search(graph, query):
    '''
    Find a target node in a graph, with queries of the form "Is x the target?"
    and responses either "You found the target!" or "Here is an edge on a shortest
    path to the target."
    '''
    candidate_nodes = set(x for x in graph.vertices)  # copy

    while len(candidate_nodes) > 1:
        median = find_median(graph, candidate_nodes)
        query_result = query(median)

        if query_result.found_target:
            return median
        else:
            edge = query_result.feedback_edge
            legal_targets = possible_targets(graph, median, edge)
            candidate_nodes = candidate_nodes.intersection(legal_targets)

    return candidate_nodes.pop()

Here’s an example of running it on the example graph we used earlier in the post:

'''
Graph looks like this tree, with uniform weights

    a       k
     b     j
      cfghi
     d     l
    e       m
'''
G = Graph(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i',
           'j', 'k', 'l', 'm'],
          [
               ('a', 'b'),
               ('b', 'c'),
               ('c', 'd'),
               ('d', 'e'),
               ('c', 'f'),
               ('f', 'g'),
               ('g', 'h'),
               ('h', 'i'),
               ('i', 'j'),
               ('j', 'k'),
               ('i', 'l'),
               ('l', 'm'),
          ])

def simple_query(v):
    ans = input("is '%s' the target? [y/N] " % v)
    if ans and ans.lower()[0] == 'y':
        return QueryResult(True, None)
    else:
        print("Please input a vertex on the shortest path between"
              " '%s' and the target. The graph is: " % v)
        for w in G.incident_edges:
            print("%s: %s" % (w, G.incident_edges[w]))

        target = None
        while target not in G.vertices:
            target = input("Input neighboring vertex of '%s': " % v)

    return QueryResult(
        False,
        G.edge(v, target)
    )

output = binary_search(G, simple_query)
print("Found target: %s" % output)

The query function just prints out a reminder of the graph and asks the user to answer the query with a yes/no and a relevant edge if the answer is no.

An example run:

is 'g' the target? [y/N] n
Please input a vertex on the shortest path between 'g' and the target. The graph is:
e: [Edge(source='e', target='d', weight=1)]
i: [Edge(source='i', target='h', weight=1), Edge(source='i', target='j', weight=1), Edge(source='i', target='l', weight=1)]
g: [Edge(source='g', target='f', weight=1), Edge(source='g', target='h', weight=1)]
l: [Edge(source='l', target='i', weight=1), Edge(source='l', target='m', weight=1)]
k: [Edge(source='k', target='j', weight=1)]
j: [Edge(source='j', target='i', weight=1), Edge(source='j', target='k', weight=1)]
c: [Edge(source='c', target='b', weight=1), Edge(source='c', target='d', weight=1), Edge(source='c', target='f', weight=1)]
f: [Edge(source='f', target='c', weight=1), Edge(source='f', target='g', weight=1)]
m: [Edge(source='m', target='l', weight=1)]
d: [Edge(source='d', target='c', weight=1), Edge(source='d', target='e', weight=1)]
h: [Edge(source='h', target='g', weight=1), Edge(source='h', target='i', weight=1)]
b: [Edge(source='b', target='a', weight=1), Edge(source='b', target='c', weight=1)]
a: [Edge(source='a', target='b', weight=1)]
Input neighboring vertex of 'g': f
is 'c' the target? [y/N] n
Please input a vertex on the shortest path between 'c' and the target. The graph is:
[...]
Input neighboring vertex of 'c': d
is 'd' the target? [y/N] n
Please input a vertex on the shortest path between 'd' and the target. The graph is:
[...]
Input neighboring vertex of 'd': e
Found target: e
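If you’d rather not play the oracle by hand, here’s a hypothetical automated query function (my addition, not part of the original code) that simulates honest responses for a known target, reusing the Dijkstra subroutine from above:

def make_oracle(graph, target):
    # precompute all shortest paths from the target; since the graph is
    # undirected, reversing a predecessor edge of v gives an edge out of v
    # on a shortest path from v to the target
    paths_from_target = single_source_shortest_paths(graph, target)

    def query(v):
        if v == target:
            return QueryResult(True, None)
        last_edge = paths_from_target.predecessor_edges[v][0]
        return QueryResult(False, graph.edge(last_edge.target, last_edge.source))

    return query

print(binary_search(G, make_oracle(G, 'e')))  # prints: e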

A likely story

The binary search we implemented in this post is pretty minimal. In fact, the more interesting part of the work of Emamjomeh-Zadeh et al. is the part where the response to the query can be wrong with some unknown probability.

In this case, there can be many shortest paths that are valid responses to a query, in addition to all the invalid responses. In particular, this rules out the strategy of asking the same query multiple times and taking the majority response. If the error rate is 1/3, and there are two shortest paths to the target, you can get into a situation in which you see three responses equally often and can’t choose which one is the liar.

Instead, the technique Emamjomeh-Zadeh et al. use is based on the Multiplicative Weights Update Algorithm (it strikes again!). Each query gives a multiplicative increase (or decrease) on the set of nodes that are consistent targets under the assumption that query response is correct. There are a few extra details and some postprocessing to avoid unlikely outcomes, but that’s the basic idea. Implementing it would be an excellent exercise for readers interested in diving deeper into a recent research paper (or to flex their math muscles).
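To make the idea concrete, here is a rough sketch of one update step, based only on the description above (the actual constants, the weighted analogue of the median query, and the post-processing in the paper all differ):

def update_weights(graph, weights, queried_vertex, response_edge, error_rate):
    # vertices consistent with the response are favored; the rest are
    # multiplied down (a sketch of the idea only, not the paper's algorithm)
    consistent = possible_targets(graph, queried_vertex, response_edge)
    for v in weights:
        weights[v] *= (1 - error_rate) if v in consistent else error_rate
    return weights

Each round then queries a vertex chosen with respect to these weights rather than a plain median, and after enough rounds the true target is the vertex whose weight dominates.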

But even deeper, this model of “query and get advice on how to improve” is a classic learning model first formally studied by Dana Angluin (my academic grand-advisor). In her model, one wants to design an algorithm to learn a classifier. The allowed queries are membership and equivalence queries. A membership query is essentially, “What is the label of this element?” and an equivalence query has the form, “Is this the right classifier?” If the answer is no, a mislabeled example is provided.

This is different from the usual machine learning assumption, because the learning algorithm gets to construct an example it wants to get more information about, instead of simply relying on a randomly generated subset of data. The goal is to minimize the number of queries before the target hypothesis is learned exactly. And indeed, as we saw in this post, if you have a little extra time to analyze the problem space, you can craft queries that extract quite a lot of information.
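To pin down the interface (the names and signatures here are mine, purely for illustration), the two query types might look like this in Python:

from typing import Callable, Optional, Tuple, TypeVar

X = TypeVar('X')                    # the type of examples
Classifier = Callable[[X], bool]    # a hypothesis mapping examples to labels

# membership query: ask the oracle for the true label of a chosen example
MembershipOracle = Callable[[X], bool]

# equivalence query: propose a classifier; receive None if it is correct,
# or a counterexample paired with its true label otherwise
EquivalenceOracle = Callable[[Classifier], Optional[Tuple[X, bool]]]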

Indeed, the model we presented here for binary search on graphs is the natural analogue of an equivalence query for a search problem: instead of a mislabeled counterexample, you get a nudge in the right direction toward the target. Pretty neat!

There are a few directions we could take from here: (1) implement the Multiplicative Weights version of the algorithm, (2) apply this technique to a problem like ranking or clustering, or (3) cover theoretical learning models like membership and equivalence queries in more detail. What interests you?

Until next time!

Well Orderings and Search

Binary Search

Binary search is perhaps the first and most basic nontrivial algorithm a student learns. For the mathematicians out there, binary search is a fast procedure to determine whether a sorted list contains a particular element. Here is a pseudocode implementation:

# Binary Search:
# Given a list L, sorted via the total order <, and a sought
# element x, return true iff L contains x.

function binarySearch(L, x, <):
   # base cases
   if(length(L) == 0):
      return false
   if(length(L) == 1):
      return L[0] == x
   middleIndex = floor(length(L) / 2)
   if (L[middleIndex] == x):
      return true

   # inductive step, with ellipsis notation meaning slices of L
   # from the beginning and to the end, respectively
   if (x < L[middleIndex]):
      return binarySearch(L[...middleIndex-1], x, <)
   else:
      return binarySearch(L[middleIndex+1...], x, <)

Colloquially, this is the optimal strategy in a game of “guess the number,” where the guesser is told if her guess is correct, too high, or too low. Try the middle number in the range of possible numbers. If the guess is too high, try the number one quarter of the way through the range; otherwise try three quarters of the way, continuing this process until the number is guessed. This algorithm is obviously made for recursion (and for those advanced programmers, we can only hope our working language supports tail-call optimization).

Binary search’s runtime is rather easy to analyze. At each step of the algorithm, we either finish or cut the problem size in half. Hence binary search runs in $ O(\log n)$ time, where $ n$ is the length of the input list. In other words, after at worst $ \log_2 n$ steps, we will have reduced the input list to size 1, and can definitively say whether the list as a whole contains the sought element.
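Written as a recurrence (my formulation), the worst-case number of comparisons satisfies $ T(n) \leq T(\lfloor n/2 \rfloor) + O(1)$ with $ T(1) = O(1)$, which unrolls to $ T(n) = O(\log n)$.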

Notice that the success of the algorithm depends on the fact that the list is sorted, but the specific total order $ <$ is irrelevant. We will investigate this idea further, but first we need some deeper maths.

Well and Total Orders

For those of us who aren’t set theorists, it isn’t fair to talk about total orders and well orders without defining them. So here comes a definition:

Definition: A strict total order $ <$ is a relation on a set $ S$ with the following statements holding for all $ a, b, c \in S$:

  • It is never the case that $ a < a$. (anti-reflexivity)
  • Exactly one of the following is true: $ a < b, b < a,$ or $ a = b$. (trichotomy)
  • If $ a < b$ and $ b < c$, then $ a < c$. (transitivity)

These conditions create an environment in which sorting is possible: we need to be able to compare any two elements (trichotomy), we need to be able to inspect two elements at a time and know that our analysis generalizes to the whole list (transitivity), and no element may be less than itself (anti-reflexivity).

Aside: The use of a strict total order as opposed to a non-strict total order is irrelevant, because each strict total order corresponds bijectively to a non-strict total order. Hence, there are two equivalent formulations of sorting with respect to strict and non-strict total orders, and we may choose one arbitrarily.

Now, we may elevate a total order $ <$ to a well order if every non-empty subset $ R \subset S$ has a least element with respect to $ <$. We computer scientists only sort finite lists, and every total order on a finite set is automatically a well order. However, the reader may be interested in the mathematical need for such a distinction, so here it is:

Consider the integers $ \mathbb{Z}$ with the standard ordering $ <$. While $ \mathbb{Z}$ itself has no smallest element, neither does any subset which has infinitely many negative numbers, such as the evens or odds. More generally, any open interval in the real numbers $ \mathbb{R}$ obviously doesn’t have a least element with respect to the natural order. In contrast, we rely on the crucial fact that the set of natural numbers $ \mathbb{N}$ is well-ordered to apply mathematical induction.

Interestingly enough, a theorem due to Ernst Zermelo states that every set can be well ordered, and it is equivalent to the Axiom of Choice. While many people have a hard time visualizing a well-ordering of the real numbers $ \mathbb{R}$, we simply resign (as mystical as it is) to admit there is one out there somewhere, though we may never know what it is.

As another aside, it turns out that we only need one of the inequalities in $ (<, \leq, >, \geq)$ and the standard logical operations and (infix &&), or (infix ||), and not (prefix !) in order to define the other three (and indeed, to define $ =, \neq$ as well). This is a computer science trick that comes in handy, as we’ll see later, so here is the described equivalence. Given $ <$, we define the remaining operations as follows:

  • $ x > y$ is equivalent to $ y < x$
  • $ x \geq y$ is equivalent to $ !(x < y)$
  • $ x \leq y$ is equivalent to $ !(y < x)$
  • $ x = y$ is equivalent to $ !(x < y) \textup{ and } !(y < x)$
  • $ x \neq y$ is equivalent to $ x < y \textup{ or } y < x$

So if we are interested in sorting a set via some procedure, all we need from the user is the $ <$ operation, and then we may compare any way our heart desires.
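As a quick illustration (mine, not from the original post), here is how one might bundle up the derived comparisons in Python, given only a less-than function:

def make_comparisons(lt):
    # derive the remaining comparison operators from a strict total order lt
    return {
        'gt': lambda x, y: lt(y, x),
        'ge': lambda x, y: not lt(x, y),
        'le': lambda x, y: not lt(y, x),
        'eq': lambda x, y: not lt(x, y) and not lt(y, x),
        'ne': lambda x, y: lt(x, y) or lt(y, x),
    }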

Defining a New Well Order

Consider a deck of cards which is initially sorted (with some arbitrary ordering on the suits), and then is “cut” at some arbitrary point and the bottom part is placed on top of the deck. We may simplify this “cut” operation to a list of numbers, say ten, and provide the following example of a cut:

(5,6,9,10,11,13,1,3,3,4)

To fix a convention, we say the “cut point” of this list is 5 (the first element of the cut list), not 4 (the last).

We have a few (naive) options to search through cut data: we may sort it with respect to the natural total order on $ \mathbb{N}$, and then search through it; we may stick the elements of the list into a hash set (a constant-time lookup table), and then query existence that way; or we may traverse the list element by element looking for a particular value.

The problem with the first two methods, though they determine existence, is that they don’t allow us to know where the value is in the list. If this is not important, and we are searching many times (compared to the size of the list) on many different values, then a hash table would be the correct choice. However, if we are searching only a few times, and need to know where the value is hidden, all three of the above approaches are slow and inelegant.

Enter well orders. You may have noticed that a cut list of numbers has a very simple well ordering in terms of the natural order on $ \mathbb{N}$. Verbally, if two numbers are separated across the cut point, then the naturally larger number is in fact the smaller one with respect to the new order. Otherwise, we may appeal to the regular ordering. Here it is in pseudocode:

function lessThan(x, y):
   if (y < cutPoint <= x):
      return true
   else if (x < cutPoint <= y):
      return false
   else:
      return x < y

And we may compress these if statements into a single condition:

function lessThan(cutPoint, x, y):
   y < cutPoint <= x || (!(x < cutPoint <= y) && x < y)

function makeCutOrdering(cutPoint):
   lambda x,y: lessThan(cutPoint, x, y)

So we have found that a cut list of numbers is in fact well ordered with respect to this new relation. Forget about preprocessing the data; we can just do a binary search using this new ordering! Here’s a Mathematica script that does just that. Here we assume constant-time list length calculations and fast slicing (which Mathematica has). Note that list indexing and slicing have double-square-bracket [ [ ] ] syntax, while function application uses single square brackets [ ]. It should look very similar to the earlier pseudocode for binary search.

eq[x_, y_, lt_] := !lt[x,y] && !lt[y,x];

(* base cases *)
BinarySearch[{}, sought_, lt_] := False;
BinarySearch[{oneElt_}, sought_, lt_] := eq[oneElt, sought, lt];

(* inductive step *)
BinarySearch[lst_List, sought_, lt_] :=
   Module[{size = Length[lst], midVal, midNdx},
      midNdx = Floor[size/2];
      midVal = lst[[midNdx]];
      If[eq[midVal, sought, lt],
         True,
         If[lt[sought, midVal],
            BinarySearch[lst[[ ;; midNdx - 1]], sought, lt],
            BinarySearch[lst[[midNdx + 1 ;; ]], sought, lt]
         ]
      ]
   ];

Notice that if we use the standard $ <$ (in Mathematica, the function Less), then the BinarySearch function reverts to a standard binary search. Marvelous! Now we have a reusable piece of code that searches through any well-ordered set, provided we provide the correct well order.
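For readers who would rather stay in Python, here is my rendition of the same idea (not from the original post): the cut ordering, plus a binary search that consults only the supplied less-than function, deriving equality as in the equivalences listed earlier:

def cut_less_than(cut_point):
    # the well order on a cut list: values >= cut_point come first
    def lt(x, y):
        if y < cut_point <= x:
            return True
        if x < cut_point <= y:
            return False
        return x < y
    return lt

def binary_search_lt(lst, sought, lt):
    # binary search using only lt; equality is derived from lt alone
    eq = lambda a, b: not lt(a, b) and not lt(b, a)
    lo, hi = 0, len(lst) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if eq(lst[mid], sought):
            return mid
        if lt(sought, lst[mid]):
            hi = mid - 1
        else:
            lo = mid + 1
    return -1

cut_list = [5, 6, 9, 10, 11, 13, 1, 3, 3, 4]
assert binary_search_lt(cut_list, 11, cut_less_than(5)) == 4
assert binary_search_lt(cut_list, 2, cut_less_than(5)) == -1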

The lesson to take from this is: know your data! If your input list is not sorted, but still structured in some way, then there’s a good chance it is sorted with respect to a non-natural total order. For example, many operating systems order filenames which end in numbers oddly (e.g. “file1”, “file11”, “file12”, “file2”), and in certain places in the world, financial calendars are ordered differently (in Australia, the fiscal year starts in July). So take advantage of that, and you’ll never need to write binary search again.