
Bayesian Ranking for Rated Items

Problem: You have a catalog of items with discrete ratings (thumbs up/thumbs down, or 5-star ratings, etc.), and you want to display them in the “right” order.

Solution: In Python

def score(ratings, rating_prior, rating_utility):
    '''
      score: [int], [int], [float] -> float

      Return the expected value of the rating for an item with known
      ratings specified by `ratings`, prior belief specified by
      `rating_prior`, and a utility function specified by `rating_utility`,
      assuming the ratings are a multinomial distribution and the prior
      belief is a Dirichlet distribution.
    '''
    ratings = [r + p for (r, p) in zip(ratings, rating_prior)]
    score = sum(r * u for (r, u) in zip(ratings, rating_utility))
    return score / sum(ratings)
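As a quick sanity check, here's a hypothetical call with made-up numbers: one book's rating counts, a mild prior, and utilities equal to the literal star values. None of these numbers are from a real catalog; they're purely illustrative.

# Hypothetical example data: counts of 0-5 star votes for one book, prior
# belief counts for a "typical" book, and utilities equal to the star values.
ratings = [0, 0, 0, 1, 0, 3]
rating_prior = [1, 1, 2, 4, 6, 2]
rating_utility = [0, 1, 2, 3, 4, 5]

print(score(ratings, rating_prior, rating_utility))  # 69 / 20 = 3.45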

Discussion: This deceptively short solution can lead you on a long and winding path into the depths of statistics. I will do my best to give a short, clear version of the story.

As a working example, chosen merely because I recently listened to a related podcast, say you’re selling mass-market romance novels—which, by all accounts, is a predictable genre. You have a list of books, each of which has been rated on a scale of 0-5 stars by some number of users. You want to display the top books first, so that time-constrained readers can experience the most titillating novels first, and newbies to the genre can get the best first-time experience and be incentivized to buy more.

The setup required to arrive at the above code is the following, which I’ll phrase as a story.

Users’ feelings about a book, and subsequent votes, are independent draws from a known distribution (with unknown parameters). I will just call these distributions “discrete” distributions. So given a book and user, there is some unknown list (p_0, p_1, p_2, p_3, p_4, p_5) of probabilities (\sum_i p_i = 1) for each possible rating a user could give for that book.

But how do users get these probabilities? In this story, the probabilities are the output of a randomized procedure that generates distributions. That modeling assumption is called a “Dirichlet prior,” with Dirichlet meaning it generates discrete distributions, and prior meaning it encodes domain-specific information (such as the fraction of 4-star ratings for a typical romance novel).

So the story is you have a book, and that book gets a Dirichlet distribution (unknown to us), and then when a user comes along they sample from the Dirichlet distribution to get a discrete distribution, which they then draw from to choose a rating. We observe the ratings, and we need to find the book’s underlying Dirichlet. We start by assigning it some default Dirichlet (the prior) and update that Dirichlet as we observe new ratings. Some other assumptions:

  1. Books are indistinguishable except in the parameters of their Dirichlet distribution.
  2. The parameters of a book’s Dirichlet distribution don’t change over time, and inherently reflect the book’s value.

So a Dirichlet distribution is a process that produces discrete distributions. For simplicity, in this post we will say a Dirichlet distribution is parameterized by a list of six integers (n_0, \dots, n_5), one for each possible star rating. These values represent our belief in the “typical” distribution of votes for a new book. We’ll discuss more about how to set the values later. Sampling a value (a book’s list of probabilities) from the Dirichlet distribution is not trivial, but we don’t need to do that for this program. Rather, we need to be able to interpret a fixed Dirichlet distribution, and update it given some observed votes.

The interpretation we use for a Dirichlet distribution is its expected value, which, recall, is the parameters of a discrete distribution. In particular if n = \sum_i n_i, then the expected value is a discrete distribution whose probabilities are

\displaystyle \left (  \frac{n_0}{n}, \frac{n_1}{n}, \dots, \frac{n_5}{n} \right )

So you can think of each integer in the specification of a Dirichlet as “ghost ratings,” sometimes called pseudocounts, and we’re saying the probability is proportional to the count.
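To make the formula concrete, here's a tiny, made-up prior and the discrete distribution it expects:

# Hypothetical pseudocounts (n_0, ..., n_5) and the expected discrete distribution.
prior = [1, 1, 2, 4, 6, 2]
n = sum(prior)  # 16
print([n_i / n for n_i in prior])
# [0.0625, 0.0625, 0.125, 0.25, 0.375, 0.125]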

This is great, because if we knew the true Dirichlet distribution for a book, we could compute its ranking without a second thought. The ranking would simply be the expected star rating:

def simple_score(distribution):
   return sum(i * p for (i, p) in enumerate(distribution))

Putting books with the highest score on top would maximize the expected happiness of a user visiting the site, provided that happiness matches the user’s voting behavior, since the simple_score is just the expected vote.

Also note that all the rating system needs to make this work is that the rating options are linearly ordered. So a thumbs up/down (heaving bosom/flaccid member?) would work, too. We don’t need to know how happy it makes them to see a 5-star vs 4-star book. However, because as we’ll see next we have to approximate the distribution, and hence have uncertainty for scores of books with only a few ratings, it helps to incorporate numerical utility values (we’ll see this at the end).

Next, to update a given Dirichlet distribution with the results of some observed ratings, we have to dig a bit deeper into Bayes rule and the formulas for sampling from a Dirichlet distribution. Rather than do that, I’ll point you to this nice writeup by Jonathan Huang, where the core of the derivation is in Section 2.3 (page 4), and remark that the rule for updating for a new observation is to just add it to the existing counts.

Theorem: Given a Dirichlet distribution with parameters (n_1, \dots, n_k) and a new observation of outcome i, the updated Dirichlet distribution has parameters (n_1, \dots, n_{i-1}, n_i + 1, n_{i+1}, \dots, n_k). That is, you just update the i-th entry by adding 1 to it.
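In code, the update is as trivial as the theorem suggests. Here's a one-off sketch (the function name is mine, not from the main solution):

def update_dirichlet(params, observed_rating):
    # Observing a rating of i bumps the i-th pseudocount by one.
    new_params = list(params)
    new_params[observed_rating] += 1
    return new_params

print(update_dirichlet([1, 1, 2, 4, 6, 2], 5))  # [1, 1, 2, 4, 6, 3]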

This particular arithmetic to do the update is a mathematical consequence (derived in the link above) of the philosophical assumption that Bayes rule is how you should model your beliefs about uncertainty, coupled with the assumption that the Dirichlet process is how the users actually arrive at their votes.

The initial values (n_0, \dots, n_5) for star ratings should be picked so that they represent the average rating distribution among all prior books, since this is used as the default voting distribution for a new, unknown book. If you have more information about whether a book is likely to be popular, you can use a different prior. For example, if JK Rowling wrote a Harry Potter Romance novel that was part of the canon, you could pretty much guarantee it would be popular, and set n_5 high compared to n_0. Of course, if it were actually popular you could just wait for the good ratings to stream in, so tinkering with these values on a per-book basis might not help much. On the other hand, most books by unknown authors are bad, and n_5 should be close to zero. Selecting a prior dictates how influential ratings of new items are compared to ratings of items with many votes. The more pseudocounts you add to the prior, the less new votes count.

This gets us to the following code for star ratings.

def score(ratings, rating_prior):
    ratings = [r + p for (r, p) in zip(ratings, rating_prior)]
    score = sum(i * r for (i, r) in enumerate(ratings))
    return score / sum(ratings)

The only thing missing from the solution at the beginning is the utilities. The utilities are useful for two reasons. First, because books with few ratings encode a lot of uncertainty, having an idea about how extreme a feeling is implied by a specific rating allows one to give better rankings of new books.

Second, for many services, such as taxi rides on Lyft, the default rating tends to be 5 stars, and 4 stars or lower means something went wrong. For books, 3-4 stars is the default, while 5 stars means you were very happy.

The utilities parameter allows you to weight rating outcomes appropriately. So if you are in a Lyft-like scenario, you might specify utilities like [-10, -5, -3, -2, 1] to denote that a 4-star rating has the same negative impact as two 5-star ratings would positively contribute. On the other hand, for books the gap between 4-star and 5-star is much less than the gap between 3-star and 4-star. The utilities simply allow you to calibrate how the votes should be valued in comparison to each other, instead of using their literal star counts.
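To make the calibration concrete, here's a hypothetical call of the three-argument score function from the beginning of the post in a Lyft-like scenario. The ratings here run 1-5 stars, and all the numbers are invented for illustration.

lyft_utilities = [-10, -5, -3, -2, 1]   # 1-5 stars; anything below 5 is bad news
lyft_prior = [1, 1, 1, 2, 10]           # most rides get 5 stars
driver_ratings = [0, 0, 1, 0, 20]       # counts of 1-5 star ratings for one driver

print(score(driver_ratings, lyft_prior, lyft_utilities))  # 5 / 36, about 0.14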


The Reasonable Effectiveness of the Multiplicative Weights Update Algorithm

[Image: Christos Papadimitriou, who studies multiplicative weights in the context of biology.]

Hard to believe

Sanjeev Arora and his coauthors consider it “a basic tool [that should be] taught to all algorithms students together with divide-and-conquer, dynamic programming, and random sampling.” Christos Papadimitriou calls it “so hard to believe that it has been discovered five times and forgotten.” It has formed the basis of algorithms in machine learning, optimization, game theory, economics, biology, and more.

What mystical algorithm has such broad applications? Now that computer scientists have studied it in generality, it’s known as the Multiplicative Weights Update Algorithm (MWUA). Procedurally, the algorithm is simple. I can even describe the core idea in six lines of pseudocode. You start with a collection of n objects, and each object has a weight.

Set all the object weights to be 1.
For some large number of rounds:
   Pick an object at random proportionally to the weights
   Some event happens
   Increase the weight of the chosen object if it does well in the event
   Otherwise decrease the weight

The name “multiplicative weights” comes from how we implement the last step: if the weight of the chosen object at step t is w_t before the event, and G represents how well the object did in the event, then we’ll update the weight according to the rule:

\displaystyle w_{t+1} = w_t (1 + G)

Think of this as increasing the weight by a small multiple of the object’s performance on a given round.

Here is a simple example of how it might be used. You have some money you want to invest, and you have a bunch of financial experts who are telling you what to invest in every day. So each day you pick an expert, and you follow their advice, and you either make a thousand dollars, or you lose a thousand dollars, or something in between. Then you repeat, and your goal is to figure out which expert is the most reliable.

This is how we use multiplicative weights: if we number the experts 1, \dots, N, we give each expert a weight w_i which starts at 1. Then, each day we pick an expert at random (where experts with larger weights are more likely to be picked) and at the end of the day we have some gain or loss G. Then we update the weight of the chosen expert by multiplying it by (1 + G / 1000). Sometimes you have enough information to update the weights of experts you didn’t choose, too. The theoretical guarantees of the algorithm say we’ll find the best expert quickly (“quickly” will be concrete later).
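As a sketch of just the update in this story, normalizing by the thousand dollars keeps the multiplier between 0 and 2 (the function name and numbers are mine, for illustration):

def update_expert_weight(weight, gain):
    # gain is the day's profit or loss in dollars, between -1000 and 1000
    return weight * (1 + gain / 1000.0)

print(update_expert_weight(1.0, 250))    # 1.25
print(update_expert_weight(1.0, -1000))  # 0.0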

In fact, let’s play a game where you, dear reader, get to decide the rewards for each expert and each day. I programmed the multiplicative weights algorithm to react according to your choices. Click the image below to go to the demo.

[Image: link to the interactive MWUA demo]

This core mechanism of updating weights can be interpreted in many ways, and that’s part of the reason it has sprouted up all over mathematics and computer science. Just a few examples of where this has led:

  1. In game theory, weights are the “belief” of a player about the strategy of an opponent. The most famous algorithm to use this is called Fictitious Play, and others include EXP3 for minimizing regret in the so-called “adversarial bandit learning” problem.
  2. In machine learning, weights are the difficulty of a specific training example, so that higher weights mean the learning algorithm has to “try harder” to accommodate that example. The first result I’m aware of for this is the Perceptron (and similar Winnow) algorithm for learning hyperplane separators. The most famous is the AdaBoost algorithm.
  3. Analogously, in optimization, the weights are the difficulty of a specific constraint, and this technique can be used to approximately solve linear and semidefinite programs. The approximation is because MWUA only provides a solution with some error.
  4. In mathematical biology, the weights represent the fitness of individual alleles, and filtering reproductive success based on this and updating weights for successful organisms produces a mechanism very much like evolution. With modifications, it also provides a mechanism through which to understand sex in the context of evolutionary biology.
  5. The TCP protocol, which basically defined the internet, uses additive and multiplicative weight updates (which are very similar in the analysis) to manage congestion.
  6. You can get easy \log(n)-approximation algorithms for many NP-hard problems, such as set cover.

Additional, more technical examples can be found in this survey of Arora et al.

In the rest of this post, we’ll implement a generic Multiplicative Weights Update Algorithm, we’ll prove its main theoretical guarantee, and we’ll implement a linear program solver as an example of its applicability. As usual, all of the code used in the making of this post is available in a GitHub repository.

The generic MWUA algorithm

Let’s start by writing down pseudocode and an implementation for the MWUA algorithm in full generality.

In general we have some set X of objects and some set Y of “event outcomes” which can be completely independent. If these sets are finite, we can write down a table M whose rows are objects, whose columns are outcomes, and whose i,j entry M(i,j) is the reward produced by object x_i when the outcome is y_j. We will also write this as M(x, y) for object x and outcome y. The only assumption we’ll make on the rewards is that the values M(x, y) are bounded by some small constant B (by small I mean B should not require exponentially many bits to write down as compared to the size of X). In symbols, M(x,y) \in [0,B]. There are minor modifications you can make to the algorithm if you want negative rewards, but for simplicity we will leave that out. Note the table M just exists for analysis, and the algorithm does not know its values. Moreover, while the values in M are static, the choice of outcome y for a given round may be nondeterministic.

The MWUA algorithm randomly chooses an object x \in X in every round, observing the outcome y \in Y, and collecting the reward M(x,y) (or losing it as a penalty). The guarantee of the MWUA theorem is that the expected sum of rewards/penalties of MWUA is not much worse than if one had picked the best object (in hindsight) every single round.

Let’s describe the algorithm in notation first and build up pseudocode as we go. The input to the algorithm is the set of objects, a subroutine that observes an outcome, a black-box reward function, a learning rate parameter, and a number of rounds.

def MWUA(objects, observeOutcome, reward, learningRate, numRounds):
   ...

We define for object x a nonnegative number w_x we call a “weight.” The weights will change over time so we’ll also sub-script a weight with a round number t, i.e. w_{x,t} is the weight of object x in round t. Initially, all the weights are 1. Then MWUA continues in rounds. We start each round by drawing an example randomly with probability proportional to the weights. Then we observe the outcome for that round and the reward for that round.

import random

# draw: [float] -> int
# pick an index from the given list of floats proportionally
# to the size of the entry (i.e. normalize to a probability
# distribution and draw according to the probabilities).
def draw(weights):
    choice = random.uniform(0, sum(weights))
    choiceIndex = 0

    for weight in weights:
        choice -= weight
        if choice <= 0:
            return choiceIndex

        choiceIndex += 1

# MWUA: the multiplicative weights update algorithm
def MWUA(objects, observeOutcome, reward, learningRate, numRounds):
    weights = [1] * len(objects)
    for t in range(numRounds):
        chosenObjectIndex = draw(weights)
        chosenObject = objects[chosenObjectIndex]

        outcome = observeOutcome(t, weights, chosenObject)
        thisRoundReward = reward(chosenObject, outcome)

        ...

Sampling objects in this way is the same as associating a distribution D_t to each round, where if S_t = \sum_{x \in X} w_{x,t} then the probability of drawing x, which we denote D_t(x), is w_{x,t} / S_t. We don’t need to keep track of this distribution in the actual run of the algorithm, but it will help us with the mathematical analysis.
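If you did want to inspect it, D_t is nothing more than the normalized weight vector, e.g.:

def distribution(weights):
    total = sum(weights)
    return [w / total for w in weights]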

Next comes the weight update step. Let’s call our learning rate variable parameter \varepsilon. In round t say we have object x_t and outcome y_t, then the reward is M(x_t, y_t). We update the weight of the chosen object x_t according to the formula:

\displaystyle w_{x_t, t+1} = w_{x_t, t} (1 + \varepsilon M(x_t, y_t) / B)

In the more general event that you have rewards for all objects (if not, the reward-producing function can output zero), you would perform this weight update on all objects x \in X. This turns into the following Python snippet, where we hide the division by B into the choice of learning rate:

# MWUA: the multiplicative weights update algorithm
def MWUA(objects, observeOutcome, reward, learningRate, numRounds):
    weights = [1] * len(objects)
    outcomes = []
    cumulativeReward = 0

    for t in range(numRounds):
        chosenObjectIndex = draw(weights)
        chosenObject = objects[chosenObjectIndex]

        outcome = observeOutcome(t, weights, chosenObject)
        outcomes.append(outcome)
        thisRoundReward = reward(chosenObject, outcome)
        cumulativeReward += thisRoundReward

        for i in range(len(weights)):
            weights[i] *= (1 + learningRate * reward(objects[i], outcome))

    return weights, cumulativeReward, outcomes
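Here's a hypothetical toy run of the routine above, with three "experts" whose hidden success rates differ. Everything here (the rates, the reward function) is made up for illustration; it just shows how the pieces plug together.

import random

experts = [0, 1, 2]
hiddenSuccessRates = [0.2, 0.5, 0.8]

def observeOutcome(t, weights, chosenObject):
    return None  # no shared outcome in this toy example

def reward(expert, outcome):
    # each expert succeeds independently with its hidden rate
    return 1 if random.random() < hiddenSuccessRates[expert] else 0

weights, cumulativeReward, outcomes = MWUA(
    experts, observeOutcome, reward, 0.1, 500)
print(weights)  # the weight of expert 2 should dominate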

One of the amazing things about this algorithm is that the outcomes and rewards could be chosen adaptively by an adversary who knows everything about the MWUA algorithm (except which random numbers the algorithm generates to make its choices). This means that the rewards in round t can depend on the weights in that same round! We will exploit this when we solve linear programs later in this post.

But even in such an oppressive, exploitative environment, MWUA persists and achieves its guarantee. And now we can state that guarantee.

Theorem (from Arora et al): The cumulative reward of the MWUA algorithm is, up to constant multiplicative factors, at least the cumulative reward of the best object minus \log(n), where n is the number of objects. (Exact formula at the end of the proof)

The core of the proof, which we’ll state as a lemma, uses one of the most elegant proof techniques in all of mathematics. It’s the idea of constructing a potential function, and tracking the change in that potential function over time. Such a proof usually has the mysterious script:

  1. Define potential function, in our case S_t.
  2. State what seems like trivial facts about the potential function to write S_{t+1} in terms of S_t, and hence get general information about S_T for some large T.
  3. Theorem is proved.
  4. Wait, what?

Clearly, coming up with a useful potential function is a difficult and prized skill.

In this proof our potential function is the sum of the weights of the objects in a given round, S_t = \sum_{x \in X} w_{x, t}. Now the lemma.

Lemma: Let B be the bound on the size of the rewards, and 0 < \varepsilon < 1/2 a learning parameter. Recall that D_t(x) is the probability that MWUA draws object x in round t. Write the expected reward for MWUA for round t as the following (using only the definition of expected value):

\displaystyle R_t = \sum_{x \in X} D_t(x) M(x, y_t)

 Then the claim of the lemma is:

\displaystyle S_{t+1} \leq S_t e^{\varepsilon R_t / B}

Proof. Expand S_{t+1} = \sum_{x \in X} w_{x, t+1} using the definition of the MWUA update:

\displaystyle \sum_{x \in X} w_{x, t+1} = \sum_{x \in X} w_{x, t}(1 + \varepsilon M(x, y_t) / B)

Now distribute w_{x, t} and split into two sums:

\displaystyle \dots = \sum_{x \in X} w_{x, t} + \frac{\varepsilon}{B} \sum_{x \in X} w_{x,t} M(x, y_t)

Using the fact that D_t(x) = \frac{w_{x,t}}{S_t}, we can replace w_{x,t} with D_t(x) S_t, which allows us to recover R_t:

\displaystyle \begin{aligned} \dots &= S_t + \frac{\varepsilon S_t}{B} \sum_{x \in X} D_t(x) M(x, y_t) \\ &= S_t \left ( 1 + \frac{\varepsilon R_t}{B} \right ) \end{aligned}

And then using the fact that (1 + x) \leq e^x (Taylor series), we can bound the last expression by S_te^{\varepsilon R_t / B}, as desired.

\square

Now using the lemma, we can get a hold on S_T for a large T, namely that

\displaystyle S_T \leq S_1 e^{\varepsilon \sum_{t=1}^T R_t / B}

If |X| = n then S_1=n, simplifying the above. Moreover, the sum of the weights in round T is certainly greater than any single weight, so that for every fixed object x \in X,

\displaystyle S_T \geq w_{x,T} \geq  (1 + \varepsilon)^{\sum_t M(x, y_t) / B}

Squeezing S_T between these two inequalities and taking logarithms (to simplify the exponents) gives

\displaystyle \left ( \sum_t M(x, y_t) / B \right ) \log(1+\varepsilon) \leq \log n + \frac{\varepsilon}{B} \sum_t R_t

Multiply through by B, divide by \varepsilon, rearrange, and use the fact that when 0 < \varepsilon < 1/2 we have \log(1 + \varepsilon) \geq \varepsilon - \varepsilon^2 (Taylor series) to get

\displaystyle \sum_t R_t \geq \left [ \sum_t M(x, y_t) \right ] (1-\varepsilon) - \frac{B \log n}{\varepsilon}

The bracketed term is the payoff of object x, and MWUA’s payoff is at least a fraction of that minus the logarithmic term. The bound applies to any object x \in X, and hence to the best one. This proves the theorem.

\square

Briefly discussing the bound itself, we see that the smaller the learning rate is, the closer you eventually get to the best object, but by contrast the more the subtracted quantity B \log(n) / \varepsilon hurts you. If your target is an absolute error bound against the best performing object on average, you can do more algebra to determine how many rounds you need in terms of a fixed \delta. The answer is roughly: let \varepsilon = O(\delta / B) and pick T = O(B^2 \log(n) / \delta^2). See this survey for more.
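As a rough sketch of that parameter choice, with the constants hidden in the O(·) set to 1 purely for illustration (the function and its name are mine, not from the survey):

import math

def suggested_parameters(B, n, delta):
    epsilon = delta / B
    numRounds = int(B**2 * math.log(n) / delta**2)
    return epsilon, numRounds

print(suggested_parameters(B=1, n=10, delta=0.05))  # (0.05, 921)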

MWUA for linear programs

Now we’ll approximately solve a linear program using MWUA. Recall that a linear program is an optimization problem whose goal is to minimize (or maximize) a linear function of many variables. The objective to minimize is usually given as a dot product c \cdot x, where c is a fixed vector and x = (x_1, x_2, \dots, x_n) is a vector of non-negative variables the algorithm gets to choose. The choices for x are also constrained by a set of m linear inequalities, A_i \cdot x \geq b_i, where A_i is a fixed vector and b_i is a scalar for i = 1, \dots, m. This is usually summarized by putting all the A_i in a matrix, b_i in a vector, as

x_{\textup{OPT}} = \textup{argmin}_x \{ c \cdot x \mid Ax \geq b, x \geq 0 \}

We can further simplify the constraints by assuming we know the optimal value Z = c \cdot x_{\textup{OPT}} in advance, by doing a binary search (more on this later). So, if we ignore the hard constraint Ax \geq b, the “easy feasible region” of possible x‘s includes \{ x \mid x \geq 0, c \cdot x = Z \}.

In order to fit linear programming into the MWUA framework we have to define two things.

  1. The objects: the set of linear inequalities A_i \cdot x \geq b_i.
  2. The rewards: the error of a constraint for a special input vector x_t.

Number 2 is curious (why would we give a reward for error?) but it’s crucial and we’ll discuss it momentarily.

The special input x_t depends on the weights in round t (which is allowed, recall). Specifically, if the weights are w = (w_1, \dots, w_m), we ask for a vector x_t in our “easy feasible region” which satisfies

\displaystyle (A^T w) \cdot x_t \geq w \cdot b

For this post we call the implementation of procuring such a vector the “oracle,” since it can be seen as the black-box problem of, given a vector \alpha and a scalar \beta and a convex region R, finding a vector x \in R satisfying \alpha \cdot x \geq \beta. This allows one to solve more complex optimization problems with the same technique, swapping in a new oracle as needed. Our choice of inputs, \alpha = A^T w, \beta = w \cdot b, are particular to the linear programming formulation.

Two remarks on this choice of inputs. First, the vector A^T w is a weighted average of the constraints in A, and w \cdot b is a weighted average of the thresholds. So this inequality is a “weighted average” inequality (a nonnegative combination of the constraints, since the weights are nonnegative). In particular, if no such x exists, then the original linear program has no solution. Indeed, given a solution x^* to the original linear program, each constraint, say A_1 \cdot x^* \geq b_1, is preserved when multiplied by the nonnegative weight w_1, and summing the weighted constraints gives (A^T w) \cdot x^* \geq w \cdot b.

Second, and more important to the conceptual understanding of this algorithm, the choice of rewards and the multiplicative updates ensure that easier constraints show up less prominently in the inequality by having smaller weights. That is, if we end up overly satisfying a constraint, we penalize that object for future rounds so we don’t waste our effort on it. The byproduct of MWUA—the weights—identify the hardest constraints to satisfy, and so in each round we can put a proportionate amount of effort into solving (one of) the hard constraints. This is why it makes sense to reward error; the error is a signal for where to improve, and by over-representing the hard constraints, we force MWUA’s attention on them.

At the end, our final output is an average of the x_t produced in each round, i.e. x^* = \frac{1}{T}\sum_t x_t. This vector satisfies all the constraints to a roughly equal degree. We will skip the proof that this vector does what we want, but see these notes for a simple proof. We’ll spend the rest of this post implementing the scheme outlined above.

Implementing the oracle

Fix the convex region R = \{ c \cdot x = Z, x \geq 0 \} for a known optimal value Z. Define \textup{oracle}(\alpha, \beta) as the problem of finding an x \in R such that \alpha \cdot x \geq \beta.

For the case of this linear region R, we can simply find the index i which maximizes \alpha_i Z / c_i. If this value exceeds \beta, we can return the vector with that value in the i-th position and zeros elsewhere. Otherwise, the problem has no solution.

To prove the “no solution” part, say n=2 and you have x = (x_1, x_2) a solution to \alpha \cdot x \geq \beta. Then for whichever index makes \alpha_i Z / c_i bigger, say i=1, you can increase \alpha \cdot x without changing c \cdot x = Z by replacing x_1 with x_1 + (c_2/c_1)x_2 and x_2 with zero. I.e., we’re moving the solution x along the line c \cdot x = Z until it reaches a vertex of the region bounded by c \cdot x = Z and x \geq 0. This must happen when all entries but one are zero. This is the same reason why optimal solutions of (generic) linear programs occur at vertices of their feasible regions.

The code for this becomes quite simple. Note we use the numpy library in the entire codebase to make linear algebra operations fast and simple to read.

import numpy

class InfeasibleException(Exception):
    pass

def makeOracle(c, optimalValue):
    n = len(c)

    def oracle(weightedVector, weightedThreshold):
        def quantity(i):
            return weightedVector[i] * optimalValue / c[i] if c[i] > 0 else -1

        biggest = max(range(n), key=quantity)
        if quantity(biggest) < weightedThreshold:
            raise InfeasibleException

        return numpy.array([optimalValue / c[i] if i == biggest else 0 for i in range(n)])

    return oracle
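A quick hypothetical check of the oracle on a tiny instance (all values invented for illustration):

oracle = makeOracle(numpy.array([1.0, 2.0]), 4.0)   # c = (1, 2), Z = 4
print(oracle(numpy.array([1.0, 1.0]), 2.0))         # [4. 0.], since 1*4/1 = 4 >= 2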

Implementing the core solver

The core solver implements the discussion from previously, given the optimal value of the linear program as input. To avoid too many single-letter variable names, we use linearObjective instead of c.

def solveGivenOptimalValue(A, b, linearObjective, optimalValue, learningRate=0.1):
    m, n = A.shape  # m equations, n variables
    oracle = makeOracle(linearObjective, optimalValue)

    def reward(i, specialVector):
        ...

    def observeOutcome(_, weights, __):
        ...

    numRounds = 1000
    weights, cumulativeReward, outcomes = MWUA(
        range(m), observeOutcome, reward, learningRate, numRounds
    )
    averageVector = sum(outcomes) / numRounds

    return averageVector

First we make the oracle, then the reward and outcome-producing functions, then we invoke the MWUA subroutine. Here are those two functions; they are closures because they need access to A and b. Note that neither c nor the optimal value show up here.

    def reward(i, specialVector):
        constraint = A[i]
        threshold = b[i]
        return threshold - numpy.dot(constraint, specialVector)

    def observeOutcome(_, weights, __):
        weights = numpy.array(weights)
        weightedVector = A.transpose().dot(weights)
        weightedThreshold = weights.dot(b)
        return oracle(weightedVector, weightedThreshold)

Implementing the binary search, and an example

Finally, the top-level routine. Note that the binary search for the optimal value is sophisticated (though it could be more sophisticated). It takes a max range for the search, and invokes the optimization subroutine, moving the upper bound down if the linear program is feasible and moving the lower bound up otherwise.

def solve(A, b, linearObjective, maxRange=1000):
    optRange = [0, maxRange]

    while optRange[1] - optRange[0] > 1e-8:
        proposedOpt = sum(optRange) / 2
        print("Attempting to solve with proposedOpt=%G" % proposedOpt)

        # Because the binary search starts so high, it results in extreme
        # reward values that must be tempered by a slow learning rate. Exercise
        # to the reader: determine absolute bounds for the rewards, and set
        # this learning rate in a more principled fashion.
        learningRate = 1 / max(2 * proposedOpt * c for c in linearObjective)
        learningRate = min(learningRate, 0.1)

        try:
            result = solveGivenOptimalValue(A, b, linearObjective, proposedOpt, learningRate)
            optRange[1] = proposedOpt
        except InfeasibleException:
            optRange[0] = proposedOpt

    return result

Finally, a simple example:

A = numpy.array([[1, 2, 3], [0, 4, 2]])
b = numpy.array([5, 6])
c = numpy.array([1, 2, 1])

x = solve(A, b, c)
print(x)
print(c.dot(x))
print(A.dot(x) - b)

The output:

Attempting to solve with proposedOpt=500
Attempting to solve with proposedOpt=250
Attempting to solve with proposedOpt=125
Attempting to solve with proposedOpt=62.5
Attempting to solve with proposedOpt=31.25
Attempting to solve with proposedOpt=15.625
Attempting to solve with proposedOpt=7.8125
Attempting to solve with proposedOpt=3.90625
Attempting to solve with proposedOpt=1.95312
Attempting to solve with proposedOpt=2.92969
Attempting to solve with proposedOpt=3.41797
Attempting to solve with proposedOpt=3.17383
Attempting to solve with proposedOpt=3.05176
Attempting to solve with proposedOpt=2.99072
Attempting to solve with proposedOpt=3.02124
Attempting to solve with proposedOpt=3.00598
Attempting to solve with proposedOpt=2.99835
Attempting to solve with proposedOpt=3.00217
Attempting to solve with proposedOpt=3.00026
Attempting to solve with proposedOpt=2.99931
Attempting to solve with proposedOpt=2.99978
Attempting to solve with proposedOpt=3.00002
Attempting to solve with proposedOpt=2.9999
Attempting to solve with proposedOpt=2.99996
Attempting to solve with proposedOpt=2.99999
Attempting to solve with proposedOpt=3.00001
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3  # note %G rounds the printed values
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
[ 0.     0.987  1.026]
3.00000000425
[  5.20000072e-02   8.49831849e-09]

So there we have it. A fiendishly clever use of multiplicative weights for solving linear programs.

Discussion

One of the nice aspects of MWUA is that it’s completely transparent. If you want to know why a decision was made, you can simply look at the weights and the history of rewards of the objects. There’s also a clear interpretation of what is being optimized, as the potential function used in the proof is a measure of both quality and adaptability to change. The latter is why MWUA succeeds even in adversarial settings, and why it makes sense to think about MWUA in the context of evolutionary biology.

This even makes one imagine new problems that traditional algorithms cannot solve, but which MWUA handles with grace. For example, imagine trying to solve an “online” linear program in which over time a constraint can change. MWUA can adapt to maintain its approximate solution.

The linear programming technique is known in the literature as the Plotkin-Shmoys-Tardos framework for covering and packing problems. The same ideas extend to other convex optimization problems, including semidefinite programming.

If you’ve been reading this entire post screaming “this is just gradient descent!” then you’re right and wrong. It bears a striking resemblance to gradient descent (see this document for details about how special cases of MWUA are gradient descent by another name), but the adaptivity of the rewards makes MWUA different.

Even though so many people have been advocating for MWUA over the past decade, it’s surprising that it doesn’t show up in the general math/CS discourse on the internet or even in many algorithms courses. The Arora survey I referenced is from 2005 and the linear programming technique I demoed is originally from 1991! I took algorithms classes wherever I could, starting undergraduate in 2007, and I didn’t even hear a whisper of this technique until midway through my PhD in theoretical CS (I did, however, study fictitious play in a game theory class). I don’t have an explanation for why this is the case, except maybe that it takes more than 20 years for techniques to make it to the classroom. At the very least, this is one good reason to go to graduate school. You learn the things (and where to look for the things) which haven’t made it to classrooms yet.

Until next time!

A Spectral Analysis of Moore Graphs

For fixed integers r > 0, and odd g, a Moore graph is an r-regular graph of girth g which has the minimum number of vertices n among all such graphs with the same regularity and girth.

(Recall, the girth of a graph is the length of its shortest cycle, and a graph is regular if all its vertices have the same degree.)

Problem (Hoffman-Singleton): Find a useful constraint on the relationship between n and r for Moore graphs of girth 5 and degree r.

Note: Excluding trivial Moore graphs with girth g=3 or degree r=2, there are only two known Moore graphs: (a) the Petersen graph and (b) this crazy graph:

[Image: the Hoffman–Singleton graph]

The solution to the problem shows that there are only a few cases left to check.

Solution: It is easy to show that the minimum number of vertices of a Moore graph of girth 5 and degree r is 1 + r + r(r-1) = r^2 + 1. Just consider the tree:

[Image: the tree for r = 3; the argument should be clear for any r from the branching pattern of the tree: 1 + r + r(r-1)]

Provided n = r^2 + 1, we will prove that r must be either 3, 7, or 57. The technique will be to analyze the eigenvalues of a special matrix derived from the Moore graph.

Let A be the adjacency matrix of the supposed Moore graph with these properties. Let B = A^2 = (b_{i,j}). Using the girth and regularity we know:

  • b_{i,i} = r since each vertex has degree r.
  • b_{i,j} = 0 if (i,j) is an edge of G, since any walk of length 2 from i to j would be able to use such an edge and create a cycle of length 3 which is less than the girth.
  • b_{i,j} = 1 if (i,j) is not an edge, because (using the tree idea above) every two non-adjacent vertices have a unique common neighbor.

Let J_n be the n \times n matrix of all 1’s and I_n the identity matrix. Then

\displaystyle B = rI_n + J_n - I_n - A.

We use this matrix equation to generate two equations whose solutions will restrict r. Since A is a real symmetric matrix is has an orthonormal basis of eigenvectors v_1, \dots, v_n with eigenvalues \lambda_1 , \dots, \lambda_n. Moreover, by regularity we know one of these vectors is the all 1’s vector, with eigenvalue r. Call this v_1 = (1, \dots, 1), \lambda_1 = r. By orthogonality of v_1 with the other v_i, we know that J_nv_i = 0. We also know that, since A is an adjacency matrix with zeros on the diagonal, the trace of A is \sum_i \lambda_i = 0.

Multiply the matrices in the equation above by any v_i, i > 1 to get

\displaystyle \begin{aligned}A^2v_i &= rv_i - v_i - Av_i \\ \lambda_i^2v_i &= rv_i - v_i - \lambda_i v_i \end{aligned}

Rearranging and factoring out v_i gives \lambda_i^2 + \lambda_i - (r-1) = 0. Let z = 4r - 3; then the non-r eigenvalues must be one of the two roots: \mu_1 = (-1 + \sqrt{z}) / 2 or \mu_2 = (-1 - \sqrt{z})/2.

Say that \mu_1 occurs a times and \mu_2 occurs b times, then n = a + b + 1. So we have the following equations.

\displaystyle \begin{aligned} a + b + 1 &= n \\ r + a \mu_1 + b\mu_2 &= 0 \end{aligned}

From these equations you can easily derive that \sqrt{z} is an integer, and as a consequence r = (m^2 + 3) / 4 for some integer m. With a tiny bit of extra algebra, this gives

\displaystyle m(m^3 - 2m - 16(a-b)) = 15

Implying that m divides 15, meaning m \in \{ 1, 3, 5, 15\}, and as a consequence r \in \{ 1, 3, 7, 57\}. Since a 1-regular graph has no cycles at all, r = 1 is impossible, leaving r \in \{3, 7, 57\}.

\square
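As a numerical sanity check (not part of the proof), we can verify the eigenvalue claims for the r = 3 case, using the standard construction of the Petersen graph as the Kneser graph on 2-element subsets of a 5-element set:

import itertools
import numpy

# Vertices are 2-element subsets of {0,...,4}; two vertices are adjacent
# exactly when the subsets are disjoint. This yields the Petersen graph.
vertices = list(itertools.combinations(range(5), 2))
A = numpy.array([[1 if set(u).isdisjoint(v) else 0 for v in vertices]
                 for u in vertices])

print(len(vertices))                               # 10 = 3^2 + 1
print(numpy.round(numpy.linalg.eigvalsh(A), 6))
# [-2. -2. -2. -2.  1.  1.  1.  1.  1.  3.]: the degree r = 3 once, plus the
# two roots of x^2 + x - 2 = 0 (namely 1 and -2) with multiplicities a = 5, b = 4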

Discussion: This is a strikingly clever use of spectral graph theory to answer a question about combinatorics. Spectral graph theory is precisely that, the study of what linear algebra can tell us about graphs. For a deeper dive into spectral graph theory, see the guest post I wrote on With High Probability.

If you allow for even girth, there are a few extra (infinite families of) Moore graphs, see Wikipedia for a list.

With additional techniques, one can also disprove the existence of any Moore graphs that are not among the known ones, with the exception of a possible Moore graph of girth 5 and degree 57 on n = 3250 vertices. It is unknown whether such a graph exists, but if it does, it is known that it cannot be vertex-transitive.

You should go out and find it or prove it doesn’t exist.

Hungry for more applications of linear algebra to combinatorics and computer science? The book Thirty-Three Miniatures is a fantastically entertaining book of linear algebra gems (it’s where I found the proof in this post). The exposition is lucid, and the chapters are short enough to read on my daily train commute.

Voltage, Temperature, and Harmonic Functions

This is a guest post by my friend and colleague Samantha Davies. Samantha is a math Ph.D student at the University of Washington, and a newly minted math blogger. Go check out her blog, With High Probability.

If I said “let’s talk about temperature and voltage”, you might be interested, but few would react the same if instead I suggested an umbrella term: harmonic functions.

Harmonic functions are often introduced as foreign operators that follow a finicky set of rules, when really they’re extremely natural phenomena that everyone has seen before. Why the disconnect then? A complex analysis class won’t take the time to talk about potential energy storage, and a book on data science won’t discuss fluid dynamics. Emphasis is placed on instances of harmonic functions in one setting, instead of looking more broadly at the whole class of functions.

[Image from ams.org, ©Disney/Pixar]

Understanding harmonic functions on both discrete and continuous structures leads to a better appreciation of any setting where these functions are found, whether that’s animation, heat distribution, a network of political leanings, random walks, certain fluid flows, or electrical circuits.

Harmonic?

Harmonic originated as a descriptor for something that’s undergoing harmonic motion. If a spring with an attached mass is stretched or compressed, releasing that spring results in the system moving up and down. The mass undergoes harmonic motion until the system returns to its equilibrium. These repetitive movements exhibited by the mass are known as oscillations. In physics lingo, a simple harmonic motion is a type of oscillation where the force that restores an object to its equilibrium is directly proportional to the displacement.

[Animation: a mass oscillating on a spring (Wikipedia)]

So, take the mass on a spring as an example: there’s some restoring force which works to bring the spring back to where it would naturally hang still. The system has more potential energy when it’s really stretched out from its equilibrium position, or really smooshed together from its equilibrium position (it’s more difficult to pull a spring really far out or push a spring really close together).

Equations representing harmonic motion have solutions which can be written as functions of sine and cosine, and these became known as harmonics.  Higher dimensional parallels of harmonics satisfy a certain partial differential equation called Laplace’s equation. As mathematicians are so skilled at using the same name for way too many things, today any real valued function with continuous second partial derivatives which satisfies Laplace’s equation is called a continuous harmonic function.

\Delta denotes the Laplace operator, also called the Laplacian. A continuous function u defined on some region \Omega satisfies Laplace’s equation when \Delta u=0, for all points in \Omega. My choice of the word “region” here is technical: being harmonic only makes sense in open, connected, nonempty sets.

The Laplacian is most easily described in terms of Cartesian coordinates, where it’s the sum of all u‘s unmixed second partial derivatives:

\begin{aligned} \Delta u(v)=\sum\limits_{i=1}^n \frac{\partial^2 u}{\partial x_i^2}(v). \end{aligned}

For example, u(x,y)=e^x \sin y is harmonic on all of \mathbb{R}^2 because

\begin{aligned} \Delta u(x,y)= \frac{\partial^2}{\partial x^2} \left (e^x \sin y\right ) + \frac{\partial^2}{\partial y^2} \left (e^x \sin y \right ) =e^x \sin y -e^x \sin y=0. \end{aligned}

Ironically, \cos(x) and \sin(x) are not harmonic in any region in \mathbb{R}, even though they define harmonic motion. Taking the second derivatives, \frac{d^2}{dx^2} \cos(x)=-\cos(x) and \frac{d^2}{dx^2} \sin(x)=-\sin(x) are only 0 at isolated points. In fact, harmonic functions on \mathbb{R} are exactly the linear functions.

The discrete Laplace operator is an analogue used on discrete structures like grids or graphs. These structures are all basically a set of vertices, where some pairs of vertices have an edge connecting them. A subset of vertices must be designated as the boundary, where the function has specified values called the boundary condition. The rest of the vertices are called interior.

A function u on a graph assigns a value to each interior vertex, while satisfying the boundary condition. There’s also a function \gamma which assigns a value to each edge in a way such that for every vertex, the sum of its outgoing edges’ values is 1 (without loss of generality, otherwise normalize arbitrary nonnegative weights). Keep in mind that the edge weight function is not necessarily symmetric, as it’s not required that \gamma(v,w)=\gamma(w,v). The discrete Laplacian is still denoted \Delta, and the discrete form of Laplace’s equation is described by:

\begin{aligned} (\Delta u)(v) = \sum\limits_{w \in N(v)} \gamma(v,w)\left( u(v)-u(w)\right)=0, \end{aligned}

where N(v) denotes the neighbors of v, which are the vertices connected to v by an edge.
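Here's a minimal sketch of this definition in code, where `u`, `gamma`, and `neighbors` are hypothetical inputs (a dict of vertex values, a dict of edge weights, and a dict of adjacency lists):

def discrete_laplacian(u, gamma, neighbors, v):
    # (Delta u)(v) = sum over neighbors w of gamma(v, w) * (u(v) - u(w))
    return sum(gamma[(v, w)] * (u[v] - u[w]) for w in neighbors[v])

# Tiny example: a path 0 - 1 - 2 with u = (0, 0.5, 1) and equal edge weights.
u = {0: 0.0, 1: 0.5, 2: 1.0}
gamma = {(1, 0): 0.5, (1, 2): 0.5}
neighbors = {1: [0, 2]}
print(discrete_laplacian(u, gamma, neighbors, 1))  # 0.0, so u is harmonic at vertex 1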

Below is an example of a harmonic function \alpha on a grid which has symmetric edge weights. The function value is specified on the black boundary vertices, while the white interior vertices take on values that satisfy the discrete form of Laplace’s equation.

[Image: a grid with symmetric edge weights and the function values of \alpha]

You can easily check that \alpha is harmonic at the corner vertices, but here’s the grosser computation for the 2 most central vertices on the grid (call the left one v_1 and the other v_2):

\begin{aligned} (\Delta \alpha)(v_1) &= \sum\limits_{w \in N(v_1)} \gamma(v_1,w)\left( \alpha(v_1)-\alpha(w)\right)=\frac{1}{4} \left( \frac{24}{55}-0\right)+\frac{1}{4} \left(\frac{24}{55}+4 \right)+\frac{3}{8} \left(\frac{24}{55}-\frac{64}{55}\right)+\frac{1}{8} \left(\frac{24}{55}-8\right )=0,\\ (\Delta \alpha)(v_2) &= \sum\limits_{w \in N(v_2)} \gamma(v_2,w)\left( \alpha(v_2)-\alpha(w)\right)=\frac{1}{8}  \left(\frac{64}{55}-8 \right )+\frac{3}{8} \left(\frac{64}{55}-0\right)+\frac{3}{8} \left(\frac{64}{55}-\frac{24}{55}\right)+\frac{1}{8} \left(\frac{64}{55}-0\right)=0. \end{aligned}

Properties of harmonic functions

The definition of a harmonic function is not enlightening… it’s just an equation. But, harmonic functions satisfy an array of properties that are much more helpful in eliciting intuition about their behavior. These properties also make the functions particularly friendly to work with in mathematical contexts.

Mean value property

The mean value property states that the value of a harmonic function at an interior point is the average of the function’s values around the point. Of course, the definition of “average” depends on the setting. For example, in \mathbb{R} harmonic functions are just linear functions. So given any linear function, say f(x)=3x-2, the value at a point can be found by averaging values on an interval around it:  f(1)= \frac14 \int_{-1}^{3} (3x-2)=1.

In \mathbb{R}^n, if u is harmonic in a region Ω which contains a ball centered around a, then u(a) is the average of u on the surface of the ball. So in the complex plane, or equivalently \mathbb{R}^2, if the circle has radius r_0, then for all 0 <r<r_0:

\begin{aligned} u(a)=\frac{1}{2 \pi} \int\limits_{0}^{2 \pi} u(a+re^{i \theta}) d \theta. \end{aligned}

In the discrete case, a harmonic function’s value on an interior vertex is the average of all the vertex’s neighbors’ values.  If u is harmonic at interior vertex v then the mean value property states that:

\begin{aligned} u(v)=\sum\limits_{w \in N(v)} \gamma(v,w)u(w). \end{aligned}

In the picture below, the black vertices are the boundary while the white vertices are interior, and the harmonic function \alpha has been defined on the graph. Here, \gamma(v,w)=\frac{1}{\deg(v)}, so this is an example with edge weights that are not necessarily symmetric. For instance, \gamma(v,w)=\frac14 while \gamma(w,v)=\frac13.

[Image: a graph with boundary and interior values of \alpha (Foundations of Data Science, p. 156)]

It’s easy to verify \alpha satisfies the mean value property at all interior vertices, but here’s the check for v:

\begin{aligned} \alpha(v)=\sum\limits_{x \in N(v)} \gamma(v,x)\alpha(x)=\frac14 \cdot 5+\frac14 \cdot 5+\frac14 \cdot 5+\frac14 \cdot 1=4. \end{aligned}

For the settings considered here, the mean value property is actually equivalent to being harmonic. If u is a continuous function which satisfies the mean value property on a region \Omega, then u is harmonic in \Omega (proof in appendix). Any function \alpha on a graph which satisfies the mean value property also satisfies the discrete form of Laplace’s equation; just rearrange the mean value property to see this.

Maximum principle

The mean value property leads directly to the maximum/ minimum principle for harmonic functions. This principle says that a nonconstant harmonic function on a closed and bounded region must attain its maximum and minimum on the boundary. It’s a consequence that’s easy to see for continuous and discrete settings: if a max/min were attained at an interior point, that interior point is also the average of some set of surrounding points. But it’s impossible for an average to be strictly greater than or less than all of its surrounding points unless the function is constant.

[Image: plots of x^2+y^2 and x^2-y^2 (laussy.org)]

Above are plots of x^2+y^2 and x^2-y^2 on the set [-3,3] \times [-3,3]. x^2+y^2 is nowhere harmonic and x^2-y^2 is harmonic everywhere, which means that x^2-y^2 must satisfy the maximum/minimum principle but there’s no guarantee that x^2+y^2 does. Looking at the graph above, x^2+y^2 doesn’t satisfy the minimum principle because the minimum of x^2+y^2 on the domain is 0, which is achieved at the interior point (0,0). On the same set, the maximum of x^2-y^2 is 9, which is achieved at boundary points (3,0) and (-3,0), and the minimum of x^2-y^2 is -9, which is achieved at boundary points (0,3) and (0,-3).

Uniqueness

If a harmonic function is continuous on the boundary of a closed and bounded set and harmonic in the interior, then the values of the interior are uniquely determined by the values of the boundary. The proof of this is identical for the discrete and continuous case: if u_1 and u_2 were two such functions described above with the same boundary conditions, then u_1-u_2 is harmonic and its boundary values are 0. By the max/min principle, all the interior values are also 0, so u_1 = u_2 on the whole set.

Solution to a Dirichlet problem

A Dirichlet problem is the problem of finding a function which solves some partial differential equation throughout a region, given its values on the region’s boundary. When considering harmonic functions in the discrete or continuous setting, the PDE to be solved is the appropriate form of Laplace’s equation.

So, given a region and a set of values to take on the boundary, does there exist a function which takes the designated values on the boundary and is harmonic in the interior of the region? The answer in the continuous setting is yes, so long as the boundary condition is sufficiently smooth. In the discrete setting, it’s much more complicated, especially on networks with infinitely many vertices. For most practical instances though, like grids or finite graphs with edge weight function \gamma(v,w)=\frac{1}{\deg(v)}, the answer is still yes.
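Here's a small sketch of solving such a discrete Dirichlet problem on a finite graph with \gamma(v,w)=\frac{1}{\deg(v)}, by repeatedly replacing each interior value with the average of its neighbors’ values. The function and its inputs are illustrative, not from any particular reference.

def solve_dirichlet(neighbors, boundary_values, iterations=1000):
    # Start interior vertices at 0 and repeatedly average neighbors' values.
    values = {v: boundary_values.get(v, 0.0) for v in neighbors}
    for _ in range(iterations):
        for v in neighbors:
            if v not in boundary_values:
                values[v] = sum(values[w] for w in neighbors[v]) / len(neighbors[v])
    return values

# Path 0 - 1 - 2 - 3 with boundary values u(0) = 0 and u(3) = 3: the interior
# values converge to u(1) = 1 and u(2) = 2 (harmonic on a path means linear).
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(solve_dirichlet(neighbors, {0: 0.0, 3: 3.0}))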

Examples

I’ll go into detail about a harmonic function on a continuous structure, steady state temperature, and on a discrete structure, voltage. These phenomena are already familiar to us, but weaving previous knowledge with a new appreciation for harmonic functions is crucial to understanding how they, and other natural instances of harmonic functions, behave.

Temperature

Suppose you have an insulated object, like a wire, plate, or cube, and apply some temperature on its boundary. The temperature on the boundary will affect the temperature of the interior, and eventually these interior temperatures stabilize to some distribution which doesn’t change as time continues. This is known as the steady state temperature distribution.

A temperature function u of spatial variables x_i and time variable t, satisfies the heat equation

\begin{aligned} \frac{\partial u}{\partial t}-\alpha \Delta u=0, \end{aligned}

where \alpha>0 is a constant which represents the material’s ability to store and conduct thermal energy (these modules for complex analysis have a nice derivation of the heat equation ). The requirement that the temperature distribution is steady state yields the necessary condition that \frac{\partial u}{\partial t}=0 for all t,x_i. So, functions solving the heat equation must satisfy \Delta u=0, which means they are harmonic.

Intuitively, this is no surprise because any continuous function which satisfies the mean value property is harmonic. Still, temperature is a great example because it creates a familiar visual for a continuous harmonic function. Below is a picture of a steady state temperature distribution on a square metal plate of length \pi millimeters, where the bottom edge has temperature set to \pi ℃ and all other edges are set to 0℃.

[Image: steady state temperature distribution on the plate (jirka.org/diffyqs)]

Since in continuous settings the Dirichlet problem always has a unique solution for harmonic functions, there’s a unique temperature function which takes the designated values on the boundary and is harmonic in the interior of the region. That function is:

\begin{aligned} u(x,y)=\sum\limits_{n=0}^{\infty} \frac{4}{2n+1} \sin((2n+1) x) \left (\frac{\sinh((2n+1)(\pi-y))}{\sinh((2n+1) \pi)} \right ). \end{aligned}

(Here’s the derivation of that equation, but it’s too much Diff EQs for here). So the value of u at any point in the interior, like (1.5,0.5), can be calculated:

 \begin{aligned} u(1.5,0.5) = \sum\limits_{n=0}^{\infty} \frac{4}{2n+1} \sin((2n+1) 1.5) \left (\frac{\sinh((2n+1)(\pi-0.5))}{\sinh((2n+1) \pi)} \right ) \approx 2.17. \end{aligned}
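The same number falls out of a quick numerical check of the series, truncated to a finite partial sum (50 terms is plenty, since the \sinh ratio decays roughly like e^{-(2n+1)y}):

import math

def u(x, y, terms=50):
    return sum(
        4.0 / (2 * n + 1)
        * math.sin((2 * n + 1) * x)
        * math.sinh((2 * n + 1) * (math.pi - y)) / math.sinh((2 * n + 1) * math.pi)
        for n in range(terms)
    )

print(u(1.5, 0.5))  # approximately 2.17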

Voltage

An electrical network can easily be abstracted to a graph. Any point in the network where circuit elements meet is a vertex, and the wires (or other channels) are edges. Each wire has some amount of conductance. As the name suggests, conductance is a measure for how easy it is for electric charge to pass through the wire.

Edges in the graph are assigned nonnegative values representing the wires’ conductance relative to the total conductance emanating from the vertex. These values are the edge weights in the graph, and again they’re not necessarily symmetric. If c_{v,w} is the conductance of the edge between v and w and c_v=\sum_{x \in N(v)}c_{v,x} is the total conductance emanating from v, then \gamma(v,w)=\frac{c_{v,w}}{c_v} while \gamma(w,v)=\frac{c_{v,w}}{c_w}.

The ground node in a network is an arbitrary node in the circuit picked to have an electrical potential energy per charge of 0 volts. So, the voltage at a node is the difference in electrical potential energy per charge between that node and the ground node.

The picture below shows an electrical network after the positive and negative terminals of a battery have been attached to two nodes. In this network, the conductance for all the wires is the same, so \gamma(v,w)=\frac{1}{\deg(v)} for all edges (v,w). When the positive and negative terminals of a battery are attached to two nodes in a circuit, a flow of electric charge, called current, is created and carried by electrons or ions.

The vertices representing the nodes attached to the battery are the boundary vertices, and here the node with the negative terminal attached to it is chosen as the ground node. After the battery is attached, voltage is induced on the rest of the nodes. The voltage taken on by each node minimizes the total energy lost as heat in the system.

[Image: voltages on an electrical network (source: Simons Foundation)]

The reader can quickly double check that the function is harmonic at all the interior points, or equivalently that the mean value property holds. After a couple of extra facts about current, it’s easy to show that the voltage at a node is always the average of its neighbors’ voltages, making voltage a harmonic function (proof in appendix).

Wrap it up

At the end of the day, understanding harmonic functions makes you a better scientist (or better animation artist, apparently). If you want to understand phenomena like potential energy, data modeling, fluid flow, etc, then you’ve got to understand the math that controls it! Check out the appendix for proofs and extra facts that fell somewhere in the intersection of “things I didn’t think you’d care about” and “things I felt guilty leaving out”.

Appendix

There are tons of references with more information about discrete harmonic functions, but I’m specifically interested in those relating random walks and electrical networks. Here is a book whose 5th chapter presents some introductory material. Also this, from the Simons Foundation, and that, a paper from Benjamini and Lovász, are sources which dive deeper. The piece from the Simons Foundation further explains the relation between harmonic functions and minimizing the loss of energy in a network, like the voltage example with heat.

Again, here is an article about harmonic functions in animation.

These complex analysis modules have sections about temperature distribution and fluid flow. They’re a pretty good tool for all your complex analysis needs, including some sections on harmonic functions in complex analysis.

Harmonic \Longleftrightarrow mean value property

This is obvious for the discrete case, as the discrete Laplacian can be rewritten to show that the function’s value at a point is just the average of its neighbors. Given a continuous function, u, which satisfies the mean value property throughout a disk, there exists (by Poisson’s formula) a function, v, harmonic in the same disk and equal to u on the boundary. By uniqueness, u=v throughout the disk and u is harmonic in the disk. For anyone concerned about a region which isn’t a disk, the general fact is that the Dirichlet problem for harmonic functions always has a solution.

Proof that voltage is a harmonic function

Voltage satisfies the mean value property and is therefore harmonic. As before, c_{u,v} is the conductance of the edge between u and v, and c_u=\sum_{x \in N(u)}c_{u,x} is the total conductance emanating from u. Let p_{u,v}=\frac{c_{u,v}}{c_u} be the edge value on the edge from u to v in the graph.

Let i_{u,x} denote the current flowing through node u to node x, v_{u} denote the voltage at node u, c_{u,x} denote the conductance of the edge between u and x, and r_{u,x}=\frac{1}{c_{u,x}} denote the resistance of the edge between u and x. Ohm’s law relates current, resistance, and voltage by stating that

\begin{aligned} i_{u,x} = \frac{v_u-v_x}{r_{u,x}}=\left (v_u-v_x \right )c_{u,x} \end{aligned}.

Kirchhoff’s law states that the total current from a node is 0, so for any u in the network

\begin{aligned} \sum\limits_{x} i_{u,x}=0. \end{aligned}

Now, we’re ready to show that voltage is a harmonic function.

\begin{aligned} \sum\limits_{x} i_{u,x}=0 \qquad &\Longleftrightarrow \qquad \sum\limits_{x} \left ( v_u-v_x \right ) c_{u,x}=0 \\ \Longleftrightarrow \qquad  \sum\limits_{x}  v_u c_{u,x}= \sum\limits_{x} v_x  c_{u,x} \qquad &\Longleftrightarrow \qquad v_u c_u=\sum\limits_{x} v_x p_{u,x}c_{u}\\ &\Longleftrightarrow \qquad v_u=\sum\limits_{x} v_x p_{u,x}. \end{aligned}