*This is a guest post by my friend and colleague Adam Lelkes. Adam’s interests are in algebra and theoretical computer science. This gem came up because Adam gave a talk on probabilistic computation in which he discussed this technique.*

**Problem:** simulate a biased coin using a fair coin.

**Solution:** (in Python)

```python
def biasedCoin(binaryDigitStream, fairCoin):
    for d in binaryDigitStream:
        if fairCoin() != d:
            return d
```

**Discussion:** This function takes two arguments: an iterator representing the binary expansion of the intended probability of getting 1 (let us denote it by $p$), and a function that returns 1 or 0 with equal probability. At first glance this might seem like an overcomplicated way of solving the problem: why can’t the probability simply be a floating point number?

The point is that $p$ can have infinite precision! Assuming that fairCoin() gives us a perfectly random stream of 1’s and 0’s (independently and each with probability 1/2), and that we can read each bit of the binary expansion of $p$, this function returns 1 with probability *exactly* $p$, even if $p$ is irrational or a fraction with an infinite binary expansion. If we used floating point arithmetic, there would be a small chance we get unlucky and exhaust the available precision; we would only get an approximation of the true bias at best.

Now let us explain why this algorithm works. We keep tossing our fair coin to get a sequence of random bits, until one of our random bits differs from the corresponding bit in the binary expansion of $p$. If we stop after $i$ steps, that means the first $i-1$ bits of the two sequences were the same, which happens with probability $\frac{1}{2^{i-1}}$. Given that this happens, in the $i$th step the fair coin disagrees with the $i$th bit of $p$ with probability $\frac{1}{2}$, and we return that bit; let us denote it by $b_i$. So the algorithm stops at step $i$ and returns 1 with probability $\frac{b_i}{2^i}$, and the total probability of returning 1 is $\sum_{i=1}^{\infty} \frac{b_i}{2^i}$, which is exactly the binary expansion of $p$.
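As a quick sanity check on this sum, we can evaluate it with exact rational arithmetic on a finite prefix of the bits of 1/3 (the helper name `returnOneProbability` is mine, not from the post); the truncated sum is within $2^{-n}$ of the target bias:

```python
from fractions import Fraction

def returnOneProbability(bits):
    # P(the algorithm stops at step i and returns 1) is b_i / 2^i;
    # summing over a finite prefix truncates the series.
    return sum(Fraction(b, 2 ** (i + 1)) for i, b in enumerate(bits))

# first ten bits of 1/3 = 0.0101010101... in binary
prefix = [0, 1] * 5
p = returnOneProbability(prefix)
# p is 341/1024, within 2^-10 of 1/3
```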

This algorithm is also efficient; by efficient here we mean that its expected running time is constant. Of course, to show this we need some assumption about the computational complexity of calculating the bits of $p$. If the bits of $p$ are efficiently computable, in the sense that the time required to compute $b_i$ is bounded by a polynomial in $i$, say $i^k$, then this algorithm does run in constant expected time.

Indeed, the expected running time is then at most $\sum_{i=1}^{\infty} \frac{i^k}{2^i}$. Showing that this sum is a constant is an easy calculus exercise: applying the ratio test we get

$$\lim_{i \to \infty} \frac{(i+1)^k / 2^{i+1}}{i^k / 2^i} = \frac{1}{2} \lim_{i \to \infty} \left(1 + \frac{1}{i}\right)^k = \frac{1}{2} < 1,$$

thus the series is convergent.
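We can also observe the constant expected running time empirically. When the bits are free to compute, the number of flips per call is geometrically distributed with mean $\sum_{i=1}^{\infty} \frac{i}{2^i} = 2$; the sketch below (the counting wrapper is my own addition) counts flips over many calls:

```python
import random

def biasedCoin(binaryDigitStream, fairCoin):
    for d in binaryDigitStream:
        if fairCoin() != d:
            return d

def oneThird():
    # the binary expansion of 1/3 is 0.010101...
    while True:
        yield 0
        yield 1

def makeCountingCoin():
    # wraps the fair coin so we can count how many flips each call consumes
    counter = [0]
    def coin():
        counter[0] += 1
        return random.choice([0, 1])
    return coin, counter

coin, counter = makeCountingCoin()
trials = 100000
for _ in range(trials):
    biasedCoin(oneThird(), coin)

# the average number of flips per call should be close to 2
averageFlips = counter[0] / trials
```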

Now that we have proved that our algorithm works, it’s time to try it! Let’s say that we want to simulate a coin which gives “heads” with probability 1/3.

We need to construct our binary digit stream. Since 1/3 is 0.010101… in binary, we could use the following simple generator:

```python
def oneThird():
    while True:
        yield 0
        yield 1
```

However, we might want to have a more general generator that gives us the binary representation of any number. The following function, which takes a number between 0 and 1 as its argument, does the job:

```python
def binaryDigits(fraction):
    while True:
        fraction *= 2
        yield int(fraction)
        fraction = fraction % 1
```
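Note that the argument’s type matters here: passing a `fractions.Fraction` keeps the arithmetic exact, so the stream reproduces the true expansion indefinitely, while a Python float carries only about 53 significant bits, illustrating the precision issue discussed earlier. A small sketch of the contrast:

```python
from fractions import Fraction
from itertools import islice

def binaryDigits(fraction):
    while True:
        fraction *= 2
        yield int(fraction)
        fraction = fraction % 1

# exact rational arithmetic reproduces 0.010101... forever
exact = list(islice(binaryDigits(Fraction(1, 3)), 60))
# a float holds ~53 significant bits, so its stream eventually degrades
approx = list(islice(binaryDigits(1 / 3), 60))
```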

We also need a fair coin simulator. For this simulation, let’s just use Python’s built-in pseudo-random number generator:

```python
import random

def fairCoin():
    return random.choice([0, 1])
```

Let us toss our biased coin 10000 times and take the sum. We expect the sum to be around 3333. Indeed, when I tried

```python
>>> sum(biasedCoin(oneThird(), fairCoin) for i in range(10000))
3330
```

It might be worth noting that oneThird() is approximately ten times faster than binaryDigits(fractions.Fraction(1, 3)), so when a large number of biased coins is needed, it can pay to hardwire the binary representation of $p$ into the program.
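For completeness, the same experiment can be run with the general generator and exact rationals (assembling the pieces above into one self-contained script); the sum should again land near 3333:

```python
import random
from fractions import Fraction

def biasedCoin(binaryDigitStream, fairCoin):
    for d in binaryDigitStream:
        if fairCoin() != d:
            return d

def binaryDigits(fraction):
    while True:
        fraction *= 2
        yield int(fraction)
        fraction = fraction % 1

def fairCoin():
    return random.choice([0, 1])

# 10000 tosses of a coin with exact bias 1/3
total = sum(biasedCoin(binaryDigits(Fraction(1, 3)), fairCoin)
            for _ in range(10000))
```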