*This is a guest post by my friend and colleague Adam Lelkes. Adam’s interests are in algebra and theoretical computer science. This gem came up because Adam gave a talk on probabilistic computation in which he discussed this technique.*

**Problem:** simulate a biased coin using a fair coin.

**Solution:** (in Python)

```python
def biasedCoin(binaryDigitStream, fairCoin):
    for d in binaryDigitStream:
        if fairCoin() != d:
            return d
```

**Discussion:** This function takes two arguments: an iterator representing the binary expansion of the intended probability of getting 1 (let us denote it by $p$), and a function that returns 1 or 0 with equal probability. At first glance this might seem like an overcomplicated way of solving the problem: why can't the probability $p$ simply be a floating point number?

The point is that $p$ can have infinite precision! Assuming that fairCoin() gives us a perfectly random stream of 1's and 0's (independently and with probability 1/2 each) and that we can read each bit of the binary expansion of $p$, this function returns 1 with probability *exactly* $p$, even if $p$ is irrational or a fraction with an infinite binary expansion. If we used floating point arithmetic, there would be a small chance we get unlucky and exhaust the precision available; we would only get an approximation of the true bias at best.

Now let us explain why this algorithm works. We keep tossing our fair coin to get a sequence of random bits, until one of our random bits differs from the corresponding bit in the binary expansion of $p$. If we stop after $i$ steps, that means the first $i-1$ bits in the two binary sequences were the same (which happens with probability $1/2^{i-1}$) and the $i$th bits differed (which happens with probability $1/2$), so we stop at step $i$ with probability $1/2^i$. Given that this happens, in the $i$th step we return the $i$th bit of $p$; let us denote this bit by $p_i$. Then the probability of returning 1 is $\sum_{i=1}^{\infty} \frac{p_i}{2^i}$, which is exactly the binary expansion of $p$.
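As a quick sanity check of this probability, take $p = 1/3$, whose binary expansion is $0.010101\ldots$, so $p_i = 1$ exactly when $i$ is even. Summing the geometric series,

$$\sum_{i=1}^{\infty} \frac{p_i}{2^i} = \sum_{j=1}^{\infty} \frac{1}{2^{2j}} = \frac{1/4}{1 - 1/4} = \frac{1}{3},$$

as intended.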

This algorithm is also efficient. By efficient here we mean that the expected running time is constant. Of course, to show this we need to make some assumption about the computational complexity of calculating the bits of $p$. If we assume that the bits of $p$ are efficiently computable, in the sense that the time required to compute $p_i$ is bounded by a polynomial in $i$, then this algorithm does run in constant expected time.

Indeed, if computing $p_i$ takes at most $i^k$ steps for some constant $k$, then since step $i$ is reached with probability $1/2^{i-1}$, the expected running time is at most $\sum_{i=1}^{\infty} \frac{i^k}{2^{i-1}}$. Showing that this sum is a constant is an easy calculus exercise: using the ratio test we get that

$$\lim_{i \to \infty} \frac{(i+1)^k/2^{i}}{i^k/2^{i-1}} = \frac{1}{2} \lim_{i \to \infty} \left(1 + \frac{1}{i}\right)^k = \frac{1}{2} < 1,$$

thus the series is convergent.
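It is worth noting how few fair tosses are actually needed: the algorithm stops at step $i$ with probability $1/2^i$, so the expected number of fair coin tosses is

$$\sum_{i=1}^{\infty} \frac{i}{2^i} = 2,$$

independently of $p$: on average, only two fair flips per biased flip.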

Now that we have proved that our algorithm works, it's time to try it! Let's say that we want to simulate a coin which gives "heads" with probability 1/3.

We need to construct our binary digit stream. Since 1/3 is 0.010101… in binary, we could use the following simple generator:

```python
def oneThird():
    while True:
        yield 0
        yield 1
```

However, we might want to have a more general generator that gives us the binary representation of any number. The following function, which takes a number between 0 and 1 as its argument, does the job:

```python
def binaryDigits(fraction):
    while True:
        fraction *= 2
        yield int(fraction)
        fraction = fraction % 1
```
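Note that with a float argument this generator is only accurate up to the float's 53-bit precision; passing a fractions.Fraction instead keeps the arithmetic exact. A small sketch (repeating binaryDigits from above) showing the exact digit stream of 1/3:

```python
from fractions import Fraction
from itertools import islice

def binaryDigits(fraction):
    while True:
        fraction *= 2
        yield int(fraction)
        fraction = fraction % 1

# With exact rational arithmetic, the period of 1/3 repeats forever.
digits = list(islice(binaryDigits(Fraction(1, 3)), 8))
print(digits)  # → [0, 1, 0, 1, 0, 1, 0, 1]
```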

We also need a fair coin simulator. For this simulation, let’s just use Python’s built-in pseudo-random number generator:

```python
import random

def fairCoin():
    return random.choice([0, 1])
```

Let us toss our biased coin 10000 times and take the sum. We expect the sum to be around 3333. Indeed, when I tried it:

```python
>>> sum(biasedCoin(oneThird(), fairCoin) for i in range(10000))
3330
```

It might be worth noting that oneThird() is approximately ten times faster than binaryDigits(fractions.Fraction(1, 3)), so when a large number of biased coin flips is needed, you can hardwire the binary representation of $p$ into the program.
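Hardwiring can be as simple as cycling the precomputed period with itertools.cycle. A self-contained sketch (the seed and sample size here are arbitrary demo choices):

```python
import itertools
import random

def biasedCoin(binaryDigitStream, fairCoin):
    for d in binaryDigitStream:
        if fairCoin() != d:
            return d

def fairCoin():
    return random.choice([0, 1])

def oneThird():
    # Hardwired periodic binary expansion 0.010101... of 1/3.
    return itertools.cycle([0, 1])

random.seed(0)  # arbitrary seed, just to make the demo reproducible
n = 30000
heads = sum(biasedCoin(oneThird(), fairCoin) for _ in range(n))
print(round(heads / n, 3))  # empirical frequency, close to 1/3
```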

Great post. My only concern is that this all works only when you can generate the binary digit stream. It looks like you’re aware that your process works only for rationals since you wrote binaryDigits(fraction) but I thought it might be worth pointing out explicitly. Of course, you should theoretically be able to do this with all computable numbers but then there’s the problem of generating the algorithm that determines the digit stream, a problem which is readily solved for rationals.
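For what it's worth, a digit stream for a concrete computable irrational such as $1/\sqrt{2}$ can be generated with exact integer arithmetic; this sketch, using Python's math.isqrt, is one illustrative way to do it:

```python
from math import isqrt
from itertools import islice

def invSqrt2Digits():
    # Binary digits of 1/sqrt(2) = 0.10110101..._2.
    # floor(2^i / sqrt(2)) = isqrt(2^(2i-1)), and bit i is the
    # difference between consecutive scaled floors.
    prev = 0
    i = 0
    while True:
        i += 1
        cur = isqrt(2 ** (2 * i - 1))  # floor(sqrt(1/2) * 2^i)
        yield cur - 2 * prev
        prev = cur

bits = list(islice(invSqrt2Digits(), 8))
print(bits)  # → [1, 0, 1, 1, 0, 1, 0, 1]
```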

Hi! Shouldn't the probability of returning 1 be $\sum_i p_i/2^i$, not $\sum_i p_i/2^{i-1}$?

vladimirav is right, there’s an off-by-one error there. It still adds up nicely, because the probability of stopping on ith bit is in fact 1/2^i: it’s 1/2^(i-1) for matching the first i-1 bits and then 1/2 for the mismatch at ith bit.

Overall, great article!

Great article. The only thing I couldn't understand was where you say that the probability of returning 1 is $\sum_{i=1}^\infty \frac{p_i}{2^{i-1}}$.

How did you come up with that? How is the probability of returning the 0th, 1st, 2nd, etc. bit of $p$ related to the probability of returning 1?
