In this article we’ll implement a global optimization pass, and show how to use the dataflow analysis framework to verify the results of our optimization.
The code for this article is in this pull request, and as usual the commits are organized to be read in order.
The noisy arithmetic problem
This demonstration is based on a simplified model of computation relevant to the HEIR project. You don’t need to be familiar with that project to follow this article, but if you’re wondering why someone would ever want the kind of optimization I’m going to write, that project is why.
The basic model is “noisy integer arithmetic.” That is, a program can have integer types of bounded width, and each integer is associated with some unknown (but bounded with high probability) symmetric random noise. You can imagine the “real” integer being the top 5 bits of a 32-bit integer, and the bottom 27 bits storing the noise. When a new integer is created, it magically has a random signed 12-bit integer added to it. When you apply operations to combine integers, the noise grows. Adding two integers adds their noise, and at worst you get one more bit of noise. Scaling an integer by a statically-known constant scales the noise by a constant. Multiplying two integers multiplies their noise values, and you get twice the bits of noise. As long as your program stays below 27 bits of noise, you can still “recover” the original 5-bit integer at the end of the program. Such a program is called legal, and otherwise, the output is random junk and the program is called illegal.
Finally, there is an expensive operation called reduce_noise that can explicitly reduce the noise of a noisy integer back to the base level of 12 bits. This operation has a statically known cost relative to the standard operations.
Note that starting from two noisy integers, each with 12 bits of noise, a single multiplication op brings you jarringly close to ruin. You would have at most 24 bits of noise, which is close to the maximum of 26. But the input IR we start with may have arbitrary computations that do not respect the noise limits. The goal of our optimization is to rewrite an MLIR region so that the noisy integer math never exceeds the noise limit at any given step.
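To make the bookkeeping concrete, here is a minimal standalone sketch of the noise rules above (the function and constant names are mine, not part of the noisy dialect): fresh values carry 12 bits, addition costs at most one extra bit, multiplication adds the bit counts, and reduce_noise resets to the base level.

```cpp
#include <algorithm>

// Noise bookkeeping for the model described above. A value is "legal"
// as long as its noise stays at or below the 26-bit maximum.
constexpr int kInitialNoise = 12;
constexpr int kMaxNoise = 26;

int encodeNoise() { return kInitialNoise; }

// Adding two integers yields at worst one more bit than the noisier input.
int addNoise(int lhs, int rhs) { return std::max(lhs, rhs) + 1; }

// Multiplying two integers multiplies the noise values, so the bit
// counts add.
int mulNoise(int lhs, int rhs) { return lhs + rhs; }

// reduce_noise resets a value back to the base noise level.
int reduceNoise(int) { return kInitialNoise; }

bool isLegal(int noise) { return noise <= kMaxNoise; }
```

Chaining two multiplications without an intervening reduce_noise already breaks the limit: two fresh values multiply to 24 bits, which is legal, but multiplying that result by itself gives 48 bits, which is not.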
A trivial way to do that would be to insert reduce_noise ops greedily, whenever an op would bring the noise of a value too high. Inserting such ops may be necessary, but a more suitable goal would be to minimize the overall cost of the program, subject to the constraint that the noisy arithmetic is legal. One could do this by inserting reduce_noise ops more strategically, by rewriting the program to reduce the need for reduce_noise ops, or both. We’ll focus on the former: finding the best places to insert reduce_noise ops without rewriting the rest of the program.
The noisy dialect
We previously wrote about defining a new dialect, and the noisy dialect we created for this article has little new to show. This commit defines the types and ops, hard-coding the 32-bit width and 5-bit semantic input type for simplicity, as well as the values of 12 bits of initial noise and 26 bits of max noise.
Note that the noise bound is not expressed on the type as an attribute. If it were, we’d run into a few problems: first, whenever you insert a reduce_noise op, you’d have to update the types on all the downstream ops. Second, it would prevent you from expressing control flow, since the noise bound cannot be statically inferred from the source code when there are two possible paths that could result in different noise values.
So instead, we need a way to compute the noise values, associate them with each SSA value, and account for control flow. This is what an analysis pass is designed to do.
An analysis pass is just a class
The typical use of an analysis pass is to construct a data structure that encodes global information about a program, which can then be re-used during different parts of a pass. I imagined there would be more infrastructure around analysis passes in MLIR, but it’s quite simple. You define a C++ class with a constructor that takes an Operation *, and construct it basically whenever you need it. The only infrastructure for it involves storing and caching the constructed analysis within a pass, and understanding when an analysis needs to be recomputed (always between passes, by default).
By way of example, over at the HEIR project I made a simple analysis that chooses a unique variable name for every SSA value in a program, which I then used to generate code in an output language that needed variable names.
For this article we’ll see two analysis passes. One will formulate and solve the optimization problem that decides where to insert reduce_noise operations. This will be one of the “class that does anything” kind of analysis pass. The other analysis pass will rely on MLIR’s data flow analysis framework to propagate the noise model through the IR. This one will actually not require us to write an analysis from scratch, but instead will be implemented by means of the existing IntegerRangeAnalysis, which only requires us to implement an interface on each op that describes how the op affects the noise. This will be used in our pass to verify that the inserted reduce_noise operations ensure, if nothing else, that the noise never exceeds the maximum allowable noise.
We’ll start with the data flow analysis.
Reusing IntegerRangeAnalysis
Data flow analysis is a classical static analysis technique for propagating information through a program’s IR. It is one part of Frances Allen’s Turing Award. This article gives a good introduction and additional details, but I will paraphrase it briefly here in the context of IntegerRangeAnalysis.
The basic idea is that you want to get information about what possible values an integer-typed value can have at any point in a given program. If you see x = 7, then you know exactly what x is. If you see something like
func.func @fn(%x : i8) -> i32 {
  %1 = arith.extsi %x : i8 to i32
  %2 = arith.addi %1, %1 : i32
  return %2 : i32
}
then you know that %2 can be at most a signed 9-bit integer, because it started as an 8-bit integer, and adding two such integers together can’t fill up more than one extra bit.
In such cases, one can find optimizations, like the int-range-optimizations pass in MLIR, which looks at comparison ops arith.cmpi and determines if it can replace them with constants. It does this by looking at the integer range analysis for the two operands. E.g., given the op x > y, if you know y’s maximum value is less than x’s minimum value, then you can replace it with a constant true.
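Sketching that check in plain C++ (the Range struct and function name here are mine, for illustration): if the two operands’ ranges don’t overlap, the comparison folds to a constant; otherwise it must be left alone.

```cpp
#include <cstdint>
#include <optional>

struct Range {
  int64_t min, max;  // inclusive bounds from the range analysis
};

// Fold x > y to a constant when the ranges allow it, mirroring the
// kind of check int-range-optimizations performs.
std::optional<bool> foldGreaterThan(Range x, Range y) {
  if (x.min > y.max) return true;   // x is always strictly bigger
  if (x.max <= y.min) return false; // x is never bigger
  return std::nullopt;              // ranges overlap: leave the cmpi alone
}
```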
Computing the data flow analysis requires two ingredients called a transfer function and a join operation. The transfer function describes what the output integer range should be for a given op and a given set of input integer ranges. This can be an arbitrary function. The join operation describes how to combine two or more integer ranges when you get to a point in the program at which different branches of control flow merge. For example,
def fn(branch):
    x = 7
    if branch:
        y = x * x
    else:
        y = 2 * x
    return y
The value of y just before returning cannot be known exactly, but in one branch you know it’s 14, and in another it’s 49. So the final value of y could be estimated as being in the range [14, 49]. Here the join function computes the smallest integer range containing both estimates. [Aside: it could instead use the set {14, 49} to be more precise, but that is not what IntegerRangeAnalysis happens to do.]
In order for a data flow analysis to work properly, the values being propagated and the join function must together form a semilattice: a partially-ordered set in which every two elements have a least upper bound, that upper bound is computed by join, and join itself is associative, commutative, and idempotent. For the analysis to terminate, the semilattice should also have finite height, i.e., no infinite ascending chains. This is often expressed by having distinct “top” and “bottom” elements as defaults. “Top” represents “could be anything,” and sometimes expresses that a more precise bound would be too computationally expensive to continue to track. “Bottom” usually represents an uninitialized value.
Once you have this, then MLIR provides a general algorithm to propagate values through the IR via a technique called Kildall’s method, which iteratively updates the SSA values, applying the transfer function and joining at the merging of control flow paths, until the process reaches a fixed point.
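Using the branching example above, here is a toy C++ version of such a lattice and its join (the types and names are mine, not MLIR’s): an inclusive interval plus an uninitialized “bottom” state, where join computes the smallest interval containing both inputs.

```cpp
#include <algorithm>
#include <cstdint>

// A toy interval lattice. `initialized == false` is the bottom element.
struct Interval {
  bool initialized = false;
  int64_t min = 0, max = 0;
};

// Join: the least upper bound, i.e., the smallest interval containing
// both inputs. Joining with bottom returns the other operand.
Interval join(Interval a, Interval b) {
  if (!a.initialized) return b;
  if (!b.initialized) return a;
  return {true, std::min(a.min, b.min), std::max(a.max, b.max)};
}
```

Joining the two branch estimates [14, 14] and [49, 49] gives [14, 49], matching the estimate above; it is easy to check join is commutative, associative, and idempotent on these values.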
Here are MLIR’s official docs on dataflow analysis, and here is the RFC where the current data flow solver framework was introduced. In our situation, we want to use the solver framework with the existing IntegerRangeAnalysis, which only asks that we implement the transfer function by implementing InferIntRangeInterface on our ops.
This commit does just that. This requires adding DeclareOpInterfaceMethods<InferIntRangeInterface> to all relevant ops, which in turn generates function declarations for
void MyOp::inferResultRanges(
ArrayRef<ConstantIntRanges> inputRanges, SetIntRangeFn setResultRange);
The ConstantIntRanges is a dataclass holding a min and max integer value. inputRanges represents the known bounds on the inputs to the operation in question, and SetIntRangeFn is the callback used to produce the result.
For example, for AddOp we can implement it as
ConstantIntRanges unionPlusOne(ArrayRef<ConstantIntRanges> inputRanges) {
  auto lhsRange = inputRanges[0];
  auto rhsRange = inputRanges[1];
  auto joined = lhsRange.rangeUnion(rhsRange);
  return ConstantIntRanges::fromUnsigned(joined.umin(), joined.umax() + 1);
}

void AddOp::inferResultRanges(ArrayRef<ConstantIntRanges> inputRanges,
                              SetIntRangeFn setResultRange) {
  setResultRange(getResult(), unionPlusOne(inputRanges));
}
A MulOp is similarly implemented by summing the maxes. Meanwhile, EncodeOp and ReduceNoiseOp each set the initial range to [0, 12]. So the min will always be zero, and we really only care about the max.
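As a standalone sanity check, here are plain-C++ stand-ins for those three transfer functions (not the MLIR types; the mul min of zero is my assumption, consistent with the note above that the min is always zero): add takes the union of the inputs and allows one extra bit (+1 on the max), mul sums the maxes, and encode/reduce_noise reset to [0, 12].

```cpp
#include <algorithm>
#include <cstdint>

struct Range {
  uint64_t umin, umax;
};

// Add: union of the input ranges, plus one extra bit of noise on the max.
Range addTransfer(Range lhs, Range rhs) {
  return {std::min(lhs.umin, rhs.umin), std::max(lhs.umax, rhs.umax) + 1};
}

// Mul: the noise bit counts add, so the maxes sum.
Range mulTransfer(Range lhs, Range rhs) {
  return {0, lhs.umax + rhs.umax};
}

// Encode and reduce_noise both produce a fresh value at the base noise.
Range encodeTransfer() { return {0, 12}; }
```

Two fresh inputs through add give [0, 13], while through mul they give [0, 24], matching the debug traces shown later.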
The next commit defines an empty pass that will contain our analyses and optimizations, and this commit shows how the integer range analysis is used to validate an IR’s noise growth. In short, you load the IntegerRangeAnalysis and its dependent DeadCodeAnalysis, run the solver, and then walk the IR, asking the solver via lookupState to give the resulting value range for each op’s result, and comparing it against the maximum.
void runOnOperation() {
  Operation *module = getOperation();
  DataFlowSolver solver;
  solver.load<dataflow::DeadCodeAnalysis>();
  solver.load<dataflow::IntegerRangeAnalysis>();
  if (failed(solver.initializeAndRun(module)))
    signalPassFailure();
auto result = module->walk([&](Operation *op) {
if (!llvm::isa<noisy::AddOp, noisy::SubOp, noisy::MulOp,
noisy::ReduceNoiseOp>(*op)) {
return WalkResult::advance();
}
const dataflow::IntegerValueRangeLattice *opRange =
solver.lookupState<dataflow::IntegerValueRangeLattice>(
op->getResult(0));
if (!opRange || opRange->getValue().isUninitialized()) {
op->emitOpError()
<< "Found op without a set integer range; did the analysis fail?";
return WalkResult::interrupt();
}
ConstantIntRanges range = opRange->getValue().getValue();
if (range.umax().getZExtValue() > MAX_NOISE) {
op->emitOpError() << "Found op after which the noise exceeds the "
"allowable maximum of "
<< MAX_NOISE
<< "; it was: " << range.umax().getZExtValue()
<< "\n";
return WalkResult::interrupt();
}
return WalkResult::advance();
});
  if (result.wasInterrupted())
    signalPassFailure();
}
Finally, in this commit we add a test that exercises it:
func.func @test_op_syntax() -> i5 {
%0 = arith.constant 3 : i5
%1 = arith.constant 4 : i5
%2 = noisy.encode %0 : i5 -> !noisy.i32
%3 = noisy.encode %1 : i5 -> !noisy.i32
%4 = noisy.mul %2, %3 : !noisy.i32
%5 = noisy.mul %4, %4 : !noisy.i32
%6 = noisy.mul %5, %5 : !noisy.i32
%7 = noisy.mul %6, %6 : !noisy.i32
%8 = noisy.decode %7 : !noisy.i32 -> i5
return %8 : i5
}
Running tutorial-opt --noisy-reduce-noise on this file produces the following error:
error: 'noisy.mul' op Found op after which the noise exceeds the allowable maximum of 26; it was: 48
%5 = noisy.mul %4, %4 : !noisy.i32
^
mlir-tutorial/tests/noisy_reduce_noise.mlir:11:8: note: see current operation: %5 = "noisy.mul"(%4, %4) : (!noisy.i32, !noisy.i32) -> !noisy.i32
And if you run in debug mode with --debug --debug-only=int-range-analysis, you will see the per-op propagations printed to the terminal:
$ bazel run tools:tutorial-opt -- --noisy-reduce-noise-optimizer $PWD/tests/noisy_reduce_noise.mlir --debug --debug-only=int-range-analysis
Inferring ranges for %c3_i5 = arith.constant 3 : i5
Inferred range unsigned : [3, 3] signed : [3, 3]
Inferring ranges for %c4_i5 = arith.constant 4 : i5
Inferred range unsigned : [4, 4] signed : [4, 4]
Inferring ranges for %0 = noisy.encode %c3_i5 : i5 -> !noisy.i32
Inferred range unsigned : [0, 12] signed : [0, 12]
Inferring ranges for %1 = noisy.encode %c4_i5 : i5 -> !noisy.i32
Inferred range unsigned : [0, 12] signed : [0, 12]
Inferring ranges for %2 = noisy.mul %0, %1 : !noisy.i32
Inferred range unsigned : [0, 24] signed : [0, 24]
Inferring ranges for %3 = noisy.mul %2, %2 : !noisy.i32
Inferred range unsigned : [0, 48] signed : [0, 48]
Inferring ranges for %4 = noisy.mul %3, %3 : !noisy.i32
Inferred range unsigned : [0, 96] signed : [0, 96]
Inferring ranges for %5 = noisy.mul %4, %4 : !noisy.i32
Inferred range unsigned : [0, 192] signed : [0, 192]
As a quick aside, there was one minor upstream problem preventing me from reusing IntegerRangeAnalysis, which I patched in https://github.com/llvm/llvm-project/pull/72007. This means I also had to update the LLVM commit hash used by this project in this commit.
An ILP optimization pass
Next, we build an analysis that solves a global optimization problem to insert reduce_noise ops efficiently. As mentioned earlier, this is a “do anything” kind of analysis, so we put all of the logic into the analysis’s constructor.

[Aside: I wouldn’t normally do this, because constructors don’t have return values so it’s hard to signal failure; but the API for the analysis specifies that the constructor takes as input the Operation * to analyze, and I would expect any properly constructed object to be “ready to use.” Maybe someone who knows C++ better will comment and shed some wisdom for me.]
This commit sets up the analysis shell and interface.
class ReduceNoiseAnalysis {
 public:
  ReduceNoiseAnalysis(Operation *op);
  ~ReduceNoiseAnalysis() = default;

  /// Return true if a reduce_noise op should be inserted after the given
  /// operation, according to the solution to the optimization problem.
  bool shouldInsertReduceNoise(Operation *op) const {
    return solution.lookup(op);
  }

 private:
  llvm::DenseMap<Operation *, bool> solution;
};
This commit adds a workspace dependency on Google’s or-tools package (“OR” stands for Operations Research here, a.k.a. discrete optimization), which comes bundled with a number of nice solvers, and an API for formulating optimization problems. And this commit implements the actual solver model.
Now this model is quite a bit of code, and this article is not the best place to give a full-fledged introduction to linear programming, modeling techniques, or the OR-tools API. What I’ll do instead is explain the model in detail here, and give a few small notes on how that translates to the OR-tools C++ API. If you want a gentler background on linear programming, see my article series about diet optimization (part 1, part 2).
All linear programs specify a linear function as an objective to minimize, along with a set of linear equalities and inequalities that constrain the solution. In a standard linear program, the variables must be continuously valued. In a mixed-integer linear program, some of those variables are allowed to be discrete integers, which, it turns out, makes it possible to solve many more problems, but requires completely different optimization techniques and may result in exponentially slow runtime. So many techniques in operations research relate to modeling a problem in such a way that the number of integer variables is relatively small.
Our linear model starts by defining some basic variables. Some variables in the model represent “decisions” that we can make, and others represent “state” that reacts to the decisions via constraints.
- For each operation $x$, a $\{0, 1\}$-valued variable $\textup{InsertReduceNoise}_x$. Such a variable is 1 if and only if we insert a reduce_noise op after the operation $x$.
- For each SSA value $v$ that is an input or output of a noisy op, a continuous-valued variable $\textup{NoiseAt}_v$. This represents the upper bound of the noise at value $v$.

In particular, the solver’s performance will get worse as the number of binary variables increases, which in this case corresponds to the number of noisy ops.
The objective function, with a small caveat explained later, is simply the sum of the decision variables, and we’d like to minimize it. Each reduce_noise op is considered equally expensive, and there is no special nuance here about scheduling them in parallel or in serial.
Next, we add constraints. First, $0 \leq \textup{NoiseAt}_v \leq 26$, which asserts that no SSA value can exceed the max noise. Second, we need to enforce that an encode op fixes the noise of its output to 12, i.e., for each encode op $x$ we add the constraint $\textup{NoiseAt}_{\textup{result}(x)} = 12$.
Finally, we need constraints that say that if you choose to insert a reduce_noise op, then the noise is reset to 12; otherwise it is set to the appropriate function of the inputs. This is where the modeling gets a little tricky, but multiplication is easier, so let’s start there.
Fix a multiplication op $x$, its two input SSA values $\textup{LHS}, \textup{RHS}$, and its output $\textup{RES}$. As a piecewise function, we want a constraint like:
\[ \textup{NoiseAt}_\textup{RES} = \begin{cases} \textup{NoiseAt}_{LHS} + \textup{NoiseAt}_{RHS} & \text{ if } \textup{InsertReduceNoise}_x = 0 \\ 12 & \text{ if } \textup{InsertReduceNoise}_x = 1 \end{cases} \]

This isn’t linear, but we can combine the two branches to
\[ \begin{aligned} \textup{NoiseAt}_\textup{RES} &= (1 - \textup{InsertReduceNoise}_x)(\textup{NoiseAt}_{LHS} + \textup{NoiseAt}_{RHS}) \\ &\quad + 12 \, \textup{InsertReduceNoise}_x \end{aligned} \]

This does the classic trick of using a bit as a controlled multiplexer, but it’s still not linear. We can make it linear, however, by replacing this one constraint with four constraints, using an auxiliary constant $C=100$ that we know is larger than the possible range of values that the $\textup{NoiseAt}_v$ variables can attain. Those four linear constraints are:
\[ \begin{aligned} \textup{NoiseAt}_\textup{RES} &\geq 12 \, \textup{InsertReduceNoise}_x \\ \textup{NoiseAt}_\textup{RES} &\leq 12 + C(1 - \textup{InsertReduceNoise}_x) \\ \textup{NoiseAt}_\textup{RES} &\geq (\textup{NoiseAt}_{LHS} + \textup{NoiseAt}_{RHS}) - C \, \textup{InsertReduceNoise}_x \\ \textup{NoiseAt}_\textup{RES} &\leq (\textup{NoiseAt}_{LHS} + \textup{NoiseAt}_{RHS}) + C \, \textup{InsertReduceNoise}_x \end{aligned} \]

Setting the decision variable to zero makes the first two constraints trivially satisfied, while setting it to 1 makes them together equivalent to $\textup{NoiseAt}_\textup{RES} = 12$. Likewise, the last two constraints are trivial when the decision variable is 1, and force the output noise to equal the sum of the two input noises when it is zero.
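One can verify the multiplexer behavior of the four constraints mechanically. This small sketch (names mine) checks whether a candidate value for $\textup{NoiseAt}_\textup{RES}$ satisfies all four inequalities with $C=100$:

```cpp
// Check a candidate NoiseAt_RES value against the four linearized
// constraints for a mul op, with big-M constant C = 100. `insert` is
// the 0/1 decision variable InsertReduceNoise_x.
bool feasible(double res, double lhs, double rhs, int insert) {
  const double C = 100;
  return res >= 12.0 * insert &&                 // forces res >= 12 when insert = 1
         res <= 12.0 + C * (1 - insert) &&       // forces res <= 12 when insert = 1
         res >= (lhs + rhs) - C * insert &&      // forces res >= lhs+rhs when insert = 0
         res <= (lhs + rhs) + C * insert;        // forces res <= lhs+rhs when insert = 0
}
```

With insert = 1 only res = 12 is feasible; with insert = 0 only res = lhs + rhs is, exactly reproducing the piecewise definition.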
The addition op is handled similarly, except that the term $(\textup{NoiseAt}_{LHS} + \textup{NoiseAt}_{RHS})$ is replaced by something non-linear, namely $1 + \max(\textup{NoiseAt}_{LHS}, \textup{NoiseAt}_{RHS})$. We can still handle that, but it requires an extra modeling trick. We introduce a new variable $Z_x$ for each add op $x$, and two constraints:

\[ \begin{aligned} Z_x &\geq 1 + \textup{NoiseAt}_{LHS} \\ Z_x &\geq 1 + \textup{NoiseAt}_{RHS} \end{aligned} \]

Together these ensure that $Z_x$ is at least 1 plus the max of the two input noises, but they don’t force equality. To achieve that, we add each $Z_x$ to the minimization objective (alongside the sum of the decision variables) with a small penalty to ensure the solver tries to minimize them. Since they have trivially minimal values equal to “1 plus the max,” the solver will have no trouble optimizing them, and this becomes an effective equality constraint.
[Aside: Whenever you do this trick, you have to convince yourself that the solver won’t somehow trade a larger $Z_x$ against lower values of other objective terms, producing a lower overall objective value. Solvers are mischievous and cannot be trusted. In our case, there is no risk: increasing $Z_x$ above its minimum value would only increase the noise propagated through add ops, meaning the solver would have to compensate by potentially adding even more reduce_noise ops!]
Then, the constraint for an add op uses $Z_x$ in place of the $(\textup{NoiseAt}_{LHS} + \textup{NoiseAt}_{RHS})$ term used for the mul op.
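To see why the penalty suffices, note that the smallest value satisfying both $Z$ constraints is exactly 1 plus the max of the two input noises; nothing smaller is feasible. A quick sketch (function names mine):

```cpp
#include <algorithm>

// The two constraints on an add op's auxiliary variable:
// Z >= 1 + lhs and Z >= 1 + rhs.
bool zFeasible(double z, double lhs, double rhs) {
  return z >= 1 + lhs && z >= 1 + rhs;
}

// The smallest feasible Z, which the small objective penalty drives
// the solver toward: 1 plus the max of the input noises.
double minimalZ(double lhs, double rhs) {
  return 1 + std::max(lhs, rhs);
}
```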
The only other minor aspect of this solver model: the constraints above enforce consistency of the noise propagation after a reduce_noise op may be inserted, but when one is inserted, they don’t bound the noise of the op’s own output before it is fed into reduce_noise. We can achieve this by adding new constraints expressing $(\textup{NoiseAt}_{LHS} + \textup{NoiseAt}_{RHS}) \leq 26$ and $Z_x \leq 26$ for multiplication and addition ops, respectively.
When converting this to the OR-tools C++ API, as we did in this commit, a few minor things to note:
- You can specify upper and lower bounds on a variable at variable creation time, rather than as separate constraints. You’ll see this in solver->MakeNumVar(min, max, name).
- Constraints must be specified in the form min <= expr <= max, where min and max are constants and expr is a linear combination of variables, meaning that one has to manually re-arrange and simplify all the equations above so the variables are all on one side and the constants on the other. (The OR-tools Python API is more expressive, but we don’t have it here.)
- The constraints and the objective are specified by SetCoefficient, which sets the coefficient of a variable in a linear combination one at a time.
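For instance, the second mul constraint above, $\textup{NoiseAt}_\textup{RES} \leq 12 + C(1 - \textup{InsertReduceNoise}_x)$, rearranges with $C=100$ to $\textup{NoiseAt}_\textup{RES} + 100 \cdot \textup{InsertReduceNoise}_x \leq 112$: one row with coefficients 1 and 100 and bounds $(-\infty, 112]$. A small OR-tools-free sketch of that row form (struct and names are mine, purely for illustration):

```cpp
#include <limits>
#include <map>
#include <string>

// A constraint in the `min <= expr <= max` row form the OR-tools C++
// API expects: constant bounds around a linear combination of variables.
struct Row {
  double lb, ub;
  std::map<std::string, double> coeffs;  // variable name -> coefficient
};

// NoiseAt_RES <= 12 + C(1 - b) with C = 100, rearranged so all the
// variables are on one side: NoiseAt_RES + 100*b <= 112.
Row mulUpperBoundRow() {
  return {-std::numeric_limits<double>::infinity(), 112.0,
          {{"NoiseAt_RES", 1.0}, {"InsertReduceNoise", 100.0}}};
}

// Evaluate the row's linear combination at an assignment and check the
// bounds, the way a solver would check feasibility.
bool satisfies(const Row &row,
               const std::map<std::string, double> &assignment) {
  double expr = 0;
  for (const auto &[name, coef] : row.coeffs) {
    expr += coef * assignment.at(name);
  }
  return row.lb <= expr && expr <= row.ub;
}
```

With the decision variable set to 1, only NoiseAt_RES values up to 12 satisfy the row, exactly as the un-rearranged constraint demands.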
Finally, this commit implements the part of the pass that uses the solver’s output to insert new reduce_noise ops. And this commit adds some more tests.
An example of its use:
// This test checks that the solver can find a single insertion point
// for a reduce_noise op that handles two branches, each of which would
// also need a reduce_noise op if handled separately.
func.func @test_single_insertion_branching() -> i5 {
%0 = arith.constant 3 : i5
%1 = arith.constant 4 : i5
%2 = noisy.encode %0 : i5 -> !noisy.i32
%3 = noisy.encode %1 : i5 -> !noisy.i32
// Noise: 12
%4 = noisy.mul %2, %3 : !noisy.i32
// Noise: 24
// branch 1
%b1 = noisy.add %4, %3 : !noisy.i32
// Noise: 25
%b2 = noisy.add %b1, %3 : !noisy.i32
// Noise: 26
%b3 = noisy.add %b2, %3 : !noisy.i32
// Noise: 27
%b4 = noisy.add %b3, %3 : !noisy.i32
// Noise: 28
// branch 2
%c1 = noisy.sub %4, %2 : !noisy.i32
// Noise: 25
%c2 = noisy.sub %c1, %3 : !noisy.i32
// Noise: 26
%c3 = noisy.sub %c2, %3 : !noisy.i32
// Noise: 27
%c4 = noisy.sub %c3, %3 : !noisy.i32
// Noise: 28
%x1 = noisy.decode %b4 : !noisy.i32 -> i5
%x2 = noisy.decode %c4 : !noisy.i32 -> i5
%x3 = arith.addi %x1, %x2 : i5
return %x3 : i5
}
And the output:
func.func @test_single_insertion_branching() -> i5 {
%c3_i5 = arith.constant 3 : i5
%c4_i5 = arith.constant 4 : i5
%0 = noisy.encode %c3_i5 : i5 -> !noisy.i32
%1 = noisy.encode %c4_i5 : i5 -> !noisy.i32
%2 = noisy.mul %0, %1 : !noisy.i32
%3 = noisy.reduce_noise %2 : !noisy.i32
%4 = noisy.add %3, %1 : !noisy.i32
%5 = noisy.add %4, %1 : !noisy.i32
%6 = noisy.add %5, %1 : !noisy.i32
%7 = noisy.add %6, %1 : !noisy.i32
%8 = noisy.sub %3, %0 : !noisy.i32
%9 = noisy.sub %8, %1 : !noisy.i32
%10 = noisy.sub %9, %1 : !noisy.i32
%11 = noisy.sub %10, %1 : !noisy.i32
%12 = noisy.decode %7 : !noisy.i32 -> i5
%13 = noisy.decode %11 : !noisy.i32 -> i5
%14 = arith.addi %12, %13 : i5
return %14 : i5
}