In my little corner of the FHE world, things have been steadily heating up.

For those who don’t know, my main work project right now is HEIR (Homomorphic Encryption Intermediate Representation), a compiler toolchain for fully homomorphic encryption (FHE). For an extended introduction see this talk from October 2023.

The primary focus of HEIR is to compile to FHE hardware accelerators. And boy, are there a lot of them: GPU and FPGA accelerators, special-purpose ASICs, and even optical accelerators (discrete Fourier transforms at the speed of light).

In May 2024, KU Leuven hosted a hardware summit specifically for FHE, and my team attended to present HEIR. To my delight, each of the hardware vendors called out HEIR as their path to integration with a larger software stack and eventual deployment. We’re working with many of these vendors, such as Intel, Optalysys, KU Leuven, and Niobium, to get HEIR generating code for their accelerators, though the ASICs are all still in the process of being taped out.

In October 2024, the Workshop on Applied Homomorphic Encryption took place in Salt Lake City (I couldn’t attend due to paternity leave). There our team gave three tutorials on HEIR and received a ton of useful feedback.

Arising out of these fruitful collaborations was FHETCH, the FHE Technical Consortium for Hardware. We’re planning to work with this group on standardized hardware interfaces for the HEIR compiler toolchain to target.

In the meantime, we’ve had a slew of talented interns, student researchers, and community contributors adding real value to HEIR, along with many others expressing interest in getting involved.

Wouter Legiest developed compiler passes to target KU Leuven’s FPGA accelerator, as well as passes for converting ML inference models to FHE.

Lawrence Lim interned with us for the summer, getting us started on a CKKS dialect, implementing matrix multiplication optimizations, and helping design data layout transformations.

Meron Zerihun interned at Intel and worked on data-oblivious program transformations.

Jianming Tong recently started as a student researcher at Google to focus on TPU acceleration for CKKS.

Finn Plummer worked on optimizations for modular arithmetic and lowerings for number-theoretic transforms.

Hongren Zheng has also been working on optimizations and lowerings related to modular arithmetic.

Jaeho Choi worked on lowering BGV to the OpenFHE API, as well as multiplicative depth analysis (see the sketch below) and canonicalization patterns for number-theoretic transforms.

Shakil Ahmed worked together with me on a solver for optimal relinearization placement.
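
One of the recurring themes above deserves a concrete illustration. The multiplicative depth of a circuit—in plainer terms, the largest number of ciphertext multiplications on any path from an input to an output—largely determines how big the encryption parameters (and thus how slow the arithmetic) must be, so the compiler needs to compute it. Below is a toy sketch of that analysis on a circuit represented as a DAG; it is illustrative only and not HEIR’s implementation.

```python
# Toy multiplicative depth analysis (illustrative only, not HEIR's code).
# A circuit is a dict mapping node name -> (op, [operand names]);
# leaves have op == "input". The multiplicative depth of a node is the
# maximum number of "mul" ops on any path from an input to that node.

def mult_depth(circuit, node, memo=None):
    if memo is None:
        memo = {}
    if node in memo:
        return memo[node]
    op, operands = circuit[node]
    if op == "input":
        depth = 0
    else:
        depth = max(mult_depth(circuit, operand, memo) for operand in operands)
        if op == "mul":
            depth += 1
    memo[node] = depth
    return depth


# Computes x*y + (x*y)*z, which has multiplicative depth 2.
circuit = {
    "x": ("input", []),
    "y": ("input", []),
    "z": ("input", []),
    "t1": ("mul", ["x", "y"]),
    "t2": ("mul", ["t1", "z"]),
    "out": ("add", ["t1", "t2"]),
}
print(mult_depth(circuit, "out"))  # 2
```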

There are also many people working in silos, either on hardware integrations from behind corporate walls or on compiler optimizations they hope to eventually publish, who have privately reached out to our group for help and advice. Some have even expressed interest in incorporating other privacy-enhancing technologies into the compiler, like zero-knowledge proofs and secure multi-party computation. After all, many of the hardware acceleration efforts boil down to polynomial math number crunching (albeit with very different parameters), so there is likely much to collaborate on.
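
To make “polynomial math number crunching” concrete: RLWE-based FHE schemes, and many lattice-based ZK systems, spend most of their time multiplying polynomials in rings like Z_q[X]/(X^n + 1). The schoolbook sketch below shows that arithmetic with toy parameters I picked for illustration; real accelerators implement it with the number-theoretic transform in O(n log n) time, at much larger n and q.

```python
# The shared kernel behind much of this hardware: multiplying polynomials
# in Z_q[X] / (X^n + 1). This schoolbook version is O(n^2) and uses toy
# parameters; accelerators use the number-theoretic transform instead.

def negacyclic_mul(a, b, n, q):
    result = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < n:
                result[i + j] = (result[i + j] + ai * bj) % q
            else:
                # X^n = -1 in this ring, so higher terms wrap around negated.
                result[i + j - n] = (result[i + j - n] - ai * bj) % q
    return result


n, q = 8, 257  # toy parameters; real schemes use much larger n and q
a = [1, 2, 3, 4, 0, 0, 0, 0]
b = [5, 6, 7, 8, 0, 0, 0, 0]
print(negacyclic_mul(a, b, n, q))
```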

The biggest feature request we’ve been getting is a Python frontend, to make it easier for people to try out HEIR, so that will be my primary focus for the next few months. We’ve also been looking for new ways to get people involved and lower the barrier to entry. We will have more topically-focused open meetings (e.g., for users vs. contributors vs. researchers), and I will start hosting open office hours (see the calendar on this page). We use the #heir channel in the FHE.org Discord server, and we are good at responding to issues on GitHub. I’ll also be writing more tutorial blog posts: some HEIR-specific, some on FHE math, and some on MLIR compiler tooling.
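
To give a sense of the goal for the Python frontend mentioned above, here is a purely hypothetical sketch of what a decorator-based frontend could look like. None of the names below are real HEIR APIs (`fhe_compile` is a stand-in I made up), and the eventual design may differ substantially; the point is only that a user should be able to write ordinary Python arithmetic and let the compiler handle the FHE details.

```python
# Purely hypothetical sketch; `fhe_compile` is a stand-in, not a real HEIR API.
# A real frontend would trace the function into MLIR and hand it to the HEIR
# pipeline; this placeholder just runs the function as ordinary Python.

def fhe_compile(func):
    return func  # placeholder: no tracing or compilation happens here

@fhe_compile
def weighted_sum(xs, ws):
    # Ordinary-looking arithmetic that, after compilation, would operate
    # on encrypted inputs.
    return sum(x * w for x, w in zip(xs, ws))

print(weighted_sum([1, 2, 3], [4, 5, 6]))  # 32
```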


Want to respond? Send me an email, post a webmention, or find me elsewhere on the internet.
