About the Author

I’m Jeremy Kun. I’m currently an engineer at Google and I live in Portland, Oregon.


In 2018 I published A Programmer’s Introduction to Mathematics.

Currently I’m working on Practical Math for Programmers: A Tour of Math in Production Software.


I earned a PhD in mathematics from the University of Illinois at Chicago, where my advisor was Lev Reyzin. Here’s a long thing I wrote about my graduate school experience. I did my undergraduate degree at Cal Poly San Luis Obispo in mathematics and computer science. I attended the Budapest Semesters in Mathematics program.


After my PhD I made the mistake of working for a Bitcoin startup for almost two years. Then I spent five years doing data center supply chain planning and optimization at Google. Since 2023 I have worked in Google’s data privacy research group on Fully Homomorphic Encryption.


I reserve this blog for technical material. General audience writing goes in a newsletter and in one-off articles (see this historical archive).

“Main Content” posts contain software, while “Primers” are usually math-only. All source code used in my posts is available on my GitHub page. Here are some of my other online profiles:

Creative Commons License
Math ∩ Programming by Jeremy Kun is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported License.

59 thoughts on “About the Author”

    • Thanks! Now that I’m in graduate school I’m finding less and less time to work on it. Luckily next semester I’m gearing my courses toward topics that are quite blogworthy.

  1. Jeremy – I enjoy reading your Math Blog, and would like to respectfully ask a question.

    I was wondering if you had ever done any research on buying mathematical algorithms vs. programming them yourself? Especially for complicated mathematical subroutines, is it cost-effective to subscribe to an algorithm library (like http://www.nag.com) for about $3000 per year, or to let your programmers do all the work? Have you ever offered an opinion on this subject (or has anyone else)?

    Thanks, Steve Chastain

    • That is a good question, especially because I’ve never heard of the concept of ‘buying algorithms’ being implemented before. In particular, an algorithm isn’t copyrightable material, so what they’re really selling you is either 1. a particular implementation with special features, or 2. their interface or service in deployment, support, and maintenance of said implementations. That’s how companies like Wolfram Research get away with charging for their wonderful Mathematica language.

      Of course, I’m going to be critical because I study the mathematical background and implement the algorithms for fun, and I wish all my friends and their mothers did too. But I do believe that the methods which have stood the industry test of time will always be open source, and before you go to pay for a software implementation, it’s cost-effective to attempt to integrate such a library into your own code. I think, at the very least, open source libraries suffer orders-of-magnitude more scrutiny than something like NAG, and hence are bound to have fewer bugs.

      To answer your question more directly: if you can, don’t pay for it and don’t implement it either. The optimized versions of these basic techniques have gone through decades of scrutiny to arrive at their present states, and you shouldn’t be reinventing the wheel for a real-world application. Instead, stand on the shoulders of giants, and realize that most giants which are accepted as ‘useful to stand on’ are free.

      Looking at NAG’s website, it appears that the majority of the algorithms they offer fall into this category. For instance, their C library touts a Fast Fourier Transform, but the numpy library has the same routines, and more (NAG doesn’t support > 3-dimensional FFT). Furthermore, there is no mention of a parallelized version of the FFT, despite their emphasis on high-performance computing and despite the industry-standard Fastest Fourier Transform in the West again being free, open source, and implemented in all the same languages.
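      To make the point above concrete, here is a minimal sketch (my own, not taken from either library’s documentation) showing that numpy handles both one-dimensional and higher-dimensional transforms out of the box:

```python
import numpy as np

# One-dimensional FFT of a short sinusoidal signal.
signal = np.sin(2 * np.pi * np.arange(8) / 8)
spectrum = np.fft.fft(signal)

# The inverse transform recovers the original signal (up to rounding).
recovered = np.fft.ifft(spectrum)
print(np.allclose(recovered.real, signal))  # True

# numpy also supports n-dimensional transforms, e.g. over a 4-D array,
# which is beyond what NAG's library advertises.
volume = np.random.rand(4, 4, 4, 4)
transformed = np.fft.fftn(volume)
print(transformed.shape)  # (4, 4, 4, 4)
```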

      Even more curiously, it seems that a number of their linear algebra routines are taken directly from LAPACK and BLAS, both free and open source linear algebra libraries specifically oriented toward massive parallelism and used regularly in exascale computing applications. Many of their other listed algorithms seem pretty standard too. Although I don’t have much knowledge in statistical computing, I’m certain that the R programming language is far more capable than anything they have under statistics and nonparametric methods. Similarly, most nonlinear optimization techniques, including their packaged forms as the commonly used Support Vector Machine, are widely available. This reinforces my notion that their main service is support, and not the algorithms themselves.
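      For example (a sketch of my own, assuming only numpy), the free stack already covers bread-and-butter dense linear algebra, dispatching to LAPACK under the hood:

```python
import numpy as np

# Solve the linear system Ax = b; numpy.linalg.solve calls into
# LAPACK's dense solver routines internally.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(A, b)
print(x)  # [2. 3.]

# Substitute back to verify the solution.
print(np.allclose(A @ x, b))  # True
```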

      That being said, I have never seen a black-box approach to mesh generation, and they list that as one of their features. As far as I know (and I currently see this everywhere in my work at LLNL), efficient mesh computation on nontrivial geometric shapes is highly problem-specific, and largely an open problem.

      I also think it really depends on the focus of your software team and how cutting-edge the techniques are. If your devs are resistant to reading a potentially dense paper on some new technique (and I’ll admit it, most papers are impossible to read), then hiring outside talent would allow your employees to focus on the work they enjoy instead of banging their heads against a wall over what can often be bunk or outdated research. For me it’s a philosophical debate whether I should reimplement an algorithm or use a library, but for you it may easily be about team welfare.

      Did that answer your question?

  2. Hi Jeremy, I have (or would like to have) a similar background to yours. I am finishing up my Bachelor’s in Mathematics at a local public university, but have a lot of experience in IT and also in programming various fun mathematically based things for myself. I have long been interested in the “mathematical aspect,” one could say, of finding solutions or algorithms to problems and the analysis of them.

    Anywhozel, I really enjoy your blog (a friend just sent me a link to it), and hope one day to find myself as far along in my education as you.


  3. Maybe you will find MathJax (a JavaScript LaTeX interpreter) interesting. It lets you write math in both MathML and LaTeX syntax directly in the HTML. Everything is handled by JavaScript, which means you get pretty math without the trouble of generating and uploading images.

    • Good call. I never knew this was an opt-in kind of thing. EDIT: Looks like I have to be able to “verify” that I own this website in order to publish on Currents. Since WordPress owns the domain and I don’t have access to the HTML headers, looks like I’m out of luck.

  4. Hello j2, this is a great blog, which I reached via your StackExchange contributions. I just read and much appreciated your hitchhiker’s guide to mathematics for programmers (joke):

    My PhD is in proof-theory with a cs flavour but that was many years ago; I went on to research in machine-assisted formal reasoning with Burstall and Fourman at LFCS in Edinburgh and then taught CS at Sheffield Uni for a decade or so. I now work as a maths teacher with the Open University and have a very new 1-man band software consultancy business (too new to have a proper website yet). I’m learning and using F# and loving it; I’ve been around the block with a number of FP languages and F# is really up to scratch. [I don’t mean that it is as good as the language “scratch” (http://scratch.mit.edu/) :-] But my favourite platform is Mathematica and I use it daily for teaching, exploring, musical investigations, simulating, designing, accounting, drawing, mapping, testing out new programming language features. I use their browser plugin for interactive/animated content; see
    http://fairflow.org/wpOU/wp-content/uploads/2012/06/Lissajous1.cdf and use the slider or animation controls. But I guess you know about this already?

  5. Hey great blog, I really enjoy your posts!

    Any advice you could give to a current undergrad student who likes both Math and Computer Science on how to define possible graduate school research areas? I’m currently doing research at my university but I don’t think that’ll give me a good enough overview of all possible research areas. Thanks in advance, and again, great blog!

    • It honestly depends on what you like. How much math have you done (in coursework or on your own)? How much CS? What topics are you most interested in?

    • Perhaps I should also say, my institution (UI Chicago) is looking for more Mathematical Computer Science graduate students. It’s more focused on the mathematical areas of CS (algorithms, data structures, learning theory) than the applied areas (systems, networks, security), but I think it’s a great department. And if you like the engineering aspects more, UI Urbana-Champaign is a great school for that too.

      • On the Math side I have taken Abstract and Linear Algebra, and am currently taking ODEs. On the CS side I have taken Data Structures, Computer Architecture and am currently taking Databases. I’ve been doing research in Finite Fields for a couple semesters now.

        There’s a couple topics that interest me, but since I haven’t had much exposure in them I’m not really sure if I’m completely interested. These topics include: Algorithms, Machine Learning, Theory of Computing and AI.

        Any advice on how to expand my knowledge on possible research fields, or finding out exactly what field I like the most is appreciated. Thanks again!

      • Heck, I say take as many classes as you can in all of these topics. If you’re interested in finite fields and abstract algebra, I’d imagine you’d enjoy theory of computing, and the follow-up would be computational complexity theory. Algorithms is definitely essential. I personally hate differential equations (and I know nothing about PDEs), and it doesn’t appear to affect my work at all.

        Machine learning theory is a very hot field right now, and it sits at the crossroads between probability theory, computational complexity (part of the theory of computing), and game theory. It would be a comfortable middle between theory and applications, if that’s what you like. Two big names in that field are Kearns and Vazirani, and they have a lot of survey papers and books you can get to introduce you to the field. On the other hand, there are loads and loads of interesting topics just in complexity theory, and there’s no shortage of open questions. In fact, a lot of important arguments there rely on facts about finite fields, so you’ll get to see some interesting applications there.

        I think if you’re interested in going to graduate school you don’t have to pick a field now. I don’t think anyone is expected to know exactly what they want to do right away, but should know that there’s at least one topic they would enjoy studying. If you want, you can pretend in your application that you know exactly what you want to do, and nobody will hold you to it if you change your mind later. Having this finite fields research under your belt will help you a lot with that. If you are looking at schools and you contact a professor you’re interested in working with (and you can start that as early as you want, if you can manage to understand some of their papers), then they can give you a better idea of what their topic would be like.

  6. A friend of mine sent me a link to your blog, and within a glance I was overwhelmed with the content. You put great quality into the information, plus detailed explanation and demos.
    I never took blogging seriously until I came to your site. Maybe I should start blogging interesting things I find, and share that knowledge with other people like you did. Great job, and truly admirable, Jeremy!

  7. Very nice blog. Good job. In fact you are the first academic I have seen take an Apple-like approach: caring about the design (call it art) of the website while dealing with hard science at the same time.

  8. Hi Jeremy, Your blog is the best I have ever seen on Math and CS. I wonder if you could share your favourite books on the subjects, especially those below:

    Linear algebra
    Graph theory (Discrete Math)
    Probability theory

    I am a CS professional, and want to refresh all these subjects from basic to advanced levels, like you.

    • – Linear Algebra Done Right – Sheldon Axler
      – I have never used a Graph Theory text
      – Depends on your goals, but probably Measurement by Paul Lockhart (or Spivak’s Calculus on Manifolds, but Calculus is really not all that good for CS)
      – I have never actually read a full Probability Theory text

  9. Thanks, from another Jeremy!

    I’m a programmer obsessed with signal processing but lacking in formal education, especially in math. I realized linear algebra in particular is something I need to understand, but I was having a hard time figuring out where to start. Your primers gave me an excellent starting point, and from Googling things I didn’t understand I found a wealth of information I didn’t know how to find before.

    Slowly but surely I’m starting to get this!

    • You’re welcome! Yes, linear algebra is extremely important. My favorite book on linear algebra basics is Axler’s “Linear Algebra Done Right.”

  10. I read your series on elliptic curves today, and I couldn’t stop; it was such an enjoyable experience just reading the carefully constructed, well-written explanation of it all. I especially liked the detour on the projective space, which made perfect sense. I wonder if you are planning to expand on Montgomery curves and Twisted Edwards curves, as I’m trying to learn the details of Bernstein’s Curve25519. Love your work!

    Thank you Jeremy!

    • I was actually unaware of these different forms of curves, and I’m surprised to see how they avoid issues like side-channel attacks! While it may take a bit longer to get to these topics (it looks like you need some algebraic geometry to understand why anyone would think about a twist), I will definitely be looking at them.

  11. Could you please make an RSS feed? Most of your target audience hardly ever logs in to Twitter, Facebook, etc.

  12. Hi Jeremy, nice blog! I thought you might want to know that a significant portion of the left-hand side of the contents is cut off when viewed on my Nexus 7.

  13. Hello Kun, thanks so much for your blog; I personally learned a lot from you. I am wondering whether you have ever explored expert learning in ML, something similar to bandit learning. Thanks!

  14. Hi Jeremy. I just started revisiting analysis with the book “Real Analysis: A Constructive Approach by Mark Bridger” [1] (because constructive mathematics whetted my interest after reading about it a couple of times), and, incidentally, I just stumbled upon a comment of yours on HN that you wished there was an analysis book which emphasizes computation. The introduction of Bridger’s book mentions that it was partly written with computer scientists in mind who want to know about computability and calculability of the reals, so perhaps you will find it interesting. If you already know this book I would appreciate feedback. Thanks for your great blog articles by the way!

    [1]: http://www.amazon.com/Real-Analysis-Constructive-Mark-Bridger/dp/111835706X/

      • That is the preview of the Kindle version and it’s possibly an issue of the Kindle format to HTML conversion. The Kindle versions are seldom well typeset, but I can’t imagine it being that bad. The paperback version is well typeset.

  15. Hi Jeremy

    I just wanted to say that I really enjoy reading articles on your blog. They are so easy to understand for a lay person as myself. I especially liked how to take a calculus test. I just wish it was available to me when I was in college taking calculus and differential equations which I at times struggled with. But the good news is that I did pass with satisfactory grades.

    I have a question for you. What is the focus of your PhD thesis or research?

    • I work in a field called “theoretical computer science,” but my work specifically has been in a mix of network science, computational complexity theory, and machine learning.

  16. Hey Jeremy!

    I know you write about stuff you’re interested in, so I’d like to interest you in something and maybe see a post on that in the future 🙂

    You see, with the advances in machine learning and related fields, the role of floating-point numbers has increased. Indeed, in “classical” computer science we mostly dealt with inherently discrete structures (integers or structures like graphs), whereas ML and Probability & Stats introduce continuous numbers to Computer Science. Unfortunately, our floating-point numbers (IEEE 754) aren’t as good a substitute for the reals as our integer arithmetic is for mathematical integers. Programmers often (and I’m guilty, too) use floats as if they were reals, which may lead to problems.
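    A minimal Python illustration of the kind of pitfall described above (my own sketch, not from the original comment):

```python
import math

# 0.1 has no exact binary representation, so naive equality fails.
print(0.1 + 0.2 == 0.3)              # False
print(math.isclose(0.1 + 0.2, 0.3))  # True

# Rounding error accumulates in a plain sum of ten copies of 0.1:
print(sum([0.1] * 10) == 1.0)        # False
# math.fsum tracks the lost low-order bits and returns the
# correctly rounded result:
print(math.fsum([0.1] * 10) == 1.0)  # True
```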

    It’s best summarized, IMO, by a legend of CS, William Kahan, who seems to be the only person in the world concerned with floating-point arithmetic issues. See his talk at the Heidelberg Laureate Forum; it’s definitely worth it.

    I think this is an important topic and more programmers and researchers should be introduced to it.
    Looking forward to your opinion!

  17. Hi,
    Thanks for such a great website. I came across your article on “Where there is no Hitchhiker’s Guide to Mathematics for Programmers” and I am deeply inspired to restart my journey in mathematics. I have basically always stumbled through it. I’m studying for a master’s degree in Machine Learning, and my choice of course was a result of my love for programming. However, on getting to class, I am usually lost after the first few mathematical sentences. I was considering dropping out, until I found out some of my classmates feel the same way. It’s only the third week. I’ll be happy to give myself another chance. I wish I had found this blog much earlier. Thanks again!

  18. This looks great 🙂 I’ve only just stumbled upon this blog by being recommended! Will definitely take some time out as the material covered looks fascinating and just what I’d wanted! Thank you!

  19. I love your blog. I was introduced to it about three and a half years ago by a peer of yours from Cal Poly. Since then I return to it every few months, using your blog as an unofficial ‘yard-stick’ for my own mathematical development. When I first visited, I found the topics fascinating but I could barely understand anything. Now I understand enough to use your blog as a guide while embarking upon individual study of the topics you cover here. Thank you, and keep up the great work.

  20. Oh my. I foresee many evenings lost to this blog.

    Looking through some of the topics you’ve covered, I think you might be interested in Maurice Herlihy’s (with others) work applying combinatorial topology to proofs in distributed computing. Basically, the state of a distributed computation is a simplicial complex; a protocol may be defined in a way that it represents a transformation from one simplicial complex to another. This allows one to provide impossibility proofs for many distributed problems (e.g., reaching two-party consensus with a protocol in which messages may be lost).

    Herlihy delivered a series of lectures on this topic at the Technion: https://www.youtube.com/watch?v=05knu1z3zOg&list=PL0DA9BFB82ACED0AF

    and he has a book: “Distributed Computing through Combinatorial Topology”. The book is a text for a class at Brown, slides for the class are available here: http://static.cs.brown.edu/courses/csci2951-s/

    Anyway, thanks for this blog.
