I think that author's next article does a great job of explaining this. While they don't put it in these words, I'd say that by learning math you become able to speak the same language as the model. That does wonders for interpreting what is going on and why it makes certain decisions. The "black box" isn't transparent, but neither is it pitch dark. And the space is dark enough that you should be shedding whatever light you can on it.
https://irregular-rhomboid.github.io/2022/12/27/math-is-a-la...
edit: On the math side I've encountered one post that seemed unique, as I haven't seen anything like it elsewhere: https://irregular-rhomboid.github.io/2022/12/07/applied-math.... However, it only lists the courses from his math education that he thinks are relevant to ML, with each course getting a very short description and/or a note on why it's useful for ML.
I like these concluding remarks:
Through my curriculum, I learned about a broad variety of subjects that provide useful ideas and intuitions when applied to ML. Arguably the most valuable thing I got out of it is a rough map of mathematics that I can use to navigate and learn more advanced topics on my own.
Having already been exposed to these ideas, I wasn’t confused when I encountered them in ML papers. Rather, I could leverage them to get intuition about the ML part.
Strictly speaking, the only math that is actually needed for ML is real analysis, linear algebra, probability and optimization. And even there, your mileage may vary. Everything else is helpful, because it provides additional language and intuition. But if you’re trying to tackle hard problems like alignment or actually getting a grasp on what large neural nets actually do, you need all the intuition you can get. If you’re already confused about the simple cases, you have no hope of deconfusing the complex ones.
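To make that last bit concrete for myself: here's a tiny, made-up logistic-regression step in NumPy, with comments marking where each of those four areas shows up in a single gradient-descent update. It's only an illustrative sketch I wrote, not anything from the article.

    import numpy as np

    # Toy data, invented for illustration only.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))                                   # features
    y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)          # labels

    w = np.zeros(3)    # parameters
    lr = 0.1           # step size

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Linear algebra: the model itself is a matrix-vector product X @ w.
    logits = X @ w

    # Probability: the loss is the negative log-likelihood of a Bernoulli model.
    p = sigmoid(logits)
    loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

    # Real analysis / calculus: the gradient of the loss with respect to w.
    grad = X.T @ (p - y) / len(y)

    # Optimization: one plain gradient-descent update.
    w = w - lr * grad

    print(f"loss before step: {loss:.4f}")

Nothing fancy, but it's the kind of example where having seen each of those subjects on its own means none of the pieces are confusing when they all show up together.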