When we talk about JS or Java JITs working well, we are making statements based on intense industry competition: if a JIT had literally any shortcoming, a competitor would highlight it in competitive benchmarking and blog posts. That competition forced aggressive improvements and produced top-tier JITs that deliver reliable perf across lots of workloads.
OTOH PyPy is awesome but just hasn’t had to face that kind of competitive challenge. So we probably can’t know how far off from JS JITs it is.
One thing I can say is that when I compared it to JSC by writing the same benchmark in both Python and JS, JSC beat the Python version by about 4x.
For example, static initialization on classes. The JDK has a billion different classes, and at startup a significant fraction of those end up getting loaded for all but the simplest applications.
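To make that class-loading cost concrete, here's a minimal sketch (class and field names are made up) of the key semantics: a static initializer runs on first use of a class, not at JVM startup, so every class an app touches adds loading and initialization work:

```java
import java.util.ArrayList;
import java.util.List;

public class StaticInitDemo {
    // Records the order in which things happen.
    static final List<String> events = new ArrayList<>();

    static class Config {
        static final int VALUE;
        static {
            // Runs once, the first time Config is used -- not at startup.
            events.add("Config initialized");
            VALUE = 42;
        }
    }

    public static void main(String[] args) {
        events.add("main started");
        // First touch of Config triggers its static initializer.
        events.add("read VALUE = " + Config.VALUE);
        System.out.println(events);
        // prints [main started, Config initialized, read VALUE = 42]
    }
}
```

Multiply that by thousands of classes and you get the startup cost the rest of this thread is about.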
Essentially, Java and the JS JITs both start out running everything interpreted; when a hot method is detected, they progressively hand those methods, along with their profiling statistics, to more aggressive JIT compilers.
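You can watch that tiered pipeline directly on HotSpot. A sketch, assuming a HotSpot-based JDK (the flags are standard HotSpot; `app.jar` is a placeholder for whatever you want to run):

```shell
# Log each method as it gets promoted through the tiers
# (levels 1-3 are the C1 compiler, level 4 is C2).
java -XX:+PrintCompilation -jar app.jar

# Cap compilation at C1 to see how much of the speed comes from C2.
java -XX:TieredStopAtLevel=1 -jar app.jar
```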
A significant amount of effort is going into making Java start faster, and a key part of that is solving the class-loading problem.
All commercial JVMs have had JIT caches for quite some time, and this is finally also available as free beer on OpenJDK, so code can execute right away as if it were an AOT-compiled language.
In some of those implementations, the JIT cache gets updated after each execution based on profiling data, so we have the possibility of converging on an optimal state across the lifetime of the executable.
The .NET and ART cousins also have similar mechanisms in place.
Which I guess is what your last sentence refers to, but I wasn't sure.
Yup, the CDS and now the AOT stuff in OpenJDK (Project Leyden) is what I was referring to.
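For the CDS side, a minimal sketch of the two-step flow (the flags are the standard OpenJDK dynamic AppCDS ones, available since JDK 13; `app.jar` and `app.jsa` are placeholder names):

```shell
# Trial run: record the classes the app actually loads into an archive.
java -XX:ArchiveClassesAtExit=app.jsa -jar app.jar

# Subsequent runs: map the pre-parsed archive instead of
# loading and verifying each class file again.
java -XX:SharedArchiveFile=app.jsa -jar app.jar
```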
That's similar to how JS engines do things.
Java does have a "client" optimization mode for shorter-lived programs (GUIs, for example), but AFAIK it's basically unused at this point. The more aggressive "server" optimizations are faster than ever and get triggered pretty aggressively now. The nature of the JVM is also changing: with fast scaling and containerization, a slow start and a long warmup aren't acceptable anymore, which is why part of JDK development has been dedicated to fixing that.