Vulkan. Any sort of binding to Vulkan over a non-trivial FFI (so, not from C++, Rust, etc.) is going to be murdered by the FFI overhead cost. Especially since for bindings from something like Java you're either paying an FFI call for every field you set on a struct, or paying non-trivial marshalling costs to convert a Java class into a C struct before you can finally call the corresponding Vulkan function.
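A minimal sketch of what that marshalling looks like, using Python's ctypes as the stand-in managed language (the struct and field names here are made up for illustration, not the real Vulkan layout):

    import ctypes
    from dataclasses import dataclass

    # Managed-side object, analogous to a Java class mirroring a C struct.
    @dataclass
    class BufferCreateInfo:
        size: int
        usage: int
        flags: int

    # Native-side layout (hypothetical; not the real Vulkan struct).
    class CBufferCreateInfo(ctypes.Structure):
        _fields_ = [("size",  ctypes.c_uint64),
                    ("usage", ctypes.c_uint32),
                    ("flags", ctypes.c_uint32)]

    def marshal(info: BufferCreateInfo) -> CBufferCreateInfo:
        # Every call into the native API pays this field-by-field copy
        # into native memory before the real entry point can even run.
        return CBufferCreateInfo(info.size, info.usage, info.flags)

    c_info = marshal(BufferCreateInfo(size=65536, usage=1, flags=0))

That copy is cheap once, but it's paid on every call, for every struct.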
My favorite example is something like Substance Designer's node graph or Disney's SeExpr. You often want custom nodes that do something trivial, like a lookup from a custom data format or a small math evaluation, but the node may be called a handful of times per pixel, on millions of pixels. The call overhead often ends up costing as much time as the operation itself, or more, but there's no easy way to rearrange the operations without making things a lot more complicated for everyone.
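You can feel the same effect from any slow FFI. A rough sketch with Python's ctypes calling libm's sqrtf once per pixel versus once per image (assumes a Unix-style libm can be found; absolute timings will vary):

    import ctypes, ctypes.util, time
    import numpy as np

    libm = ctypes.CDLL(ctypes.util.find_library("m"))  # assumes a Unix-style libm
    libm.sqrtf.restype = ctypes.c_float
    libm.sqrtf.argtypes = [ctypes.c_float]

    pixels = [float(i % 256) for i in range(1_000_000)]  # one "channel" of pixels

    t0 = time.perf_counter()
    out = [libm.sqrtf(p) for p in pixels]   # one foreign call per pixel
    print(f"per-pixel calls:  {time.perf_counter() - t0:.3f}s")

    t0 = time.perf_counter()
    out = np.sqrt(np.asarray(pixels, dtype=np.float32))  # same math, one batched call
    print(f"one batched call: {time.perf_counter() - t0:.3f}s")

The math is identical; the difference is paying the call overhead a million times versus once.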
I kind of like Python's approach: make it so slow that it's easy to notice when you're hitting the bottleneck. It encourages you to write code that works in larger operations, and you get things like NumPy and TensorFlow, which are some of the fastest tools out there despite having some of the slowest bindings.
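A concrete illustration of that "larger operations" point (timings vary by machine, but the gap is typically one to two orders of magnitude):

    import time
    import numpy as np

    a = np.random.rand(10_000_000)

    t0 = time.perf_counter()
    total = 0.0
    for x in a:              # crosses the interpreter/C boundary per element
        total += x
    print(f"python loop: {time.perf_counter() - t0:.2f}s")

    t0 = time.perf_counter()
    total = a.sum()          # one coarse-grained call; the loop runs in C
    print(f"numpy sum:   {time.perf_counter() - t0:.2f}s")

The slowness of the per-element version is impossible to miss, which is exactly what pushes you toward the coarse-grained call.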
https://www.disneyanimation.com/technology/seexpr-expression...
Those commands and buffers are represented as C structs. If you're in a language that can't speak C structs (like Java, Go, Dart, JavaScript, etc.), all of that command and buffer setup becomes function calls rather than simple field writes.
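Roughly the difference, sketched in Python with ctypes (the struct and the setter-style API are hypothetical, for illustration; this isn't a real Vulkan binding):

    import ctypes

    # If the language can describe C layouts, filling a command struct
    # is just field writes into native memory (hypothetical struct):
    class BufferCopy(ctypes.Structure):
        _fields_ = [("src_offset", ctypes.c_uint64),
                    ("dst_offset", ctypes.c_uint64),
                    ("size",       ctypes.c_uint64)]

    cmd = BufferCopy()
    cmd.src_offset = 0       # plain writes, no foreign call
    cmd.dst_offset = 4096
    cmd.size       = 65536

    # If it can't, the binding has to expose per-field setters, and each
    # one is a full foreign call (sketch only; these functions don't exist):
    #   set_u64(handle, FIELD_SRC_OFFSET, 0)
    #   set_u64(handle, FIELD_DST_OFFSET, 4096)
    #   set_u64(handle, FIELD_SIZE,       65536)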
I guess I wasn't clear, but I meant the difference between C and LuaJIT.
There's a reason a lot of gamedev uses LuaJIT. I've personally had to refactor many interfaces to avoid JNI calls as much as possible, because there was significant overhead (both in the call itself and from the VM not being able to optimize around it).
And that's not really even true anymore, as the majority of gamedev is using Unreal or Unity, neither of which uses LuaJIT.
Unity and Unreal are the public engines out there, but there are plenty of in-house engines and toolchains you don't really hear about. I wouldn't be surprised if it's still deployed in quite a few contexts.
Honestly I think of the difference (as discussed in Wellons’s post, among others) not as a performance optimization but as an anti-stupidity optimization: regardless of the performance impact, it’s stupid that the standard ELF ABI forces us to jump through these hoops for every foreign call, and even stupider that plain inter- and even intra-compilation-unit calls can also be affected unless you take additional measures. Things are also being fixed on the C side with -fvisibility=, -fno-semantic-interposition, -fno-plt, and new relocation types.
Can this be relevant to performance? Probably: aside from just doing more stuff, there are trickier-to-predict parts of the impact, such as buffer pressure in the indirect branch predictor. Does it matter in practice? Not sure. The theoretical possibility of interposition preventing inlining of publicly accessible functions is probably much more important; at the very least, I have seen that make a difference. But this falls outside the scope of FFI, strictly speaking, even if the cause is related.
---
I don’t have a readily available example, but in the LuaJIT case there are two considerations that I can mention:
- FFI is not just cheap but gets into the realm of a native call (perhaps an indirect one), so a well-adapted inner loop is not ruined even if it makes several FFI calls per iteration (it will still be slower, but by a fraction rather than a multiple, unless the loop did not allocate at all before the change). What this influences is perhaps not even the final performance but the shape of the API boundary: similarly to the impact of promise pipelining for RPC[1], you’re no longer forced into the “construct job, submit job” mindset and coarse-grained calls (think NumPy). Even calling libm functions through the FFI, while probably not very smart, isn’t an instant death sentence, so you aren’t forced to reimplement as many things in the language as you might be used to.
- The JIT is wonderfully speedy and simple, but draws much of that speed and simplicity from the fact that it really only understands two shapes of control flow: straight-line code, and straight-line code leading into a loop with a straight-line body. Other control transfers aren’t banned as such, but are built on top of these, can only be optimized across to a limited extent, and can confuse the machinery that decides what to trace. This has the unpleasant corollary that builtins, which are normally implemented as baked-in bytecode, can’t usefully have loops in them. The solution is something LuaJIT 2.1 calls trace stitching: the problematic builtins are implemented in normal C and are free to have arbitrarily complex control flow inside, but instead of outright aborting the trace at an unJITtable builtin, the compiler puts what is effectively an FFI call to that builtin into the trace.
> both luajit and julia are significantly faster
I would be interested if anyone has an example where the difference matters in practice. As soon as you move to the more realistic scenario where you're writing a program that does something other than what is measured by these benchmarks, that's not going to be your biggest concern.