I would argue that given a certain ISA, it's probably easier to write an autocomplete extension for assembly targeting that ISA, rather than autocomplete for C, or goodness forbid, C++.
Likewise for structs, functions, jump targets, etc. One could probably set up snippets corresponding to different sorts of control flow: if/else, loops (while/for), switch, etc.
I'm not exposed to this space very often, so maybe you or someone else could give me some context. "Sabotage" is a deliberate effort to ruin/hinder something. Are compiler engineers deliberately hindering the efforts of cryptographers? If yes... is there a reason why? Some long-running feud or something?
Or, through the course of their efforts to make compilers faster/etc, are cryptographers just getting the "short end of the stick" so to speak? Perhaps forgotten about because the number of cryptographers is dwarfed by the number of non-cryptographers? (Or any other explanation that I'm unaware of?)
CPUs rely on branch prediction so that computation is already under way when a branch is guessed correctly, but cryptographic code needs to take the same time no matter the input.
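This is why constant-time code avoids data-dependent branches altogether. A minimal sketch of the usual branchless-selection trick (`ct_select` is a hypothetical name, not a standard function; `cond` is assumed to be 0 or 1):

```c
#include <stdint.h>

/* Branchless select: returns a if cond == 1, b if cond == 0.
   Both inputs are always read and combined with masks, so there is
   no branch for the predictor to win or lose on secret data. */
static uint32_t ct_select(uint32_t cond, uint32_t a, uint32_t b) {
    uint32_t mask = (uint32_t)0 - cond;  /* 0xFFFFFFFF if cond==1, 0 if cond==0 */
    return (a & mask) | (b & ~mask);
}
```

The catch, of course, is that nothing stops an optimizing compiler from recognizing this pattern and turning it back into a branch.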
When a programmer asks for some register or memory location to be zeroed, they generally just want to be able to use a zero in some later operation and so it doesn’t really matter that a previous value was really overwritten. When a cryptographer does, they generally are trying to make it impossible to read the previous value. And they want to be able to have some guarantee that it wasn’t implicitly copied somewhere else in the interim.
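Concretely: a `memset` of a buffer that is about to go out of scope is a dead store as far as the optimizer is concerned, and it may legally be deleted. One common workaround is calling `memset` through a volatile function pointer so the compiler can no longer prove the call has no observable effect (a sketch; `explicit_bzero` on BSD/glibc and C23's `memset_explicit` exist for the same reason):

```c
#include <string.h>
#include <stddef.h>

/* Calling memset via a volatile function pointer: the compiler cannot
   assume the pointer still refers to memset at call time, so it cannot
   eliminate the store as dead. */
static void *(*const volatile memset_v)(void *, int, size_t) = memset;

void wipe(void *buf, size_t len) {
    memset_v(buf, 0, len);  /* survives dead-store elimination */
}
```

Note this addresses the "was it really overwritten" half of the problem, not the "was it implicitly copied somewhere else" half (register spills, stack copies, etc.).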
A lot of software engineers see this as compiler engineers only caring about performance as opposed to other aspects such as debuggability, safety, compile time, productivity, etc. I think that's where the "sabotage" comes from: basically, a focus on performance to the detriment of other things.
My 2 cents: the core problem is programmers expecting invariants and properties not defined in the language standard. The compiler only guarantees what the standard defines; expecting anything else is problematic.
Yes, languages do lack good mechanisms to mark variables or sections as needing constant-time operation ... but compiler maintainers could have taken the view that that means all code should be compiled that way. Now instead we're marking data and sections as "secret" so that they can be left unoptimized. But why not the other way around?
I understand how we get here; speed and size are trivial to measure and they each result in real-world cost savings. I don't think any maintainer could withstand this pressure. But it's still deliberate.
Worse cost-benefit tradeoff, perhaps? I'd imagine the amount of code that cares more about size/speed than constant-time operation far outnumbers the amount of code which prioritizes the opposite, and given the real-world benefits you mention and the relative newness of concerns about timing attacks I think it makes sense that compiler writers have defaulted to performance over constant-time performance.
In addition, I think a complicating factor is that compilers can't infer intent from code. The exact same pattern may be used in both performance- and timing-sensitive code, so absent some external signal the compiler has to choose whether it prioritizes speed or timing. If you think more code will benefit from speed than timing, then that is a reasonable default to go with.
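The comparison loop is the classic example of this ambiguity: an early-exit `memcmp` is exactly what performance-sensitive code wants and exactly what an attacker can time. The constant-time version of the same pattern touches every byte no matter where (or whether) the inputs differ (`ct_memeq` is a hypothetical name for illustration):

```c
#include <stddef.h>
#include <stdint.h>

/* Constant-time equality check: accumulate differences across the whole
   buffer instead of returning at the first mismatch. Returns 1 if the
   buffers are equal, 0 otherwise. */
int ct_memeq(const uint8_t *a, const uint8_t *b, size_t len) {
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];  /* no early exit, no data-dependent branch */
    return diff == 0;
}
```

Nothing in the source distinguishes "this loop must run to completion" from "this loop may be shortcut"; that signal has to come from outside the language.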
Any side effect is a side channel. There are always going to be side channels in real code running on real hardware.
Sure, you can change your code, compiler, or even hardware to account for this, but at its core that is security by obscurity.
This primitive we're trying to introduce is meant to make up for this shortcoming without having to introduce additional rules in the standard.