Just that code designed to run in a separate process, with the express intent of allowing termination at a timeout, should also be designed not to change state outside of itself. If it really does need to (e.g. a 1 TB working file), then either that process or its invoker needs cleanup code that assumes processes may have been terminated.
Doesn't mean that ALL code needs to be perfect, or even that this code will be, just that a thoughtful design will at least create the requirements and tests that make this model work.
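A rough sketch of what that invoker-side cleanup might look like, assuming the workers write into a known scratch directory and keep a ".tmp" suffix until they finish (the directory name and suffix are invented for the example):

```c
/* Sketch only: an invoker that assumes its worker processes may have been
 * killed at any point, so on startup it sweeps up whatever working files
 * they left behind.  WORKDIR and the ".tmp" suffix are made up here. */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

#define WORKDIR "/var/tmp/myjob"

static void sweep_stale_work_files(void)
{
    DIR *d = opendir(WORKDIR);
    if (d == NULL)
        return;                    /* nothing to clean, or no access */

    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        /* Workers write "<name>.tmp" and rename it away on success, so any
         * file still carrying the suffix belongs to a terminated worker. */
        const char *dot = strrchr(e->d_name, '.');
        if (dot != NULL && strcmp(dot, ".tmp") == 0) {
            char path[4096];
            snprintf(path, sizeof path, "%s/%s", WORKDIR, e->d_name);
            if (remove(path) == 0)
                fprintf(stderr, "removed stale working file %s\n", path);
        }
    }
    closedir(d);
}

int main(void)
{
    sweep_stale_work_files();
    /* ...then launch workers as usual... */
    return 0;
}
```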
Writing high quality code is not about being perfect — it’s about changing how you write code.
Writing code that can recover from unexpected errors at any point is largely a matter of being mindful about the order of your operations and about how you persist your data.
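Concretely, the usual shape of that idea is "write the new state somewhere else, make it durable, then switch over atomically". A minimal sketch in C, assuming a POSIX filesystem (file names are just examples):

```c
/* Write-new-then-rename: a crash at any point leaves either the old file or
 * the complete new one on disk, never a half-written mix. */
#include <stdio.h>
#include <unistd.h>

static int save_state(const char *path, const char *data, size_t len)
{
    char tmp[4096];
    snprintf(tmp, sizeof tmp, "%s.tmp", path);

    FILE *f = fopen(tmp, "wb");
    if (f == NULL)
        return -1;

    /* Order matters: the data must be on disk before the rename makes it
     * the "current" state. */
    if (fwrite(data, 1, len, f) != len || fflush(f) != 0 ||
        fsync(fileno(f)) != 0) {
        fclose(f);
        remove(tmp);
        return -1;
    }
    fclose(f);

    /* rename(2) is atomic on POSIX filesystems; for full durability you'd
     * also fsync the containing directory. */
    return rename(tmp, path);
}
```

Write-ahead logs and database journals are more elaborate versions of the same ordering discipline.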
Even a cursory skim of the parent comment shows this is false; it says:
> All processes should be prepared for a sudden crash without corrupting state.
...which is leagues away from "all code should be perfect". An obvious example: a program that can crash at any time without corrupting state, but contains logic bugs that result in invalid data being (correctly) written to disk, independent of crashing or being killed.
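To make that concrete with a contrived example: whatever durable-write machinery sits underneath, it will faithfully persist the wrong number produced by a plain logic bug; crash safety buys you nothing against that class of error.

```c
/* Contrived illustration: crash safety and correctness are orthogonal.
 * The value returned here will be persisted perfectly safely, and it is
 * still wrong. */
static double discounted_total(double total)
{
    return total * 1.10;   /* bug: meant to apply a 10% discount (total * 0.90) */
}
```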
In particular, I'd challenge you to find one large program that handles OOM situations 100% correctly, especially since most code runs atop an OS configured to overcommit memory. But even if it weren't, I doubt there's any sizeable codebase that handles every memory allocation failure correctly and gracefully.
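For a sense of scale, here is what "handling it" looks like at just one call site, sketched in C; multiply by every allocation in a large codebase, and add the caveat that on an overcommitting kernel malloc can "succeed" and the process still gets killed later when the pages are actually touched:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* One call site doing the "right thing".  Even this check is only a partial
 * defence: with overcommit enabled, the failure often shows up later as the
 * OOM killer terminating the process, not as malloc returning NULL. */
static char *dup_message(const char *msg)
{
    size_t len = strlen(msg) + 1;
    char *copy = malloc(len);
    if (copy == NULL) {
        fprintf(stderr, "allocation of %zu bytes failed\n", len);
        return NULL;            /* and every caller must now cope with NULL */
    }
    memcpy(copy, msg, len);
    return copy;
}
```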