The point of the draft is best summarized as "if you can detect that the other side has a problem in its implementation, raise red flags early and noticeably." It's not safe to recover to some default, because that can make you think things are working when they're not; imagine if the engine control software defaulted to assuming a different type of engine than the one actually installed. The resulting confusion could just as easily destroy the engines. Something similar happened with the Ariane 5: software inherited from the Ariane 4 made assumptions about the flight profile that didn't hold for the new rocket, and the resulting failure destroyed the vehicle.
You're misinterpreting 'fail fast'. It doesn't mean 'the entire system should fail catastrophically at the slightest problem' or 'systems should not be fault-tolerant'. It just means that components should report failure as soon as possible so the rest of the system can handle it accordingly, instead of continuing to operate with an unrecognized faulty component and producing unpredictable outcomes.
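To make that concrete, here's a minimal C sketch (all names and values are made up, not from any real system): the component reports failure immediately through its return value, and the caller decides how to degrade, instead of the component quietly handing back garbage.

    #include <stdio.h>
    #include <stdbool.h>

    /* Hypothetical sensor read: reports failure immediately via the
     * return value instead of handing back a bogus reading. */
    bool read_airspeed(double *out_knots) {
        double raw = -1.0;            /* pretend the hardware read failed */
        if (raw < 0.0 || raw > 1000.0) {
            return false;             /* fail fast: caller learns right away */
        }
        *out_knots = raw;
        return true;
    }

    int main(void) {
        double knots;
        if (!read_airspeed(&knots)) {
            /* The *system* stays up; only the component reported failure.
             * Here we might switch to a redundant sensor or a safe mode. */
            fprintf(stderr, "airspeed sensor faulted, using backup\n");
            return 0;
        }
        printf("airspeed: %.1f knots\n", knots);
        return 0;
    }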
"Fail hard and don't recover" is absolutely fine in many scenarios, especially ones where no lives or expensive property are on the line.
Control software for jet engines is a whole different kettle of fish from sharing photos online. I would dare say most of us here have never worked on software that critical. The approach, from design to implementation to testing, is formalized to a degree most of us in the "agile" world of web apps could not tolerate.
"Fail early and hard, don't recover from errors" is a recipe for disaster.
That principle, applied to critical systems software engineering, leads to humans getting killed. In aerospace, the result is airplanes falling out of the sky. Seriously. The Airbus A400M that recently crashed in Spain did so because somewhere in the installation of the engine control software, the control parameter files were rendered unusable. This would have been a recoverable error: a set of default control parameters hardcoded into the software could have put the engines into a fail-safe operational regime. Instead, the engine control software failed hard and the engines shut off.
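For illustration, here's roughly what the recoverable path could look like, as a toy C sketch (all names, paths, and values are hypothetical; real FADEC software is vastly more involved and formally verified):

    #include <stdio.h>

    struct engine_params {
        double max_fuel_flow;     /* hypothetical units */
        double max_turbine_temp;
    };

    /* Conservative hardcoded defaults: enough to keep the engine in a
     * fail-safe operational regime, not to fly it optimally. */
    static const struct engine_params FAILSAFE_PARAMS = { 0.6, 900.0 };

    /* Try to load tuned parameters from the installed config file. */
    static int load_params(const char *path, struct engine_params *p) {
        FILE *f = fopen(path, "r");
        if (!f) return -1;  /* file missing or unusable */
        int ok = fscanf(f, "%lf %lf", &p->max_fuel_flow,
                        &p->max_turbine_temp) == 2;
        fclose(f);
        return ok ? 0 : -1;
    }

    int main(void) {
        struct engine_params p;
        if (load_params("/etc/engine/params.cfg", &p) != 0) {
            /* Recoverable error: degrade loudly to safe defaults
             * rather than failing hard and shutting the engine off. */
            fprintf(stderr, "WARN: params unusable, using fail-safe defaults\n");
            p = FAILSAFE_PARAMS;
        }
        printf("fuel flow limit: %.2f, temp limit: %.1f\n",
               p.max_fuel_flow, p.max_turbine_temp);
        return 0;
    }

The point isn't that hardcoded defaults are always safe (see the Ariane 5 caveat upthread), but that a deliberate, conservative fallback regime beats an unconditional shutdown.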
In mission- and life-critical systems there are usually several redundant core systems and sensors, based on different working principles, so that there's always a workable set of information available. Failing hard renders this kind of redundancy futile.
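A toy example of why that redundancy only pays off if components degrade instead of dying: a median vote over three independent sensors (plain C sketch, not avionics code) still yields a usable reading when one sensor goes insane.

    #include <stdio.h>

    /* Median-of-three voter: tolerates one faulty reading. */
    static double vote3(double a, double b, double c) {
        if ((a >= b && a <= c) || (a <= b && a >= c)) return a;
        if ((b >= a && b <= c) || (b <= a && b >= c)) return b;
        return c;
    }

    int main(void) {
        /* Two sane altimeter readings and one wildly faulty one. */
        double alt = vote3(10000.2, -9999.0, 10000.5);
        printf("voted altitude: %.1f ft\n", alt);  /* -> 10000.2 */
        return 0;
    }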
No, Postel's Maxim holds as strong as ever. The key point here is: "Be conservative in what you send", i.e. your implementation should be strict in what it subjects other players to.
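A small sketch of that half of the maxim in C (hypothetical wire format and names): the sender validates and canonicalizes its own output before anything hits the wire, regardless of how liberal its own parser is.

    #include <ctype.h>
    #include <stdio.h>
    #include <string.h>

    /* Emit a protocol header field only in its strict canonical form:
     * lowercase name, no whitespace, no control characters. */
    static int send_header(const char *name, const char *value) {
        for (const char *p = name; *p; p++) {
            if (!isalnum((unsigned char)*p) && *p != '-')
                return -1;  /* refuse to send anything non-canonical */
        }
        char canon[64];
        size_t n = strlen(name);
        if (n >= sizeof canon) return -1;
        for (size_t i = 0; i <= n; i++)  /* copy including the nul */
            canon[i] = (char)tolower((unsigned char)name[i]);
        printf("%s: %s\r\n", canon, value);  /* stand-in for the socket write */
        return 0;
    }

    int main(void) {
        send_header("Content-Type", "text/plain");  /* sent as "content-type" */
        if (send_header("Bad Header", "x") != 0)    /* space: refused, never sent */
            fprintf(stderr, "refused to emit non-canonical header\n");
        return 0;
    }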
Also, being strict in what's expected can be easily exploited to DoS a system (Great Firewall RST packets, anyone?).
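For context on why that works: a naive TCP stack tears down a connection on any RST that lands anywhere in the receive window, so an on-path attacker who can guess a rough sequence number can kill connections at will. RFC 5961 tightens the check so only an exact-sequence RST is honored, and anything merely in-window just triggers a challenge ACK. A toy sketch of that check (not a real stack):

    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    /* Toy model of RFC 5961's blind-reset mitigation: only an RST whose
     * sequence number exactly matches the next expected byte kills the
     * connection; merely-in-window RSTs get a challenge ACK instead. */
    static bool accept_rst(uint32_t rst_seq, uint32_t rcv_nxt, uint32_t rcv_wnd) {
        if (rst_seq == rcv_nxt)
            return true;                  /* genuine reset */
        if (rst_seq - rcv_nxt < rcv_wnd)  /* unsigned wraparound window test */
            printf("in-window RST: send challenge ACK, keep connection\n");
        return false;                     /* forged or stale: ignore */
    }

    int main(void) {
        uint32_t rcv_nxt = 1000, rcv_wnd = 65535;
        printf("exact RST accepted: %d\n", accept_rst(1000, rcv_nxt, rcv_wnd));
        printf("blind RST accepted: %d\n", accept_rst(30000, rcv_nxt, rcv_wnd));
        return 0;
    }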