I’d really, really like to know what microcontroller family this was found on. Assuming that this is a safety processor (lockstep, ECC, etc.), it suggests that ECC was insufficient for the level of bit flips they’re seeing — and if the concern is data corruption rather than an unintended restart, it means enough flips landed in one word to be undetectable. The environment they’re operating in isn’t that different from everyone else’s, so unless they ate some margin elsewhere (a bad voltage corner or something), this can definitely be relevant to others. It would also be interesting to know whether it’s NVM or SRAM that’s affected.
RealityVoid
See my other comments in the other threads. This does not have EDAC. I was as surprised as you, but it doesn't seem to be an MCU but a composition of several distinct chips. That flight computer was designed in the '90s and updated in 2002 with a new hardware variant that does have EDAC. So yes, for this kind of thing, I can buy that a bit flip happened.
> This does not have EDAC. I was as surprised as you, but it doesn't seem to be an MCU but a composition of several distinct chips.
Wasn't the philosophy back then to run multiple independent (and often even designed and manufactured by different teams) computers and run a quorum algorithm at a very high level?
Maybe ECC was seen as redundant in that model?
RealityVoid
> Wasn't the philosophy back then to run multiple independent (and often even designed and manufactured by different teams) computers and run a quorum algorithm at a very high level?
It was, and they did (well, same design, but they were independent). I quote from the report:
"To provide redundancy, the ADIRS included three air data inertial reference units
(ADIRU 1, ADIRU 2, and ADIRU 3). Each was of the same design, provided the
same information, and operated independently of the other two"
> Maybe ECC was seen as redundant in that model?
I personally would not eschew any level of redundancy when it can improve safety, even in remote cases. It seems that at the time of the module's creation, EDAC was not required, and it was probably quite a bit more expensive. The new variant apparently has EDAC, and they retrofitted units with the newer variant whenever one broke down. Overall, ECC is an extra layer of protection. The _presumed_ bit flip is a plausible cause for the data spikes. But even so, the data spikes should not have caused the controls issue. The controls issue is a separate problem, and it's highly likely THAT is what they are going to address, in another compute unit.
"There was a limitation in the algorithm used by the A330/A340 flight control
primary computers for processing angle of attack (AOA) data. This limitation
meant that, in a very specific situation, multiple AOA spikes from only one of
the three air data inertial reference units could result in a nose-down elevator
command. [Significant safety issue]"
This is most likely what they will address. The other reports confirm that the fix will be in the ELAC produced by Thales, and that the issue with the spikes detailed in the report was in an ADIRU module produced by Northrop Grumman.
JorgeGT
I don't know about the A320 but this was certainly the model for the Eurofighter. One of my university professors was in one of the teams, they were given the specs and not allowed to communicate with the other teams in any way during the hw and sw development.
RealityVoid
> they were given the specs and not allowed to communicate with the other teams in any way during the hw and sw development.
Jeez, it would drive me _up the wall_. Let's say I could somewhat justify the security concerns, but this seems like it severely hampers the ability to design the system. And it seems like a safety concern.
The recalled aircraft include the latest A320neo model, some of which are basically brand new. Why would they be using flight computers from before 2002? Why is an old report from 2008, relating to a completely different aircraft type (A330), relevant to the A320 issue today?
t0mas88
> Why would they be using flight computers from before 2002?
Because getting a new one certified is extremely expensive. And designing an aircraft with a new type certificate is unpopular with the airlines. Since pilots are locked into a single type at a time, a mixed fleet is less efficient.
Having a pilot switch type is very expensive, in the 50-100k range per pilot. And it comes with operational restrictions: you can't pair a newly trained (on type) captain with a newly trained first officer, so you need to manage all of this.
Reason077
I think you're confusing a type certificate (certifying the airworthiness of the aircraft type) with a type rating, which certifies the pilot is qualified to operate that type.
Significant internal hardware changes might indeed require re-certification, but it generally wouldn't mean that pilots need to re-qualify or get a new type rating.
Since the new versions of the same ADIRU have had EDAC since 2002, and they have been fitting the EDAC variant whenever an old unit came back for repairs, I don't think this is the reason. I think the reason is that they had three ADIRUs, and even if one got wonky, the algorithm on the ELAC flight computer would have had to make the correct decision. It did not make the correct decision. The ELAC is the one being updated in this case.
LiamPowell
> Why would they be using flight computers from before 2002?
Why would you assume they're not? I don't know about aircraft specifically, but there's plenty of hardware that uses components older than that. Microchip still makes 8051 clones 45 years after the 8051 was released.
K0balt
That’s just wild to think about. We should all strive to build solutions that plague our descendants with their persistent utility.
hylaride
From a pure safety point of view, it's easier to deal with older, but well-understood products, only updating them if it's an actual safety issue. The alternative is having to deal with many generations of tech, as well as permutations with other components, that could get infinitely complicated. On top of that, it's extremely time consuming and expensive to certify new components.
There's a reason the airlines and manufacturers hem and haw about new models until the economics overwhelmingly make it worthwhile, and even then it can still be a shitshow. The MCAS issue is case in point of how introducing new tech can cause unexpected issues (made worse by Boeing's internal culture).
The 787 Dreamliner is also a good example of how hard it is. By all accounts it is a success, but it had some serious teething problems, and there are still concerns about the long-term wear and tear of the composite materials (though a lot of its problems weren't necessarily the application of new tech, but Boeing's simultaneous desire to overcomplicate the manufacturing pipeline via outsourcing and spread-out manufacturing).
RealityVoid
The linked report details why the spike happened in the first place on the ADIRU (produced by Northrop Grumman). The recalled controller is the ELAC, which comes from Thales. The problem chain was that, despite the ADIRU spiking, the ELAC should not have reacted the way it did. So they are fixing it in the ELAC.
4ndrewl
The neo is not brand new; it's an incremental update to the A320. "neo" stands for New Engine Option.
rkomorn
They wrote "some of which are basically brand new", which is technically correct.
They didn't say the design was brand new.
Havoc
> Why would they be using flight computers from before 2002?
Guessing that using previously certified stuff is an advantage
RealityVoid
Because the problem isn't just this. It's also that the flight computer did not properly decide what to do when the data spiked.
Liftyee
What does EDAC mean here? I wasn't able to find a definition. My guess is "error detection and correction"?
Difference between it and ECC?
K0balt
EDAC is the concept; ECC is a family of algorithmic solutions in service of that concept. Specific ECC implementations are the engineering solutions that realize a particular form of ECC in specific devices, at the hardware or software level.
It's confusing because EDAC and ECC seem to mean the same thing, but ECC is a term primarily used for memory integrity, whereas EDAC is a system-level concept.
RealityVoid
That was my initial confusion as well. It means exactly what you guessed, "error detection and correction". The term is also spelled out in the report. I asked Claude about it (caveat emptor) and it said EDAC is the correct name for the circuitry and implementation itself, whereas ECC is the algorithm. Gemini said that EDAC is the general technique and ECC is one implementation variant. So, at this point, I'm not sure. They are used interchangeably (maybe wrongly so), and in this case we're referring to essentially the same thing, with maybe some small differences in the details. In my professional life I almost always referred to ECC. In the report they only use EDAC, so I tried to maintain consistency with the report and used EDAC as well.
Normal_gaussian
Large portions of this comment provide zero to negative value. You've quoted two LLMs and couched it in "caveat emptor" and "so I'm not sure". The rest of your comment then muses over this data you do not trust using generalities ("my professional life": are you a JS S/W eng? A chip design specialist at ARM? A security researcher?).
All of the value of your comment comes from the first sentence and the last two.
- EDAC is a term that encompasses anything used to detect and correct errors. While this almost always involves redundancy of some sort, _how_ it is done is unspecified.
- The term ECC used stand-alone refers specifically to adding redundancy to data in the form of an error correcting code. But it is not a single algorithm - there are many ECC / FEC codes, from hamming codes used on small chunks of data such as data stored in RAM, to block codes like reed-solomon more commonly used on file storage data.
- The term ECC memory could really just mean "EDAC" memory, but in practice, error correcting codes are _the_ way you'd do this from a cost perspective, so it works out. I don't think most systems would do triple redundancy on just the RAM -- at that point you'd run an independent microcontroller with the RAM to get higher-level TMR.
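To make the RAM case concrete, here is a minimal Hamming(7,4) encode/decode in Python: four data bits plus three parity bits, enough to locate and correct any single flipped bit in the stored word. This is a toy sketch for illustration, not the code any real memory controller uses.

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword.
    Parity bits sit at positions 1, 2 and 4 (1-indexed)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 3, 6, 7
    p4 = d2 ^ d3 ^ d4   # covers positions 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit and return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2, 3, 6, 7
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s4  # 1-indexed position of the error
    if syndrome:
        c = c[:]                     # don't mutate the caller's list
        c[syndrome - 1] ^= 1         # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]

# A single bit flip anywhere in the codeword is corrected:
word = hamming74_encode([1, 0, 1, 1])
word[5] ^= 1                         # simulate a radiation-induced flip
assert hamming74_decode(word) == [1, 0, 1, 1]
```

Two flips in the same word, however, defeat this code silently, which is why SECDED variants add one more parity bit to at least detect the double-error case.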
Yokolos
EDAC is a general term for an error detection and correction system. It can encompass ECC memory or other solutions.
Specifically, APPENDIX G: ELECTROMAGNETIC RADIATION of the above document describes in detail how radiation (possibly from the sun) can cause flipped bits and other errors in electronic circuits. We are also at the peak of the 11-year sunspot cycle...
TehCorwiz
An early revision of the Raspberry Pi 2 would crash if you hit it with a bright light like a camera flash. Specifically a xenon flash.
Yah, but that's a case of the package not being opaque enough.
russdill
Completely unrelated and due to a design failure by the rpi folks.
hughw
Is it really so unrelated? Isn't it a case where a similar phenomenon -- radiation impacting a computer calculation -- happened and it's one we can all relate to more easily, and reproduce if we cared to, than high altitude avionics? Not necessarily disputing but it just seems like a relatable case that helps me understand the issue better. If it's a radically different case somehow I'm interested to learn.
russdill
No, because it's a completely different kind of radiation.
Other mitigations include completely disabling all CPU caches (with a big performance hit) and continuously refreshing the ECC RAM in the background.
There are also a bunch of hardware mitigations to prevent "latch up" of the digital circuits.
rkagerer
In redundant systems like these, how do you avoid the voting circuit becoming a single point of failure?
Eg. I could understand if each subsystem had its own actuators and they were designed so any 3 could aerodynamically override the other 2, but I don't think that's how it works in practice.
simne
> how do you avoid the voting circuit becoming a single point of failure
They do not.
They just make the voting circuit much more reliable than the computing blocks.
For example, the computing blocks could be CMOS, but the voting circuit built from discrete components, which are simply too large to be sensitive to particle strikes.
Unfortunately, discrete components are more sensitive to total accumulated dose than nm-scale transistors, because their larger area gathers more events and suffers from diffusion.
Another example from the aviation world: many planes still have a mechanical connection from the controls to the control surfaces, because a mechanical linkage is considered ideally reliable.
Unfortunately, at least one catastrophe happened because one pilot blocked his control wheel and the other could not overcome the block.
BTW, weird fact: modern planes have no rod physically connecting the thrust levers to the engine, because the engine has its own computer that emulates the behavior of an old piston-engine carburetor. On Boeings the emulated lever has an electronic actuator, so it automatically moves to the position corresponding to the actual engine mode; Airbus doesn't have such an actuator.
I want to say: big planes especially (and planes in general) are a weird mix of very conservative inherited mechanisms and new technologies.
anonymousiam
Electronics in high-radiation environments benefit from a large feature size with regard to SEU reduction, but you're correct that the larger parts degrade faster in such environments, so they've created "rad-hard" components to mitigate that issue.
It's interesting to me that triple-voting wasn't as necessary on the older (rad-hard) processors. Every foundry in the world is steering toward CPUs with smaller and smaller feature sizes, because they are faster and consume less power, but the (very small) market for space-based processors wants large feature sizes. Because those aren't available anymore, TMR is the work-around.
Most modern space processing systems use a combination of rad-hard CPUs and TMR.
cpgxiii
In some cases, it is exactly the case of multiple independent actuators, such that the "voting" is effectively performed by the physical mechanism of the control surface.
In other cases all of the subsystems implement the comparison logic and "vote themselves out" if their outputs diverge from the others. A lot of aircraft control systems are structured more as primary/secondary/backup where there is a defined order of reversion in case of disagreement, rather than voting between equals.
But, more generally, it is very hard to eliminate all possible single points of failure in complex control systems, and there are many cases of previously unknown failure points appearing years or decades into service. Any sort of multi-drop shared data bus is very vulnerable to common-mode failures, and this is a big part of the switch to Ethernet-derived switched avionics systems (e.g. AFDX) from older multi-drop serial busses.
AlphaSite
Voting can be coordinated between the N CPUs rather than by an external arbiter (even making the arbiter redundant eventually requires the CPUs to decide what to do if they disagree, so they may as well handle it internally).
V__
Can't this be solved by having a high refresh rate? Even if the voting circuit gets hit, if it updates 60 times a second, an error won't really affect any mechanical parts since the next signal will quickly override it.
jasonwatkinspdx
My understanding is you're roughly right: the actuators will have their own microcontroller. It receives commands from the, say, 3 flight computers, then decides locally how to respond if they mismatch. I.e. with 2 out of 3 matching it may continue as commanded, but with only 1 out of 3 it may shift into a fail-safe strategy for whatever that actuator is doing.
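A toy sketch of that local decision logic in Python (the channel count, tolerance, and fail-safe value are all invented for illustration; real actuator controllers are far more involved):

```python
def actuator_command(cmds, tolerance=0.5, fail_safe=0.0):
    """Pick an actuator command from three flight-computer inputs.

    cmds: list of three commanded positions (None = channel lost).
    If at least two live channels agree within `tolerance`, act on
    their average; otherwise fall back to a fail-safe position.
    Names and thresholds are illustrative, not from any avionics spec.
    """
    live = [c for c in cmds if c is not None]
    # look for any pair of channels that agrees within tolerance
    for i in range(len(live)):
        for j in range(i + 1, len(live)):
            if abs(live[i] - live[j]) <= tolerance:
                return (live[i] + live[j]) / 2
    return fail_safe   # fewer than 2 agreeing channels: fail safe

assert actuator_command([10.0, 10.0, 3.0]) == 10.0   # 2-of-3 agreement wins
assert actuator_command([10.0, None, 3.0]) == 0.0    # no quorum: fail safe
```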
exe34
If the issue is radiation flipping bits, couldn't you make that part heavily shielded?
baq
Define ‘overly’. You can submerge it in a sphere of water, but that’s going to be expensive to launch.
exe34
I suspect a couple of millimeters of lead in the right place would do it. Cheaper to shield the voting mechanism than the whole thing.
aborsy
TMR and co. are basically repetition codes: the simplest and most performant, but least efficient, form of ECC.
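As a toy illustration of that tradeoff (plain Python, not any real TMR hardware): a 3x repetition code spends three stored bits per data bit to correct one flip per triplet, while a Hamming code gets the same single-error correction out of seven bits per four data bits.

```python
def rep3_encode(bits):
    """3x repetition code: store every bit three times
    (TMR applied to data rather than to whole processors)."""
    return [b for bit in bits for b in (bit, bit, bit)]

def rep3_decode(coded):
    """Majority-vote each triplet; corrects any single flip per triplet."""
    return [int(sum(coded[i:i + 3]) >= 2) for i in range(0, len(coded), 3)]

word = rep3_encode([1, 0, 1])   # 3 data bits -> 9 stored bits
word[4] ^= 1                    # flip one stored copy
assert rep3_decode(word) == [1, 0, 1]
```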
jayanmn
I am worried about a software fix for what looks like a hardware problem.
afavour
It could be as simple as storing multiple copies of the relevant data and adding a checksum, something like that.
A hardware fix is the ultimate solution, but it might be possible to paper over the issue with software.
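A minimal sketch of that idea in Python, assuming nothing about the actual system: keep several checksummed copies of the value and, on read, fall back to any copy whose checksum still verifies.

```python
import zlib

def protect(value: bytes):
    """Store three copies of `value`, each with its own CRC32.
    A purely illustrative software mitigation: it cannot fix the
    underlying hardware, only detect and outvote corrupted copies."""
    return [(zlib.crc32(value), bytearray(value)) for _ in range(3)]

def recover(copies):
    """Return the first copy whose checksum still matches, or None."""
    for crc, data in copies:
        if zlib.crc32(bytes(data)) == crc:
            return bytes(data)
    return None   # all three copies corrupted: signal a fault

stored = protect(b"AOA=2.1")
stored[0][1][0] ^= 0x40                 # a bit flip hits the first copy...
assert recover(stored) == b"AOA=2.1"    # ...but a clean copy survives
```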
themerone
Gracefully handling hardware faults is a software problem. The Air France Flight 447 crash was the result of bad software and bad hardware.
exidy
Although the pitot tubes on AF447 were due to be replaced with a type more resistant to icing, nonetheless there's no such thing as a 100% reliable pitot tube and there were procedures to follow in the event of unreliable airspeed indication. Had they been followed the accident would not have happened. Instead the co-pilot held back on his stick until the aircraft fell out of the sky.
I don't believe there was any issue identified with the software of the plane.
f1shy
And bad pilot training, if I recall correctly.
amelius
I suppose because they were not instructed to work around the software and hardware flaws.
Crashes caused by pilots failing to execute proper stall recovery procedures are surprisingly common, and similar accidents have happened before in aircraft with traditional control schemes, so I’m skeptical that there are any hardware changes that would have made much difference. The official report doesn’t identify the hardware or software as significant factors.
The moment to avoid the accident was probably the very first moment when Bonin entered a steep climb when the plane was already at 35,000 feet, only 2000 feet below the maximum altitude for its configuration. This was already a sufficiently insane thing to do that the other less senior pilot should have taken control, had CRM been functioning effectively. What actually happened is that both of the pilots in the cockpit at the start of the incident failed to identify that the plane was stalled despite the fact that (i) several stall warnings had sounded and (ii) the plane had climbed above its maximum altitude (where it would inevitably either stall or overspeed) and was now descending. It’s never very satisfying to blame pilots, but this was a monumental fuck up.
If the pilots genuinely disagree about control inputs there is not much that hardware or software can do to help. Even on aircraft with traditional mechanically linked control columns like the 737, the linkage will break if enough pressure is applied in opposite directions by each pilot (a protection against jamming).
julik
True. I would say, however, that every "concept" of airliner flight deck has its own gimmicks that can kill. The Airbus "dual input" is such a gimmick. Even though there was, for example, an AF accident with a 777 where there was hardware linkage between yokes and the two pilots were fighting... each other. Physically.
foldr
The official report doesn't identify the lack of sidestick linkage as a factor in the accident. Neither of the two pilots who were at the controls had any idea what was happening. Both pulled back on their sticks repeatedly right up to the moment of impact. The captain, who eventually realized (too late) that the plane was stalled, was standing behind them, and so would not have benefited from linked sticks.
I'm reminded of the Apollo moon landing where the computer was rapidly rebooting and being in an OK-ish state to continue to be useful almost immediately
CrossVR
It wasn't rebooting, it ran out of memory and started aborting lower-priority tasks. It was an excellent example of robust programming in the face of unexpected usage scenarios.
Software fixes are totally fine, since the probability of two redundant pairs failing within the time it takes to correct these errors has more zeros after the decimal point than there are atoms in the universe. (Each pilot has a redundant computer, and because there are two pilots there are two redundant pairs.)
willis936
It's a system problem. The system is being fixed.
asdefghyk
RE "...The environment they’re operating in isn’t that different from everyone else...." No, this is incorrect. High-flying aircraft are more likely to suffer increased radiation, worsened by the peak of the 11-year sunspot cycle. Such aircraft should be using radiation-hardened electronics, somewhat like spacecraft do...
jpollock
The design of the system is very interesting, particularly how it expects to handle errors.
In 90's Telco, you used to have a pair of systems and if they disagreed, they would decide which side was bad and disable it.
In modern cloud, you accept there are errors. There's another request in ~10+ms. You only look when the error rate becomes commercially important.
My understanding of spacecraft is that there would be 3 independent implementations and they would vote.
The plane has a matrix of sensors and systems, allowing faults to be bubbled up and bad elements disabled independently.
The ADIRU does compare values to detect failures (median of 3 sensors), but it could only detect errors lasting more than 1 s. The flight computer used the raw data, because the sensors aren't interchangeable (they won't have consistent readings in all flight modes)!
Very nifty.
One thing, they say "memorisation period", I don't think it's a memorisation period? From my reading of the algorithm, it should be more "last value retention period"? Or "sensor spurious fault reading delay"?
Section 2.1 A330/A340 flight control system design
"AOA computation logic"
"The algorithm did not effectively manage a specific situation where AOA 2 and AOA 3 on one side of the aircraft were temporarily incorrect and AOA 1 on the other side of the aircraft was correct, resulting in ADR 1 being rejected."
So, you've got a system where _two_ of the three sensors are bad, and you need to deal with it.
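A toy Python sketch of that median-plus-persistence scheme (the thresholds, the 1 s window, and all names are invented for illustration; this is not the actual ADIRU algorithm):

```python
from statistics import median

class AoaMonitor:
    """Toy sketch: median voting across three sensors plus a
    persistence window. A reading that disagrees with the median is
    ridden out using the last good value; only if the disagreement
    persists longer than `persist_s` is the sensor flagged (None)."""

    def __init__(self, persist_s=1.0, window=5.0):
        self.persist_s = persist_s
        self.window = window
        self.bad_since = [None, None, None]
        self.last_good = [None, None, None]

    def step(self, t, readings):
        med = median(readings)
        out = []
        for i, r in enumerate(readings):
            if abs(r - med) <= self.window:
                self.bad_since[i] = None          # sensor agrees: trust it
                self.last_good[i] = r
                out.append(r)
            else:
                if self.bad_since[i] is None:
                    self.bad_since[i] = t         # start of a disagreement
                if t - self.bad_since[i] <= self.persist_s:
                    out.append(self.last_good[i]) # short spike: hold last good value
                else:
                    out.append(None)              # persistent fault: flag sensor
        return out

m = AoaMonitor()
m.step(0.0, [2.0, 2.1, 2.0])                              # all agree; values retained
assert m.step(0.5, [2.0, 2.1, 40.0]) == [2.0, 2.1, 2.0]   # spike ridden out
assert m.step(2.0, [2.0, 2.1, 40.0]) == [2.0, 2.1, None]  # fault persisted > 1 s
```

Note how this reproduces the limitation in the quoted finding: a scheme like this copes with one bad sensor, but if two of the three spike together, the median itself moves and the one good sensor can end up rejected.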
Loudergood
I'm in awe of the fact that two sensors can be wrong AND agree with each other.
Nextgrid
Those being analog sensors measuring analog, physical things, they will never exactly agree with each other, so there's a plausibility window. As long as the fault keeps a sensor within that window, it will be considered valid.
UltraSane
It is just like having a range of values considered equal for floating-point numbers.
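In code terms, that plausibility window is just approximate equality with an absolute tolerance; a sketch in Python, with an arbitrary tolerance value:

```python
import math

def plausible(a, b, abs_tol=0.25):
    """Treat two analog sensor readings as 'agreeing' if they fall
    within an absolute tolerance (0.25 here is an arbitrary choice)."""
    return math.isclose(a, b, abs_tol=abs_tol, rel_tol=0.0)

assert plausible(2.10, 2.30)        # normal sensor-to-sensor scatter
assert not plausible(2.10, 40.0)    # a spike falls outside the window
```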
rubatuga
Space computers generally come in threes, with a hot spare.
sllabres
The Space Shuttle had five.
Four of them operated in a redundant set, with the fifth performing non-critical tasks, as described in [1].
The fifth was also programmed by a different contractor in a different programming language: #1-4 ran the Primary Avionics Software System (PASS), written by IBM in HAL/S, and #5 was programmed by a separate team at Rockwell International in assembly. [2]
Thanks for the link. This line in particular is concerning.
"This identified vulnerability could lead in the worst case scenario to an uncommanded elevator movement that may result in exceeding the aircraft structural capability."
isodev
Well, I think in the grand scheme of things (including on the ground), the range of safety faults that can be triggered by a simple bitflip at the wrong moment range from inconvenient to absolute disaster. So in that sense, I'm very happy that Airbus has managed to identify opportunities to improve their design to be even more resilient.
nickdothutton
I’d just like to point out that if you are in the computing industry long enough, you will get to see a few such incidents under different circumstances, not only in industries like aerospace. Mostly things like ECC save your a*; sometimes your software will be able to recognise a temporary spurious reading and disregard it because you had enough alternative checking logic; or, in the case of realtime and safety-critical systems, maybe your systems can even take a vote between them. I got caught out by (CPU cache line) bit flips in the 90s; months of pain trying to track it down. Some of you will know :-)
LadyCailin
We noticed this in our logs once! We service a huge amount of traffic, and as part of that, we log what is effectively an enum. We did a summarization of this field once, and noticed that there were a couple of “impossible” values being logged. One of my coworkers realized that the string that actually got logged was exactly one bit off from a valid string, and we came to the conclusion that we were probably seeing cosmic rays in action, either in our service, or in the logging service.
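For anyone wanting to check that kind of hypothesis, the test is cheap: measure the bit distance between the "impossible" value and each valid enum string. The enum names below are invented for illustration.

```python
def bit_distance(a: str, b: str) -> int:
    """Number of differing bits between two equal-length strings."""
    if len(a) != len(b):
        return -1
    return sum(bin(x ^ y).count("1") for x, y in zip(a.encode(), b.encode()))

VALID = {"STARTED", "STOPPED", "PAUSED"}   # hypothetical enum values

def explain(value):
    """Flag 'impossible' log values that sit exactly one bit flip
    away from a valid enum constant."""
    for v in VALID:
        if bit_distance(value, v) == 1:
            return f"{value!r} is one bit flip from {v!r}"
    return None

assert explain("STARTEF") == "'STARTEF' is one bit flip from 'STARTED'"
```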
tuetuopay
I had a similar story on my NAS, which got one btrfs path corrupted. I popped into the btrfs IRC, and one of the devs noticed the inconsistency was one bit flip away from the right value. Incredibly, they were able to give me the right commands to fix it! Credit where it is due: btrfs took the safe path and refused to touch the affected directory until it was fixed, and it has enough tooling to repair this.
I won’t blame cosmic rays; more likely it was dying RAM. The NAS now runs ECC memory.
Philip-J-Fry
I also saw a similar thing. I also naively pointed at "cosmic rays". It wasn't until someone found the actual bug that I realised how unlikely that was.
The actual bug was unsafe code somewhere else in the application corrupting the memory. The application worked fine, but the log message strings were being slightly corrupted. Just a random letter here and there being something it shouldn't be.
The question really should have been: if this was truly cosmic interference, why only this service, and why did the problem appear more than once across multiple versions of the application?
Cosmic rays are a great excuse for problems you don't yet understand. But in reality they are extremely rare, and it's like 99% a memory corruption bug caused by application code.
Theodores
Is that you, Julian?
I jest, but, once upon a time I worked with an infallible developer. When my projects crashed and burned, I would assume that it was my lack of competence and take that as my starting point. However, my colleague would assume that it was a stray neutrino that had flipped a bit to trigger the failure, even if it was a reproducible error.
He would then work backwards from 93 million miles away to blame the client, blame the linux kernel, blame the device drivers and finally, once all of that and the 'three letter agencies' were eliminated, perhaps consider the problem was between his keyboard and his chair.
In all fairness, he was a genius, and, regarding the A320 situation, he would have been spot on!
pyb
The aerospace industry has had countermeasures in place against bit flips for a long time, oftentimes thanks to redundancy
("internal supervision of the component at the origin of the failure;
- an automatic restart mechanism for this component as soon as the failure
is detected")
nolist_policy
The linked document is not related to this incident.
qaq
Has BOFH vibes
"It's friday, so I get into work early, before lunch even. The phone rings. Shit!
I turn the page on the excuse sheet. "SOLAR FLARES" stares out at me. I'd better read up on that..."
suprjami
Solar Flares was always my favourite result on the BoFH Excuse Generator.
Solar flares are the best excuse. We just have to wait it out.
supernova87a
I wonder how the incident was diagnosed? Does the FDR record low level errors that might've contributed to this? I thought that it only recorded certain input parameters and high-level flight metrics but I'm no expert.
If a radiation event caused some bit-flip, how would you realize that's what triggered an error? Or maybe the FDR does record when certain things go wrong? I'm thinking like, voting errors of the main flight computers?
Anyway, would be very interested to know!
yread
From a comment on avherald:
"Had the same problem with low power CMOS 3 transistor memory cells used in implantable defibrillators in the 1990s. Needed software detection and correction upgrade for implanted devices, and radiation hardening for new devices. Issue was confirmed to be caused by solar radiation by flying devices between Sydney and Buenos Aires over the south pole multiple times, accumulating a statistically significant different error rate to control sample in Sydney."
My armchair guess is that they had a new control pathway not properly participating in their integrity hand-off protocols, doing some kind of transformation outside of that protection.
I once saw some HW engineers go nuts trying to find out why a storage device had an error rate several orders of magnitude higher than the extremely low rate they expected (and was triggering data corruption errors). It turned out to be one extremely deep VHDL-based control area of an FPGA that didn't properly do integrity checking. You'd have to flip a bit at an incredibly precise point in time for the error to occur, but that's what was happening. When all the math was said and done, that FPGA control path integrity miss exactly accounted for the higher error rate.
Do they really need to ground the entire fleet for that? One incident across ten thousand planes flying for years. I'd think that giving airlines two months to fix it would be sufficient.
mrpippy
I don’t believe it’s been years, only the latest firmware version for the ELAC is affected. The fix is to downgrade (or replace hardware with a unit running earlier firmware)
jfoster
I wonder who eats the cost of this? I presume it's the airlines.
So the immediate cost to Airbus of grounding the fleet is quite low, whilst the downside of not grounding the fleet (risk of incident, lawsuits, reputation, etc.) could be substantial.
Havoc
Yeah, it should be the airlines.
It sounds like the fix is fairly quick, so probably not as expensive as the MAX's multi-month groundings.
I doubt anyone is going to sue. Repairs etc. are a part of life when owning aircraft. So as long as Airbus makes this happen fast and smooth, they’re probably OK.
miyuru
this is Airbus, not Boeing
kijin
I imagine it could help with Airbus marketing.
"We take proactive measures, whereas our competitor only takes action after multiple fatal crashes!"
probably_wrong
I know someone who is stranded in another continent thanks to this. Trust me, all the understanding I could have as a technical user has been offset by the MASSIVE pain in the ass that is rebooking an international flight. And non-technical users have heard "the plane will not travel because it requires a software update", which does not inspire confidence.
As far as I'm concerned it has not helped with their marketing.
lxgr
> "the plane will not travel because it requires a software update", which does not inspire confidence.
It actually inspires a lot of confidence in people who can at least think economically, if not technically:
Grounding thousands of planes is very expensive (passengers get cash compensation for that in the EU at least, and sometimes more than the ticket cost!), so doing it shows both that it’s probably a serious issue and that it’s being taken seriously.
probably_wrong
First, I feel the implication that "if you aren't reassured is only because you're dumb" is unwarranted.
With that out of the way: being expensive does not preclude shoddy work. At the end of the day, the only difference between "they are so concerned about safety that they are willing to lose millions [1]" and "their process must be so bad that they had no choice but to lose millions before their death trap cost them ten times that" is how good your prior perception of their airplanes is.
I think that, had this exact same issue happened to Boeing, we would be having a very different conversation. As the current top-comment suggests, it would probably be less "these things happen" and more "they cheapened out on the ECC".
[1] Disclaimer: I have no idea who loses money in this scenario, if it's also Airbus or if it's exclusively the airlines who bought them.
brabel
Imagine an airplane crashed in these 2 months. I bet you would join the chorus and blame them for gross negligence.
kijin
There's a huge difference between "manufacturer recommended updates, but airline waited until the last week to apply them" and "manufacturer didn't even acknowledge the issue" in terms of who the chorus is going to blame.
f1shy
I would personally not want to sit in those planes during those 2 months.
upcoming-sesame
nothing worse than rushing a fix in production - only to find out the fix has caused more damage than the original bug
pyb
I get the feeling that they are doing this partly for marketing purposes.
refulgentis
Yeah, because the alternative is knowing you might kill people due to a mundane engineering known issue.
Bud
From their viewpoint, you have to think about what happens if, after they became aware of this vulnerability, there was then a crash because they weren't prompt and aggressive enough in addressing it. That's the kind of thing that ruins your entire company forever.
Esophagus4
Yep - Boeing is still dealing with it years later.
(As they should - I’m still very mad at them.)
owenthejumper
A friend works at Jetblue. They are scrambling hard to do the updates.
jfoster
I've noticed that some carriers seem to be suggesting that there might be no impact to flights, but isn't this an immediate grounding for each aircraft until the update is made?
How is it possible that this wouldn't impact upon flight schedules?
icegreentea2
The grounding is for 6000 of 11000 A320 series. I believe it's some combination of software and hardware configuration that is at risk.
jfoster
Thank you; that makes sense. I had the impression it was the entire fleet.
julik
It depends on whether the ELAC is an LRU (line-replaceable unit, i.e. a box with ports that can be swapped at an airport) and whether a software update can be uploaded into a unit that is installed (not all aircraft have a "firmware update via cable or floppy", so to speak)
simne
If it's possible for this exact plane, the software update could be done as a routine procedure.
But as I hear it, carriers can buy planes in different configurations. Emirates or Lufthansa, for example, always buy planes with all features included, while a small Asian airline might buy a limited configuration (even without some safety indicators).
So for Emirates or Lufthansa it will take one empty flight to the home airport, but a small airline will need to fly to some large maintenance base (or the factory) and wait in a queue there (you can find images online of Boeing's factory field with lots of grounded 737 MAXes from a few years ago).
So for Emirates or Lufthansa there will be minimal impact on flights (much like swapping out a bus), but for small airlines things could be much worse.
arrel
N of 1, but I’m stuck in Phoenix overnight because our flight was delayed an hour and a half by Airbus maintenance and we missed our connection.
1970-01-01
They said the same thing at Toyota when the unintended accel problem was in the news, but never found a real world example. There are a lot more old Toyotas still on the road than Airbuses in the air, so distance to the sun makes all the difference here? I wonder if they only see issues when flying near the north pole?
albert_e
What if future aircraft had "OTA" updates to software... using this as an example of avoidable downtime.
OTA updates to cars makes me feel uneasy -- not knowing what new bugs it might introduce.
skx001
This video shows the A320 computer and how the computer cooling system works
Why would a CME disrupt a single brand and model of aircraft, when the entire planet is covered in computers that almost never have bitflip issues when a CME rolls through every few months?
squarefoot
I would guess barely enough cable shielding paired with long enough paths along the aircraft so that the signals there would be more likely affected by EM induced currents.
1970-01-01
I'm guessing EM shielding flaw or something electronic. See my comment on the Toyotas. It doesn't make sense from a raw probability perspective.
jakub_g
From newspaper reporting on this, they are rolling back a software update. I wonder what the original cause of the update was. How often is flight computer software updated, and why?
julik
This ELAC version is 100-something, and the A320 first flew around 1988. Why the updates - for example, there are updates to flight control law transitions, like after 1991 where the aircraft would limit flight control inputs during landing, thinking it would be preventing a stall - because it would not go into the flare law appropriately. See https://en.wikipedia.org/wiki/Iberia_Flight_1456
The cause could have also been an extra check introduced in one of the routines - which backfired in this particular failure scenario.
asdefghyk
Intense solar radiation will be at a peak, since we are NOW at the peak of the 11-year sunspot cycle.
( I would be interested to find out how they actually test these systems. What combinations of hardware hardening and software logic? Also, do they actually subject the system to radiation as part of the testing? )
Solar radiation like solar wind, or sunlight? They don’t say.
mr_toad
“Analysis of a recent event”
I presume they mean a Coronal Mass Ejection.
bparsons
There was a very large CME ten days ago. The NOAA scale had predicted a high likelihood of disruptions, and had specifically suggested that spacecraft and high altitude aircraft could be impacted.
FWIW the "industry sources say" line on the incident is that it occurred on 30 October[1], so further back than ten days ago but of course there may have been other CME incidents at that time.
The European Union Aviation Safety Agency (EASA) [2] instruction describes the characteristics of the incident but not the date.
I feel like the event was something that happened to a plane. That said, I wouldn't think sunlight would be penetrating to the chips running the plane.
dtagames
Gamma rays penetrate everything and have definitely been known to disrupt computer circuits.
fwip
Yes, which is why the solar flare scenario makes more sense.
awesome_dude
> The grounding of Airbus A320neo aircraft around the world can be traced back to an incident on a JetBlue flight operating a Cancun to New Jersey service on 30 October.
> At least 15 passengers were injured and taken to the hospital after a sudden drop in altitude on the flight from Mexico was forced to make an emergency landing in Florida, US aviation officials said at the time.
> The Thursday flight from Cancun was headed to Newark, New Jersey, when the altitude dropped, leading to the diversion to Tampa International Airport, the US Federal Aviation Administration said in a statement.
> Pilots reported “a flight control issue” and described injuries including a possible “laceration in the head,” according to air traffic audio recorded by LiveATC.net.
> Medical personnel met the passengers and crew on the ground at the airport. Between 15 and 20 people were taken to hospitals with non-life-threatening injuries, said Vivian Shedd, a spokesperson for Tampa Fire Rescue.
> Pablo Rojas, a Miami-based attorney who specialises in aviation law, said a “flight control issue” indicated that the aircraft wasn't responding to the pilots.
> At least 15 passengers were injured and taken to the hospital after a sudden drop in altitude on the flight from Mexico was forced to make an emergency landing in Florida, US aviation officials said at the time.
I’m surprised passengers are allowed to unbuckle for so much of each flight. You can get injured while buckled in, but that seems less common.
Curious what a sw change might have done in terms of resiliency. Maybe an incorrect memory setting, or some code path that wasn't calculating things redundantly?
rootusrootus
So it's not just Boeing that can screw up software on an airplane. I guess now I have to be a little afraid of all the airliners.
oofbey
This is in response to JetBlue flight 1230 from Cancun to Newark on October 30, 2025, where a cosmic ray of some kind flipped a bit and caused a dangerous situation. At the time there was a minor (G1) geomagnetic storm - meaning more cosmic rays than normal. The Planetary K-index was at 5. These are somewhat elevated numbers - enough to produce a visible aurora in Canada, but probably not even the northernmost US. But this level of space weather is also very common. We hit G1 or higher about once a week. That's the really damning part. If it had happened in a G4 or G5 storm, then the engineers might have responded "we can't fix everything", but this level of reliability is clearly unacceptable.
rishabhaiover
I hope Airbus only uses Honeywell or Collins in their newer planes.
jMyles
This is one of the rare cases where, IMO, it makes sense to use a modified title as you've done here.
jb1991(dead)
[flagged]
kappi
Following the Airbus A320 emergency airworthiness action, everyone will be talking about the ELAC (Elevator Aileron Computer) manufactured by Thales, which caused a sudden pitch-down without pilot input on JetBlue 1230 back in October.
I was traveling during this entire ordeal. My flight got delayed by 7 hours. Insane day, just now boarding my flight. American Airlines was in shambles today.
You can see much more data in the report:
https://www.atsb.gov.au/sites/default/files/media/3532398/ao...
Wasn't the philosophy back then to run multiple independent (and often even designed and manufactured by different teams) computers and run a quorum algorithm at a very high level?
Maybe ECC was seen as redundant in that model?
It was, and they did (well, same design, but they were independent). I quote from the report:
"To provide redundancy, the ADIRS included three air data inertial reference units (ADIRU 1, ADIRU 2, and ADIRU 3). Each was of the same design, provided the same information, and operated independently of the other two"
> Maybe ECC was seen as redundant in that model?
I personally would not eschew any level of redundancy when it can improve safety, even in remote cases. It seems at the moment of the module's creation, EDAC was not required, and it probably was quite more expensive. The new variant apparently has EDAC. They retrofitted all units with the newer variants whenever one broke down. Overall, ECC is an extra layer of protection. The _presumably_ bit flip would be plausible to blame for data spikes. But even so, the data spikes should not have caused the controls issue. The controls issue is a separate problem, and it's highly likely THAT is what they are going to address, in another compute unit.
"There was a limitation in the algorithm used by the A330/A340 flight control primary computers for processing angle of attack (AOA) data. This limitation meant that, in a very specific situation, multiple AOA spikes from only one of the three air data inertial reference units could result in a nose-down elevator command. [Significant safety issue]"
This is most likely what they will address. The other reports confirm that the fix will be in the ELAC produced by Thales, while the spike issue detailed in the report was in an ADIRU module produced by Northrop Grumman.
Jeez, it would drive me _up the wall_. Let's say I could somewhat justify the security concerns, but this seems like it severely hampers the ability to design the system. And it seems like a safety concern.
Because getting a new one certified is extremely expensive. And designing an aircraft with a new type certificate is unpopular with the airlines. Since pilots are locked into a single type at a time, a mixed fleet is less efficient.
Having a pilot switch type is very expensive, in the 50-100k per pilot range. And it comes with operational restrictions, you can't pair a newly trained (on type) captain with a newly trained first officer, so you need to manage all of this.
Significant internal hardware changes might indeed require re-certification, but it generally wouldn't mean that pilots need to re-qualify or get a new type rating.
Why would you assume they're not? I don't know about aircraft specifically, but there's plenty of hardware that uses components older than that. Microchip still makes 8051 clones 45 years after the 8051 was released.
There's a reason the airlines and manufacturers hem and haw about new models until the economics overwhelmingly make it worthwhile, and even then it can still be a shitshow. The MCAS issue is case in point of how introducing new tech can cause unexpected issues (made worse by Boeing's internal culture).
The 787 Dreamliner is also a good example of how hard it is. By all accounts it is a success, but it had some serious teething problems, and there are still concerns about the long-term wear and tear of the composite materials (though a lot of its problems weren't necessarily the application of new tech, but Boeing's simultaneous desire to overcomplicate the manufacturing pipeline via outsourcing and spreading out manufacturing).
They didn't say the design was brand new.
Guessing that using previously certified stuff is an advantage
Difference between it and ECC?
It’s confusing because EDAC and ECC seem to mean the same thing, but ECC is a term primarily used for memory integrity, whereas EDAC is a system-level concept.
All of the value of your comment comes from the first sentence and the last two.
- EDAC is a term that encompasses anything used to detect and correct errors. While this almost always involves redundancy of some sort, _how_ it is done is unspecified.
- The term ECC used stand-alone refers specifically to adding redundancy to data in the form of an error correcting code. But it is not a single algorithm - there are many ECC / FEC codes, from hamming codes used on small chunks of data such as data stored in RAM, to block codes like reed-solomon more commonly used on file storage data.
- The term ECC memory could really just mean "EDAC" memory, but in practice, error correcting codes are _the_ way you'd do this from a cost perspective, so it works out. I don't think most systems would do triple redundancy on just the RAM -- at that point you'd run an independent microcontroller with the RAM to get higher-level TMR.
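To make the ECC idea above concrete, here's a minimal Hamming(7,4) sketch in Python: 4 data bits become a 7-bit codeword, and any single flipped bit can be located by the syndrome and corrected. This is an illustrative toy, not what any avionics unit actually runs; real ECC memory uses wider SEC-DED codes over 64-bit words.

```python
# Hamming(7,4): 4 data bits -> 7-bit codeword. Parity bits sit at
# positions 1, 2, 4 (1-based); a nonzero syndrome gives the 1-based
# position of a single flipped bit.

def encode(d):
    """d: list of 4 data bits [d1, d2, d3, d4] -> 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]  # positions 1..7

def decode(c):
    """c: 7-bit codeword with at most one flipped bit -> 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # covers positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]  # covers positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]  # covers positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = clean, else error position
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1         # correct the flipped bit
    return [c[2], c[4], c[5], c[6]]  # recover d1..d4
```

Flipping any single bit of a codeword and decoding recovers the original 4 data bits, which is exactly the property that lets ECC memory survive a single-event upset per word.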
https://www.sciencedirect.com/science/article/abs/pii/S01419...
https://forums.raspberrypi.com/viewtopic.php?t=99167
https://forums.raspberrypi.com/viewtopic.php?f=28&t=99042
https://www.raspberrypi.com/news/xenon-death-flash-a-free-ph...
https://www.youtube.com/watch?v=wyptwlzRqaI
https://en.wikipedia.org/wiki/Single-event_upset
For manned spaceflight, NASA ups N from 3 to 5.
Other mitigations include completely disabling all CPU caches (with a big performance hit), and continuously refreshing the ECC RAM in background.
There are also a bunch of hardware mitigations to prevent "latch up" of the digital circuits.
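The background-refresh idea mentioned above is usually called ECC scrubbing. A toy model (purely illustrative; the `EccWord` class and its behavior are invented for this sketch): a single-error-correcting code can fix one flipped bit per word, so a periodic scrub pass reads every word, letting the ECC logic correct lone flips before a second flip lands in the same word and becomes uncorrectable.

```python
# Toy model of background ECC scrubbing. Each word tolerates one
# flipped bit (SEC); reading a word corrects a lone flip and the
# scrubber writes the corrected value back.

class EccWord:
    def __init__(self, value):
        self.value = value
        self.flipped = set()        # bit positions hit since last correction

    def upset(self, bit):
        """Model a single-event upset at the given bit position."""
        self.flipped.add(bit)

    def read(self):
        if len(self.flipped) == 1:  # single flip: ECC corrects silently
            self.flipped.clear()
            return self.value
        if self.flipped:            # multi-bit: detectable, not correctable
            raise RuntimeError("uncorrectable ECC error")
        return self.value

def scrub(memory):
    # Periodic pass: reading each word gives the ECC a chance to correct.
    for word in memory:
        word.read()

mem = [EccWord(0xA5) for _ in range(8)]
mem[3].upset(2)
scrub(mem)                          # lone flip corrected here
mem[3].upset(5)
assert mem[3].read() == 0xA5        # still fine, thanks to the scrub
```

Without the scrub pass, the two flips would accumulate in the same word and the second read would hit an uncorrectable error.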
Eg. I could understand if each subsystem had its own actuators and they were designed so any 3 could aerodynamically override the other 2, but I don't think that's how it works in practice.
They do not. You just make the voting circuit much more reliable than the computing blocks.
For example, the computing blocks could be CMOS, but the voting circuit made from discrete components, which are simply too large to be sensitive to particle strikes.
Unfortunately, discrete components are more sensitive to total accumulated dose than nm-scale transistors, because their larger area gathers more events and suffers from diffusion effects.
Another example from the aviation world: many planes still have a mechanical connection from the control column to the control surfaces, because a mechanical linkage is considered ideally reliable. Unfortunately, at least one catastrophe happened because one pilot blocked his column and the other could not overcome the blockage.
Odd fact, by the way: modern planes don't have a rod physically connected to the engine, because the engine has its own computer, which emulates the behaviour of an old carburetted piston engine. On Boeings the thrust lever has an electronic actuator, so it automatically moves to the position corresponding to the actual engine setting; Airbus doesn't have such an actuator.
What I want to say is that big planes especially (and planes overall) are a weird mix of very conservative inherited mechanisms and new technology.
https://en.wikipedia.org/wiki/Radiation_hardening
It's interesting to me that triple-voting wasn't as necessary on the older (rad-hard) processors. Every foundry in the world is steering toward CPUs with smaller and smaller feature sizes, because they are faster and consume less power, but the (very small) market for space-based processors wants large feature sizes. Because those aren't available anymore, TMR is the work-around.
https://en.wikipedia.org/wiki/IBM_RAD6000
https://en.wikipedia.org/wiki/RAD750
Most modern space processing systems use a combination of rad-hard CPUs and TMR.
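The TMR part of that combination can be sketched in a few lines: run the same computation on three lanes and take a bitwise majority, so a fault confined to any single lane is masked. This is the textbook voter, not any specific flight computer's implementation.

```python
# Triple modular redundancy: bitwise majority of three lanes. Each
# output bit is 1 iff at least two of the three input bits are 1, so a
# bit flip in any one lane is outvoted by the other two.

def majority3(a: int, b: int, c: int) -> int:
    return (a & b) | (a & c) | (b & c)

# One lane suffers a single-event upset; the voter masks it.
good = 0b1011_0010
flipped = good ^ (1 << 4)  # bit flip in lane 2
assert majority3(good, flipped, good) == good
```

Hardware voters do the same thing per signal, which is why the voter itself must be simpler and more reliable than the lanes it arbitrates.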
In other cases all of the subsystems implement the comparison logic and "vote themselves out" if their outputs diverge from the others. A lot of aircraft control systems are structured more as primary/secondary/backup where there is a defined order of reversion in case of disagreement, rather than voting between equals.
But, more generally, it is very hard to eliminate all possible single points of failure in complex control systems, and there are many cases of previously unknown failure points appearing years or decades into service. Any sort of multi-drop shared data bus is very vulnerable to common failures, and this is a big part of the switch to ethernet-derived switched avionics systems (e.g. Afdx) from older multi-drop serial busses.
Hardware fix is the ultimate solution but it might be possible to paper over with software.
I don't believe there was any issue identified with the software of the plane.
The moment to avoid the accident was probably the very first moment when Bonin entered a steep climb when the plane was already at 35,000 feet, only 2000 feet below the maximum altitude for its configuration. This was already a sufficiently insane thing to do that the other less senior pilot should have taken control, had CRM been functioning effectively. What actually happened is that both of the pilots in the cockpit at the start of the incident failed to identify that the plane was stalled despite the fact that (i) several stall warnings had sounded and (ii) the plane had climbed above its maximum altitude (where it would inevitably either stall or overspeed) and was now descending. It’s never very satisfying to blame pilots, but this was a monumental fuck up.
If the pilots genuinely disagree about control inputs there is not much that hardware or software can do to help. Even on aircraft with traditional mechanically linked control columns like the 737, the linkage will break if enough pressure is applied in opposite directions by each pilot (a protection against jamming).
There's a detailed breakdown here: https://admiralcloudberg.medium.com/the-long-way-down-the-cr...
In 90's Telco, you used to have a pair of systems and if they disagreed, they would decide which side was bad and disable it.
In modern cloud, you accept there are errors. There's another request in ~10+ms. You only look when the error rate becomes commercially important.
My understanding of spacecraft is that there would be 3 independent implementations and they would vote.
The plane has a matrix of sensors and systems, allowing faults to be bubbled up and bad elements disabled independently.
The ADIRU does compare values to detect failures (median of 3 sensors), but they could only detect errors that last >1s. The flight computer used the raw data - because the sensors aren't interchangeable (they won't have consistent readings in all flight modes)!
Very nifty.
One thing, they say "memorisation period", I don't think it's a memorisation period? From my reading of the algorithm, it should be more "last value retention period"? Or "sensor spurious fault reading delay"?
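The retention behaviour being described can be sketched as follows. This is a toy reconstruction of the idea, NOT the real FCPC algorithm: the function name, the 1-second window, and the threshold are all invented for illustration. If a new sample jumps implausibly, the previous value is held for up to the retention window; a spike shorter than the window is masked, but a spike that outlasts it (or a second spike spaced just outside it) gets through.

```python
# Toy "last value retention" filter (illustrative only; constants and
# names are invented). A sample that deviates implausibly from the last
# accepted value is replaced by that value, but only for RETENTION_S
# seconds, after which the new reading is accepted as genuine.

RETENTION_S = 1.0   # invented retention window, seconds
THRESHOLD = 5.0     # invented max plausible change, degrees AOA

def filter_aoa(samples):
    """samples: list of (time_s, value); returns the filtered values."""
    out = []
    last_t, last_v = None, None
    for t, v in samples:
        if last_v is not None and abs(v - last_v) > THRESHOLD:
            if t - last_t < RETENTION_S:
                out.append(last_v)       # hold last value: spike masked
                continue
        last_t, last_v = t, v            # accepted as a genuine reading
        out.append(v)
    return out

# A 0.5 s spike is masked...
assert filter_aoa([(0.0, 2.0), (0.5, 50.0), (1.0, 2.1)]) == [2.0, 2.0, 2.1]
# ...but a spike arriving just outside the window passes straight through.
assert filter_aoa([(0.0, 2.0), (1.2, 50.0)]) == [2.0, 50.0]
```

The second assertion is the interesting one: this style of filter only protects against transients shorter than its window, which is consistent with the report's observation that errors lasting more than about a second could not be rejected.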
Section 2.1 A330/A340 flight control system design "AOA computation logic"
https://www.atsb.gov.au/sites/default/files/media/3532398/ao...
"Preliminary A330/A340 FCPC algorithm"
"The algorithm did not effectively manage a specific situation where AOA 2 and AOA 3 on one side of the aircraft were temporarily incorrect and AOA 1 on the other side of the aircraft was correct, resulting in ADR 1 being rejected."
So, you've got a system where _two_ of the three sensors are bad, and you need to deal with it.
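The failure mode is easy to show with a two-line sketch (a generic illustration, not the actual FCPC logic): median voting assumes at most one faulty sensor, so when two sensors agree on a wrong value, the median follows the faulty pair and the single correct sensor is effectively outvoted.

```python
# Median-of-3 voting tolerates one bad sensor, but two bad sensors
# agreeing with each other outvote the one correct sensor.

def median3(a, b, c):
    return sorted([a, b, c])[1]

correct = 2.0   # AOA 1, reading correctly
faulty = 45.0   # AOA 2 and AOA 3, both spiking together

assert median3(faulty, correct, correct) == correct  # one bad: masked
assert median3(faulty, faulty, correct) == faulty    # two bad: wrong value wins
```

With only three sources there is no way for the voter alone to distinguish "two sensors broke the same way" from "one sensor broke", which is why the fix has to come from additional plausibility checks rather than from the vote itself.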
Four of them operated in a redundant set and the fifth performed non-critical tasks, as described in [1]. The fifth was also programmed by a different contractor in a different programming language: #1-4 ran the Primary Avionics Software System (PASS), programmed by IBM in HAL/S, and #5 was programmed by a different team at Rockwell International in assembly. [2]
[1] https://people.cs.rutgers.edu/~uli/cs673/papers/RedundancyMa...
[2] https://ntrs.nasa.gov/api/citations/20110014946/downloads/20...
https://avherald.com/h?article=52f1ffc3&opt=0
"This identified vulnerability could lead in the worst case scenario to an uncommanded elevator movement that may result in exceeding the aircraft structural capability."
I wouldn’t blame cosmic rays; dying RAM is more likely. The NAS now runs ECC memory.
The actual bug was unsafe code somewhere else in the application corrupting the memory. The application worked fine, but the log message strings were being slightly corrupted. Just a random letter here and there being something it shouldn't be.
The question really should have been, if this was truly cosmic interference, why only this service and why was the problem appearing more than once over multiple versions of the application?
Cosmic rays are a great excuse for problems you don't yet understand. But in reality they are extremely rare, and it's 99% likely to be a memory corruption bug caused by application code.
I jest, but, once upon a time I worked with an infallible developer. When my projects crashed and burned, I would assume that it was my lack of competence and take that as my starting point. However, my colleague would assume that it was a stray neutrino that had flipped a bit to trigger the failure, even if it was a reproducible error.
He would then work backwards from 93 million miles away to blame the client, blame the linux kernel, blame the device drivers and finally, once all of that and the 'three letter agencies' were eliminated, perhaps consider the problem was between his keyboard and his chair.
In all fairness, he was a genius, and, regarding the A320 situation, he would have been spot on!
Airbus/Thales's fix in this case appears to add more error checking, and to restart the misbehaving component. https://bea.aero/fileadmin/user_upload/BEA2024-0404-BEA2025-...
("internal supervision of the component at the origin of the failure; an automatic restart mechanism for this component as soon as the failure is detected" - translated from the French bulletin)
"I turn the page on the excuse sheet. 'SOLAR FLARES' stares out at me. I'd better read up on that..."
http://jefflane.org/bofh/bofh.pl
If a radiation event caused some bit-flip, how would you realize that's what triggered an error? Or maybe the FDR does record when certain things go wrong? I'm thinking like, voting errors of the main flight computers?
Anyway, would be very interested to know!
"Had the same problem with low power CMOS 3 transistor memory cells used in implantable defibrillators in the 1990s. Needed software detection and correction upgrade for implanted devices, and radiation hardening for new devices. Issue was confirmed to be caused by solar radiation by flying devices between Sydney and Buenos Aires over the south pole multiple times, accumulating a statistically significant different error rate to control sample in Sydney."
I once saw some HW engineers go nuts trying to find out why a storage device had an error rate several orders of magnitude higher than the extremely low error rate they expected (and triggering data corruption errors). It turned out to be one extremely deep VHDL-based control area of an FPGA that didn't properly do integrity checking. You'd have to flip a bit at an incredibly precise point in time for the error to occur, but that's what was happening. When all the math was said and done, that FPGA control path integrity miss exactly accounted for the higher error rate.
So the immediate cost to Airbus of grounding the fleet is quite low, whilst the downside of not grounding the fleet (risk of incident, lawsuits, reputation, etc.) could be substantial.
It sounds like the fix is fairly quick so probably not as expensive as the max multi month groundings
https://www.youtube.com/watch?v=HQuc_HhW6VA
Good related reading on this page: https://en.wikipedia.org/wiki/Radiation_hardening (includes a range of mitigations).
https://www.swpc.noaa.gov/noaa-scales-explanation
https://kauai.ccmc.gsfc.nasa.gov/CMEscoreboard/prediction/de...
[1] https://www.theguardian.com/business/2025/nov/28/airbus-issu...
[2] https://ad.easa.europa.eu/ad/2025-0268-E
https://www.stuff.co.nz/travel/360903363/what-happened-fligh...
https://docs.oracle.com/cd/E19095-01/sf4810.srvr/816-5053-10...
https://en.wikipedia.org/wiki/Cosmic_ray
So here’s everything you need to know about ELAC.
The ELAC System in the Airbus A320: The Brains Behind Pitch and Roll Control https://x.com/Turbinetraveler/status/1994498724513345637