> I've noticed that a lot of projects that do support multiple architectures, particularly obscure ones, tend to find oddball edge cases more easily than those that don't. For example, not assuming the endianness of the CPU arch forces you to get network code to cooperate well.
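(To make the endianness point concrete -- this is a rough sketch of my own, not something from the parent: code that assembles wire-format values byte by byte behaves the same on any host, while a raw cast quietly bakes in the host's byte order.)

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Portable: assemble a big-endian (network order) 32-bit value explicitly,
 * so the result does not depend on the host CPU's byte order. */
static uint32_t read_be32(const uint8_t *p)
{
    return ((uint32_t)p[0] << 24) |
           ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |
           ((uint32_t)p[3]);
}

int main(void)
{
    /* A length field of 16 (0x00000010) as it arrives on the wire. */
    const uint8_t wire[4] = { 0x00, 0x00, 0x00, 0x10 };

    uint32_t portable = read_be32(wire);

    /* Non-portable: reinterprets the bytes in host order, so the value
     * differs between little-endian and big-endian machines. */
    uint32_t naive;
    memcpy(&naive, wire, sizeof(naive));

    printf("portable: %u, naive (host-order): %u\n",
           (unsigned)portable, (unsigned)naive);
    return 0;
}
```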
It's a balancing act, like anything is. Do you accept or reject this patch from a contributor? Do you take on that new feature someone wants, or fix a bug instead? Is it better off designed or done a different way? Is this kind of work maintainable, e.g. 3 years from now when I'm not working on it? Can we reliably continue to support these systems, or do we know someone who will care for them? Can we reasonably understand the entire system and evolve it safely? Are there more moving parts than are absolutely required? Does that use case even matter, everything else aside? The last one is very important.
Of course there's something to be said for software systems that maintain high-quality code while supporting a diverse set of use cases and environments, like you mention.
But -- I'd say that result is likely more a function of focused design than of "trying to target a lot of architectures". Maybe targeting lots of architectures was a design goal, but it's the designing that matters. No amount of portability can fix a fundamental misunderstanding of what you're trying to implement, and that's where a lot of problems creep in.
In this case, they have a very clear view of what they want, and a lot of what QEMU does is simply irrelevant to it. Portability may be part of QEMU's design, but for this purpose it amounts to a lot of unnecessary moving parts. With that knowledge you can cut the scope down dramatically -- and from a security POV, removing that surface area is very often a win.
(Also, I'm definitely not saying QEMU is bloated or something, either. It's great software and I use it every day, just to be clear.)