Read more about it here: https://wiki.minix3.org/doku.php?id=releases:3.2.0:developer...
> In Minix as a microkernel, device drivers are separate programs which send and receive messages to communicate with the other operating system components. Device drivers, like any other program, may contain bugs and could crash at any point in time. The Reincarnation Server will attempt to restart device drivers when it notices they have been abruptly killed by the kernel due to a crash, or in our case when they exit(2) unexpectedly. You can see the Reincarnation Server in the process list as rs, if you use the ps(1) command. The Reincarnation Server periodically sends keep-alive messages to each running device driver on the system, to ensure they are still responsive and not, for example, stuck in an infinite loop.
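To make the pattern concrete, here is a rough user-space sketch in Rust of the same supervision idea. This is not MINIX code (rs works through kernel notifications and IPC), and the `./driver-binary` path is made up for the example:

```rust
use std::process::{Child, Command};
use std::thread::sleep;
use std::time::Duration;

// Illustrative only: a user-space take on the "reincarnation" pattern.
// MINIX's rs uses kernel notifications and IPC keep-alive messages;
// here we simply poll the child process.
fn spawn_driver() -> std::io::Result<Child> {
    // "./driver-binary" is a made-up path for this sketch.
    Command::new("./driver-binary").spawn()
}

fn main() -> std::io::Result<()> {
    let mut driver = spawn_driver()?;
    loop {
        sleep(Duration::from_secs(1));
        // try_wait() returns Some(status) once the driver has exited.
        if let Some(status) = driver.try_wait()? {
            eprintln!("driver died ({status}); reincarnating it");
            driver = spawn_driver()?;
        }
        // A real reincarnation server would also send a keep-alive
        // message here and restart the driver if it stops answering.
    }
}
```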
The point is that when failures do occur, they can be isolated and recovered from without compromising system stability. In a monolithic kernel, a faulty driver can crash the entire system; in a microkernel design, it can be restarted independently, preserving uptime and isolating the fault domain.
Hardware glitches, transient race conditions, and unforeseen edge cases are unavoidable at scale. A microkernel architecture treats these as recoverable events rather than fatal ones.
This is conceptually similar to how the BEAM VM handles supervision in Erlang and Elixir; processes are cheap and disposable, and supervisors ensure that the system as a whole remains consistent even when individual components fail. The same reasoning applies in OS design: minimizing the blast radius of a failure is often more valuable than trying to prevent every possible fault.
In practice, the "driver resurrection" model makes sense in environments where high availability and fault isolation are critical, such as embedded systems, aerospace, and critical infrastructure. It's the same philosophy that systems like seL4 and QNX follow.
Do you understand now?
I was literally talking about Microsoft moving the compositor out of the kernel: it sat inside the kernel in their old Windows 9x architecture and was moved out of the kernel in Windows NT.
That literally every other kernel (OSS and commercial, Unix and not) does this separation suggests this is a generally accepted good security practice.
I’m not aware of any kernel research that alters the fundamental fact that in-kernel compositing is a large attack surface. The OS you are proposing isn’t even pure Rust - it’s got C, assembly, and unsafe Rust thrown in, which suggests there’s a non-trivial attack surface that isn’t mitigated architecturally. AFAIK capability security won’t help here with a monolithic design; you need a microkernel design to separate concerns and blast radii so that the capabilities mean anything, and so that an exploit in one piece of the kernel can’t become a launching pad for broader exploits. This also ignores that even safe Rust has potential for exploits, since there are compiler soundness bugs in the generated code - so even if you could write pure safe Rust (which you can’t at the OS level), a monolithic kernel would still present issues.
TLDR: claiming that there’s decades of OS research to improve on that existing kernels don’t take advantage of is fair. Claiming that a monolithic kernel doesn’t suffer architectural security challenges, particularly with respect to in-kernel compositing, is a bold claim that would be better supported by explaining how that research solves the security risks; launching an ad hominem attack against a different kernel family than the one I mentioned is just a weird defensive reaction.
There's no possible way that data which will only ever be read as raw pixel data, Z-tested, alpha-blended, and then copied to a framebuffer can compromise security or allow any unauthorized code to run at kernel privilege level. It's impossible. These memory regions are never mapped as executable, and we use CPU features to prevent the kernel from ever executing, or even accessing, pages that are mapped as userspace pages and not explicitly mapped as shared memory with the kernel, i.e. double-mapped into the higher half. So there's literally an MMU preventing in-kernel compositing from even possibly being a security issue.
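For illustration, here is roughly what those mapping permissions look like at the page-table level on x86_64. The constant names and helper functions below are my own, not taken from the OS being discussed:

```rust
// Illustrative x86_64 page-table flag bits for the kind of mapping
// described above; the constant names are hypothetical, not from any
// particular kernel's code.
const PTE_PRESENT: u64 = 1 << 0;  // page is mapped
const PTE_WRITABLE: u64 = 1 << 1; // CPU may write to it
const PTE_USER: u64 = 1 << 2;     // accessible from ring 3
const PTE_NX: u64 = 1 << 63;      // no-execute (requires EFER.NXE)

/// Flags for a pixel buffer as double-mapped into the kernel's
/// higher half: readable and writable data, never executable.
fn kernel_framebuffer_flags() -> u64 {
    PTE_PRESENT | PTE_WRITABLE | PTE_NX
}

/// Flags for the same pages as seen from the client process.
fn userspace_framebuffer_flags() -> u64 {
    PTE_PRESENT | PTE_WRITABLE | PTE_USER | PTE_NX
}

fn main() {
    // With SMEP/SMAP enabled, the kernel additionally faults if it
    // executes or (without an explicit override) touches PTE_USER
    // pages - the "CPU features" part of the argument above.
    println!("kernel view:    {:#018x}", kernel_framebuffer_flags());
    println!("userspace view: {:#018x}", userspace_framebuffer_flags());
}
```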
* if you try to do GPU compositing, things get more complicated. You mention you have no interest in GPU compositing, but that’s quite rare
* a lot of such exploits come from confusing the kernel about which buffer to use as input/output, and then all sorts of mayhem ensues (e.g. handing it an input buffer from a different process so the kernel renders another process’s crypto key to the screen, or arranging for it to clobber some kernel buffers) - see the sketch after this list
* stability - a bug in the compositor panics the entire machine instead of the compositor gracefully restarting.
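To illustrate the buffer-confusion point, here is a hedged Rust sketch of the ownership check a compositor would need before touching a client-supplied buffer. All of the types and names are hypothetical, not from any real kernel:

```rust
use std::collections::HashMap;

// Hypothetical types for illustration only.
type Pid = u32;
type BufferHandle = u64;

struct Buffer {
    owner: Pid,
    pixels: Vec<u32>,
}

struct Compositor {
    buffers: HashMap<BufferHandle, Buffer>,
}

impl Compositor {
    /// Look up a client-supplied handle, refusing handles that belong
    /// to a different process. Skipping this ownership check is the
    /// "confused about which buffer" bug class: a caller could name
    /// another process's buffer and have its contents drawn on screen.
    fn resolve(&self, caller: Pid, handle: BufferHandle) -> Option<&Buffer> {
        let buf = self.buffers.get(&handle)?;
        if buf.owner != caller {
            return None; // not yours
        }
        Some(buf)
    }
}

fn main() {
    let mut buffers = HashMap::new();
    buffers.insert(7, Buffer { owner: 100, pixels: vec![0; 64] });
    let comp = Compositor { buffers };

    assert!(comp.resolve(100, 7).is_some()); // owner: allowed
    assert!(comp.resolve(200, 7).is_none()); // other process: rejected
}
```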
But ultimately you’re the one claiming you’re the domain expert. You should be explaining to me why other OSes made the choices they did and why they’re no longer relevant.
(You don't have to recompile the kernel if you put all the device drivers in it; just keep the object files around and relink it.)