I was literally talking about Microsoft moving the compositor out of the kernel: it was in-kernel in their old Windows 9x architecture and sits outside the kernel in Windows NT.
That literally every other kernel (OSS and commercial, Unix and not) does this separation suggests it is a generally accepted good security practice.
I’m not aware of any kernel research that alters the fundamental fact that in-kernel compositing is a big security risk surface. And the OS you are proposing isn’t even pure Rust - it’s got C and assembly and unsafe Rust thrown in, which suggests there’s a non-trivial attack surface that isn’t mitigated architecturally. AFAIK capability security won’t help here with a monolithic design; you need a microkernel design to separate concerns and blast radii so that the capabilities mean anything, so that an exploit in one piece of the kernel can’t be a launching pad to broader exploits. This also ignores that even safe Rust has potential for exploits, since there are compiler soundness bugs in the generated code; so even if you could write pure safe Rust (which you can’t at the OS level), a monolithic kernel would still present issues.
TLDR: claiming that there’s decades of OS research to improve on that existing kernels don’t take advantage of is fair. Claiming that a monolithic kernel doesn’t suffer architectural security challenges, particularly with respect to in-kernel compositing, is a bold statement that would be better supported by explaining how that research solves the security risks. Launching an ad hominem attack against a different kernel family than the one I even mentioned is just a weird defensive reaction.
There's no possible way that data which will only ever be read as raw pixel data, Z-tested, alpha-blended, and then copied to a framebuffer can compromise security or allow any unauthorized code to run at kernel privilege level. It's impossible. These memory regions are never mapped as executable, and we use CPU features (SMEP/SMAP on x86) to prevent the kernel from ever executing, or even accessing, pages that are mapped as userspace pages and not explicitly mapped as shared memory with the kernel, i.e. double-mapped into the higher half. So there's literally an MMU preventing in-kernel compositing from even possibly being a security issue.
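As an entirely hypothetical sketch of the mapping policy being claimed here: the flag and type names below are illustrative, not the project's actual code, and real page-table manipulation would be arch-specific unsafe Rust. The point is just the invariant - pane buffers are data-only pages with an explicit higher-half alias for the kernel.

    // Hypothetical sketch: pane buffers are plain data pages. They are
    // never executable, and the kernel only touches them through an
    // explicit higher-half alias created at share time.
    const PRESENT: u64 = 1 << 0;
    const WRITABLE: u64 = 1 << 1;
    const USER: u64 = 1 << 2;        // reachable from userspace
    const NO_EXECUTE: u64 = 1 << 63; // NX bit on x86-64

    struct PaneMapping {
        user_vaddr: u64,   // where the app writes pixels
        kernel_alias: u64, // higher-half double mapping for the compositor
        flags: u64,
    }

    fn map_pane(user_vaddr: u64) -> PaneMapping {
        PaneMapping {
            user_vaddr,
            kernel_alias: 0xffff_8000_0000_0000 | user_vaddr,
            // Data-only: writable, user-visible, never executable.
            flags: PRESENT | WRITABLE | USER | NO_EXECUTE,
        }
    }

    fn main() {
        let m = map_pane(0x4000_0000);
        // The invariants the parent comment relies on:
        assert!(m.flags & NO_EXECUTE != 0);
        assert!(m.kernel_alias >= 0xffff_8000_0000_0000);
    }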
* if you try to do GPU compositing, things get more complicated. You mention you have no interest in GPU compositing, but that’s quite rare
* a lot of such exploits come from confusing the kernel about which buffer to use as input/output, and then all sorts of mayhem ensues (e.g. giving it an input buffer from a different process so the kernel renders to the screen a crypto key from another process, or arranging for it to clobber some kernel buffers) - see the sketch after this list
* stability - a bug in the compositor panics the entire machine instead of gracefully restarting the compositor.
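A toy Rust illustration of the buffer-confusion bullet above; every name here is hypothetical, not from any real kernel. If compositing is keyed off a raw buffer id alone, any process can name another process's buffer; tying the lookup to the (caller, id) pair is the usual fix.

    use std::collections::HashMap;

    type Pid = u32;
    type BufId = u32;

    struct Registry {
        owners: HashMap<BufId, Pid>, // which process created each buffer
    }

    impl Registry {
        // Confusable design: trusts the id alone, so any caller can
        // name any process's buffer.
        fn lookup_naive(&self, id: BufId) -> Option<BufId> {
            self.owners.contains_key(&id).then_some(id)
        }

        // Safer design: an id is only valid for its owning process.
        fn lookup_checked(&self, caller: Pid, id: BufId) -> Option<BufId> {
            (self.owners.get(&id) == Some(&caller)).then_some(id)
        }
    }

    fn main() {
        let mut r = Registry { owners: HashMap::new() };
        r.owners.insert(7, 100); // buffer 7 belongs to pid 100
        assert!(r.lookup_naive(7).is_some()); // any caller "wins"
        assert!(r.lookup_checked(200, 7).is_none()); // cross-process use rejected
        assert!(r.lookup_checked(100, 7).is_some()); // owner still works
    }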
But ultimately you’re the one claiming you’re the domain expert. You should be explaining to me why other OSes made the choices they did and why they’re no longer relevant.
(You don't have to recompile the kernel if you put all the device drivers in it; just keep the object files around and relink it.)
The plan is to hand out panes, which are just memory buffers to which applications write pixel data as they would to a framebuffer. When the kernel goes to actually refresh the display, it composites any visible panes onto the back buffer and then swaps buffers. There is nothing unsafe about that, any more so than any other use of shared memory regions between the kernel and userspace, and those are quite prolific in existing popular OSes.
If anything, the Unix display server nonsense is overly convoluted and far worse security-wise.
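A minimal CPU-compositing sketch of the pane model just described; the names and types are mine, not the actual kernel's, and a real implementation would blend over the shared mappings rather than heap Vecs.

    // Toy back-to-front compositor: sort visible panes by z, alpha-blend
    // each onto the back buffer, then the caller would flip buffers.
    struct Pane {
        x: usize, y: usize, w: usize, h: usize,
        z: i32,
        visible: bool,
        pixels: Vec<u32>, // 0xAARRGGBB, written by the owning app
    }

    fn blend(dst: u32, src: u32) -> u32 {
        let a = (src >> 24) & 0xff;
        let inv = 255 - a;
        let ch = |s: u32| (((src >> s) & 0xff) * a + ((dst >> s) & 0xff) * inv) / 255;
        0xff00_0000 | (ch(16) << 16) | (ch(8) << 8) | ch(0)
    }

    fn composite(panes: &mut [Pane], back: &mut [u32], width: usize, height: usize) {
        panes.sort_by_key(|p| p.z); // back to front
        for p in panes.iter().filter(|p| p.visible) {
            for row in 0..p.h {
                let dy = p.y + row;
                if dy >= height { break; }
                for col in 0..p.w {
                    let dx = p.x + col;
                    if dx >= width { continue; }
                    let i = dy * width + dx;
                    back[i] = blend(back[i], p.pixels[row * p.w + col]);
                }
            }
        }
        // The buffer swap would happen here in the real refresh path.
    }

    fn main() {
        let mut panes = vec![Pane { x: 1, y: 1, w: 2, h: 2, z: 0, visible: true,
                                    pixels: vec![0x80ff_0000; 4] }]; // half-alpha red
        let (w, h) = (4, 4);
        let mut back = vec![0xff00_0000u32; w * h]; // opaque black
        composite(&mut panes, &mut back, w, h);
        assert_ne!(back[w + 1], 0xff00_0000); // the pane was blended in
    }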
From there, each application can draw its own GUI and respond to events that happen in its panes, like a mouse-button-down event while the cursor is at some coordinates, using event capabilities. What any event or the contents of a pane mean to the application doesn't matter to the OS; the application has full control over all of its resources and its execution environment, with the one exception that it isn't allowed to do anything that could harm any part of the system outside its own process abstraction. That's my rationale for why the display system and input events should work that way.

Plus it helps latency to keep all of that in the kernel, especially since we're doing all the rendering on the CPU and are thus bottlenecked by the CPU's memory bus, which has far lower throughput than a discrete GPU's. But that's the way it has to be, since there are basically no GPUs out there with full publicly available hardware documentation, as far as I know, and believe me, I've looked far and wide and asked around. Eventually I'll want to port Mesa, because redoing all the work to develop something that complex and huge just isn't pragmatic.
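A rough sketch of what that per-pane event model could look like from the application side; PaneEvent, EventCap, and next are all hypothetical names, since no API is spelled out here. The kernel delivers raw events; the meaning is entirely up to the app.

    // Events the kernel might deliver for one pane; meaning is up to the app.
    enum PaneEvent {
        MouseDown { button: u8, x: u32, y: u32 },
        KeyDown { code: u32 },
        Resize { w: u32, h: u32 },
    }

    // Capability handle granting receive rights on a single pane's event queue.
    struct EventCap;

    impl EventCap {
        fn next(&self) -> Option<PaneEvent> {
            // A real implementation would block on the kernel's queue;
            // returning None here just ends the toy loop.
            None
        }
    }

    fn run(events: EventCap) {
        while let Some(ev) = events.next() {
            match ev {
                PaneEvent::MouseDown { button: 1, x, y } => println!("click at ({x}, {y})"),
                PaneEvent::Resize { w, h } => println!("relayout to {w}x{h}"),
                _ => {} // the OS attaches no meaning; the app decides
            }
        }
    }

    fn main() {
        run(EventCap);
    }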
It's also the only approach on systems where people advocate statically linking everything, which is yet another reason why dynamic loading became a thing.
Most of these systems came with utilities to partially automate the process and some kind of config file to drive it; NetWare 2.x even had TUI menuing apps (ELSGEN, NETGEN) to assist with it.
The sysadmin scripts would even relink just to change the IP address of the NIC! (I no longer remember the details, but I think I eventually dug under the hood and figured out how you could edit a couple of files and merely reboot without actually relinking a new kernel. But if you only followed the normal directions in the manual, you would use scoadmin, and it would relink and reboot.) And this is not because SCO sux. Sure they did, but that was actually more or less normal and not part of why they sucked.
Change anything about which drives are connected to which SCSI hosts on which SCSI IDs? Fuggeddabouddit. Not only relink and reboot, but also pray, and have a bootable floppy and a cheat sheet of boot: parameters ready.
Incremental compilation means you don't have to recompile everything: just compile the new driver as a library, relink the kernel, and you're done. Keep the prior n working kernels around in case the new one doesn't work.
The intro page is currently useless.
I'm personally not at all convinced that having a scheme multiplexer in front is a good thing for a namespace like the one a kernel would manage. It's just not really any different from having top-level /foo and /bar, and it introduces a bunch of special cases. Windows drive letters suck for a reason.
You could roughly emulate it on Unix by assuming every filename starting with /scheme/bar/ is a bar-type (special) file, but nothing stops you creating (and you'd necessarily have) 'files' of any type outside that prefix. In Redox, everything has that scheme prefix describing its type (and if it's omitted, it's implicitly /scheme/file/).
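To make the comparison concrete, a small sketch of how the prefix reads in practice, assuming the /scheme/ convention described above; the exact paths are illustrative, not taken from a verified Redox install.

    use std::fs::File;

    fn main() {
        // Explicit scheme prefix: the path itself names which handler serves it.
        let _explicit = File::open("/scheme/file/etc/hostname");
        // No prefix: per the comment above, implicitly the file scheme,
        // so this names the same resource as the line before.
        let _implicit = File::open("/etc/hostname");
        // On Unix there is no such typing: /scheme/bar/x would just be an
        // ordinary path unless something mounts or interprets it specially.
    }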
> URIs as namespace paths allowing access to system resources both locally and on the network without mounting or unmounting anything
This is such an attractive idea, and I'm gonna give it a try just because I want something with this idea to succeed. Seems the project has many other great ideas too, like the modular kernel where implementations can be switched out. Gonna be interesting to see where it goes! Good luck author/team :)
Edit: This part scares me a bit though: "Graphics Stack: compositing in-kernel". I'm not sure if it scares me only because I don't understand those parts deeply enough. Isn't this potentially a huge security hole? Maybe the capability-based security model prevents it from being a big issue; again, I'm not sure, because I don't think I understand it deeply enough or as a whole.