First things first: It's just "Vulkan".

With respect to OpenGL: with the current de-facto standard toolkits Qt and GTK, you can't really get away from it for the time being, since at the moment they pull in some implementation of OpenGL as a runtime dependency; fingers crossed that this goes away soon.

Also, for that matter: although OpenGL is a legacy API, it is a well understood, well documented, and well tested environment. And as much as Vulkan makes certain things – well – not easier, but more straightforward, it isn't without issues. Heck, only recently Matías N. Goldberg found a long-standing issue with swapchain semaphore reuse that was finally resolved with VK_EXT_swapchain_maintenance1:

https://docs.vulkan.org/guide/latest/swapchain_semaphore_reu...
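As a minimal sketch of what the extension enables (this is not the code from the article; all handles are assumed to be created elsewhere, and VK_EXT_swapchain_maintenance1 must be enabled on the device): a VkSwapchainPresentFenceInfoEXT is chained into the present call, and the attached fence signals once the presentation engine no longer references the wait semaphore; that is exactly the information that was missing before.

    #include <vulkan/vulkan.h>

    /* Sketch only: requires VK_EXT_swapchain_maintenance1; all handles
     * are assumed to have been created elsewhere. */
    VkResult present_with_fence(VkQueue queue, VkSwapchainKHR swapchain,
                                uint32_t image_index,
                                VkSemaphore render_done,
                                VkFence present_fence)
    {
        /* Signaled once this present has retired, i.e. once the
         * presentation engine no longer references render_done. */
        VkSwapchainPresentFenceInfoEXT fence_info = {
            .sType = VK_STRUCTURE_TYPE_SWAPCHAIN_PRESENT_FENCE_INFO_EXT,
            .swapchainCount = 1,
            .pFences = &present_fence,
        };
        VkPresentInfoKHR present_info = {
            .sType = VK_STRUCTURE_TYPE_PRESENT_INFO_KHR,
            .pNext = &fence_info,
            .waitSemaphoreCount = 1,
            .pWaitSemaphores = &render_done,
            .swapchainCount = 1,
            .pSwapchains = &swapchain,
            .pImageIndices = &image_index,
        };
        return vkQueuePresentKHR(queue, &present_info);
    }

Waiting on present_fence (e.g. via vkWaitForFences) before recycling render_done closes the gap the article describes.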

With respect to "technical costs" in the context of Wayland: IMHO it's mostly pushing around responsibilities and moving goalposts. Granted, setting up an on-screen frame buffer to draw on incurs a lot less moving parts in Wayland compared to X11. However, it comes at the cost of multiplying rather basic graphics machinery that's required for drawing the most simple things into each and every client. Of course shared libraries will somewhat ease the requirements on .text and .rodata segments, which can be shared; but all the dynamic state that's generated on initialization ending up in .bss and .data is redundantly kept around. And then there's the issue that Wayland also forgoes things like efficient use of screen frame buffer memory that cuts all windows from the same region of memory and managing pixel ownership. The "every window gets its own wholly sized framebuffer" only worked well for that small time window (pun intended) in which screen resolutions weren't as big as they now are becoming commonplace.

"4k", i.e. 3840×2160 @ 10R10G10B2A resolution takes up about 64MiB in a double buffered configuration (256MiB in an 8k format), if there's only a single window on screen. And every additional full screen application (even if minimized) will add another 32 MiB (128 MiB) to that. Those gigabytes of GPU VRAM don't look as plenty from that view.

The old and dusted (but not busted) way of using a single framebuffer and cutting windows from it doesn't look so outdated anymore.


The issue is not the level of testing of the API itself; the issue is the level of testing new implementations of the API will have. Since this API is grotesquely and absurdly complex, expect hell for new implementations to achieve a good level of compatibility with anything legacy (a hellish QA nightmare).

