
datenwolf
1,417 karma

  1. > There are still some people who need to run 32-bit applications that cannot be updated; the solution he has been pushing people toward is to run a 32-bit user space on a 64-bit kernel. This is a good solution for memory-constrained systems; switching to 32-bit halves the memory usage of the system. Since, on most systems, almost all memory is used by user space, running a 64-bit kernel has a relatively small cost. Please, he asked, do not run 32-bit kernels on 64-bit processors.

    Ohhh yes!

    So, a couple of weeks ago I came across a discussion where some distro (I don't remember which one) contemplated removing 32-bit user space support, suggesting that users simply run a VM with a 32-bit Linux instead. It was a stupid suggestion then, and this statement is a nice authoritative answer from the kernel side to point such suggestions at.

  2. First things first: It's just "Vulkan".

    With respect to OpenGL: with the current de-facto standard toolkits Qt and GTK you can't really get away from it for the time being, since at the moment they pull in some implementation of OpenGL as a runtime dependency; fingers crossed that this goes away soon.

    Also, for that matter: although OpenGL is a legacy API, it's a well understood, well documented, and well tested environment. And as much as Vulkan makes certain things – well – not easier, but more straightforward, it isn't without issues. Heck, only recently Matías N. Goldberg found a long-standing issue with swapchain reuse that was finally resolved with VK_EXT_swapchain_maintenance1:

    https://docs.vulkan.org/guide/latest/swapchain_semaphore_reu...

    With respect to "technical costs" in the context of Wayland: IMHO it's mostly pushing responsibilities around and moving goalposts. Granted, setting up an on-screen framebuffer to draw on involves far fewer moving parts in Wayland than in X11. However, it comes at the cost of duplicating rather basic graphics machinery – required for drawing even the most simple things – into each and every client. Of course shared libraries somewhat ease the requirements on .text and .rodata segments, which can be shared; but all the dynamic state generated at initialization, ending up in .bss and .data, is kept around redundantly. And then there's the issue that Wayland also forgoes things like the efficient use of screen framebuffer memory that comes from cutting all windows out of the same region of memory and managing pixel ownership. The "every window gets its own fully sized framebuffer" approach only worked well for that small time window (pun intended) in which screen resolutions weren't as big as those now becoming commonplace.

    "4k", i.e. 3840×2160 at an R10G10B10A2 (4 bytes per pixel) resolution, takes up about 64 MiB in a double-buffered configuration (256 MiB at 8k) if there's only a single window on screen. And every additional fullscreen application (even if minimized) adds another 32 MiB (128 MiB at 8k) to that. Those gigabytes of GPU VRAM don't look as plentiful from that view.

    The old and dusted (but not busted) way of using a single frame buffer and cutting windows from that doesn't look as outdated anymore.
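
    A quick back-of-the-envelope check of the numbers above, as a Python sketch. The function name and the 4-bytes-per-pixel figure for R10G10B10A2 are my own assumptions for illustration, not any compositor's actual allocation strategy:

    ```python
    # Per-window swapchain memory for a 4-byte-per-pixel format
    # (e.g. R10G10B10A2), double buffered by default.

    def window_fb_mib(width, height, bytes_per_pixel=4, buffers=2):
        """Memory one window's framebuffers occupy, in MiB."""
        return width * height * bytes_per_pixel * buffers / 2**20

    uhd = window_fb_mib(3840, 2160)    # "4k"
    uhd8k = window_fb_mib(7680, 4320)  # "8k"

    print(f"4k double buffered: {uhd:.2f} MiB")    # 63.28 MiB, ~64 MiB
    print(f"8k double buffered: {uhd8k:.2f} MiB")  # 253.13 MiB, ~256 MiB
    ```

    Each additional window contributes on the same order, which is where the "gigabytes of VRAM don't look as plenty" observation comes from.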

  3. From case witness testimony https://storage.courtlistener.com/recap/gov.uscourts.nysd.63...

        12. The forensic analysis also revealed that Elez sent an email with a
        spreadsheet containing PII to two United States General Services Administration
        officials. The PII detailed a name, a transaction type, and an amount of money.
  4. > Some (many?) NASA engineers are at the high end of the band and are advocating a return on Dragon instead. Boeing is obviously at the low end of the band and thinks it is a low risk.

    To me this gives a strong impression of history rhyming with itself. Back in the early 1980s, NASA engineers "close to the hardware" raised warning after warning about reliability issues with the shuttles, and were ultimately overruled by management, leading to the Challenger disaster.

    Then in 2003, engineers again raised warnings, this time about heat shield integrity being compromised by impacts with external tank insulation material. Again, management overruled them with the same bad reasoning: if it did not cause problems in the past, it will not in the future. So instead of the issue being addressed with preventative action, Columbia was lost on reentry.

    Fool me once …, fool me twice …; I really hope the engineers put their foot down on this one and clearly and decisively override any mandate handed down from management.

    Exactly. If we take the backdoor via liblzma as a template, this could be a ploy to hook/detour both fprintf and strerror in a similar way, and get it to diffuse into systems whose package managers rely on libarchive.

    Once the trap is in place, deploy a crafted package file that appears invalid at the surface level and thereby triggers it. In that moment, fetch the payload from the (already opened) archive file descriptor and execute it, but also patch the internal state of libarchive so that it processes the rest of the archive file as if nothing had happened, with the desired outcome also appearing on the system.

  6. What annoys me the most about Reddit is that it's essentially just a rehash of Usenet with marginally more moderation features, up/down-voting, and custom CSS for each group/subreddit. That's about it. If I were to attempt a Reddit "clone", I'd merely spin up a couple of ISC InterNetNews NNTP servers and slap a web application in front of them.

    The only thing that Reddit did – on the interaction level – was replace the Usenet experience with a visually more appealing and easier-to-access web frontend. In that regard it's a continuation of Eternal September, with the side effect of draining the user pool from Usenet, leading to the shutdown of many Usenet servers worldwide because "nobody is using it anymore".

  7. Strictly speaking, those fat cookie banners are unlawful under the GDPR; the GDPR mandates that a site must not behave functionally differently depending on whether consent was given, as long as the functionality in question is not tied to a specific user.

    Unfortunately there are only so many GDPR compliance officers around, and they have bigger fish to fry.

  8. Fun fact: Rust's development saw significant traction and support from Mozilla in the first place so that it could be used for the development of Servo. Or in other words, it was the eventual development of Servo that motivated Rust's growth from pet project into what it is today.
  9. Does anyone know the size of Bandcamp's catalogue? I'm just wondering what the hardware (storage) costs would be for prospective competitors who intend to scoop up Bandcamp's customers (artists and listeners). Audio is a lot less demanding than video, and since there's no DRM it's basically just static files with some access control.
  10. A memory-safe language does not imply the use of a VM.

    It can just as well mean that the compiler will attempt to prove that memory is never accessed out of bounds, only through well-defined ownership, and only within the lifetime of the underlying object. Which is what Rust does, for example.

    Of course it can also mean that the compiler will additionally add internal failure checks and safeguards at critical places. (Rust does not do this, but it would be nice to have in systems where one might worry about in-register bit flips – high-radiation environments, like X-ray scanners – i.e. faults not caught by, say, ECC memory.)

  11. If you really, really want to ruin an X11 session, I got you covered:

    https://git.datenwolf.net/codesamples/tree/samples/X11/x11at...

  12. > If I define a function that always returns 1…

    then its Kolmogorov complexity is also extremely low.

    Look, if you have a good enough hash function, its output should be near the Shannon limit and hardly compressible, ideally containing as much entropy as it has bits. You can feed in just a single bit or the entire knowledge of humanity; in the end you're going to get a fixed number of bits out, with entropy near that, and if you throw any form of lossless compression at it, it will hardly compress.

    But quantum mechanics tells us that information cannot be destroyed. So when you feed it more bits than it emits, it's mostly the entropy of the information you fed in that you get out of the hash. But if you feed it just a single bit, the additional entropy comes from the computational process.

    I know, this is now getting really philosophical, but here's something to ponder on: How would you implement a hash function for a reversible computing architecture?
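
    The incompressibility point can be made concrete with a small sketch. SHA-256 stands in for "a good enough hash function" and zlib for the lossless compressor (both stdlib; the block count is an arbitrary choice of mine):

    ```python
    import hashlib
    import zlib

    # Concatenated SHA-256 digests of a counter: output that should sit near
    # the Shannon limit, i.e. close to one bit of entropy per stored bit.
    hashed = b"".join(hashlib.sha256(i.to_bytes(8, "big")).digest()
                      for i in range(1024))
    # Same length, but almost no entropy.
    zeros = b"\x00" * len(hashed)

    ratio_hashed = len(zlib.compress(hashed, 9)) / len(hashed)
    ratio_zeros = len(zlib.compress(zeros, 9)) / len(zeros)

    print(ratio_hashed)  # about 1.0: the hash output barely compresses at all
    print(ratio_zeros)   # tiny: the constant input collapses to almost nothing
    ```

    The compressor can do nothing with the hashed stream precisely because nearly every bit carries entropy, while the all-zeros input of the same length collapses to a few dozen bytes.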

  13. Boltzmann. But it doesn't really matter; it's the same thing. Yes, I know that a sequence of, say, 1000 identical bits looks like it's got just about 10 bits of entropy after simple RLE compression. But you must not forget the entropy that was also generated in the computation itself and subsequently dissipated into the universe.
  14. That only works if you know for sure that the low bits are constant. Otherwise you may run into the issue that, due to the unsteady rate of RDTSC, two processes/threads that get preemptively descheduled between reading the HPTC and the RDTSC may again end up with colliding timestamps. And if you take the derivative between timestamps taken in succession, you might even find it to be non-monotonic in places.

    The combination of multiple counters incremented by individually unsteady clocks used to be a source of pseudo-random scrambler sequences; these days we prefer LFSRs, but overall this is an area that can get weird.

    Hence my recommendation: just throw xxHash32 at the concatenation of the HPTC's low bits and the CPU clock cycle counter, and forgo any pretense of monotonicity in the low bits (because very likely you don't have it anyway).
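
    A sketch of that recommendation; xxHash32 isn't in Python's standard library, so blake2b truncated to 4 bytes stands in for it here, and the two counter values are made-up sample inputs:

    ```python
    import hashlib

    def stamp32(hptc_low: int, cycles: int) -> int:
        """Hash the concatenation of the HPTC's low bits and the CPU cycle
        counter down to 32 bits (blake2b stands in for xxHash32 here)."""
        data = (hptc_low & 0xFFFFFFFF).to_bytes(4, "little") \
             + (cycles & 0xFFFFFFFFFFFFFFFF).to_bytes(8, "little")
        return int.from_bytes(hashlib.blake2b(data, digest_size=4).digest(),
                              "little")

    # Deterministic for identical inputs; a single tick of difference in
    # either counter scrambles the whole 32-bit result, so by design there
    # is no monotonicity left in the output.
    a = stamp32(0x1234, 98765)
    b = stamp32(0x1234, 98766)
    ```

    Collisions are still possible (it's only 32 bits), but their probability no longer depends on the two clocks drifting in some unlucky correlated way.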

  15. You don't know precisely at which frequency the cycle counter runs. Depending on the system load it might run either faster or slower than the lowest bits of the HPTC. For what it's worth, this part is more or less nondeterministic, so the sane thing to do is spread the information out as much as possible (maximize entropy) in order to minimize the probability of collisions.
  16. 60 bits. Yes, I know, you can compress that down very well. But consider that the entropy in a computation involves not just the bits you store, but also the bits that the processor touches and eventually dissipates as heat into the universe.
  17. I did update my program, now it measures the ratio.
  18. Actually, hashes do create entropy (every computation creates entropy in some form or another). What's the entropy of a 4-bit number? What's the entropy of a 4-bit number hashed by a 64-bit hash function? The act of computation does in fact create entropy, per the 2nd law of thermodynamics, and a part of it shows up in the hash.
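
    The 4-bit case can be checked directly. The sketch below uses an 8-byte blake2b digest as an arbitrary stand-in for "a 64-bit hash function":

    ```python
    import hashlib

    def h64(n: int) -> int:
        """A 64-bit hash (blake2b truncated to 8 bytes, as a stand-in)."""
        return int.from_bytes(
            hashlib.blake2b(bytes([n]), digest_size=8).digest(), "big")

    # The 16 possible 4-bit inputs map to at most 16 distinct 64-bit outputs:
    # the representation got wider, but the input-derived entropy is still
    # log2(16) = 4 bits. Whatever else the computation generated left the
    # system as heat, not as extra information in the digest.
    outputs = {h64(n) for n in range(16)}
    print(len(outputs))  # 16
    ```

    Each individual digest looks maximally random, yet the whole output distribution still only has 4 bits of entropy, which is exactly the distinction the comment above is drawing.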
