treffer
Joined 939 karma

  1. Oh great, I missed that at the end of the page. The "Round 2 details" link points back to the blog, and it is hard to see the FAQ on mobile (it needs manual scrolling to the end).

    Google search and Perplexity failed when I tried, too. Google search has caught up now (I haven't retried Perplexity).

    A 41.5mm diameter sounds good. That's a whopping 10mm/20% smaller than my current watch. Should be really neat given the thickness.

  2. I pre-ordered because I loved the Pebble Round - especially the size and look. My intended use case is for formal dress codes and special events (weddings, New Year's, ...) where my Fenix 51mm does not fit in (literally and figuratively).

    That said: I can't find full dimensions for the new round 2. I can guesstimate that it should be 10-20% smaller in diameter and less than 2/3 the thickness.

    Would you mind sharing the full dimensions, or even updating the post?

    And congratulations! I really like this. I hope there will be enough of a market to support this project long term.

  3. I had 8 IPs on a Hetzner server years ago. One IP had an iptables rule to accept OpenVPN on any port.

    My OpenVPN config was a long list of commonly accepted ports, on either TCP or UDP.
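
    Roughly like this, as a sketch (the host and port list here are made up, not what I actually used; OpenVPN tries multiple remote entries in order):

        # Sketch: emit OpenVPN client "remote" lines for a list of commonly open ports.
        COMMON_PORTS = [(443, "tcp"), (53, "udp"), (80, "tcp"), (123, "udp"), (1194, "udp")]

        def remote_lines(host: str) -> str:
            return "\n".join(f"remote {host} {port} {proto}" for port, proto in COMMON_PORTS)

        print(remote_lines("vpn.example.org"))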

    Startup would take a while but the number of times it worked was amazing.

  4. Interesting. But who is OpenDevicePartnership?

    Looking at the members on the repository this seems to be a Microsoft project?

  5. Compatibility as I understand it means "mixing this in a project is OK".

    This is the case if the two licenses aren't at odds. Usually one license is stricter and you have to adhere to that one for the combined work.

    A counter-example is GPLv2 and the Apache license. Those two are incompatible. This was fixed with GPLv3, and you can often upgrade to GPLv3.

    So no, this won't allow you to relicense as GPLv2. But you can use GPLv2 code.

    This is especially relevant if you have such code redistribution clauses.

  6. I tried a few times, as some BIOSes have a hidden or disabled setting, but I never got past a plain crash. Device and CPU vendor support for classic S3 is shrinking. E.g. on Framework laptops the Intel CPU(!) does not officially support S3 sleep.

    So I can understand that there is no option for it if all you can get is out of spec behavior and crashes.

    Also note that it is incompatible with some secure boot and system integrity settings.

  7. The product page lists EDK II. Is the code available anywhere? I can't see it in edk-platforms.....

    I would love to have a UEFI I can compile....

  8. I guess the reason is the screen. It's 320x240, and 0.3MP is 640x480 (VGA). The secondary screen is even lower resolution (160x120).

    It does work very well for this screen resolution.

    And what else would you do with this media given it's a feature phone?

  9. Is it? The article lists 2015 as the year when things improved a lot; 2017 is well past that. The numbers are low, and even those are inflated due to recalls.

    I've seen >>10 year old laptops where the battery is still good enough to go from charger to charger. Just go to eBay and check out 2009 MacBooks. That's ~15 years now.

    I don't think this is unrealistic if you can live with the heavier degradation.

  10. It just depends on what you use for management.

    IIRC the /etc/network/interfaces does a reconfiguration that's pretty disruptive.

    Things like brctl and ethtool worked on the fly without issues (note though that I mostly used Arista years ago).

    It is usually non-disruptive if it gets applied as deltas. If your config tool does a teardown/recreate then that's disruptive. Within the bounds of Ethernet and routing protocols (OSPF DR/BDR changes are disruptive, STP can be fun, ...).

  11. Depends on what you are doing. But you can take the path of app/OS images.

    My home network is just OpenWrt, and I use make plus a few scripts and the ImageBuilder to create images that I flash, configs included.
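
    As a sketch of the flow (the profile, package list, and files/ overlay are illustrative, not my actual setup; the ImageBuilder itself is driven via make):

        import subprocess

        # Sketch: drive the OpenWrt ImageBuilder from a script.
        # The files/ directory overlays configs into the image.
        subprocess.run(
            [
                "make", "image",
                "PROFILE=generic",              # illustrative profile name
                "PACKAGES=luci uhttpd -ppp",    # add luci/uhttpd, drop ppp
                "FILES=files/",
            ],
            cwd="openwrt-imagebuilder",         # assumed path to the unpacked ImageBuilder
            check=True,
        )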

    For the Raspberry Pi I actually liked cloud-init, but it is too flaky for complicated stuff. In that case I would nowadays rather dockerize it and use systemd + podman or a kubelet in standalone mode. Secrets go on a mount point. Make it so that the boot partition of the Pi is the main config folder. That way you can locally flash a golden image.

    Anything that mutates a server is brittle, as the starting point is a moving target. Building images (or fancy tarballs like Docker's) makes it way more likely that you get consistent results.

  12. The issue talks about one vs. multiple frames. That's exactly the issue. It's not a matter of complexity, it's a matter of bad compromises.

    The issue can easily be played through. The simplest encoding where it happens is RLE (run-length encoding).

    Say we have 1MB of repeated 'a'. Originally 'aaa....a'. We now encode it as '(length,byte)', so the stream turns into (1048576,'a').

    Now we want to parallelize it over 16 cores. So we split the 1MB into 16 64kB chunks and compress each chunk independently. This works, but the result is ~16x larger.
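
    A minimal sketch of that effect (plain RLE in Python, not zstd):

        # Run-length encode a buffer into (length, byte) pairs.
        def rle_encode(data: bytes) -> list:
            runs, i = [], 0
            while i < len(data):
                j = i
                while j < len(data) and data[j] == data[i]:
                    j += 1
                runs.append((j - i, data[i]))
                i = j
            return runs

        data = b"a" * (1 << 20)                     # 1MB of repeated 'a'
        whole = rle_encode(data)                    # a single run
        chunked = [rle_encode(data[i:i + 65536]) for i in range(0, len(data), 65536)]
        print(len(whole), sum(len(c) for c in chunked))   # 1 vs. 16 runs, ~16x larger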

    Similar things happen for window-based algorithms. We encode repeated content as (offset,length), referencing older occurrences. Now imagine 64kB of random data, repeated 16 times. The parallel version can't compress anything (16x random data), while the non-parallel version will compress it roughly 16:1.

    There is a trick to avoid this downside. The lookup is not unlimited; there is a maximum window size to limit memory usage. For compatibility it's 8MB for zstd (at level 19), but you can go all the way to 2GB (ultra, 22, long=31). If you make chunks significantly larger than the window, you only lose out on the fresh ramp-up at the start of each chunk. E.g. with 80MB chunks a bit less than 10% of the file is encoded worse. You could still double your encoded size with a well-crafted file, though.

    If you don't care about parallel decompression, you can instead parallelize only parts like the lookup search. This gives a good speedup, but only on compression. That's the current parallel compression approach in most cases (IIRC), leading to a single frame, just produced faster. The problem is that back-references can only be resolved backwards.

    The whole problem is not implementation complexity. It's something you algorithmically can't do with current window-based approaches without significant tradeoffs in memory consumption, compression ratio and parallel execution.

    For bzip2 the file is always chunked at 900kB boundaries at most. Each block is encoded independently and can be decoded independently. That avoids this whole tradeoff altogether.

    I would also disagree with "no need". Zstd easily outperforms tar, but even my laptop SSD is faster than the zstd speed limits. I just don't have the _external_ connectivity to get something onto my disk fast enough. I've also worked with servers 10 years ago where the PCIe bus to the RAID card was the limiting factor. Again easily exceeding the speed limits.

    Anyway, as mentioned a few times, it's an odd corner case. And one can't go wrong by choosing zstd for compression. But it is real fun to dig into these issues and look at them; I hope this sparks some interest in it!

  13. There is one thing you can't do with most algorithms: parallelize decompression. That's because most compression algorithms use sliding windows to remove repetitive sections.

    And decompression speed also drops as compression ratio increases.

    If you transfer over, say, a 1GBit link, then transfer speed is likely the bottleneck, as zstd decompression can reach >200MB/s. However, if you have a 10GBit link, then you are CPU bound on decompression. See e.g. the decompression speeds at [1].

    Bzip2 is not window- but block-based (level 1 == 100kB blocks, 9 == 900kB blocks, IIRC). This means that, given enough cores, both compression and decompression can parallelize, at something like 10-20MB/s per core. So somewhere above 10 cores you will start to outperform zstd.
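
    A sketch of that block-wise idea (roughly what lbzip2/pbzip2 do: compress fixed-size blocks independently and concatenate the streams, which a standard bzip2 decoder accepts):

        import bz2
        from multiprocessing import Pool

        BLOCK = 900 * 1024  # roughly bzip2's level-9 block size

        def compress_block(block: bytes) -> bytes:
            return bz2.compress(block, 9)

        def parallel_compress(data: bytes) -> bytes:
            blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
            with Pool() as pool:
                return b"".join(pool.map(compress_block, blocks))

        if __name__ == "__main__":
            payload = b"some example data " * 500_000
            packed = parallel_compress(payload)
            assert bz2.decompress(packed) == payload  # multi-stream decode works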

    Granted, that's a very, very narrow corner case. But one you might hit with servers; that's how I learned about it. So far I've converged on zstd for everything, though. It is usually not worth the hassle to squeeze these last performance bits out.

    [1] https://gregoryszorc.com/blog/2017/03/07/better-compression-...

  14. This looks really good. I remember looking into BWT as a kid. It's a true "wat" once you understand it.

    And once you understand it, why does it compress so well? Because suffixes that sort next to each other tend to have the same byte preceding them.
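
    The forward transform is tiny; a naive sketch (sorting full rotations, with a sentinel byte assumed absent from the input):

        def bwt(data: bytes) -> bytes:
            data += b"\x00"  # sentinel so the transform is invertible, assumed absent from the input
            rotations = sorted(data[i:] + data[:i] for i in range(len(data)))
            return bytes(rot[-1] for rot in rotations)  # last column: the bytes preceding the sorted suffixes

        print(bwt(b"banana"))  # equal bytes end up grouped next to each other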

    Bzip2 is still highly useful because it is block-based and can thus be scaled nearly linearly across CPU cores (both on compress and decompress)! Especially at higher compression levels. See e.g. lbzip2.

    Bzip2 is still somewhat relevant if you want to max out cores. Although it has a hard time competing with zstd.

  15. Compression algorithm implementations are not for everyone.

    The math and algorithms behind them are fun to learn, but hard. And then you need an implementation that is both performant and correct.

    Only a few people build up the algorithmic background to do this. And once an implementation exists, the remaining gains are marginal (optimizations).

    The only larger one seems to be zstd, and I haven't wrapped my head around ANS/tANS...

  16. Well, there is a pretty logical explanation.

    Libsystemd was moving to a dlopen architecture for its dependencies.

    This means that the backdoor would not load as the sshd patch only used libsystemd for notify, which does not need liblzma at all.

    So they IMHO gave it a last shot. It's OK if it burns as it would be useless in 3 months (or even less).

    The collateral is the backdoor binary, but given enough engineering power it will be irrelevant in 2-3 months, too.

  17. I have seen this in some NAS systems. It's a pretty good fit, especially for the ones that can be upgraded to 64GB RAM and run VMs or docker.

    Generally plenty of RAM plus many fast PCIe lanes is not something most ARM chips offer.

  18. One thing everyone should keep in mind about this NVIDIA/AMD battle right now: CUDA has been out for 16 years; it's been a huge push by NVIDIA toward GPGPU computation. I remember seeing it as a new thing at university back then, after the advanced shaders that were only available on NVIDIA.

    NVIDIA pretty rightfully has the lead there, because they worked on and invested in this for something like 20 years (you could do pretty advanced shaders on NVIDIA pre-CUDA).

    It only started to pay off recently, and especially with the AI hype (GPU mining was nice, too).

    Now everybody is looking at the profits and goes like "OMG, I want a part of that cake!", either by competing (AMD / Intel) or by paying less for the cards (basically everyone else in the AI space).

    But you have to catch up to 16 years of pretty solid software and ecosystem development. And that's only going to work if you have good enough hardware. NVIDIA did the hard work here. They have earned this lead.

    I am saying this as someone who would rather not buy NVIDIA. I really wish I could soon throw one or two 7900 XTX cards into a machine and use them for LLMs without issues. But I would also bet that it takes at least a few more years to catch up, even with the massive global interest.

  19. A nice example of this is FFTW, which has hundreds (if not thousands) of generated routines to do the FFT math. The whole project is a code generator.

    After compilation it can benchmark these, generate a wisdom file for the hardware and pick the right implementation.

    Compared with that "a few" implementations of the core math kernel seem like an easy thing to do.

  20. The actual inclusion code was never in the repo. The blobs were hidden as lzma test files.

    So your review would need to guess, from two new test files, that they decompress into a backdoor and can be injected by code that was never in the git history.

    This was explicitly built to evade such reviews.

  21. Well, I am skeptical about (2).

    It is unclear what exploiting means. The backdoor is doing _something_ for 0.5s if RSA key exchange happens.

    So even a valid login might trigger side effects that are not yet known. It might just tunnel commands over DNS, for example (DNS lookups being a well-known side effect of ssh anyway).

    So "exploiting" might mean as little as "used ssh".

  22. A lot has been said about not throwing shit at Lasse Collin. This post is reassuring.

    It was said he is on an internet break at the moment, so I hope this doesn't ruin weeks for him. It's thankless enough to maintain something like xz.

  23. Assume that a backdoor was injected and activated, especially if you ever sshd into the machine.

    The backdoor is not fully analyzed as of now. As such nothing can be said about the system besides "it is potentially compromised".

  24. Arch Linux switched from xz to zstd, with a negligible increase in size (<1%) but a massive speedup on decompression. This is exactly the use case of many people downloading ($$$) and decompressing; it is the software distribution case. Other distributions are following that lead.

    You should use the ultra settings and >=19 as the compression level. E.g. Arch used 20; higher compression levels do exist, but they were already at a <1% size increase.
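
    As a concrete sketch (real zstd CLI flags; the exact level and long-range window are a matter of taste):

        import subprocess

        # --ultra unlocks levels 20-22, --long=31 allows a window of up to 2GB,
        # -T0 compresses on all cores (the output is still a single frame).
        subprocess.run(["zstd", "--ultra", "-20", "--long=31", "-T0", "archive.tar"], check=True)

    Note that decompressing data with such a large window may need a matching --long flag on the decode side, too, or zstd can refuse because the window exceeds its default limit.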

    It does beat xz for these tasks. It's just not the default settings, as those are indeed optimized for the LZO-to-gzip/bzip2 range.

  25. I tried to get the translation to trigger by switching to French, and it does not show. You are right.

    So it's just odd that the tags and release tarballs diverge.

  26. Thank you, formatting fixed.

    My TLDR is that I would regard all commits by JiaT75 as potentially compromised.

    Given the ability to manipulate git history, I am not sure a simple time-based revert is enough.

    It would be great to compare old copies of the repo with the current state. There is no guarantee that the history wasn't tampered with.

    Overall the only safe action would IMHO be to establish a new upstream from an assumed-good state, then fully audit it. At that point we should probably just abandon it and use zstd instead.

  27. 1. Everything must be visible. A diff between the release tarball and tag should be unacceptable. It was hidden from the eyes to begin with.

    2. Build systems should be simple and obvious. Potentially not even code. The inclusion was well hidden.

    3. This was caught through runtime inspection. It should be possible to halt any Linux system at runtime, load debug symbols and map _everything_ back to the source code. If something can't map back then regard it as a potentially malicious blackbox.

    There has been a strong focus and joint effort to make distributions reproducible. What we haven't managed, though, is to prove that the running system consists only of freshly compiled content. Sorta like a build-time/runtime "libre" proof.

    This should exist for good debugging anyway.

    It wouldn't hinder source code based backdoors or malicious vulnerable code. But it would detect a backdoor like this one.

    Just an initial thought though, and probably hard to do, but not impossibly hard, especially for a default server environment.

  28. Ubuntu still ships 5.4.5 on 24.03 (atm).

    I did a quick diff of the source (the .orig file from packages.ubuntu.com) and the content mostly matched the 5.4.5 GitHub tag, except for the Changelog and some translation files. It does match the tarball content, though.

    So for 5.4.5 the tagged release and the download on GitHub differ.

    It does change format strings, e.g.

       +#: src/xz/args.c:735
       +#, fuzzy
       +#| msgid "%s: With --format=raw, --suffix=.SUF is required unless writing to stdout"
       +msgid "With --format=raw, --suffix=.SUF is required unless writing to stdout"
       +msgstr "%s: amb --format=raw, --suffix=.SUF és necessari si no s'escriu a la sortida estàndard"
    
    There is no second argument to that printf for example. I think there is at least a format string injection in the older tarballs.

    [Edit] formatting

  29. “Say you changed out fluorescents to LED light bulbs, that’s a capital improvement. You’re not replacing lightbulbs, you’re enhancing”. That's the basis for the claim that lightbulb changes are CapEx.

    This is an improvement though. A rack is usually 10-20kW at Equinix; that's well within reach for redoing larger areas of lighting. They also bill for overuse. Great, they improved the building to make more money.

    The other claimed scandal on that front is that they wouldn't change one light bulb at a time, but rather turn it into a larger project and then do it. This doesn't sound like a bad idea either: avoid tiny maintenance jobs and rather redo larger areas in one go.

    I wouldn't be surprised if their battery claims have a similar underlying issue, freeing up space in a similar way that lights free up power.

    I don't see how this shouldn't classify as an improvement or proper management. And it's a good example of the quality of this report.

  30. This one works in private mode. I am using Firefox on mobile and it worked that way.

    There is also a plug-in available on GitLab (IIRC) that can auto-circumvent it on most sites. But I find that questionable, and I try to keep my plug-in list minimal (it's a horrible supply chain attack vector).
