quesomaster9000
Joined 362 karma

  1. Yup, Amazon supports the 6.11? kernel on aarch64. Most toolchains, if you target static linux aarch64, will produce executables that run on Amazon Linux aarch64, on Android, and on set-top boxes with 64-bit chips running Linux 3+. It's surprising how many devices a static aarch64 ELF will run on.
  2. Graviton with Nitro 4 has been quite pleasant to use; with the Rust `aarch64-unknown-linux-musl` static target and rust-lld I can build monolithic ELFs that work not just on my Android device via `adb push` and `adb shell` but also on AWS.

    AWS with Nitro v3+ iirc supports TPM, meaning I can attest my VM state via an Amazon CA. I know ARM has been working a lot with Rust, and it shows - binfmt_misc with qemu-user means I often forget which architecture I'm building/running/testing on, as the binaries seem to work the same everywhere.

  3. I'd argue that the problem is that QR codes shouldn't be an 'app' problem. Yes, there's a chicken-and-egg problem with PoS terminals verifying incoming bank payments, but that's a separate issue.

    If you want to do account-to-account payments you can show the customer the account/routing number, amount & invoice ID - but obviously that's high friction, and the customer needs to log in to their account and send a payment with lots of manual data entry.

    Making yet another app, adding a financial intermediary, requiring you to link your bank account - these aren't solving the friction points.

    We already have bank apps; when I scan a QR code in an industry-wide format, it should ask me which bank app to open (or confirm one) and pre-fill all the payment information.

    So from my perspective, the problem is that FedNow in the US and Open Banking in the UK could have just dictated "Banks must support EPC QR or EMV QR code scanning and deep-links", and QR code payments would have happened very quickly - even with NFC/RFID you can do passive scanning to achieve the same thing.

    * Choose account

    * Confirm details

    * Press send

    That's about as easy as push payments can get, given a real industry-wide standard for communicating payment intents via NFC/QR. But both FedNow and UK Open Banking are structured in a way that imposes friction and onerous regulation through their clunky APIs - meaning you can't actually solve that problem on your own.
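
For reference, a minimal sketch (in Python, with hypothetical merchant details) of what such an industry-wide payment-intent payload could look like, modeled on the existing EPC "SCT" QR format - the bank app just parses this and pre-fills the transfer screen:

```python
def epc_qr_payload(name: str, iban: str, amount_eur: float, reference: str) -> str:
    """Build an EPC-style SEPA Credit Transfer QR payload: newline-separated
    fields in a fixed order. Trailing optional fields are omitted here."""
    return "\n".join([
        "BCD",                    # service tag
        "002",                    # version (BIC is optional from v002)
        "1",                      # character set: UTF-8
        "SCT",                    # identification: SEPA Credit Transfer
        "",                       # BIC (optional)
        name,                     # beneficiary name
        iban,                     # beneficiary IBAN
        f"EUR{amount_eur:.2f}",   # amount
        "",                       # purpose code (optional)
        reference,                # structured reference / invoice ID
    ])

payload = epc_qr_payload("Example Shop Ltd", "DE89370400440532013000", 12.50, "INV-2024-001")
print(payload.splitlines()[3])  # prints SCT
```

Everything a deep-link handler needs - payee, amount, invoice ID - fits in one scannable string; no intermediary app required.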

  4. Well, I've tried manually verifying the curve parameters and I don't trust this.

    * The generator isn't selected deterministically

    * The BLAKE3(seed) in the OpenFrogget code doesn't match what I get with the Python & JavaScript implementations of BLAKE3, and the index & seed aren't specified in the paper

    * The paper doesn't provide a reference for why `a=-7` was chosen (presumably because of the GLV endomorphism)

    * The various parameters differ between the reference implementation, the paper, and the spec...

    There are enough holes in this that I wouldn't touch it yet; even a very quick glance at the spec & the code leaves me wondering why their claims of reproducibility & determinism re: the constants don't hold, and why the documentation & code don't match what I can reproduce locally.

    So uhh yea... No

  5. And even with the constant `b=BLAKE("ECCFrog512CK2 forever")` there is an open question. While not as problematic as with the NIST & SEC curves, it's covered in "How to manipulate curve standards: a white paper for the black hat"[1].

    I'm surprised they didn't include the constant in the paper with at least a short justification for this approach, despite stating "This ensures reproducibility and verifiable integrity" in section 3.2 - whereas several other curves take the approach of 'smallest valid value that meets all constraints'.

    Really they should answer the question of why `b` can't be zero... or 1, if they're going for efficiency, given they're already using GLV endomorphisms.

    Likewise with the generator, I see no code or mention in the paper about how they selected it.

    [1]: https://eprint.iacr.org/2014/571.pdf
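
The reproducibility check being asked for is simple enough to sketch. BLAKE3 isn't in the Python stdlib, so blake2b stands in for it below, and the field prime is hypothetical (not the ECCFrog512CK2 modulus) - the point is only that anyone should be able to re-derive the constant from the published seed and get the same value:

```python
import hashlib

# Hypothetical field prime, NOT the actual curve modulus.
p = 2**521 - 1

def derive_b(seed: str) -> int:
    """Derive a curve constant from a seed string, hash-to-integer style.
    blake2b is a stand-in for the paper's BLAKE3; the reduction mod p is
    one of several plausible conventions the paper would need to pin down."""
    digest = hashlib.blake2b(seed.encode(), digest_size=64).digest()
    return int.from_bytes(digest, "big") % p

b1 = derive_b("ECCFrog512CK2 forever")
b2 = derive_b("ECCFrog512CK2 forever")
assert b1 == b2  # deterministic: independent implementations must agree
```

The complaint above is precisely that the published code, paper, and spec disagree on these details (hash index, seed, reduction), so independent reimplementations don't reproduce the published constants.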

  6. Right, but z/OS is part of a larger longer-running hardware strategy that, with virtualization, serves the needs of mixed-OS workloads and multi-decade tenures overseeing 24/7 systems.

    The corpse of OpenVMS on the other hand is being reanimated and tinkered with, presumably paid for by whatever remaining support contracts exist, and also presumably to keep the core engineers occupied with inevitably fruitless busywork while occasionally performing the contractually required on-call technomancy on the few remaining Alpha systems.

    VMS is dead... and buried, deep.

    It's a shame it can't be open-sourced, just like Netware won't be open-sourced, and probably has less chance of being used for new projects than RiscOS or AmigaOS.

  7. With Claude 3.7 I keep having to remind it about things, and go back and correct it several times in a row, before cleaning the code up significantly.

    For example, yesterday I wanted to make a 'simple' time format, tracking Earth's orbits of the Sun, the Moon's orbits of Earth, and rotations of Earth from a specific given point in time (the most recent great conjunction, in 2020) - without directly using any hard-coded constants other than the orbital mechanics and my atomic clock source. This would be in the format of `S4.7.... L52... R1293...` for sols, luns & rotations.

    I keep having to remind it to go back to first principles: we want actual rotations, real day lengths, etc., rather than hard-coded constants that approximate the mean over the year.

  8. That's a good question, what if it's 10 hours per year for 10 years?

    In that case, I'd probably choose first-aid & the basics of emergency medicine, via a couple of half-day courses or one full-day course per year.

  9. Fluid dynamics simulation with OpenCL. 100 hours is about 10-15 days of concerted effort - plenty of time to get a grasp of the algebra behind it, get something simple running, port it to OpenCL naively, and start optimizing.
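
The "get something simple running" stage can be tiny - e.g. one explicit diffusion step on a periodic 1-D grid in plain Python (a sketch, not a full solver). The per-cell stencil is embarrassingly parallel, which is what makes the later naive OpenCL port straightforward (one work-item per cell):

```python
def diffuse(u, nu=0.1):
    """One explicit step of du/dt = nu * d2u/dx2 on a periodic 1-D grid.
    Each output cell depends only on its neighbours - a pure stencil."""
    n = len(u)
    return [u[i] + nu * (u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n])
            for i in range(n)]

u = [0.0] * 8
u[4] = 1.0            # initial spike of "mass" in one cell
for _ in range(10):
    u = diffuse(u)

# The periodic Laplacian telescopes to zero, so total mass is conserved.
assert abs(sum(u) - 1.0) < 1e-9
```

From here the optimization work (tiling, local memory, avoiding the modulo) is where most of the 100 hours goes.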
  10. It's a shame that the `MISA` CSR is in the 'Privileged Architecture' spec, otherwise you could just check bit 21 for 'V' - but it appears to only be readable in machine mode, the highest privilege level.

    Presumably your OS could trap attempts to read the CSR and emulate it, but if not then it's a fatal error and your program shits the bed; otherwise you rely on some OS-specific way of getting that info at runtime.
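
The decoding itself is trivial once you have the value: each extension letter A-Z maps to bit (letter - 'A'), so 'V' is bit 21. A sketch (with a made-up `misa` value, since the raw CSR read traps below M-mode and the real value has to come from the OS, e.g. /proc/cpuinfo or hwprobe on Linux):

```python
def has_extension(misa: int, ext: str) -> bool:
    """Check a MISA extension bit: 'A' is bit 0, 'B' bit 1, ... 'V' bit 21."""
    return bool((misa >> (ord(ext.upper()) - ord("A"))) & 1)

misa = (1 << 8) | (1 << 12) | (1 << 21)  # hypothetical value: I, M, V set
assert has_extension(misa, "V")
assert not has_extension(misa, "C")
```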

  11. At the moment I'm verifying a Rust floating point implementation, which has led to many small snippets for not just generating test & edge cases (e.g. finding inputs which fall outside of these conditions), but also trying to prove completeness on all valid inputs.
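
One sketch of that kind of edge-case generation (assumed shape, not the actual harness): walk the raw bit patterns around the interesting f32 boundaries - zero, the subnormal/normal border, infinity, NaN - rather than sampling values, so every special encoding gets exercised:

```python
import math
import struct

def f32_from_bits(bits: int) -> float:
    """Reinterpret a 32-bit pattern as an IEEE 754 single."""
    return struct.unpack("<f", struct.pack("<I", bits))[0]

EDGES = [
    0x0000_0000,  # +0.0
    0x0000_0001,  # smallest positive subnormal
    0x007F_FFFF,  # largest subnormal
    0x0080_0000,  # smallest positive normal
    0x7F7F_FFFF,  # largest finite
    0x7F80_0000,  # +inf
    0x7FC0_0000,  # quiet NaN
]
cases = [f32_from_bits(b) for b in EDGES]

assert math.isinf(cases[5]) and math.isnan(cases[6])
assert 0.0 < cases[1] < cases[2] < cases[3]  # subnormals sit below the normals
```

For completeness proofs over all 2^32 inputs you can exhaustively enumerate f32 unary ops this way; for f64 you need the solver-style tools from the next comment.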
  12. Z3 is entirely different IMHO; once you get into solvers the question becomes not just 'is this satisfiable' but 'what is the minimum sequence of steps necessary to arrive at the result'.

    I have relied heavily on both Z3 and Alloy for ad-hoc jobs, and Prolog doesn't even come close to their inference power - and that's alongside Macsyma and Sage.

  13. On a side note, how long until we realize the current incarnation of the pile of hacks upon hacks that is SMTP is fundamentally flawed, and widely adopt something that has cryptography, authenticity and transport-level security built in from the start?

    Oh wait, yes, that'll never happen.

  14. I'm really eager to see what happens in the near future with WAT & WASI, but I'm also very aware of seeing a repeat of DLL hell.

    There are a few niches where standardization of interfaces and discoverability will be extremely valuable for interoperability, reducing the development effort needed to bring up products that deeply integrate with many things. Currently each team has to re-invent the wheel for every end-user product they integrate with; the more ideal alternative is that each product ships its own implementations of the standard interfaces, which are then simply plugged together.

    But the reason I'm still on the fence is that I think there's more value in the UNIX-style 'discrete commands' model. Whether it's WASM or RISC-V I don't think anybody cares; it's much more about self-describing interfaces with discoverability that can be glued together using whatever tools you have at your disposal.

  15. As somebody who's in the process of building a sandbox for RISC-V 64 Linux ELF executables, even I'm still on the fence.

    The problem is that in WASM-land we're heading towards WASI and WAT components, which is similar to the .NET, COM & IDL ecosystems. While this is actually really cool in terms of component and interface discovery, the downside is that it means you have to re-invent the world to work with this flavor of runtime.

    Meaning... no, I can't really just output WASM from Go or Rust and it'll work, there's more to it, much more to it.

    With a RISC-V userland emulator I could compile that to WASM to run normal binaries in the browser, and provide a sandboxed syscall interface (or even just pass-through the syscalls to the host, like qemu-user does when running natively). Meaning I have high compatibility with most of the Linux userland within a few weeks of development effort.

    But yes - threads, forking, sockets, lots of edge cases - it's difficult to provide a minimal spoof of a Linux userland convincing enough to do interesting things, but surprisingly it's not too difficult. And with that you get Go, Rust, Zig, C++, C, D, etc. and all the native tooling you'd expect (e.g. it's quite easy to write a gdbserver-compatible interface, but you usually don't need it, as you can just run & debug locally then cross-compile).
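
The syscall pass-through idea sketched above looks roughly like this (a toy dispatcher, not the actual emulator): the emulator traps the guest's `ecall`, inspects `a7` (the riscv64 Linux syscall number register) and either emulates the call, passes it to the host qemu-user style, or refuses cleanly:

```python
import os

ENOSYS = 38  # Linux errno for "function not implemented"

def handle_ecall(a7: int, a0: int, a1_buf: bytes) -> int:
    """Toy syscall dispatch for an emulated riscv64 Linux guest.
    64 = write, 93 = exit (the generic riscv64 syscall numbers)."""
    if a7 == 64 and a0 in (1, 2):       # write(fd, buf, len) to stdout/stderr
        return os.write(a0, a1_buf)     # pass straight through to the host
    if a7 == 93:                        # exit(code): stop the guest
        raise SystemExit(a0)
    return -ENOSYS                      # unknown syscall: refuse, don't crash

n = handle_ecall(64, 1, b"hello from the guest\n")
assert n == len(b"hello from the guest\n")
```

Returning -ENOSYS for the long tail is what keeps the spoofed userland "convincing enough": well-behaved libcs fall back gracefully instead of crashing.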

  16. LLMs are much more than search. For example, today I went through several different recipes for sea bass fillets, then went into much deeper conversations, ended up at this weird intersection between Bon and how to aptly describe Zen, then very abruptly tried to hone some Bulgarian grammar, then pondered the enshittification of Hacker News.

    To the point that I'm disappointed with human contact.

    If you're using it for React.js you're the problem...

  17. With SoundCloud I've found you have to pay for the 'Plus+++' subscription or whatever it is to not get audio with the higher frequencies absolutely butchered, unless you follow a very specific upload process that bypasses their conversion.

    Upload 320kbit encoded MP3? Sounds great.

    Upload a high-sample-rate WAV? It gets butchered; the top end turns to glittery noise.

    Maybe others have different experiences, but honestly it felt like I'd been duped when paying for the subscription but still got trash quality audio, only to have to pay more.

  18. Absolutely - and not just that: you can jump into meta conversations extremely quickly, and rotate things through different dimensions, even with concepts that are much less familiar. But the key is in deliberately creating the transactional context where ... if anything it's like the most fantastic debugging duck I've ever come across.

    It won't necessarily find even the most obvious errors, but you will learn a lot in the process.

  19. At this point I wouldn't be surprised if their site loaded a WASM-compiled remote-desktop viewer to interact with 'Edge in the cloud' just to view the page you want.
  20. Is it just me, or does the linked microsoft.com page hijack the back button?
