amluto

  1. Oh, that’s neat. It uses high-harmonic generation.

    My sole personal experience with any sort of harmonic generation was being in the room while some grad students debugged a 266nm laser that consisted of a boring 1064nm Nd:YAG laser followed by two frequency doublers. Quite a lot of power was lost in each stage, and the results of accidentally letting the full 1064nm source power loose were mildly spectacular.

    I wish Lumiverse luck getting any appreciable amount of power out of their system. (FELs, in contrast, seem to be capable of monstrous power output — that’s never been the problem AFAIK.)

    P.S. never buy a 532nm laser from a non-reputable source. While it’s impressive that frequency doubled Nd:YAG lasers are small and cheap enough to be sold as laser pointers these days, it’s far too easy for highly dangerous amounts of invisible 1064nm radiation to leak out, whether by carelessness or malice. I have a little disreputable ~510nm laser pointer, which I chose because, while I don’t trust the specs at all, 510nm is likely produced directly using a somewhat unusual solid state source, and it can’t be produced at all using a doubled Nd:YAG laser. The color is different enough that I’m confident they’re not lying about the wavelength.

  2. All books published by Tor are DRM-free.
  3. They mean bandwidth as in the rate at which one can expose a mask using an electron beam, because they’ve confused two different technologies. See my other reply.

    P.S. Can you usefully chirp an FEL? I don’t know whether the electron sources that would be used for EUV FELs can be re-tuned quickly enough, nor whether the magnet arrangements are conducive to perturbing the wavelength. But relativistic electron beams are weird and maybe it works fine. Of course, I also have no idea why you would want to chirp your lithography light source.

  4. This is a totally different technology.

    A free electron laser (FEL) uses free electrons (electrons not attached to a nucleus) as a lasing medium to produce light. The light would shine through a mask and expose photoresist more or less just like the light from ASML’s tin plasma contraption, minus the tin plasma. FELs, in principle, can produce light over a very wide range of wavelengths, including EUV and even shorter.

    That DARPA thing is a maskless electron beam lithography system: the photoresist is exposed by hitting it directly with electrons.

    Electrons have lots of advantages: they have mass, so much less kinetic energy is needed to achieve short wavelengths. They have charge, so they can be accelerated electrically and they can be steered electrically or magnetically. And there are quite a few maskless designs, which saves the enormous expense of producing a mask. (And maskless lithography would let a factory make chips that differ from wafer to wafer, which no one currently does. And you need a maskless technique to make masks in the first place.) There were direct-write electron-beam research fabs, making actual chips, with resolution comparable to or better than the current generation of ASML gear, 20-30 years ago, built at costs that were accessible to research universities.
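
    To put a number on the kinetic-energy advantage (my own back-of-the-envelope arithmetic, using standard physical constants): a 13.5nm photon carries about 92 eV, while an electron with a 13.5nm de Broglie wavelength needs only about 8 meV of kinetic energy, four orders of magnitude less. A quick sanity check in Python:

    ```python
    # Back-of-the-envelope: energy needed for a 13.5nm wavelength,
    # photon (E = hc/lambda) vs. electron (E = p^2/2m with p = h/lambda).
    h = 6.626e-34    # Planck constant, J*s
    c = 2.998e8      # speed of light, m/s
    m_e = 9.109e-31  # electron mass, kg
    eV = 1.602e-19   # joules per electronvolt
    lam = 13.5e-9    # EUV wavelength, m

    photon = h * c / lam
    electron = h**2 / (2 * m_e * lam**2)

    print(f"photon:   {photon / eV:.1f} eV")          # ~91.8 eV
    print(f"electron: {electron / eV * 1e3:.1f} meV") # ~8.3 meV
    ```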

    But electrons have a huge, enormous disadvantage: because they are charged, they repel each other. So a bright electron beam naturally spreads out, and multiple parallel beams will deflect each other. And electrons will get stuck in electrically nonconductive photoresists, causing the photoresist to (hopefully temporarily) build up a surface charge, interfering with future electron beams.

    All of that causes e-beam lithography to be slow. Which is why those research fabs from the nineties weren’t mass-producing supercomputers.

  5. RDMA is not really intended for this. RDMA is really just a bunch of functionality of a PCIe device, and even PCIe isn’t really quite right to use like RAM because its cache semantics aren’t intended for this use case.

    But the industry knows this, and there’s a technology that is electrically compatible with PCIe that is intended for use as RAM among other things: CXL. I wonder if anyone will ever build CXL over USB-C.

  6. If I were running this show, I would have a second concurrent project as a hedge and as a chance of leapfrogging the West: trying to make free electron laser lithography work.

    Free electron lasers have lots of (theoretical) advantages: no tin debris, better wavelength control, the ability to get even shorter wavelengths, higher power, higher efficiency, and it’s less Rube Goldberg-ish. Also the barrier to entry for basic research is pretty low: I visited a little FEL in a small lab that looked like it had been built for an entirely reasonable price and did not require any clean rooms.

    So far it seems like Japan is working on this, but I have the impression that no one is trying all that hard.

    https://iopscience.iop.org/article/10.35848/1347-4065/acc18c

  7. The Demo 3 Live Search example has really nasty scroll jank issues. I’m guessing it’s caused by the results being inserted inline in the document (and thus redoing the layout of much of the page) instead of being placed in some sort of overlay.
  8. Sigh. I should probably have clarified the vibe-coded part. I think this entire project could be done with rather little total code, and that the code could be written entirely by humans without an immense programmer-hour commitment, or by humans with AI help (fully human-in-the-loop) even faster.

    My actual point is that GitHub Actions is kind of an unusual product. Many big cloud things solve what seems to be a simple problem, but the actual requirements are much harder than they might appear, and replacing them well would be quite complex. But IMO GitHub Actions in particular is a bunch of complexity that does not actually solve the problem that needs solving very well; a small bespoke solution would actually be better.

  9. I feel like I could specify and vibe-code a CI workflow system that would be dramatically better (for a single organization’s workflow) than GitHub Actions. And hosting it would be barely more complex than hosting a GitHub Actions self-hosted runner.

    The stack would be:

    Postgres, as a job queue and job status tracker. The entire control plane state lives in here. Even in a fairly large org, the transaction rate would be very, very low.

    An ingestion agent. Monitors the repository for pushes and PRs.

    A job agent. This runs in a sandbox, gets the inputs from GitHub, and runs what is effectively a workflow step. It doesn’t get any secrets — everything it wants to do is accomplished via JSON output, blob output, or an org-specific API for doing things that don’t fit the JSON output model.

    A thing to handle results. This is a simple service, connected to the database, that consumes the JSON job results and does whatever is needed (which would mostly consist of commenting on PRs or updating a CI status dashboard). For CD workflows, the build artifacts would be sent to whatever registry they go to.

    A configuration system, which would be some files somewhere, maybe in a git repository that is not the repository that CI is being done on. (GitHub’s model of Actions config being in-band in the repository is IMO entirely wrong.)

    And that’s about it.
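
    To make the Postgres part concrete, here’s a minimal sketch of the kind of queue I have in mind (hypothetical schema and names; workers claim jobs with FOR UPDATE SKIP LOCKED so they never collide):

    ```python
    # Minimal sketch of the control-plane job queue (hypothetical
    # schema/names), using psycopg2 against a stock Postgres.
    import psycopg2

    SCHEMA = """
    CREATE TABLE IF NOT EXISTS jobs (
        id      bigserial PRIMARY KEY,
        kind    text NOT NULL,                   -- e.g. 'ci-build'
        payload jsonb NOT NULL,                  -- commit SHA, PR number, ...
        status  text NOT NULL DEFAULT 'queued',  -- queued/running/done/failed
        result  jsonb                            -- JSON output from the job agent
    );
    """

    def claim_job(conn):
        """Atomically claim one queued job, or return None if idle."""
        with conn.cursor() as cur:
            cur.execute("""
                UPDATE jobs SET status = 'running'
                WHERE id = (
                    SELECT id FROM jobs WHERE status = 'queued'
                    ORDER BY id LIMIT 1
                    FOR UPDATE SKIP LOCKED
                )
                RETURNING id, kind, payload
            """)
            row = cur.fetchone()
        conn.commit()
        return row

    # conn = psycopg2.connect("dbname=ci")  # however the agents connect
    ```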

    I’m not suggesting that I could duplicate GitHub Actions in a weekend. But I wouldn’t want to. This would be single-tenant, and it would support exactly the features that the organization actually uses. Heck, even par-for-the-course things like SSO aren’t needed because the entire system would have no users per se :)

  10. The MOTU Ultralite Mk5 is a nice piece of hardware and is even at a great price point if you use more than a tiny fraction of its capabilities, but it also costs several times as much as the entire rest of this system :)

    If you just want to get the eARC data, any S/PDIF input (USB or I2S-via-hat) would work just as well at 1/20 of the price :)

    I want someone to rig up a multi-instance shairplay setup (presumably by claiming multiple IP addresses, as AirPlay 2.0 apparently can’t handle multiple sinks at the same address) that can use a single multichannel interface like the Ultralite Mk5. This would make an excellent multizone audio setup at an entirely reasonable price.

  11. I read the post and the companion post:

    https://vitaut.net/posts/2025/smallest-dtoa/

    And there’s one detail I found confusing. Suppose I go through the steps to find the rounding interval and determine that k=-3, so there is at most one integer multiple of 10^-3 in the interval (and at least one multiple of 10^-4). For the sake of argument, let’s say that -3 worked: m·10^-3 is in the interval.

    Then, if m is not a multiple of 10, I believe that m·10^-3 is the right answer. But what if m is a multiple of 10? Then the result will be exactly equal, numerically, to the correct answer, but it will have trailing zeros. So maybe I get 7.460 instead of 7.46 (I made up this number and have no idea whether any double actually gives this output). Even though that 6 is definitely necessary (there is no numerically different value with decimal exponent greater than -3 that rounds correctly), I still want my formatter library to give me the shortest decimal representation of the result.
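
    To make the question concrete, the post-processing step I’d expect (hypothetical; plain Python) would be something like:

    ```python
    # Hypothetical cleanup: strip trailing zeros from the decimal
    # significand m, bumping the exponent k once per zero removed.
    # The value m * 10**k is unchanged; only the representation shrinks.
    def shorten(m: int, k: int) -> tuple[int, int]:
        while m != 0 and m % 10 == 0:
            m //= 10
            k += 1
        return m, k

    # 7460 * 10**-3 ("7.460") becomes 746 * 10**-2 ("7.46").
    assert shorten(7460, -3) == (746, -2)
    ```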

    Is this impossible for some reason? Is there logic hiding in the write function to simplify the answer? Am I missing something?

  12. My setup is a mixed C/C++/Python project. The C and C++ code builds independently of the Python code (using waf, but I think this barely matters -- the point is that the C/C++ build is triggered by a straightforward command and that it rebuilds correctly based on changed source code). The Python code depends on the C/C++ code via ctypes and cffi (which load a .so file produced by the C/C++ build), and there are no extension modules.

    Python builds via [tool.hatch.build.targets.wheel.hooks.custom] in pyproject.toml and a hatch_build.py that invokes waf and force-includes the .so files into useful locations.
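
    For concreteness, the hook is roughly this shape (the paths and package names here are made up, but force_include and pure_python are real hatchling build_data keys):

    ```python
    # hatch_build.py -- sketch of my custom hook (paths/names made up).
    import subprocess
    from hatchling.builders.hooks.plugin.interface import BuildHookInterface

    class CustomBuildHook(BuildHookInterface):
        def initialize(self, version, build_data):
            # Run the C/C++ build; waf handles incremental rebuilds itself.
            subprocess.run(["./waf", "build"], cwd=self.root, check=True)
            # Ship the resulting .so inside the wheel where the
            # ctypes/cffi loaders expect to find it.
            build_data["force_include"]["build/libmylib.so"] = (
                "mypkg/_native/libmylib.so"
            )
            build_data["pure_python"] = False
    ```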

    Use case 1: Development. I change something (C/C++ source, the waf configuration, etc) and then try to run Python code (via uv sync, uv run, or activating a venv with an editable install). Since there doesn't seem to be a way to have the build feed dependencies out to uv (this seems to be a deficiency in PEP 517/660), I either need to somehow statically generate cache-keys or resort to --reinstall-package to get uv commands to notice when something changed. I can force the issue with uv pip install -e ., although apparently I can also force the issue with uv run/sync --reinstall-package [distro name]. [0] So I guess uv pip is not actually needed here.

    It would be very nice if there was an extension to PEP 660 that would allow the editable build to tell the front-end what its computed dependencies are.

    Use case 2: Production

    IMO uv sync and uv run have no place in production. I do not want my server to resolve dependencies or create environments at all, let alone by magic, when I am running a release of my software built for the purpose.

    My code has had a script to build a production artifact since long before pyproject.toml or uv was a thing, and even before virtual environments existed (!). The resulting artifact makes its way to a server, and the code in it gets run. If I want to use dependencies as found by uv, or if I want to use entrypoints (a massive improvement over rolling my own way to actually invoke a Python program!), as far as I can tell I can either manually make and populate a venv using uv venv and uv pip or I can use UV_PROJECT_ENVIRONMENT with uv sync and abuse uv sync to imperatively create a venv.
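
    The manual flavor of that looks roughly like this (a sketch with made-up paths; uv venv and uv pip install --python are real subcommands, everything else is my artifact script's business):

    ```python
    # Sketch of an artifact-build step: assemble a self-contained venv
    # that gets tarred up and shipped, so nothing resolves on the server.
    import subprocess

    ENV = "dist/venv"  # made-up layout
    subprocess.run(["uv", "venv", ENV], check=True)
    subprocess.run(
        ["uv", "pip", "install",
         "--python", f"{ENV}/bin/python",
         "dist/mypkg-1.0-py3-none-any.whl"],  # wheel built earlier
        check=True,
    )
    # Entry points now exist as dist/venv/bin/<console-script>.
    ```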

    Maybe some day uv will come up with a better way to produce production artifacts. (And maybe in the distant future, the libc world will come up with a decent way to make C/C++ virtual environments that don't rely on mount namespaces or chroot.)

    [0] As far as I can tell, the accepted terminology is that the thing produced by a pyproject.toml is possibly a "project" or a "distribution" and that these are both very much distinct from a "package". I think it's a bit regrettable that uv's option here is spelled like it rebuilds a _package_ when the thing you feed it is not the name of a package and it does not rebuild a particular package. In uv's defense, PEP 517 itself seems rather confused as well.

  13. Answering my own question: CEC is electrically unrelated to DDC/EDID. The EDID data tells each source its physical address, and then the devices negotiate over CEC to choose logical addresses and announce their physical addresses. This is one way to design a network, but it’s not what I would have done.
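
    For reference, the physical address is just two bytes in the EDID’s HDMI Vendor-Specific Data Block, right after the IEEE OUI, and decoding it is trivial (sketch):

    ```python
    # The CEC physical address is two bytes in the EDID's HDMI
    # Vendor-Specific Data Block (following the 0x000C03 OUI).
    # 0x10 0x00 decodes to 1.0.0.0: the root display's first input.
    def decode_physical_address(b1: int, b2: int) -> str:
        return f"{b1 >> 4}.{b1 & 0xF}.{b2 >> 4}.{b2 & 0xF}"

    assert decode_physical_address(0x10, 0x00) == "1.0.0.0"
    ```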

    I wonder if a malfunction in this process is responsible for my AVR sometimes auto-switching to the wrong source.

  14. Is there any plan for a non-“compatibility layer” way to do anything manual or nontrivial? uv sync and uv run are sort of fine for developing a distribution/package, but they’re not exactly replacements for anything else one might want to do with the pip and venv commands.

    As a very basic example I ran into last week, Python tooling, even the nice Astral tooling, seems to be almost completely lacking any good detection of what source changes need to trigger what rebuild steps. Unless I’ve missed something, if I make a change to a source tree that uv sync doesn’t notice, I’m stuck with uv pip install -e ., which is a wee bit disappointing and feels a bit gross. I suppose I could try to put something correct into cache-keys, but this is fundamentally wrong. The list of files in my source tree that need to trigger a refresh is something that my build system determines when it builds. Maybe there should be a way to either plumb that into uv’s cache or to tell uv that at least “uv sync” should run the designated command to (incrementally) rebuild my source tree?

    (Not that I can blame uv for failing to magically exfiltrate metadata from the black box that is hatchling plus its plugins.)

  15. Their integration with existing tools seems to be generally pretty good.

    For example, uv-build is rather lacking in any sort of features (and its documentation barely exists AFAICT, which is a bit disappointing), but uv works just fine with hatchling, using configuration mechanisms that predate uv.

    (I spent some time last week porting a project from an old, entirely unsupportable build system to uv + hatchling, and I came out of it every bit as unimpressed by the general state of Python packaging as ever, but I had no real complaints about uv. It would be nice if there were a build system that could go even slightly off the beaten path without requiring custom hooks whose behavior you mostly have to infer, though. I’m pretty sure that even the major LLMs only know how to write a Python package configuration because they’ve trained on random blog posts and some GitHub packages that mostly work — they’re certainly not figuring anything out directly from the documentation, nor could they.)

  16. How about: Mozilla HTTPS To My Router (or printer or any other physically present local object) in a way that does not utterly suck?

    Seriously, there’s a major security and usability problem, it affects individual users and corporations, and neither Google nor Apple nor Microsoft shows the slightest inclination to do anything about it, and Mozilla controls a browser that could add a nice solution. I bet one could even find a creative solution that encourages vendors, inoffensively, to pay Mozilla a bit of money to solve this problem for them.

    Also:

    > Thunderbird for iOS - why is this not a thing yet?

    Indeed. Apple’s mail app is so amazingly bad that there’s plenty of opportunity here.

  17. > Autocoding straight to embedded

    I used this twenty-something years ago. It worked, but I would not have wanted to use it for anything serious. Admittedly, at the time, C on embedded platforms was a truly awful experience, but the C (and Rust, etc) toolchain situation is massively improved these days.

    > Plotting still better than anything else

    Is it? IIRC one could fairly easily get a plot displayed on a screen, but if you wanted nice vector output suitable for use in a PDF, the experience was not enjoyable.

  18. Okay, now I’m curious. If the pins are just connected across all ports, how does the AVR tell which CEC-speaking device is on which port? Chip select or similar pins?
  19. > How am I supposed to get the audio to the speakers without a bulky expensive receiver box?

    You can get a small ARC/eARC audio extractor with RCA or S/PDIF output and use your favorite amplifier or DAC with it.

  20. I always assumed that it was a separate i2c bus per HDMI link and that it was the AVR’s job to handle a request from something and send the right requests to everything else.
