devttyeu
Joined · 147 karma

  1. Funny you mention that, I very recently came back from a one-shot prompt that fixed a rather complex template instantiation issue in a relatively big, very convoluted low-level codebase (lots of asm, SPDK / userspace NVMe, unholy shuffling of data between NUMA domains into shared L3/L2 caches). That codebase maybe isn't millions of lines of code, but it's definitely complex enough to need a month of onboarding time. Or, you know, just give Claude Opus 4.5 an lldb backtrace with 70% of symbols missing due to unholy linker gymnastics and get a working fix in 10 minutes.

    And those are the worst models we’ll ever use from now on.

  2. Visual puzzle solving is a pretty easily trainable problem, since it's simple to verify, so that skill getting really good is just a matter of time
  3. In Go you know exactly what code you’re building thanks to go.sum, and it’s much easier to audit changed code after upgrading - just create vendor dirs before and after updating packages and diff them; send it to AI for basic screening if the diff is >100k LOC, and/or review manually. My projects are massive codebases with 1000s of deps and >200MB stripped binaries of literally just code, and this is perfectly feasible. (And yes, I do catch stuff occasionally, tho nothing actively adversarial so far)

    I don’t believe I can do the same with Rust.
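The snapshot-and-diff audit described above can be sketched as a small script. This is a sketch, not a prescribed tool: it assumes a Go 1.18+ toolchain on PATH (for `go mod vendor -o`), and the directory names are illustrative.

```python
# Sketch of the vendor-snapshot-and-diff audit flow: vendor current
# deps, upgrade, vendor again, then diff the two trees for review.
import shutil
import subprocess

def audit_commands(before="vendor-before", after="vendor-after"):
    """Return the command sequence for auditing a dependency upgrade."""
    return [
        ["go", "mod", "vendor", "-o", before],  # snapshot current deps
        ["go", "get", "-u", "./..."],           # upgrade all modules
        ["go", "mod", "vendor", "-o", after],   # snapshot upgraded deps
        ["diff", "-ru", before, after],         # review what changed
    ]

def run_audit():
    if shutil.which("go") is None:
        raise RuntimeError("go toolchain not found on PATH")
    for cmd in audit_commands():
        # diff exits 1 when the trees differ, so don't treat that as failure
        subprocess.run(cmd, check=False)
```

Because go.sum pins exact module hashes, both vendor trees are reproducible, so the diff really does cover everything that changed in the build.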

  4. Cryptography is the science of turning any problem into a key management problem
  5. If you have Pro users, why not leverage debt instead of giving up equity for no good reason?

    Maybe the value prop is not clear: the website talks a bunch about AI agent integrations, and that sounds like a completely different product from a parser library - which, however advanced it may be, investors will likely see as a tangential bit of IP that a senior engineer could build for $10-20k in a few days.

  6. It does address quite a few reliability issues - you can have multiple gateways into the Thread network, so it is actually highly available.

    It’s definitely complicated, but it’s kind of the USB-C of the smart home - you only worry about the complex part when building a product. Just wish there was a better device reset/portability story.

  7. Both use 802.15.4, but iiuc Zigbee does that with some incompatibilities.
  8. It is halfway there arguably, and libp2p does make use of it - https://docs.libp2p.io/concepts/transports/webtransport/

    Unlike WebSockets, you can supply a "cert hash", which makes it possible for the browser to establish a TLS connection to a server that doesn't have a certificate signed by a traditional PKI provider, or even a domain name. This property is immensely useful because it makes it possible for browsers to establish connections to any known non-browser node on the internet, including from secure contexts (i.e. from an https page, where you can't establish a ws:// connection - only wss:// is allowed, but you need a 'real' TLS cert for that)
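    As a sketch of the mechanism: per the W3C WebTransport spec, the value passed in the browser's `serverCertificateHashes` option is the SHA-256 digest of the server's DER-encoded certificate. The certificate bytes below are a placeholder, not a real cert.

```python
# Computing the "cert hash" a browser will accept for a WebTransport
# connection: SHA-256 over the server's DER-encoded certificate.
import hashlib

def cert_hash(der_cert: bytes) -> bytes:
    """SHA-256 digest of the raw DER certificate bytes."""
    return hashlib.sha256(der_cert).digest()

# Placeholder bytes; real code would load the server cert's DER encoding.
fake_der = b"\x30\x82\x01\x00"
digest = cert_hash(fake_der)

# Browser side (JavaScript, shown as a comment since this sketch is Python):
#   new WebTransport(url, {
#     serverCertificateHashes: [{algorithm: "sha-256", value: digest}],
#   })
```

    The spec also requires such certificates to be short-lived (validity under two weeks), which is the trade-off for skipping the traditional PKI chain.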

  9. NFS is much slower, unless maybe you deploy it with RDMA. I believe even NFS 4.2 doesn’t really support asynchronous calls, or has some significant limitations around them - I’ve commonly seen a single large write of a few gigs starve all other operations, including lstat, for minutes.

    Also, it’s borderline impossible to tune NFS to go above 30Gbps or so consistently, while with WebDAV it’s a matter of adding a bunch more streams and you’re past 200Gbps pretty easily.
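    The "add a bunch more streams" part can be sketched as splitting one large transfer into parallel HTTP range requests - a hypothetical helper, where each (start, end) pair would be fetched on its own connection with a `Range: bytes=start-end` header:

```python
def byte_ranges(total_size: int, streams: int):
    """Split a transfer into contiguous byte ranges, one per stream.

    Each returned (start, end) pair is inclusive, matching HTTP Range
    header semantics ('Range: bytes=start-end')."""
    chunk = total_size // streams
    ranges = []
    start = 0
    for i in range(streams):
        # Last stream absorbs the remainder so the ranges cover the file.
        end = total_size - 1 if i == streams - 1 else start + chunk - 1
        ranges.append((start, end))
        start = end + 1
    return ranges

# e.g. a 1 GiB object split across 8 parallel streams
parts = byte_ranges(1 << 30, 8)
```

    Since each stream gets its own TCP connection and its own server-side handler, throughput scales with stream count until you saturate the NICs, which is the property NFS's single-mount model makes hard to exploit.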

  10. Can't update my self-hosted Home Assistant because HAOS depends on Docker Hub, which seems to still be down.
  11. And after all that hardcore engineering work is done, iMessage still has code paths leading to dubious code running in the kernel, enabling 0-click exploits to still be a thing.
  12. Put sensitive electronics in a metal box, comms over a fiber (already common), and you’re good to go.

    Only tricky thing is if currents induced in motors are too hard to reject in driver circuitry, tho even at the extreme this should be possible to insulate with capacitors (or worse/heavier with transformers)

  13.   [Unit]
      Description=Whatever
      
      [Service]
      ExecStart=/usr/local/bin/cantDoHttpSvc -bind 0.0.0.0:1234
      
      [HTTP]
      Domain=https://whatever.net
      Endpoint=127.1:1234
    
    Yeah this could happen one day
  14. Careful posting systemd satire here, there is a high likelihood that your comment becomes the reason this feature gets built and PRed by someone bored enough to also read HN comment section.
  15. I don’t get why so many people keep making this argument. Transformers aren’t just a glorified Markov chain - they are basically doing multi-step computation: each attention step propagates information, then the feedforward network does some transformations, all happening multiple times in sequence, essentially applying multiple sequential operations to some state, which is roughly what any computation looks like.

    Then sure, the training objective is next-token prediction, but that doesn’t tell you anything about the emergent properties of those models. You could argue that every time you run inference you Boltzmann-brain the model into existence once for every token, feeding it all the input to get one token of output, then killing it. Is it conscious? Nah, probably not. Does it think, or have some concept of being, during inference? Maybe? Would an actual Boltzmann brain spawned to do such a task be conscious, or qualify as a mind?

    (Fun fact: at Petabit/s throughputs, hyperscale GPU clusters are already moving amounts of information comparable to all synaptic activity in a human brain, tho parameter-wise we still have the upper hand with ~100s of trillions of synapses [1])

    * [1] ChatGPT told me so
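The "sequential operations on state" view above can be sketched with a toy block - pure stdlib, with uniform illustrative inputs instead of learned weights, so it only shows the shape of the computation, not a real model:

```python
# Toy sketch of one transformer block: attention mixes information
# across positions, then a feedforward step transforms each position
# independently; stacking blocks applies sequential ops to the state.
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(state):
    """Each output position is a score-weighted average of all positions."""
    out = []
    for q in state:
        weights = softmax([dot(q, k) for k in state])
        mixed = [sum(w * k[d] for w, k in zip(weights, state))
                 for d in range(len(q))]
        out.append(mixed)
    return out

def feedforward(state):
    """Transform each position independently (toy ReLU in place of an MLP)."""
    return [[max(0.0, x) for x in vec] for vec in state]

def transformer_block(state):
    return feedforward(attention(state))

# State: 3 positions, 2 dims each; four stacked blocks = four sequential
# rounds of "propagate information, then transform".
state = [[1.0, -1.0], [0.5, 0.25], [-0.3, 0.8]]
for _ in range(4):
    state = transformer_block(state)
```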

  16. I don't think most people grasp how absurdly high even 1 DWPD is compared to enterprise HDDs. On the enterprise side you'll often read that a hard drive is rated for maybe 550TB/year, translating to 0.05~0.1 DRWPD [1] (yes, combined read AND write), and you have to be fine with that. (..yeah, admittedly the workloads for each are quite different; you can realistically achieve >1 DWPD on an NVMe with e.g. a large LSM database)

    What makes NVMe endurance ratings even better (though not for warranty purposes) is that when your workload has sequential writes, you can expect much higher effective endurance - most DWPD figures are calculated for random 4k writes, which is just about the worst case for flash with multi-megabyte erase blocks. It's my understanding that this is also in large part why there is some push for Zoned (HM-SMR-like) NVMe, where you can declare much higher DWPD.

    * [1] https://documents.westerndigital.com/content/dam/doc-library...
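The arithmetic behind that comparison, with illustrative capacities - the 18TB HDD and 7.68TB NVMe sizes are assumptions for the sake of the example, not figures from the linked datasheet:

```python
# HDD side: convert a rated workload in TB/year into "drive
# (read+)writes per day" for an assumed 18TB enterprise drive.
hdd_capacity_tb = 18.0
hdd_rated_tb_per_year = 550.0
hdd_drwpd = hdd_rated_tb_per_year / 365.0 / hdd_capacity_tb  # ~0.084

# NVMe side: what 1 DWPD means in TB/year for an assumed 7.68TB drive.
nvme_capacity_tb = 7.68
nvme_dwpd = 1.0
nvme_tb_per_year = nvme_dwpd * nvme_capacity_tb * 365.0  # ~2803 TB/year

print(f"HDD: ~{hdd_drwpd:.3f} DRWPD")
print(f"NVMe at 1 DWPD: ~{nvme_tb_per_year:.0f} TB/year of writes alone")
```

So the 18TB HDD's whole read+write budget lands inside the 0.05~0.1 DRWPD range quoted above, while the NVMe is rated to absorb roughly five times its rated workload in writes alone.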

  17. Maybe it's the wasted human potential that's depressing. Other than that, this analogy only makes sense when framed in terms of some philosophy - i.e. if you are a "long-term utilitarian" I don't think it's correct to look favorably at massive consumption of brainrot, even though the individual experiences are technically kinda pleasurable.
  18. Could probably make a decent dataset from VR headset tracking cameras + motion sensors + passthrough output + decoded hand movements
  19. The "We live in a simulation" argument just started looking a lot more conceivable.
  20. Copy-pasting the JSON of the request into ChatGPT and then sending prompts to it normally does appear to make it enter the same role-play mode this site seems to have been in.
  21. The initialization prompt can be easily extracted from the requests made by the site, if anyone is curious: https://pastebin.com/t1WLgGBt
  22. You can see the whole original prompt by looking at request body in your browser.

    Here's the full initial prompt in the request: https://pastebin.com/t1WLgGBt

  23. Build from source AND run an AI agent that reviews every single line of code you compile (while hoping that any potential exploit doesn’t also fool / exploit your AI agent)
  24. Wouldn’t be surprised if the ssh auth being made slower was deliberate - that makes it fairly easy to index all open ssh servers on the internet, then see which ones get slower to fail preauth as they install the backdoor
  25. That specifically is not the selling point, but it is how one of the selling points works.

    You can just take your data to another instance whenever you don’t agree with the policies of your current one. And all your connections/interactions/data should stay intact.

    If it works as well as it seems to in the federation sandbox, you shouldn’t even be able to tell that you’re using a different service, the app just sends requests to a different server, and the web url may be different, and your default feeds are generated somewhere else.

    Now, you may say that users won’t care about backing up their data, but that can be solved with some open (or paid) archival services.

  26. That is exactly what Bluesky lets you do
  27. Eh, it's not the first time Twitter is horribly broken and/or makes an unpopular decision, and it probably won't be the last one.
  28. The service seems to be under massive load from the influx of users over the weekend - they've passed 200k users, and posting activity is a few times higher than normal.

    So they probably could get even more users, but I'm guessing it's already sufficiently crazy for the team right now

