- In Go you know exactly what code you’re building thanks to go.sum, and it’s much easier to audit changed code after upgrading - just create vendor dirs before and after updating packages and diff them (rough sketch below); send the diff to an AI for basic screening if it’s >100k LOC, and/or review it manually. My projects are massive codebases with 1000s of deps and >200MB stripped binaries of literally just code, and this is perfectly feasible. (And yes, I do catch stuff occasionally, though nothing actively adversarial so far.)
I don’t believe I can do the same with Rust.
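A minimal sketch of that vendor-and-diff workflow, assuming go, cp, and diff are on PATH; the directory names and upgrade commands are illustrative, not a fixed recipe:

```go
// Vendor-and-diff audit: snapshot deps, upgrade, re-vendor, then diff the two trees.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		// diff exits non-zero when the trees differ, so don't treat that as fatal
		fmt.Fprintln(os.Stderr, name, "finished with:", err)
	}
}

func main() {
	run("go", "mod", "vendor")                    // vendor current deps
	run("cp", "-r", "vendor", "vendor.before")    // keep a pre-upgrade copy
	run("go", "get", "-u", "./...")               // upgrade modules
	run("go", "mod", "tidy")
	run("go", "mod", "vendor")                    // re-vendor upgraded deps
	run("diff", "-ru", "vendor.before", "vendor") // the diff you actually review
}
```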
- If you have Pro users, why not leverage that revenue with debt instead of giving up equity for no good reason?
Maybe the value prop isn’t clear: the website talks a bunch about AI agent integrations, which sounds like a completely different product from a parser library - and however advanced the parser may be, investors will likely see it as a tangential bit of IP that a senior engineer could build for $10-20k in a few days.
- It does address quite a few reliability issues - you can have multiple gateways into the Thread network, so it is actually highly available.
It’s definitely complicated, but it’s a kind of USB-C of the smart home - you only worry about the complex part when building a product. I just wish there were a better device reset/portability story.
- It is arguably halfway there, and libp2p does make use of it - https://docs.libp2p.io/concepts/transports/webtransport/
Unlike WebSockets, you can supply a "cert hash", which makes it possible for the browser to establish a TLS connection to an endpoint that doesn't have a certificate signed by a traditional PKI provider, or even a domain name. This property is immensely useful because it lets browsers establish connections to any known non-browser node on the internet, including from secure contexts (i.e. from an https page you can't open a ws:// connection; only wss:// is allowed, and that requires a 'real' TLS cert). A rough sketch of the cert-hash side is below.
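To make the mechanism concrete, here is a hedged Go sketch of the cert-hash part: the node generates a short-lived self-signed certificate and publishes the SHA-256 of its DER encoding, which a browser can pass via WebTransport's serverCertificateHashes option instead of relying on CA-signed PKI. The actual WebTransport/QUIC server would come from a library (e.g. webtransport-go or go-libp2p) and is not shown; error handling is elided.

```go
// Generate a short-lived self-signed cert and print the hash a browser would pin.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/hex"
	"fmt"
	"math/big"
	"time"
)

func main() {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "p2p-node"},
		NotBefore:    time.Now(),
		// Browsers only accept serverCertificateHashes for certs valid for <= 14 days.
		NotAfter: time.Now().Add(14 * 24 * time.Hour),
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)

	hash := sha256.Sum256(der)
	fmt.Println("certhash (sha-256):", hex.EncodeToString(hash[:]))

	// Browser side, roughly:
	//   new WebTransport(url, { serverCertificateHashes: [{ algorithm: "sha-256", value: hashBytes }] })
}
```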
- NFS is much slower, unless maybe you deploy it with RDMA. I believe even NFS 4.2 doesn’t really support asynchronous calls, or has some significant limitations around them - I’ve commonly seen a single large write of a few gigs starve all other operations, including lstat, for minutes.
Also, it’s borderline impossible to tune NFS to go above 30 Gbps or so consistently; with WebDAV it’s a matter of adding a bunch more streams and you’re past 200 Gbps pretty easily (sketch below).
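A rough sketch of the "just add more streams" approach, assuming the data is split into chunks uploaded as separate objects; the WebDAV URL, chunk naming, and stream count are made up for illustration:

```go
// Upload a large payload as N chunks over N parallel HTTP connections to a WebDAV server.
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"sync"
)

func main() {
	const streams = 16
	base := "https://storage.example.com/dav/bigfile.part" // hypothetical endpoint
	chunk := bytes.Repeat([]byte("x"), 8<<20)              // 8 MiB dummy payload per chunk

	client := &http.Client{Transport: &http.Transport{MaxConnsPerHost: streams}}

	var wg sync.WaitGroup
	for i := 0; i < streams; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			url := fmt.Sprintf("%s%03d", base, i)
			req, err := http.NewRequest(http.MethodPut, url, bytes.NewReader(chunk))
			if err != nil {
				fmt.Println("stream", i, "request error:", err)
				return
			}
			resp, err := client.Do(req)
			if err != nil {
				fmt.Println("stream", i, "upload error:", err)
				return
			}
			resp.Body.Close()
		}(i)
	}
	wg.Wait()
}
```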
- Put the sensitive electronics in a metal box, run comms over fiber (already common), and you’re good to go.
The only tricky part is if the currents induced in the motors are too hard to reject in the driver circuitry, though even in the extreme case it should be possible to isolate them with capacitors (or, worse/heavier, with transformers).
- I don’t get why so many people keep making this argument. Transformers aren’t just a glorified Markov chain; they are basically doing multi-step computation - each attention step propagates information, then the feedforward network does some transformations, all of this repeated multiple times in sequence, essentially applying multiple sequential operations to some state, which is roughly what any computation looks like.
Then sure, the training objective is next-token prediction, but that doesn’t tell you anything about the emergent properties of those models. You could argue that every time you run inference you Boltzmann-brain the model into existence once for every token, feeding it all the input to get one token of output and then killing it. Is it conscious? Nah, probably not. Does it think, or have some concept of being, during inference? Maybe? Would an actual Boltzmann brain spawned to do such a task be conscious or qualify as a mind?
(Fun fact: at petabit/s throughputs, hyperscale GPU clusters are already moving amounts of information comparable to all synaptic activity in a human brain, though parameter-wise we still have the upper hand with ~100s of trillions of synapses [1])
* [1] ChatGPT told me so
- I don't think most people grasp how absurdly high even 1 DWPD is compared to enterprise HDDs. On the enterprise side you'll often read that a hard drive is rated for maybe 550TB/year of workload, translating to roughly 0.05~0.1 "DRWPD" [1] (yes, combined read AND write), and you have to be fine with that (quick arithmetic below). (...yeah, admittedly the workloads for each are quite different; you can realistically achieve >1 DWPD on an NVMe drive with e.g. a large LSM database.)
What makes NVMe endurance ratings even better (though not for warranty purposes) is that when your workload is sequential writes you can expect much higher effective endurance, since most DWPD figures are calculated for random 4K writes, which is just about the worst case for flash with multi-megabyte erase blocks. It's my understanding that this is also in large part why there's some push for zoned (HM-SMR-like) NVMe, where much higher DWPD can be declared.
* [1] https://documents.westerndigital.com/content/dam/doc-library...
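Quick back-of-the-envelope check of that HDD figure, assuming a 20 TB drive rated for 550 TB/year of combined reads and writes (the capacity is my assumption, not from the datasheet):

```go
// DWPD ~= (rated TB per year) / 365 / (drive capacity in TB)
package main

import "fmt"

func main() {
	const (
		ratedTBPerYear = 550.0
		capacityTB     = 20.0 // assumed drive size
	)
	dwpd := ratedTBPerYear / 365.0 / capacityTB
	fmt.Printf("combined R+W drive-writes-per-day: %.3f\n", dwpd) // ~0.075
}
```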
- Maybe it's the wasted human potential that's depressing. Other than that, this analogy only makes sense when framed in terms of some philosophy - i.e. if you're a "long-term utilitarian", I don't think it's correct to look at massive consumption of brainrot favorably, even though the individual experiences are technically kinda pleasurable.
- The initialization prompt can be easily extracted from the requests made by the site, if anyone is curious: https://pastebin.com/t1WLgGBt
- You can see the whole original prompt by looking at the request body in your browser.
Here's the full initial prompt in the request: https://pastebin.com/t1WLgGBt
- That specifically is not the selling point, but it is how one of the selling points works.
You can just take your data to another instance whenever you don’t agree with the policies of your current one. And all your connections/interactions/data should stay intact.
If it works as well as it seems to in the federation sandbox, you shouldn’t even be able to tell that you’re using a different service: the app just sends requests to a different server, the web URL may be different, and your default feeds are generated somewhere else.
Now, you may say that users won’t care about backing up their data, but that can be solved with some open (or paid) archival services.
- The service seems to be under massive load from the influx of users over the weekend - they've passed 200k users, and posting activity is a few times higher than "the normal".
So they could probably get even more users, but I'm guessing it's already sufficiently crazy for the team right now.
And those are the worst models we will ever use from now on.