- I think that the person to whom you replied is speaking of outdoor installations, while you are speaking of controlled (maybe datacenter) installations. I have outdoor fiber running aerially between buildings on my property, in a region with massive seasonal temperature changes. Multiple local FTTH and coaxial ISPs also run fiber on shared utility poles (the same ones that the electrical grid maintains), and when I look at the poles I see communications lines all in the same general area, often mere centimeters apart, if that.
- > I feel there has to be something between "I heard about a thing 7th-hand" and "I actively watch political discourse / read scientific papers", but I'm no longer sure The News, as we currently know it, is it.
I have found that some YouTube channels and videos can fill this gap nicely (non-comprehensive examples below; I have hundreds of subscribed channels; mostly not politics, but these things inform politics, since politics is making decisions about other things). This is not a perfect substitute, since journalistic integrity and standards do not apply, but I find that this can be mitigated by watching a wide variety (for example, in the field of economics, I regularly watch creators who espouse everything from very free-market capitalism all the way to full-on communism). There are likely other forms of new media that operate at this level of depth, but I haven't found them.
https://www.youtube.com/@TechnologyConnections
https://www.youtube.com/watch?v=FWUaS5a50DI
https://www.youtube.com/@HowMoneyWorks
https://www.youtube.com/@DiamondNestEgg
https://www.youtube.com/@TLDRnews (and associated channels)
https://www.youtube.com/@BennJordan (recent good example https://www.youtube.com/watch?v=vU1-uiUlHTo)
- > I am tired of fighting my own OS.
People cite bugs or incompatible software on Linux as a reason to avoid it and use Windows, but they fail to recognize that Windows actively fights you. I'd take something that's slightly and mistakenly broken on an utterly open platform, where I can fix it if I care enough, over a closed platform that's actively trying to screw me over.
- The US already banned Chinese EVs, even though, from what I've heard, they're excellent.
- For now, such hardware is readily available. Every Walmart, for example, will have it. Amazon has it. PCPartPicker lists numerous other places that you can buy it from.
- That would be a horrifying violation of bodily autonomy.
This doesn't mean that it won't happen, but it does make it especially vile if it does.
- > - I like languages that let you decide how much you need to "prove it."
Rust is known for being very "prove it," as you put it, but I think that it is not, and this exposes a weakness in your perspective here. In particular, Rust lets you be lax about types (Any) or other proved constraints (borrow checker bypass via unsafe, Arc, or cloning), but it forces you to decide how the unproven constraints are handled (ranging from undefined behavior to doing what you probably want with performance trade-offs). A language that simply lets you not prove it still must choose one of these approaches to run, but you will be less aware of what is chosen and unable to pick the right one for your use case. Writing something with, for example, Arc, .clone(), or Any is almost as easy at the start as writing it in something like Python (just arbitrarily pick one approach and go with it), but you get the aforementioned advantages, and it scales better: the reader can instantly see "oh, this could be any type" or "oh, this is taken by ownership, so no spooky action at a distance is likely," instead of dredging through the code to try to figure it out.
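To make that concrete, here is a rough sketch of those "don't prove it" dials in Rust (my own illustration; the function names are invented for the example): `dyn Any` defers the type check to runtime, much like a dynamic language, and `Arc` sidesteps borrow-checker arguments by reference counting, but both choices are visible at the use site.

```rust
use std::any::Any;
use std::sync::Arc;

// Lax typing: `&dyn Any` accepts any 'static type and checks it at
// runtime, roughly what a dynamically typed language does implicitly.
fn describe(value: &dyn Any) -> &'static str {
    if value.is::<i64>() {
        "an integer"
    } else {
        "something else"
    }
}

// Lax ownership: Arc lets several handles share one value, trading a
// reference count for not having to prove borrows to the compiler.
fn share(label: &str) -> (Arc<String>, Arc<String>) {
    let original = Arc::new(label.to_string());
    let second = Arc::clone(&original); // cheap pointer copy, not a deep clone
    (original, second)
}

fn main() {
    assert_eq!(describe(&42i64), "an integer");
    assert_eq!(describe(&"hello"), "something else");

    let (a, b) = share("shared");
    assert_eq!(*a, *b); // both handles see the same underlying String
}
```

The point is that the escape hatches still compile and run; the reader just sees exactly which constraint was relaxed and what it costs.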
- The majority (Steam, Xbox, PlayStation, Nintendo, App Store, Play Store, and Kindle Store) have a captive market of developers (with varying degrees of enforcement, from end users demanding it (Steam) to it being impossible to use anything else (App Store and the consoles)). This will absolutely put upward pressure on the cut that the market will bear.
- > Wherever LLM-generated code is used, it becomes the responsibility of the engineer. As part of this process of taking responsibility, self-review becomes essential: LLM-generated code should not be reviewed by others if the responsible engineer has not themselves reviewed it. Moreover, once in the loop of peer review, generation should more or less be removed: if code review comments are addressed by wholesale re-generation, iterative review becomes impossible.
My general procedure for using an LLM to write code, which is in the spirit of what is advocated here, is:
1) First, feed the existing relevant code into the LLM. This is usually just a few source files in a larger project.
2) Describe what I want to do, either giving an architecture or letting the LLM generate one. I tell it not to write code at this point.
3) Let it speak about the plan, and make sure that I like it. I will converse to address any deficiencies that I see, and I almost always see some.
4) Tell it to generate the code.
5) Skim and test the code to see if it's generally correct, and have it make corrections as needed.
6) Closely read the entire generated artifact, and make manual corrections (occasionally automatic corrections like "replace all C-style casts with the appropriate C++-style casts," followed by a review of the diff).
The hardest part for me is #6, where I feel a strong emotional bias towards not doing it, since I am not yet aware of any errors compelling such action.
This allows me to operate at a higher level of abstraction (architecture) and removes the drudgery of turning an architectural idea into written, precise code. But, when doing so, you are abandoning those details to a non-deterministic system. This is different from, for example, using a compiler or a higher-level VM language. With those tools, you can understand how they work, rapidly form a good idea of what you're going to get, and rely on robust assurances. Understanding LLMs helps, but not to the same degree.
- He speaks of trust and LLMs breaking that trust. Is this not what you mean, but by another name?
> First, to those who can recognize an LLM’s reveals (an expanding demographic!), it’s just embarrassing — it’s as if the writer is walking around with their intellectual fly open. But there are deeper problems: LLM-generated writing undermines the authenticity of not just one’s writing but of the thinking behind it as well. If the prose is automatically generated, might the ideas be too? The reader can’t be sure — and increasingly, the hallmarks of LLM generation cause readers to turn off (or worse).
> Specifically, we must be careful to not use LLMs in such a way as to undermine the trust that we have in one another
> our writing is an important vessel for building trust — and that trust can be quickly eroded if we are not speaking with our own voice
- > it is presumed that of the reader and the writer, it is the writer that has undertaken the greater intellectual exertion. (That is, it is more work to write than to read!)
This applies to natural language, but, interestingly, the opposite is true of code (in my experience and that of other people that I've discussed it with).
- Does this violate anti trust law?
- Their absurdly high 30% cut, combined with running the only otherwise-decent store with real network-effect-driven market share, is a very real criticism.
- How often do big providers like Gmail, whose customers you will want to communicate with, eat the emails? I know that this is common if you run your own email server, and the email is often just gone, not even delivered to spam.
Google would probably justify this as security, and not necessarily unreasonably, but it has a clear anti-competitive effect too. The security concerns would be more credible if they made this easy to debug, like returning a useful error message to the sender stating which security criteria are missing, and having a clear appeals process (for when you got unlucky with an IP address, or are missing a specific security measure on your domain).
- I have been successful in getting non-technical people onto Signal. As a technical product, Signal is kind of shit (among other things: no support for non-Debian-based Linux, forcing users onto sketchy third-party repos when they are a massive target for backdoors, and really shitty UX for backups), but it gets the job done and seems to have robust encryption from what other people say (I am not qualified to evaluate this myself).
If a P2P solution that solved the aforementioned Signal issues were to have excellent UX, then that could probably work.
Lastly, what counts as "excellent UX" for technical and non-technical people seems to differ. For example, I consider Discord and Slack to be quite intuitive and easy to use, but multiple non-technical people have expressed to me that they find them very confusing and prefer other solutions, such as GroupMe in one case. To me, GroupMe shoving the SMS paradigm into something that's fundamentally not SMS is more confusing and poorer UX, but to these non-technical people it seems easy. I suspect that the Signal shortcomings I perceive are an example of this: UX trade-offs that work great for non-technical people but less well for technical people. I'm not sure what these specific trade-offs are, but I suspect it's something akin to having a conceptually sound underlying model (like Discord or Slack servers/workspaces and channels) versus having really obvious "CLICK HERE TO NOT FUSS" buttons like GroupMe, with graceful failures for users who can't even figure that out (like just pretending to be SMS, in GroupMe's case, if you can't figure out how to install an app or don't want to put in that effort, since many people already know how to use SMS).
- Do you rewrite fundamental data structures, like maps, over and over, or just not use them?
- I wrote something similar (a minimal NoSQL key-value DB) and it was slower than Redis (specifically, lower throughput; I did not measure other metrics), despite some passing attempts to make it fast (like using async/await for all IO).
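For a sense of what "minimal key-value DB" means here, the core can be sketched in a few lines of Rust (my own much-simplified, synchronous, in-memory illustration, not the project described above; a real server adds networking, persistence, and the async IO where the throughput battle with Redis is actually fought):

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// A minimal in-memory key-value store: a map behind a lock.
// The Mutex makes it safe to share across threads, at the cost of
// serializing all access -- one of many reasons a naive store
// struggles to match Redis's throughput.
struct KvStore {
    data: Mutex<HashMap<String, Vec<u8>>>,
}

impl KvStore {
    fn new() -> Self {
        KvStore { data: Mutex::new(HashMap::new()) }
    }

    fn set(&self, key: &str, value: &[u8]) {
        self.data.lock().unwrap().insert(key.to_string(), value.to_vec());
    }

    fn get(&self, key: &str) -> Option<Vec<u8>> {
        self.data.lock().unwrap().get(key).cloned()
    }

    fn delete(&self, key: &str) -> bool {
        self.data.lock().unwrap().remove(key).is_some()
    }
}

fn main() {
    let store = KvStore::new();
    store.set("answer", b"42");
    assert_eq!(store.get("answer"), Some(b"42".to_vec()));
    assert!(store.delete("answer"));
    assert_eq!(store.get("answer"), None);
}
```

Everything past this core (event loop, pipelining, protocol parsing, avoiding per-request allocations) is where Redis has had years of tuning that a passing attempt at speed won't reproduce.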
- Regarding Robot, I think that it's completely fine for what it is. I almost never interact with it, and instead just configure my server as I see fit over SSH. Hetzner's value proposition is extremely cheap no-frills servers -- you're paying for the server, not the management interface. If you want management interfaces that do a lot of useful work, use a cloud.
- Another issue is that you could, in principle, build data centers in places where you don't need to evaporate water to cool them. For example, you could use a closed loop water cooling system and then sink that heat into the air or into a nearby body of water. OVH's datacenter outside Montreal¹ does this, for example. You can also use low carbon energy sources to power the data center (nuclear or hydro are probably the best because their production is reliable and predictable).
Unlike with most datacenters, it is okay for AI datacenters to be far from the user, since it takes on the order of seconds to minutes for the code to run and generate a response, so a few hundred milliseconds of latency is much more tolerable. For this reason, I think that we should pick a small number of ideal locations that combine weather permitting non-sub-ambient cooling with usable low-carbon resources (either hydropower that is available and plentiful, or the ability to build or otherwise access nuclear reactors), and then put the bulk of this new boom there.
If you pick a place with both population and a cold climate, you could even look into using the data center's waste heat for district heating to get a small new revenue stream and offset some environmental impact.
- > alternate keyboards can steal your password, alternate browsers can have adware / malware, alternate launcher can do many naughty things etc. etc.
It's plausible that Google has done some of these things, like data mining everything that you type (stealing your password, in effect), and many official Google apps have ads if you don't pay them.