- Firefox fell behind Chrome because of aggressive marketing from Google, of a kind that would probably violate antitrust law if it were actually being enforced, combined with a couple of own-goals from Mozilla.
Basically, Google exploits their market dominance in Search and Mail to push people towards Chrome (and probably their other services too). When you search in a non-Chrome browser, Google constantly informs you about how much better their search is in Chrome through pop-ups and in-page notifications (not browser notifs). If you click a link in the Gmail app on iOS and they detect Chrome isn't your default browser, you get a Chrome advertisement instead of the link simply opening in your browser.
This goes hand-in-hand with Chrome being the default Android browser (don't underestimate the power of being the default) and Mozilla alienating their core audience of power users by forcibly inserting features that those power users despise.
Chrome never won on features, it won on marketing and abuse of a different monopoly.
- I'd say it's probably worse in terms of scope. The audience for some AI-powered documentation platform will ultimately be fairly small (mostly corporations).
Anubis is promoting itself as a sort of Cloudflare-esque service to mitigate AI scraping. They also aren't just an open source project relying on gracious donations; there's a paid whitelabel version of the project.
If anything, Anubis probably should be held to a higher standard, given that many more vulnerable people rely on it compared to big corporations (vulnerable as in: an XSS on their site means fishing it out of spam filters and/or bandwidth exhaustion hitting their wallet). Same reason that a bug in some random GitHub project somewhere probably has an impact of near zero, but a critical security bug in nginx means the shit hits the fan. When you write software that has a massive audience, you're going to be held to higher standards (if not legally, at least socially).
Not that Anubis' handling of this seems to be bad or anything; both XSS attacks were mitigated, but "won't somebody think of the poor FOSS project" isn't really the right answer here.
- If Mozilla were to kill adblockers, there's basically no reason to not use Chromium. It's pretty much the only relevant difference between Chromium and Firefox these days.
It's truly impressive how they've managed to pull every user-hostile trick Google Chrome also did over the years, except with no real clear reason besides contempt for their users' autonomy, I suppose. Right now the sole hill Mozilla really has left is adblockers, and they've talked about wanting to sacrifice that?
It truly boggles the mind to even consider this. That's not 150 million, that's the sound of losing all your users.
- Insane that they're dropping client certificates for authentication. Reading the linked post, it's because Google wants them to be separate PKIs and forced the change in their root program.
They aren't used much, but they are a neat solution. Google forcing this change just means there's even more overhead when updating certs in a larger project.
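For anyone who hasn't used them: the client presents its own certificate during the TLS handshake and the server verifies it, so no passwords or tokens are involved. A minimal sketch of what that looks like from the client side with Python's requests; the URL and file paths are placeholders, not anything from a real deployment:

```python
import requests

# Hypothetical endpoint that requires mutual TLS; all paths are placeholders.
API_URL = "https://internal.example.com/api/status"

resp = requests.get(
    API_URL,
    # Client certificate + private key presented during the TLS handshake.
    cert=("client.crt", "client.key"),
    # CA bundle used to verify the *server's* certificate (the other PKI).
    verify="internal-ca.pem",
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```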
- It's technically possible to get any Android app to accept user CAs. Unfortunately it requires unpacking it with apktool, adding a network security config override to the XML resources and pointing AndroidManifest.xml at it. Then you restitch the APK with apktool, sign it with jarsigner/apksigner and run zipalign (roughly the steps in the sketch below).
Doesn't need a custom ROM, but it's so goddamn annoying that you might as well not bother. I know how to do these things; most users won't, and given the direction the big G is heading with device freedom, this approach isn't looking all that bright either.
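For the curious, here's the rough shape of the procedure as a hedged sketch. It assumes apktool, zipalign and apksigner are on PATH and that you already have a signing keystore; all file names are placeholders, and the manifest edit is only described in a comment rather than automated:

```python
#!/usr/bin/env python3
"""Sketch: patch an APK so it trusts user-installed CAs."""
import pathlib
import subprocess

APK = "app.apk"            # placeholder input
WORKDIR = "app_decoded"

# The override that makes the app trust the user CA store.
NETWORK_SECURITY_CONFIG = """<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <base-config>
        <trust-anchors>
            <certificates src="system" />
            <certificates src="user" />  <!-- the important line -->
        </trust-anchors>
    </base-config>
</network-security-config>
"""

# 1. Unpack the APK.
subprocess.run(["apktool", "d", "-f", APK, "-o", WORKDIR], check=True)

# 2. Drop the config into res/xml/ ...
xml_dir = pathlib.Path(WORKDIR, "res", "xml")
xml_dir.mkdir(parents=True, exist_ok=True)
(xml_dir / "network_security_config.xml").write_text(NETWORK_SECURITY_CONFIG)

# ... and point the manifest at it: add
#   android:networkSecurityConfig="@xml/network_security_config"
# to the <application> element in AndroidManifest.xml (by hand or with
# your XML tool of choice; omitted here for brevity).

# 3. Restitch, align and sign (apksigner wants zipalign to run first).
subprocess.run(["apktool", "b", WORKDIR, "-o", "app_patched.apk"], check=True)
subprocess.run(["zipalign", "-f", "4", "app_patched.apk", "app_aligned.apk"], check=True)
subprocess.run(
    ["apksigner", "sign", "--ks", "my.keystore",
     "--out", "app_signed.apk", "app_aligned.apk"],
    check=True,
)
```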
- For a lot of developers, the current biggest failure of open source is the AWS/Azure/GCP problem. BigCloud has a tendency to take well-liked open source products and provide a hosted version of them, and as a result they absolutely annihilate the market share of the entity that originally made the product (which usually made money by offering supported and hosted versions of the software). Effectively, for networked software (which is the overwhelming majority of software products these days) you might as well use something like BSD/MIT rather than any of the GPLs[0], because they practically offer the same guarantees; it's just that the BSD/MIT licenses don't contain language that makes you think they do things they actually don't. Non-networked software like kernels, drivers and most desktop software doesn't have this issue, so it doesn't apply there.
Open source for that sort of product (which most of the big switches away from open source have been about) only further entrenches BigCloud's dominance over the ecosystem. It absolutely breaks the notion that you can run a profitable business on open source. BigCloud basically always wins that race even when they aren't cheaper, because the customer is already on BigCloud: using their hosted version means cutting through less red tape internally, since getting people to agree on BigCloud is much easier than onboarding a new third party you have to work with.
The general response to this issue from the open source side tends to be accusing the original developers of being greedy, or of only wanting to use the ecosystem as a springboard for their own popularity.
---
I should also note that this generally doesn't apply to the fight between DHH and Mullenweg that's described in the OP. DHH just wants to kick a hornets' nest and get attention now that Omarchy isn't the topic du jour anymore - no BigCloud (or, more likely in this case, shared hosting provider) is going to copy a random kanban tool written in Ruby on Rails. They copy the actual high-profile stuff like Redis, Terraform and whatever other recent examples you can think of that got screwed by BigClouds offering their services that way (shared providers pretty much universally still run the classic LAMP stack, which doesn't support a Ruby project, immunizing DHH's tool from that particular issue as well). Mullenweg, by contrast, does have to deal with Automattic not having a stranglehold on being a WordPress provider, since the terms of his license weren't his to make to begin with; b3/cafelog was also under the GPL and WordPress inherited that. He's been burned by FOSS, but it's also hard to say he was surprised by it, since WP is derived from another software product.
[0]: Including the AGPL; it doesn't actually do what you think it does.
- It's not impossible to run a publicly owned company in the US that isn't insanely hostile towards its customers or employees... it's just really damn difficult because of bad legal precedent.
Dodge v. Ford is basically the source of all these headaches. The Dodge Brothers owned shares in Ford, and Ford refused to pay the dividends he owed them, suspecting that they'd use the money to start their own car company (he wasn't wrong about that part). The Dodge Brothers sued, upon which Ford's defense for not paying out dividends was "I'm investing it in my employees" (an obvious lie; it was very blatantly about not wanting to pay out). The judge sided with the Dodge Brothers and the legal opinion included a remark that the primary purpose of a director is to produce profit for the shareholders.
That's basically been US business doctrine ever since, twisted into the idea that the director's job is to maximize profits for the shareholders. It's somewhat bunk doctrine as far as I know; the actual precedent mostly translates to "the shareholders can fire the directors if they think they aren't doing a good job" (since it can be argued that as long as any solid justification exists, producing profit for the shareholders can be assumed[0]; Dodge v. Ford was largely Ford refusing to honor his contracts with money that Dodge knew Ford had in the bank). But nobody in upper management wants to risk facing lawsuits from shareholders arguing that they made decisions that go against shareholder supremacy[1]. And so the threat of legal consequences morphs into the worst form of corporate ghoulishness, pervasive across every publicly traded company in the US. It's why short-term decision making dominates long-term planning for pretty much every public company.
[0]: This is called the "business judgment rule", where courts will broadly defer judgment on whether a business is run competently to the executives of that business.
[1]: Tragically, the fact that it's bunk legal theory doesn't change that the potentially disastrous consequences of lawsuits in the US are a very real thing.
- Taking out the public leaderboard makes sense imo. Even when you don't consider the LLM problem, the public leaderboard's design was never really suited for anyone outside of the very specific short list of (US) timezones where competing for a quick solution was ever feasible.
One thing I do think would be interesting is to see the solution rate per hour block. It'd give an indication of how popular Advent of Code is across the world.
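If per-puzzle completion timestamps were ever made available (big if; this assumes a hypothetical export of one UTC timestamp per solve), bucketing them per hour would be a few lines:

```python
from collections import Counter
from datetime import datetime, timezone

# Hypothetical input: one ISO-8601 UTC timestamp per completed puzzle.
timestamps = [
    "2025-12-01T05:03:12+00:00",
    "2025-12-01T05:47:59+00:00",
    "2025-12-01T18:20:01+00:00",
]

# Count solves per UTC hour-of-day.
solves_per_hour = Counter(
    datetime.fromisoformat(ts).astimezone(timezone.utc).hour for ts in timestamps
)

for hour in range(24):
    print(f"{hour:02d}:00 UTC  {solves_per_hour.get(hour, 0)} solves")
```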
- > Furthermore, current copyright terms are decades past the death of the creator.
It's important to recognize why this is the case - a lot of the hubbub around posthumous copyright comes from the fact that a large amount of classic literature went unrecognized during the author's lifetime. A classic example is Moby Dick, which sold and reviewed poorly - Melville only made $1,260 from the book in total, and his wife only made ~$800 from it in the 8 years it remained under copyright after he died, even though it's hard not to imagine it on a literature list these days. Long copyright terms existed to ensure that an author's family didn't lose out on any potential sales that came much later. Even more recent works, like The Lord of the Rings, heavily benefited from posthumous copyright, as it allowed Tolkien's son to turn the books into the modern classics they are today by carefully curating the rereleases and additions to the work (the map of Middle-earth, for instance, was drawn by Tolkien's son).
It's mostly a historic justification though; copyright pretty blatantly just isn't designed with the internet in mind. Personally I think an unconditional 50 years is the right length for copyright. No "life+50"; just 50.
50 years of copyright should be more than enough to get as much mileage out of a work as possible, without running into the current insanity where all of the modern world's cultural touchstones are in the hands of a few megacorporations. For reference, 50 years means that everything from before 1975 would no longer be under copyright today, which seems like a much fairer length to me. It also means that if you create something popular, you have roughly the entire duration of a person's working life (starting at 18-23, ending at 65-70) to make money from it.
- Grades aren't necessarily an indicator of whether a person comprehends the educational material. Someone can visibly under-perform on general tests, but when questioned in person or made to do an exam, still recite the material off the top of their head, apply it correctly and even take it in a new direction. Those are underachievers; they know what they can do, but for one reason or another they simply refuse to show it (a pretty common cause is finding the general coursework demeaning, or the teachers using the wrong teaching methods, so they don't put a lot of effort into it[0]). Give them coursework above their level, and they'll suddenly get acceptable/correct results.
IQ can be used somewhat reliably to identify whether someone is an underachiever or legitimately struggling. That's what the tests are made and optimized for; they're designed to test how quickly a person can make the connection between two unrelated concepts. If they do it quickly enough, they're probably underachieving compared to what they can actually do, and it may be worth giving them more complicated material to see if they can handle it. (And conversely, if it turns out they're actually struggling, it may be worth dedicating more time to helping them.)
That's the main use of it. Anything else you attach to IQ is a case of correlation not being causation, and anyone who thinks it's worth more than that is being silly. High/low IQ correlates with very little besides a general trend in how quickly you can recognize patterns (and because scores are renormalized every couple of years and the statistics get thin at the extremes, any score beyond roughly the 95th percentile is basically the same anyway; there's very little difference between 150, 180, 210 or whatever other high number you imagine).
- It keeps astounding me that people assign value to a score that was mainly intended to find outliers in the education system, as if it were anything besides that.
Or to quote the late physicist Stephen Hawking: "People who boast about their IQ are losers".
- The problem with a standard video element is that while it's mostly nice for the user, it tends to be pretty bad for the server operator. There's a ton of problems with browser video, beginning pretty much entirely with "what codec are you using". It sounds easy, but the unfortunate reality is that there are a billion different video codecs (with heavy use of Hyrum's law/spec abuse among them) and a browser only supports a tiny subset. Hosting video on the web already requires, as a baseline, transcoding the video into a browser-friendly storage format; unlike a normal video file you can't just feed it to VLC and get playback, you're dealing with the terrible browser ecosystem.
Then once you've found a codec, the other problem immediately rears its head: video compression is pretty bad if you want to use a widely supported codec, if for no other reason than that people use non-mainstream browsers that can be years out of date. So you're now dealing with massive amounts of storage space and bandwidth that are effectively eaten up by duplicated files, and that isn't cheap either. To give an estimate, with most VPS providers that aren't hyperscalers, a plain text document can be served to a couple million users without you having to think about bandwidth fees. Images are bigger, but not by enough to worry about. 20 minutes of 1080p video is about 500 MB under a well-made codec that doesn't mangle the video beyond belief. That video is going to reach at most 40,000 people before you burn through 20 terabytes of bandwidth (the Hetzner default amount) - in reality probably fewer, because some people will rewatch the thing. Hosting video is the point where your bandwidth bill overtakes your storage bill.
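To make that back-of-the-envelope math explicit (the 500 MB per 20-minute video and the 20 TB allowance are just the rough figures from above):

```python
# Rough bandwidth math for self-hosted video, using the figures above.
video_size_gb = 0.5          # ~500 MB for 20 minutes of decently encoded 1080p
monthly_quota_tb = 20        # e.g. a typical included-traffic allowance

full_views = (monthly_quota_tb * 1000) / video_size_gb
print(f"Full plays before the quota runs out: {full_views:,.0f}")   # ~40,000

# Compare with plain text: a ~100 kB page fits millions of times in the same quota.
page_size_mb = 0.1
page_views = (monthly_quota_tb * 1_000_000) / page_size_mb
print(f"Text page views in the same quota: {page_views:,.0f}")      # ~200 million
```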
And that's before we get into other expected niceties like seeking through a video while it's playing. Modern video players (the "JS nonsense" ones) can both buffer a video and jump to any point in it, even outside the buffer. That's not a guarantee with the HTML video element; your browser will probably just keep quietly downloading the file while you're watching it (eating into the server operator's costs), and seeking ahead will just freeze the output until the download catches up to that point.
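Seeking outside the buffer only works if the server honors byte ranges; a quick way to check (the URL is a placeholder) is to look for an `Accept-Ranges` header and a `206 Partial Content` response:

```python
import requests

VIDEO_URL = "https://example.com/videos/talk.mp4"  # placeholder

head = requests.head(VIDEO_URL, timeout=10)
print("Accept-Ranges:", head.headers.get("Accept-Ranges", "not advertised"))

# Ask for the first kilobyte only; a server that supports seeking replies 206.
partial = requests.get(VIDEO_URL, headers={"Range": "bytes=0-1023"}, timeout=10)
print("Status:", partial.status_code)          # 206 if ranges are honored, 200 if not
print("Content-Range:", partial.headers.get("Content-Range"))
```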
It's easy to claim hosting video is simple, when in practice it's probably the single worst thing to host on the internet (well, that and running your own mailserver, but that's not only because of technical difficulties). Part of YouTube being bad is just hyper-capitalism, sure, but the more complicated techniques like HLS/DASH pretty much entirely exist because hosting video is so expensive and "preventing your bandwidth bill from exploding" is really important. That's also why there's no real competition to YouTube; the economics of hosting video only make sense if you have a Google amount of money and datacenters to throw at the problem, or don't care about your finances in the first place.
- A cache can help even for small stuff if there's something time-consuming to do on a small server.
Redis/valkey is definitely overkill though. A slightly modified memcached config (only so it accepts larger values; server responses bigger than the default 1 MB item limit aren't always avoidable) is a far simpler solution that provides 99% of what you need in practice. Unlike redis/valkey, it's also explicitly a volatile cache that can't do persistence, meaning you're disincentivized from bad design patterns where the cache becomes state your application assumes any level of consistency of (including its existence). If you aren't serving millions of users, a stateful cache is a pattern best avoided.
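A minimal sketch of that volatile get-or-compute pattern with pymemcache, assuming a local memcached started with a raised item size limit (e.g. `-I 8m`); the report functions are purely illustrative:

```python
from pymemcache.client.base import Client

# Assumes a local memcached started with something like `memcached -I 8m`
# so responses larger than the default 1 MB item limit still fit.
cache = Client(("127.0.0.1", 11211))

def render_report(report_id: str) -> bytes:
    # Placeholder for the expensive work being cached.
    return f"report {report_id}".encode()

def get_report(report_id: str) -> bytes:
    """Get-or-compute: the cache is purely an optimization, never state."""
    key = f"report:{report_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached

    rendered = render_report(report_id)
    # Short TTL; if memcached restarts or evicts the entry, we just recompute.
    cache.set(key, rendered, expire=300)
    return rendered
```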
DB caches aren't very good mostly because of speed; they have to read from the filesystem (and have network overhead), while a cache reads from memory and can often just live on the same server as the rest of the service.
- The problem is that a lot of CVEs don't represent "real" vulnerabilities, merely theoretical ones that could hypothetically be combined into a real exploit.
Regex exploitation is the forever example to bring up here, as it's generally the main reason "autofail the CI pipeline the moment an audit command fails" doesn't work for certain codebases. The reason is that it's trivial to craft a string that wastes significant resources when a regex is matched against it, so the moment you have a function that accepts a user-supplied regex pattern (or feeds user input to a vulnerable pattern), that's suddenly an exploit... which gets a CVE. A lot of projects then have CVEs filed against them because internal functions take regex patterns as arguments, even when they sit in code the user is flat-out never going to be able to interact with (i.e. several dozen layers deep in framework soup there's a regex call somewhere, reachable only if a developer several layers up starts breaking the framework they're using in really weird ways on purpose).
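The classic catastrophic-backtracking shape, for reference; with Python's backtracking `re` engine the runtime roughly doubles with every extra character, so even a modest input is enough to pin a CPU core:

```python
import re
import time

# A pattern with nested quantifiers: (a+)+ can partition the "a"s in
# exponentially many ways before the trailing "b" finally fails to match.
evil_pattern = re.compile(r"^(a+)+b$")

for n in (18, 20, 22, 24):
    payload = "a" * n + "!"          # never matches, forcing full backtracking
    start = time.perf_counter()
    evil_pattern.match(payload)
    print(f"n={n}: {time.perf_counter() - start:.2f}s")
# Each step roughly doubles the runtime; a user-supplied pattern or payload
# like this is what most regex-DoS CVEs boil down to.
```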
The CVE system is just completely broken and barely serves as an indicator of much of anything, really. The approval process, from what I can tell, favors acceptance over rejection, since the people reviewing the initial CVE filing aren't the same people who actively investigate whether the CVE is bogus, and the whole point of the CVE system is literally to encourage companies to give a shit about software security (a fact that's also often exploited to create beg bounties). CVEs have been filed against software for what amounts to "a computer allows a user to do things on it" even before AI slop made everything worse; the system was questionable in quality 7 years ago at the very least, and is even worse these days.
The only indicator it really gives is that a real security exploit can feel more legitimate if it gets a CVE assigned to it.
- Pretty much. Most animals are smarter than you expect, but also tend to be more limited in what they can reason about.
It's why anyone who's ever taken care of a needy pet inevitably reaches the comparison that it's like taking care of a very young child: it's needy, it experiences emotions, but it can't quite figure out on its own how to adapt to an environment beyond what it grew up around or its own instincts. They experience some sort of qualia (a lot of animals are pretty family-minded), but good luck teaching a monkey to read. The closest we've gotten is teaching them that if they press the right button, they get food, and even then they take basically their entire lifespan to understand a couple hundred words, while humans easily surpass that.
IIRC some of the smartest animals in the world are actually rats. They experience qualia very close to ours, to the point that effects from human psychology experiments are often directly observable in rats.
- Nowadays pip also defaults to installing into the user's home folder if you don't run it as root.
Basically the only thing missing from pip install being a smooth experience is something like npx to cleanly run the modules/binaries that were installed to that directory. You're still futzing with the PATH variable to get those scripts to run correctly.
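The annoying part is figuring out where those per-user console scripts actually land and whether that directory is even on PATH; a standard-library-only sketch of the check (the scheme lookup is a reasonable guess, and macOS framework builds may use a slightly different scheme than shown here):

```python
import os
import sysconfig

# Where rootless/`--user` pip installs put console scripts (approximation;
# macOS framework builds may use the osx_framework_user scheme instead).
user_scripts = sysconfig.get_path("scripts", f"{os.name}_user")

on_path = user_scripts in os.environ.get("PATH", "").split(os.pathsep)
print(f"User scripts directory: {user_scripts}")
print(f"On PATH: {on_path}")
# If this prints False, installed entry points won't run by name until you
# add the directory to PATH yourself; that's the missing npx-like smoothness.
```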
- > You can get $2/yr domain names on weird TLDs like .site, .cam, .link, ...
You can, but as stated - that's not free (or easy). It's still yet another fee you have to pay... which hurts adoption of HTTPS for intranets (not to mention it's not really an intranet if it relies on something entirely outside of that intranet).
If Let's Encrypt charged $1 to issue/renew a certificate, they wouldn't have made a dent in the public adoption of HTTPS certificates.
> Not necessarily, you don't route the domain externally, and use offline DNS challenge/request to renew the certificate.
I already mentioned that one; that's the wildcard method.
- I never said they had an incentive to solve it. I said that it's one of the big blockers to regular adoption. It ought to be obvious that none of these issues are a problem if you look at them through the big tech lens: why would it be a problem when we're the ones providing the service? They're a problem when you're a normal person with a healthy distrust of big tech companies.
In practice, I expect someone to figure out a way to break into/bypass the OS flow entirely with a less "big tech wants your private details" solution and that's what winds up getting adoption.
- Main reason is that it's hard to get certificates for intranets that all devices will properly trust.
Public CAs don't issue (free) certificates for internal hostnames, and running your own CA has the drawback that Android doesn't allow you to "properly" use a personal CA without root, splitting its CA list between the automatically trusted system CA list and the per-application opt-in user CA list. (It ought to be noted that Apple's personal CA installation method uses MDM, which is treated like a system CA.) There are also random/weird one-offs, like Firefox not respecting the system certificate store, so you need to import your CA certificate separately in Firefox.
The only real option without running into all those problems is to get a regular (sub)domain name and issue certificates for that, but that usually isn't free or easy. Not to mention that if you do the SSL flow "properly", you need to issue one certificate per device, which leaks your entire intranet to the certificate transparency logs (this is the problem with Tailscale's MagicDNS as a solution). Alternatively you can issue a wildcard certificate for your domain, but then every device in your intranet that holds it has a valid SSL certificate for every other hostname under that wildcard.
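You can see that leak for yourself: CT logs are public, and crt.sh exposes a simple query interface. The JSON endpoint and the `name_value` field used below are assumptions about its current API, and example.com is a placeholder:

```python
import requests

DOMAIN = "example.com"  # placeholder; substitute your own domain

# crt.sh query for every logged certificate under the domain (assumed API).
resp = requests.get(
    "https://crt.sh/",
    params={"q": f"%.{DOMAIN}", "output": "json"},
    timeout=30,
)
resp.raise_for_status()

# Every per-device certificate ever issued shows up here, hostname and all.
hostnames = set()
for entry in resp.json():
    hostnames.update(entry.get("name_value", "").splitlines())

for name in sorted(hostnames):
    print(name)
```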
There's a reason most code forges offer you a fake email that will also be considered as "your identity" for the forge these days.