- This reminds me of Makimoto’s Wave:
https://semiengineering.com/knowledge_centers/standards-laws...
There is a constant cycle between domain-specific, hardcoded-algorithm hardware design and programmable, flexible design.
- I put some of the article's content into Pangram. Pangram says it's AI:
https://www.pangram.com/history/282d7e59-ab4b-417c-9862-10d6...
The author's writing style is really similar to AI's. AI has, in a sense, already passed the Turing test. AI detectors are not that trustworthy (but still useful).
- The overuse of emoji makes me suspect it's AI. I plugged the first paragraph into Pangram and the result is 100% AI:
https://www.pangram.com/history/30310094-f54b-4e74-8d68-e4c9...
- It's probably KPI-driven. Devs are punished for any visible error, so devs hide errors.
- It's also possible that OpenAI trained on a lot of human-generated ARC-like data (semi-cheating). OpenAI has enough incentive to fake a high score.
Without fully disclosing the training data, you can never be sure whether good performance comes from genuine generalization or from memorization / "semi-memorization".
- Once a metric becomes an optimization target, it ceases to be a good metric (Goodhart's law).
- React Server Components are the frontend's attempt to "eat" the backend.
Conversely, HTMX is the backend's attempt to "eat" the frontend.
HTMX preserves the boundary between client and server, so it's safer on the backend but less safe on the frontend (risk of XSS).
- Any attempt that blurs the boundary between client and server is unsafe.
- Egison is a pattern-matching-oriented language https://www.egison.org/
- All mistakes can be blamed on "carelessness". This doesn't change the fact that some designs are more error-prone and less safe than others.
- The biggest value of Rust is avoiding heisenbugs https://en.wikipedia.org/wiki/Heisenbug
Memory-unsafety and data races are causes of heisenbugs, but there are other causes too. Rust doesn't catch all heisenbugs. But not being perfect doesn't mean it's useless (the perfect-solution fallacy).
The article has some valid points but is full of ragebait exaggeration.
- Maybe the "AI feeling" is an illusion from already knowing it's AI-generated: just confirmation bias, like wine tasting better once you know it's expensive. In the real world, AI-generated images have passed the Turing test. Only with a double-blind test can you really be sure.
- A nitpick about the website: the top progress bar is kind of distracting (high-contrast color with animation). It's also unnecessary because there is already a scrollbar on the right side.
- The "diskless" is actually replacing disk with S3
- Why not just use GitHub Pages for static blogs? It's free, and there's no need to worry about extra bandwidth and other costs caused by crawlers.
- I disagree with this
> While UUIDv7 still contains random data, relying on the primary key for security is considered a flawed approach
The correct way is: 1. generate IDs on the server side, not the client side; 2. always validate the data-access permission of every ID sent from the client.
A predictable ID is only unsafe if you don't validate data-access permissions for IDs sent from the client. Also, UUIDv7 is much less predictable than an auto-increment ID.
But I do agree that embedding the creation time in a public-facing ID can leak analytical information.
- With proper data-permission checks, having predictable IDs is totally fine. And UUIDv7's random part (74 bits) is large enough that it's much harder to predict than an auto-increment ID.
If your security relies on attackers not knowing your IDs (i.e., you don't do proper data-permission checks), your security is flawed.
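The creation-time leak is easy to demonstrate: per RFC 9562, a UUIDv7's top 48 bits are the Unix timestamp in milliseconds, so anyone holding the ID can read off roughly when the row was created. A hand-rolled sketch (the stdlib's `uuid` module only gained a `uuid7()` constructor in very recent Python versions, so this builds one manually):

```python
import os
import time
import uuid

def uuid7() -> uuid.UUID:
    """Build a UUIDv7 per RFC 9562: 48-bit Unix-ms timestamp, then 74 random bits."""
    ms = int(time.time() * 1000) & ((1 << 48) - 1)
    rand = int.from_bytes(os.urandom(10), "big")  # 80 random bits; 74 are used
    value = ms << 80                              # bits 80..127: timestamp
    value |= 0x7 << 76                            # bits 76..79: version = 7
    value |= (rand >> 68) << 64                   # bits 64..75: 12-bit rand_a
    value |= 0b10 << 62                           # bits 62..63: RFC 4122 variant
    value |= rand & ((1 << 62) - 1)               # bits 0..61: 62-bit rand_b
    return uuid.UUID(int=value)

def uuid7_created_at(u: uuid.UUID) -> float:
    """Recover the embedded creation time (seconds since epoch) from a UUIDv7."""
    return (u.int >> 80) / 1000.0
```

So the 74 random bits make the ID practically unguessable, but the timestamp prefix is public information to anyone who sees the ID.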
The first is about what to optimize. The second, "being pushed to work faster", often produces bad results.
https://x.com/jamonholmgren/status/1994816282781519888
> I’ll add that there’s also a massive habits and culture problem with shipping slop that is very hard to eradicate later.
> It’s like a sports team, losing on purpose so they can get better draft picks for the next year. It rarely works well because that loser mentality infects the whole organization.
> You have to slow down, do it right, and then you can speed up with that foundation in place.