- OK yeah, I think I see what you're saying: if the SMTP mailer is a hosted service and we're talking about the logs for the service itself, then failed connections are not an error - this I agree with. I also wouldn't be logging anything transactional at all in this case - the transactional logs are for the user, they are functionality of the service itself in that case, and those logs should absolutely log a failure to connect as an error.
- "If a SMTP mailer trying to send email to somewhere logs 'cannot contact port 25 on <remote host>', that is not an error in the local system and should not be logged at level 'error'."
But it is still an error condition, i.e. something does need to be fixed: either something about the connection string is wrong (i.e. in the local system), or something in the other system, or somewhere between the two, is wrong (and therefore still needs fixing, just not here). Either way, someone on this end reading the logs (granted, it might not be the developers of the SMTP mailer) needs to get involved, even if it is just to reach out to the third party and ask them to fix it on their end.
The idea that a condition which fundamentally prevents a piece of software from working is not considered an error seems mad to me.
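To make it concrete, here is roughly what I have in mind - a minimal Node/TypeScript sketch, nothing to do with anyone's actual mailer code; the host name and timeout are invented for illustration:

    // Hypothetical mailer-side sketch: trying to reach a remote MX host on port 25.
    import { createConnection } from "node:net";

    const remoteHost = "mx.example.com"; // placeholder, not a real upstream

    const socket = createConnection({ host: remoteHost, port: 25, timeout: 10_000 });

    socket.on("connect", () => {
      console.info(`connected to ${remoteHost}:25`);
      socket.end();
    });

    socket.on("error", (err) => {
      // My position: this prevents the mailer from doing its one job, so it belongs
      // at error level, regardless of whether the fault is local or remote.
      console.error(`cannot contact port 25 on ${remoteHost}: ${err.message}`);
    });

    socket.on("timeout", () => {
      console.error(`timed out contacting port 25 on ${remoteHost}`);
      socket.destroy();
    });

Whether that console.error should really be a console.warn is exactly the disagreement here - my point is only that someone reading the logs needs to see it and act on it.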
- https://joyus.ajmoon.com
This is pretty much as done as it's going to be (it could use some nicer UI feedback, i.e. around how you actually use the app) - it is really just a demo for an effort I undertook to mod Datastar to support nested web components. I am writing it up as we speak!
Instructions: you have to answer three questions; each one will auto-submit once your response goes over 100 characters; the answer to the third question is your "post". It's a proof of concept of a friction intervention for social media, to encourage slow thinking before posting (and hopefully to reframe negative experiences in the mind - it's kind of dual purpose).
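For the curious, the auto-submit bit is nothing fancier than this - a plain TypeScript sketch, not the actual Datastar/web-component wiring, and the element ids are made up:

    // Hypothetical sketch of the 100-character auto-submit behaviour.
    const AUTO_SUBMIT_THRESHOLD = 100;

    const answer = document.querySelector<HTMLTextAreaElement>("#answer")!;
    const form = document.querySelector<HTMLFormElement>("#question-form")!;

    let submitted = false;

    answer.addEventListener("input", () => {
      if (!submitted && answer.value.length > AUTO_SUBMIT_THRESHOLD) {
        submitted = true;     // only fire once per question
        form.requestSubmit(); // runs the form's normal submit handling
      }
    });

The real thing hangs this off nested web components, which is the part the write-up is actually about.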
- Point of curiosity: the community prediction is, presumably, an arithmetic mean, but I argue that is not a good model for a dataset that almost certainly gets more dense closer to the present, creating a gradient out into the future. It would be great to see the geometric mean as well.
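To illustrate what I mean about sensitivity to the tail - made-up numbers, obviously, not the site's actual data or weighting:

    // Hypothetical predictions (say, years from now), dense near the present
    // with a long tail - numbers invented purely for illustration.
    const predictions = [5, 6, 6, 7, 8, 9, 20, 45];

    const arithmeticMean = (xs: number[]): number =>
      xs.reduce((a, b) => a + b, 0) / xs.length;

    // Geometric mean computed via logs to avoid overflowing the product.
    const geometricMean = (xs: number[]): number =>
      Math.exp(xs.reduce((a, b) => a + Math.log(b), 0) / xs.length);

    console.log(arithmeticMean(predictions).toFixed(1)); // ≈ 13.3, dragged up by the tail
    console.log(geometricMean(predictions).toFixed(1));  // ≈ 9.7, much less sensitive to it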
- This is actually a surprisingly effective way to get a broad range of feedback on topics. I realise this was built for fun, but this whole discussion dynamic is why I value HN in the first place - it never occurred to me to try and reproduce it using LLMs. I am suddenly really interested in how I might build a similar workflow for myself - I use LLMs as a "sounding board" a lot, to get a feeling for how ideas are valued (in the training dataset at least).
- It's funny, I have never thought of it this way, but, reflecting, I realise the way I do think about it is very similar. Whenever I have to justify a subscription on JetBrains or hosting or what have you, I always just ask myself: will this bring me joy? Specifically will it bring me as much joy as e.g. a Netflix subscription? Very easy to justify then.
To be fair, I used to smoke cigs, and drink heavily, which are both very expensive habits. I've since quit those (they weren't bringing me joy) but the benchmark is the same.
- So it's fascinating reading this and looking at the screengrabs of the "original" versions... not so much because they are "how I remember them" but indeed because they have a certain nostalgic quality I can't quite name - they "look old". Presumably this is because, back in the day, when I was watching these films on VHS tapes, they had come to tape from 35mm film. I fear I will never be able to look at "old looking" footage with the same nostalgia again, now that I understand why it looks that way - and, indeed, that it isn't supposed to look that way!
- The Baader-Meinhof phenomenon in action: I have _just_ ordered a book of Rovelli's (Reality is Not What It Seems - https://en.wikipedia.org/wiki/Reality_Is_Not_What_It_Seems); it should be in my hands by the end of the week. I am fascinated by the ongoing work in quantum gravity - it's tantalising by its very nature.
This is a great interview and I must say I like the man a lot more than I did before. He has articulated something here that I have long felt: that it is as important in politics as it is in philosophy or theoretical physics to be able to state one's assumptions, to suspend one's assumptions for the sake of argument and to drop/change one's assumptions in the face of evidence.
I feel like this is a vital skill that we, as a society, need now maybe more than ever, in literally any field in which there is any meaningful concept of "correct" (which I think is most fields). I also think it's a skill you basically learn at university - and that that is a problem. I don't know what an approach to cultivating it more widely would look like.
- I don't accept the unregulated and uncontrolled use of LLMs for therapy for the same reason I don't accept arguments like "We should deregulate food safety because it means more food at lower cost to consumers" or "We should waive minimum space standards for permitted developments on existing buildings because it means more housing." We could "solve" the homeless problem tomorrow simply by building tenements (that is why they existed in the first place after all).
The harm LLMs do in this case is attested both by that NYT article and the more rigorous study from Stanford. There are two problems with your argument as I see it: 1. You're assuming "LLM therapy" is less harmful than "no therapy", an assumption I don't believe has been demonstrated. 2. You're not taking into account the long term harm of putting in place a solution that's "not fit for human use" as in the housing and food examples: once these things become accepted, they form the baseline of the new accepted "minimum standard of living", bringing that standard down for everyone.
You claim to be making a utilitarian argument, as opposed to one from nonmaleficence, but, for the reasons I've stated here, I don't believe it's a utilitarian argument at all.
- I suspect you've never done therapy yourself. Most people who have worked with a professional therapist understand intuitively why the only helpful feedback from an LLM to someone who needs professional help is: get professional help. AIs are really good at doing something to about 80%. When the stakes are life or death, as they are with someone who is suicidal, 80% isn't good enough.
In such cases, where a new approach offers to replace an existing approach, the burden of proof is on the challenger, not the incumbent. This is why we have safety regulations, why we don't let people drive cars that haven't passed tests, build with materials that haven't passed tests, eat food that hasn't passed tests. You understand then, hopefully, why your comments here are dangerous...? I have no doubt you have no malicious intent here - you're right that these decisions need to be based on data - but you're not taking into account that the (potentially extremely harmful) challenger already has a foothold in the field.
I know that you will want to hear this from experts in the "relevant field" rather than myself, so here is a write-up from Stanford on the subject: https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in...
- I greatly appreciated this article and have found the data very useful - I have shared this with my business partner and we will use this information down the road when we (eventually) get around to migrating our app from Angular to something else. Neither of us was surprised to see Angular at the bottom of the league tables here.
Now, let's talk about the comments, particularly the top comment. I have to say I find the kneejerk backlash against "AI style" incredibly counter-productive. These comments are creating noise on HN that greatly degrades the reading experience, and, in my humble opinion, these comments are in direct violation of all of the "In Comments" guidelines for HN: https://news.ycombinator.com/newsguidelines.html#comments
Happy to change my mind on this if anyone can explain to me why these comments are useful or informative at all.
- Reading this is a truly weird experience - the idea of a single source of truth for domain names seems foreign now, though in truth it's probably not as far removed from the current practice as anyone would like to think.
- This is beautifully designed and engaging and potentially a fun way to learn things! Amazing work.
Some things you could add to make it stickier:
1. Have a natural end where you "win" - possibly I just didn't hit this - it's the kind of game I'd play in bed in the morning, as long as the play time was similar to Wordle and the other NYT games.
2. Have facts show up when you get something right! Could literally just be the opening sentence from the article and a link to the article. This extra context stimulates curiosity - I'd love to be able to have "Space physics? That doesn't sound like a real thing..." and then have the hat guy pop up and go, "You cracked it chief. Space Physics is the study of high atmosphere plasmas."
EDIT: for comparison, have a look at Metazooa - https://metazooa.com/ - which did this very well.
- Fascinated by the cluster of Australian, British and South African. As an Australian living in UK, I hear an enormous difference between these accents - even just in the British ones, the Yorkshireman and the Geordie stick out like a sore thumb to me - the narcissism of small differences perhaps. Interestingly, my partner, who is from England, often says, of various Australians we hear (either on TV or my friends), that they sound British to her. I, meanwhile, can pick an Australian from very few words. What are we hearing differently? It is a mystery to me.
- There is an amazing video of a similar exploit in Pokémon Yellow where they basically run a bunch of different games: https://youtu.be/Vjm8P8utT5g?si=8N0Xh-VOq_gqHJ4z
- I realise this isn't the most thoughtful comment but I hope the intended spirit comes across when I say, sincerely: ha ha yay (clapping hands)
- This is depressing to be sure, but rest assured there is a long history of middle management being "the buffer" between upper management and everyone else. It's a common trope in TV shows and such.
I think one of the most important things this write-up is maybe missing (though I am not super clear on how this has changed recently) is that when middle management acknowledge to their reports that the C suite are a bunch of cunts, they also need to be saying the same thing to the C suite themselves. "Going out to bat" literally means feeding back to upper management that what they're doing isn't going to work. This should be a fight that is ongoing. Again, privately, one to one, but you do actually need to be doing it. If you can back it up with numbers, even better.
- I think this is a real problem though I haven't got a lot of specifics to pull together into a comment just yet. Let me say a bit about what I mean though - I'll turn it into a blog post when I've got it more together.
I recently signed up for Junie - that's JetBrains' Cursor equivalent, an agentic AI coding plugin for the IDE. You tell it what you want and then it does it. My first thought was "Oh I'll just get it to do a rough outline for me then I'll fill in the details myself." But then the first rough outline was missing some things, so I iterated. Then it was basically working, just missing some things, so I iterated again.
After a dozen iterations, it was really clear to me that I was stuck in the intermittent reinforcement - aka the "just one more" - loop. This is the basic mechanism that drives addiction.
Addiction is basically where your prefrontal cortex gets short-circuited out of the brain's reward system. The reward system is this kind of phased feedback loop between the PFC and the basal ganglia, the ventral tegmental area and the thalamus.
The PFC is the bit that does the "thinking" this article is talking about, the "putting together" of disparate pieces of information to make new information that then goes back into the reward system to shift the goals involved in reward seeking.
This is that feeling you get of "losing sight" of the "bigger picture", and it's responsible for a lot of social ills - we don't just get addicted to drugs. This basic mechanism, the tightening of that feedback loop, is involved in everything we do, whether that's exercise or eating or work or relationships - it's what's at work when these things become unhealthy.
Tech already does a lot of really bad things to our brains in this area. Doom-scrolling is the most famous example at the moment. So unhealthy, but we can't even stop ourselves - that's what an addiction looks like.
My biggest concern is that this is what AI is doing. All the big AI houses made their models free to use, and consumers lapped it up. We got hooked. The first hit is free, you see what I mean? Then they bump the price up. By the time they do so, it's too late - we need it.