h4ny
Joined · 122 karma

  1. > I'm being a bit facetious here...

    Maybe just don't do that? It's never helpful in good-faith discussions and just indicates a lack of empathy and maybe a lack of understanding of the actual issue being discussed.

    > So, you haven't identified any actual problems with them being on social media though.

    The problems GP raised seem pretty clear to me. Could you give us some examples of what you would consider to be "actual problems" in this context?

    > Just that kids are doing something new and sometimes scary...

    No sane parent would send their kids to learn to ride a bicycle on the open road without any supervision. You'd find a park or an empty lot somewhere, let them test it out, assess their ability to deal with potential dangers and to avoid harming others at the same time, and let them be on their own once they give you enough confidence that they can handle themselves most of the time without your help.

    The problem with today's social media for children is that there is no direct supervision or moderation of any kind. Like many have pointed out, social media extends to things like online games as well, and the chance that you will see content that is implicitly or explicitly unsuitable for children is extremely high. Just try joining the Discord servers of guilds in any online game to see for yourself.

    Not all things new and scary come with a moderate to high risk of irreparable harm.

  2. I encourage everyone to read the definition on the home page:

    > Definition: A gaming dark pattern is something that is deliberately added to a game to cause an unwanted negative experience for the player with a positive outcome for the game developer.

    And also the detailed descriptions of each of the dark patterns, for example:

    https://www.darkpattern.games/pattern/12/grinding.html

    Quoting just the short descriptions of the dark patterns without considering the definition above effectively mischaracterizes the intent of the website and misuses the tool: taken in isolation, all the patterns can seem like (and to many, are) just enjoyable mechanics.

    Some of the users reviewing games on the website also seem to miss the point (inaccurate reviews), which leads to comments like https://www.hackerneue.com/item?id=45947761#45948330.

    Increasingly often in predatory games, it is a very subtle combination of the listed mechanics that makes them dark patterns collectively, so it's also important to consider the patterns in groups.

  3. This feels like a step backwards: people who never bothered to write proper, appropriate commit messages for others in the first place can now care even less.

    I personally don't see the use case for this -- you shouldn't even be hired in the first place if you can't describe the changes you made properly.

  4. GGP's sentiment resonates with me. I invest a fair bit of time into LLMs to keep up with how things are evolving, and I throw both small and large tasks at them. I'm seeing great results with some small tasks, but with anything that is remotely close to actual engineering I just can't get satisfactory results.

    My largest project is a year old, it's full-stack JavaScript, and I have consciously used patterns and structures, and diligently added documentation, right from the beginning so that the code base is as LLM-friendly as possible.

    I see great results on refactoring with limited scope, scaffolding test cases (I still choose to write my own tests, but LLMs can also generate very good tests if I explicitly point to existing tests of highly related code, such as some repository methods), documenting functions, etc., but I'm just not seeing the kind of quality on complex tasks that people claim LLMs deliver for them.

    I want to believe that LLMs are actually capable of doing what at least a good junior engineer can do, but I'm not seeing that in my own experience. Whenever we point out the issues we are encountering, we basically get the "git gud" response with no practical details on what we can actually do to get the results that people claim to be getting. Then, when we complain about the "git gud" response being too vague, people start blaming our lack of structure, our patterns, problems with our prompts, the language, our stack, etc. Nobody claiming to see great results seems to want to do a comprehensive write-up or, better still, stream their entire workflow to teach others how to do actual, good engineering with LLMs on real-world problems -- they all just want to give high-level details and assert success.

    On top of that, none of the people I know in engineering -- working in both large organizations and respectable startups that are pushing AI -- are seeing that kind of result, which naturally makes me even more skeptical of claims of success. What I often hear from them is that mediocre engineers think they are being productive but are actually just offloading the work onto their colleagues through review, and that nobody seems to be seeing tangible returns from using AI in their workflow, yet people in C-suites are pushing AI anyway.

    If just about anything can be "your fault", how can anyone who claims that LLMs are great for real engineering, without showing evidence, be so confident that what they're claiming but not showing is actually the case?

    I feel like every time I comment on anything related to your blog posts I probably come across as belligerent and get downvoted, but I really don't intend to.

  5. Could you elaborate on what you mean by "moral basis" in your comment?
  6. Not speaking for everyone, but to me the problem is the normalization of bad behavior.

    Some people in this thread are already interpreting policies that allow contributions of AI-generated code to mean that it's OK not to understand the code they write and that they can offload that work to the reviewers.

    If you have ever had to review code that its author doesn't understand, or have written code that you don't understand for others to review, you should know how bad that is even without an LLM.

    > Why do you care? Their sandbox their rules...

    * What if it's a piece of software or dependency that I use and support? That affects me.

    * What if I have to work with these people in the same community? That affects me.

    * What if I happen to have to mentor new software engineers who were conditioned to think that bad practices are OK? That affects me.

    Things are usually less sandboxed than you think.

  7. You just stop accepting contributions from them?

    There is nothing inherently different about these policies that makes them more or less difficult to enforce than other kinds of policies.

  8. > I didn't make a decision on the tradeoff, the LLVM community did. I also disclosed it in the PR.

    That's not what the GP meant. Just because a community doesn't disallow something doesn't mean it's the right thing to do.

    > I also try to mitigate the code review burden by doing as much review as possible on my end

    That's great but...

    > & flagging what I don't understand.

    It's absurd to me that people should commit code they don't understand. That is the problem. Just because you are allowed to commit AI-generated/assisted code does not mean that you should commit code that you don't understand.

    The overhead to others of committing code that you don't understand and then asking someone to review it is a lot higher than that of asking someone for directions first, so that you understand the problem and the code you write.

    > If your project has a policy against AI usage I won't submit AI-generated code because I respect your decision.

    That's just not the point.

  9. Interesting idea! It would be nice to see:

    * How the colors were picked and assigned to each category (e.g. at what point is a color pink and no longer red)

    * An indication of the distributions in the charts, since they have different scales on the y-axis

    * The author likely sampled posters whose dominant color exceeds a given threshold for each category (something like the sketch below); would that, together with the lack of methodology and error bars, heavily skew the reader's interpretation of the data analysis?
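
    To be concrete about the first and third points, here is a minimal, purely hypothetical sketch of the kind of sampling I'm guessing at; the color names, the hue/lightness cut-offs, and the 0.6 threshold are all invented for illustration:

    ```js
    // Hypothetical: classify a poster by its dominant hue bucket, keeping it
    // only if that bucket covers enough of the image. All numbers are made up.
    function dominantCategory(pixels, threshold = 0.6) {
      const counts = {};
      for (const { h, l } of pixels) { // h: hue in degrees, l: lightness 0..1
        // Where does red end and pink begin? This line *is* the methodology.
        const bucket =
          (h < 15 || h >= 345) ? (l > 0.7 ? "pink" : "red")
          : h < 45 ? "orange"
          : "other";
        counts[bucket] = (counts[bucket] || 0) + 1;
      }
      const [best, n] = Object.entries(counts).sort((a, b) => b[1] - a[1])[0];
      // Posters with no sufficiently dominant color are dropped entirely --
      // exactly the kind of selection effect that a written methodology
      // and error bars would expose.
      return n / pixels.length >= threshold ? best : null;
    }
    ```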

  10. Take this as an additional point of reference: I have no formal education in art and am not an artist, but I find your work interesting enough that I would stop at a store to look at it, and I would probably buy something (prints and fabric) if I could afford to (especially the cover art on the home page).

    Reading your comment, it sounds like you are actively sabotaging yourself by convincing yourself that you shouldn't even try (perhaps due to a subconscious fear of rejection). How do you get an audience if you don't actively promote your work and/or try to sell it?

    There is no guarantee that you will "succeed" (whatever that looks like to you -- success could mean having a lot of people appreciate your work and/or selling your art for lots of money) if you try your hardest, but if you don't try you will never succeed at all. I'll break down the second-to-last paragraph as an example below.

    > I'd love to sell it online, but without an audience, no one will visit.

    An audience doesn't just suddenly appear because you have created something. You need to put in the effort to build an audience to begin with.

    > I could sell it at https://www.saatchiart.com, but they don't really market most of what they have. You have to drag people there.

    You need an incredible amount of luck for people to just "discover" your work and suddenly like it (especially with abstract art?), so needing "to drag people there" is just what you should do if you want exposure for your work, whether or not you host it on saatchiart.com.

    Don't fall into the trap of "if you build it, they will come".

    Focus on creating a compelling narrative behind your art and keep iterating to attract a small, loyal audience first (1000 people is already a lot).

    > Plus they take 30% or 40% (50% is normal for galleries).

    This is irrelevant if nobody knows your work or would buy it to begin with; it's just another excuse not to try. By the time this is a problem, you can migrate to something more personal. Many people who support independent artists want more of their money to go to the artists they like.

    > Locally, in the right location, people see your art, and stop by. It's just the pain of setting it up, and then sitting there while you wait!

    I enjoy engaging with artists at markets because the personal connection with them is actually the most valuable thing for me and the most compelling reason for me to make purchases. I also appreciate artists who show up consistently at related events, particularly those who remember me, which also becomes a reason for me to introduce their work to my friends.

    Good luck with your work and I hope you will find success with it! ^^

  11. I have been using NoScript for years, and I find calling it "perfectly usable" a bit of a stretch, at least for my use case. I can only see it being "perfectly usable" if you visit mostly the same sites most of the time and have already enabled whatever you need to enable.

    I visit new websites all the time because of HN and Reddit, and without JavaScript many sites just don't work or look too broken for me to want to read anything. Unless we collectively decide to stop using buttons instead of anchors for navigation, and to stop letting external, unrelated JavaScript block the actual site (which, funnily enough, sometimes doesn't require JavaScript to function), it's not going to get any better.
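
    Here is a minimal sketch of the buttons-vs-anchors point (the element id and URL are made up): navigation wired up in JavaScript does nothing when scripts are blocked, while a plain anchor keeps working.

    ```js
    // Navigation that breaks under NoScript: with JavaScript blocked,
    // clicking <button id="nav-about">About</button> does nothing at all.
    document.querySelector("#nav-about").addEventListener("click", () => {
      window.location.href = "/about";
    });

    // The same navigation as plain markup needs no JavaScript:
    //   <a href="/about">About</a>
    ```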

    I went through a phase where I thought JavaScript was bad and used CSS instead of JavaScript for a lot of things (mostly because I enjoy writing CSS). The thing is, if you have ever tried developing any substantial, moderately complex feature for an actual product with CSS instead of JavaScript, while keeping it readable, maintainable, and scalable, you will realize that they are good for different things, and that talking about them in a mutually exclusive way isn't helpful.

    Both CSS and JavaScript are constantly evolving, and I agree with you that there are now things we should do with CSS instead of JavaScript, increasingly so.

  12. Thanks for replying; I understand your original reasoning now in a way that I didn't when I last responded. I was only considering how it would appear to people who don't recognize that Gr isn't an element; I agree that it's a syntactic mistake to those who know chemical symbols well.
  13. No. The 9.14 vs. 3.14 analogy is more suitable.

    If you have read the blog post, the difference is between the chemical symbols Ge and Gr, which as I understand it is what you would refer to as a "semantic error".

  14. > I'm inclined to give them a pass. It's easy enough to figure out that it should be germanium and not gadolinium, and dyslexia already exists among scientists.

    People make mistakes and you probably mean well, but this is also the sort of pass that, when given, makes scientific research and reporting terrible.

    If it's "easy enough to figure out" then it's even more important to get it right -- why should we trust someone who can't even get the "easy" things right?

    > ... and dyslexia already exists among scientists.

    The article is pointing out a problem that appears to be fairly common; is that really a suitable explanation? Even if it is, is it a reason for lowering standards, which you can then apply to explain away every mistake?

    Keep in mind that proper publications should usually have been reviewed by at least 3 people, including the authors (typically more), by the time everyone else gets to read them. So that kind of mistake isn't really acceptable.

    > What I think is more dangerous to understanding is skipping formulas in favor of initials! BFO instead of BiFeO3, or BT instead of Bi2Te3, SRO for SrRuO3, LSFO for La0.3Sr0.7FeO3 abbreviations that I think obscure too much detail. You can more easily wander into talking about different things with the same terms. Such abbreviations are already endemic in condensed matter physics.

    If you have been trained in scientific writing, you always introduce an abbreviation, for example "BiFeO3 (BFO)" and "SrRuO3 (SRO)". It's also common to include a list of abbreviations in some forms of scientific writing.

  15. > It supposes you are able to articulate where you want to be in five years, and have the ability to break that down into actionable tasks.

    This is gold. :)

  16. > you MUST get very good at reviewing code that you did not write.

    I find that interesting. That has always been the case at most places my friends and I have worked that have proper software engineering practices, at companies both very large and very small.

    > AI can already write very good code. I have led teams of senior+ software engineers for many years. AI can write better code than most of them can at this point.

    I echo @ZYbCRq22HbJ2y7's opinion. For well-defined refactoring and for expanding on existing code in limited scope they do well, but I have not seen that for any substantial feature, especially full-stack ones, which is what most senior engineers I know are finding.

    If you are really seeing that, then I would worry either about the quality of those senior+ software engineers or about the metrics you are using to assess the efficacy of AI vs. senior+ engineers. You don't even have to show us any code: just tell us how you objectively came to that conclusion and what framework you used to compare them.

    > Educational establishments MUST prioritize teaching code review skills

    Perhaps more is needed but I don't know about "prioritizing"? Code review isn't something you can teach as a self-contained skill.

    > and other high-level leadership skills.

    Not everyone needs to be a leader and not everyone wants to be a leader. What are leadership skills anyway? If you look around the world today, it looks like many people we call "leaders" are people accelerating us towards a dystopia.

  17. What a delightful read. Thanks for all the thoughts put into the problem solving, the writing, and the presentation!
  18. > As you said, the very title of the article acknowledged that it didn’t produce a working product.

    Then why not say "mostly didn't work"? I read the article and that's the impression I got.

    The OP's comment isn't an outrage; it's more like you intentionally painted it as one, with a reply that itself reads more like an outrage.

  19. I have been seeing different people report different results on different tasks. I watched a live stream that compared GPT-5, Gemini 2.5 Pro, Claude 4 Sonnet, and GLM 4.5, and GPT-5 appeared not to follow instructions as well as the other three.

    At the moment it feels like most people "reviewing" models depend on their beliefs and agendas, and there are no objective ways to evaluate and compare models (many benchmarks can be gamed).

    The blurring of the boundaries between technical overviews, news, opinion, and marketing is truly concerning.

  20. I did read that, and it doesn't change what I said about your comment on HN; I was calling out the fact that you are making a very bold statement without having done careful analysis.

    You know you have a significant audience, so don't act like you don't know what you're doing when you chose to say "TLDR: I think OpenAI may have taken the medal for best available open weight model back from the Chinese AI labs" and then defend against what I was calling out based on word choices like "conclusions" (I'm sure you have read conclusions in academic journals?), "I think", and "speculation".

  21. > TLDR: I think OpenAI may have taken the medal for best available open weight model back from the Chinese AI labs.

    That's just straight up not the case. I'm not sure how you can jump to that conclusion, not least when you stated in your post that you haven't tested tool calling.

    Many people in the community are finding it substantially lobotomized, to the point that there are "safe" memes everywhere now. Maybe you need to develop better tests and pay more attention to benchmaxxing.

    There are good things that came out of these releases from OpenAI, but we'd appreciate more objective analyses...

  22. Why is this impossible? If the LLM has effectively seen all the code that could lead to that trace, all it has to do is pick out the bits that have the highest chance of mapping to it, right?

    Is it a new marketing strategy to start by saying you're "incredibly cynical" about something that you're going to say the exact opposite about, perhaps to mask arguments with little rigor?

  23. That's a fundamentally biased attitude though. It's not very different from saying: "oh, that guy's got tattoos, I'm not going to engage because he's probably a gangster."

    Unless someone has proven to you that they are not trustworthy, choosing not to listen is a... personal choice at best. So don't try to make it sound like anything that isn't presented in a way that suits your taste isn't real science and/or is fake news.

  24. > and they expect the coming storm because they seek what follows.

    Well, a lot of economists, including one Nobel Prize winner (Paul Krugman), have commented on this being a bad idea for the economy...

    On the off chance that those people who are supposed to know what they are talking about are all wrong and haven't considered it being some genius diversion tactic -- it's generally still a pretty bad idea for a government to effectively ask everyone to "tough it out" and "just trust me bro".

    It's really disappointing that people are actually trying to theorize this as some kind of genius plan. It doesn't take a genius to figure out that the amount of irreparable damage they have already done to regular folks is unacceptable (uh... I'm sure some nutters out there think that DOGE is keeping track of all the damage it has done and will pay everyone back once the genius plan has worked out).

    That kind of logic really amazes me.

  25. That's a great article, and a lot of the comments seem to resonate with it. But somehow this is disappearing from the front page faster than anything else; it's hard not to think that "this is bad for business, so it must go"...
  26. Tangentially related: I feel that SWEs who claim they are more productive with AI haven't actually demonstrated with real examples how they are more productive.

    Nobody I follow (including some prominent bloggers and YouTubers) who claims a productivity increase is recording or detailing any workflow, or showing real-world, non-hobby (scalable, maintainable, readable, secure, etc.) examples of how to do it. It's as if everyone who "knows what they are doing" is either hiding the secret sauce for a competitive edge, or they are all just mediocre SWEs hyping AI up and lying because it makes them more money.

    Even the real SWEs in large companies I know can't really tell me how their productivity is increasing, and when you dig deeper it always seems to be well-scoped, well-understood problems (which is great, but doesn't match the level of hype and productivity increase that everyone else is claiming) -- and they still have to be very careful with reviewing (for now).

    It's almost like AI turns SWE brains to mush and makes them forget about logic and data.

  27. I don't want to comment on this without giving the source a fair go, but between the knee-jerk reaction over whether it's time-wasting pedantry and the reading directly off slides without providing any real value, I find it hard to even make myself give it a chance. I tried skipping to the questions first to see if there was anything more valuable, but it hasn't changed my mind. I would love to know, from people who find this valuable, whether I'm missing something.
  28. By "a lot of people with ADHD", do you include people who are self-diagnosed?

    I ask because my experience is similar to yours (I also have ADHD). This is anecdotal, so take it with a grain of salt: I work in software engineering and know, and have worked with, a lot of self-diagnosed ADHDers who make it part of their identity. While some of them probably do have ADHD, the vast majority of them feel like perfectly normal people who would latch onto any opportunity to prove that they have ADHD (e.g. people who have a single habit that could be stimming but otherwise don't exhibit any ADHD traits saying things like "sorry, I can't help myself because of my ADHD brain").

    In contrast, those I know who have been properly diagnosed don't behave like they constantly need to tell people they have ADHD. They are usually deeply interested in the condition, but you probably wouldn't know unless there is a proper setting for them to disclose it (e.g. being invited to talk to an audience about their experience) or you're someone they trust.

    It feels like people who have the tendency to make ADHD part of their identity just want to be seen as special and important in some way (I don't mean this negatively), perhaps because that's how they see their genuinely neurodivergent peers. They tend to have many excuses for not getting a diagnosis, because they won't risk finding out that they don't have ADHD when they have already made it part of their identity.

    On the uglier side, there is also no shortage of people lying about having medical conditions for clout and money on the Internet.

  29. It's truly frightening for people who don't understand this well to be given the opportunity to manage people.

    As yobbo said much better than I could: "the part that is narcissistic is believing your empathy is uniquely taking a toll on you."

    It's impossible to have a conversation with people who only make reasonable-sounding statements that are irrelevant to the actual conversation. I'm out.

  30. > I don't see anything narcissistic about being a normal human being with natural empathy for the people they work with.

    I agree, but that's not what I said. What I said was that it's "unhelpful, or even narcissistic" for a manager to think that being empathetic as a manager is more emotionally taxing than it is for an IC.

    > Managing people does require emotional stability and self control, though.

    It does. That's just being professional, though. Being a good coworker in general requires emotional stability and self-control. I'm not sure if you're trying to argue that a manager has to have a much greater capacity for those than an IC, since you seem to avoid saying that directly.

    > ICs tend to automatically defer to their managers. Sometimes, this leads the ICs to misinterpret tiny reactions from their managers in completely unintended ways. I find that negative, emotionally charged reactions from managers are especially likely to lead to unintended reactions from their ICs.

    Uh, the reverse sounds just about as true? I'm honestly not sure what the argument there is.

    > As an IC you try to not bring your personal life ups and downs to the job. This is even more important as a manager.

    It's unclear from what you said why it's more important as a manager -- that's just what every professional should do at a workplace. Can you please elaborate?
