km144 · 167 karma

  1. > “In my trips to Wall Street,” Dyer told the panel, “one of my analyst friends took me to lunch one day and said, ‘Joe, you have to get iRobot out of the defense business. It’s killing your stock price.’ And I countered by saying ‘Well, what about the importance of DARPA and leading-edge technology? What about the stability that sometimes comes from the defense industry? What about patriotism?’ And his response was, ‘Joe, what is it about capitalism you don’t understand?’”

    I find this article a pretty compelling critique of the extractive incentives of Wall Street and a good argument for government stepping in from time to time to adjust those incentives. Where is the societal good in the engine of capitalism prioritizing short-term extraction over long-term value creation?

  2. Sure, but the author is arguing that the outcome you're describing is tightly coupled to the perverse incentives that he describes in the article. Investors pushed the company towards extraction over innovation and the end product suffered as a result.
  3. I'm not so sure I buy the premise that engineers are really dismissing AI because it's still not good enough. At the very least, this framing does not get to the heart of why certain engineers dislike AI.

    Many of the people I've encountered who are most staunchly anti-AI are hobbyists. They enjoy programming in their spare time and they got into software as a career because of that. If AI can now adequately perform the enjoyable part of the job in 90% of cases, then what's left for them?

  4. Low-mileage used cars typically don't come with a warranty, or carry a more limited one if they're certified pre-owned (CPO).

    Leases can be better, but again they are usually better choices in high depreciation scenarios (like luxury vehicles or EVs, as you point out), not low depreciation scenarios.

  5. Have you seen the prices of pre-owned Honda/Toyota sedans that are less than 5 years old? There are absolutely cars out there where trading in your new car after 3-4 years can make sense depending on the cost of the car, the depreciation curve, and whether you want to always be driving a relatively new car. Of course it's almost always going to be a better value proposition to drive the car for 10 years if you can, but that can still depend on depreciation.
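    Rough arithmetic makes the tradeoff concrete. A minimal sketch with made-up numbers (the price and depreciation rate are assumptions, not real market data), comparing keeping one car for twelve years versus trading in every four:

    ```python
    # Hypothetical comparison: keep one car 12 years vs. trade in every
    # 4 years, under a simple constant annual depreciation curve.

    def value_after(price: float, years: int, annual_rate: float = 0.12) -> float:
        """Resale value after `years`, assuming constant annual depreciation."""
        return price * (1 - annual_rate) ** years

    PRICE = 28_000  # assumed price of a new sedan

    # Depreciation cost of keeping one car for 12 years:
    keep_cost = PRICE - value_after(PRICE, 12)

    # Depreciation cost of three 4-year ownership cycles over the same span:
    cycle_cost = PRICE - value_after(PRICE, 4)
    trade_cost = 3 * cycle_cost

    print(f"keep 12 years: ${keep_cost:,.0f}")
    print(f"trade every 4: ${trade_cost:,.0f}")
    ```

    Under these assumed numbers keeping the car is still cheaper, but a flatter depreciation curve narrows the gap, which is the point about Honda/Toyota resale values.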
  6. Those things also require more willpower than taking a medication. Willpower is largely determined by your particular psychology, which is in turn determined by genetics and environmental factors. People don't have as much choice in the matter as your comment seems to imply. Getting GLP-1s to everyone who could benefit from them is extremely important for overall health.
  7. "Real industry" also has quite a hard time getting things done these days. If you look around at the software landscape, you'll notice that "getting things done" is much easier for companies whose software interfaces less with the real world. Banking, government, defense, healthcare, etc. are all places where real-life regulation has a trickle-down effect on the actual speed of producing software. The rise of big tech companies as the dominant economic powerhouses of our time is further evidence that doing things purely over the internet is easier, and even preferred, because the market rewards it. We would do well to figure out how to get stuff done in the real world again.
  8. I think the problem is false positives, not false negatives. The people you interact with during the interview process have all sorts of reasons to embellish the experience of working at their company.
  9. You hit the nail on the head. There is no place on the internet more broadly susceptible than this one to the same kinds of "founder brain" malaise that has afflicted so many in Silicon Valley--i.e. "I am good at software development, so therefore I am confident I have a good understanding of (and opinion on) all sorts of intellectual topics".
  10. Maybe that's an apt analogy in more ways than one, given the recent research out of MIT on AI's impact on the brain, and previous findings about GPS use deteriorating navigation skills:

    > The narrative synthesis presented negative associations between GPS use and performance in environmental knowledge and self-reported sense of direction measures and a positive association with wayfinding. When considering quantitative data, results revealed a negative effect of GPS use on environmental knowledge (r = −.18 [95% CI: −.28, −.08]) and sense of direction (r = −.25 [95% CI: −.39, −.12]) and a positive yet not significant effect on wayfinding (r = .07 [95% CI: −.28, .41]).

    https://www.sciencedirect.com/science/article/pii/S027249442...

    Keeping the analogy going: I'm worried we will soon have a world of developers who need GPS to drive literally anywhere.

  11. I think it's a bit fallacious to imply that the only way we could be in an AI investment bubble is if people are reasoning incorrectly about the thing. Or at least, it's a bit reductive. There are risks associated with AI investment. The important people at FAANG/AI companies are the ones who stand to gain from investments in AI. Therefore it is their job to downplay and minimize the apparent risks in order to maximize potential investment.

    Of course at a basic level, if AI is indeed a "bubble", then the investors did not reason correctly. But this situation is more like poker than chess, and you cannot expect that decisions that appear rational are in fact completely accurate.

  12. > According to this view, justice demands that variations in how well-off people are should be wholly determined by the responsible choices people make and not by differences in their unchosen circumstances. Luck egalitarianism expresses that it is a bad thing for some people to be worse off than others through no fault of their own.

    When I see this line of reasoning, it leads me down the road of determinism instead. Who is to say what determines the quality of choices people make? Does one's upbringing, circumstance, and genetics not determine the quality of one's mind and therefore whether or not they will make good choices in life? I don't understand how we can meaningfully distinguish between "things that happen to you" and "things you do" if the set of "things that happen to you" includes things like being born to specific people in a specific time and place. Surely every decision you make happens in your brain and your brain is shaped by things beyond your control.

    Maybe this is an unprovable position, but it does lead me to think that for any individual, making a poor choice isn't really "their" fault in any strong sense.

  13. As you alluded to at the end of your post—I'm not really convinced 20k LOC is very limiting. How many lines of code can you fit in your working mental model of a program? Certainly less than 20k concrete lines of text at any given time.

    In your working mental model, you hold a broad understanding of the domain and of the architecture. You summarize large sections of the program into simpler ideas: module_a does x, module_b does y, insane file c does z, and so on. Then there is the part of the software you're actively working on, where you need more concrete context.

    So as you move towards the central task, the context becomes more specific. But the vague outer context is still crucial to the task at hand. Now, you can certainly find ways to summarize this mental model in an input to an LLM, especially with increasing context windows. But we probably need to understand how we would better present these sorts of things to achieve performance similar to a human brain, because the mechanism is very different.
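    To make the "zoom levels" idea concrete, here is a hypothetical sketch of assembling that kind of layered context for an LLM prompt (all module names and summaries are invented):

    ```python
    # Sketch: vague outer context for distant modules, full source only
    # for the file being worked on, mimicking a human mental model.

    def build_context(summaries: dict[str, str], focus_file: str, focus_source: str) -> str:
        """Assemble a prompt: one-line summaries for everything except the focus."""
        lines = ["# Project overview (summarized)"]
        for module, summary in summaries.items():
            if module != focus_file:
                lines.append(f"- {module}: {summary}")
        lines += ["", f"# Current focus: {focus_file}", focus_source]
        return "\n".join(lines)

    prompt = build_context(
        {"module_a.py": "does x", "module_b.py": "does y", "insane_file_c.py": "does z"},
        focus_file="module_b.py",
        focus_source="def y():\n    ...",
    )
    ```

    The open question in the paragraph above is exactly how to choose these zoom levels automatically; this sketch just hard-codes them.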

  14. Are you a paid user? I haven't seen a model selector in years.
  15. How would I even know? I haven't been able to see which model of ChatGPT I'm using on the site ever since they obfuscated that information at some point.
  16. Theories are hard because the world is complex. I guess that sounds trivial, but it really should be said more often. There is no silver bullet with these things, because the systems are so complicated that it is hard to reason about how one thing is the true root cause without implicating another cause. That's also why economics is so difficult, I suppose.
  17. A lot of the time, multiple devs working on a single branch can be avoided via different decisions made upstream about how the work is divided. If my job included more git wrangling as one of my daily tasks, I would probably hate my job.
  18. My thought is: if a GUI like GitHub Desktop makes Git hard to use, then your workflow is too complicated. Version control doesn't have to be complicated, but a lot of that comes down to upstream decisions about how you structure your work as a team.
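    For what it's worth, the kind of workflow a GUI handles easily really is this short: one short-lived branch per task, merged back to main (the repo and file names below are made up, using a throwaway demo repo):

    ```shell
    # Minimal trunk-based flow in a disposable demo repository.
    set -e
    repo=$(mktemp -d)
    cd "$repo"
    git init -q
    git -c user.name=dev -c user.email=dev@example.com commit -q --allow-empty -m "init"
    git branch -M main

    git switch -q -c fix-typo            # one branch for one small task
    echo "fixed" > notes.txt
    git add notes.txt
    git -c user.name=dev -c user.email=dev@example.com commit -q -m "fix typo"

    git switch -q main
    git merge -q fix-typo                # fast-forwards; nothing to wrangle
    git branch -q -d fix-typo            # the branch lived for one task, then gone
    ```

    If every task fits that shape, any GUI can drive it; it's the long-lived shared branches that make the tooling feel hard.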
  19. How about... greater benefits to people who are unemployed? I mean, UBI is inherently a poor policy for "mass unemployment scenarios", because there is no feasible scenario in which the majority of people are unemployed. To give UBI to people making around the median wage or higher because there is mass unemployment due to automation that doesn't affect them doesn't seem like a good use of money at all.
  20. Probably the most HN-coded response I can imagine to someone asking why you would possibly want 12 vacuum cleaners.
  21. Housing, healthcare, childcare, and education have become more expensive. These are the biggest expenses for most people and are necessities. So the percentage of income available for other expenses has definitely decreased. Not sure about real wages though.
  22. > Is there more we can add to the AI conventions markdown in the repo to guide the Agent to make fewer mistaken assumptions?

    Forgive my ignorance, but is this just a file you're adding to the context of every agent turn, or is this a formal convention in the VS Code Copilot agent? And I'm curious whether there were any resources you used to determine the structure of that document, or if it was just refined over time based on mistakes the AI kept repeating.
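    As I understand it, Copilot in VS Code can pick up a repo-level instructions file (e.g. `.github/copilot-instructions.md`) that gets included in agent requests. Is the conventions file you mention something like this hypothetical sketch (all contents invented)?

    ```markdown
    # AI conventions

    ## Project layout
    - Backend code lives in `services/`; shared types in `libs/`.

    ## Do
    - Use the existing logger in `libs/log` instead of print statements.
    - Add or update a test alongside any file you change.

    ## Don't
    - Don't edit generated files under `gen/`.
    - Don't assume a helper exists; search the repo for it first.
    ```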

  23. I think that is a fair perspective. When I say "to what end" I am mostly implying the "end" of a product for the market. I think writing in particular is always a thing where if you tell people you do it as a hobby, they assume your goal is a published book, not the process itself. Creativity as the end is a wonderful thing, but I just have a feeling AI is going to be more widely adopted to pump out passable (or even arguably "good") content that people will pay money for.

    Again the same thing with writing software, where you can be creative with it and it can enhance the experience. But most people just use AI to help them do their job better—and in an era where many software companies appear to have a net negative effect on society, it's hard to see the good in that.

  24. I feel that when making a claim like that, the burden of proof is on you to explain how AI makes the world a better place. I have seen far more of the opposite since the advent of GPT-3. Please do not say it makes you more productive at your job, unless you can also clearly derive how being better at your job might make the world a better place.
  25. Creative works carry meaning through their author. The best art gives you insight into the imaginative mind of another human being—that is central to the experience of art at a fundamental level.

    But the machine does not intend anything. Based on the article as I understand it, this product basically does some simulated annealing of the quality of art as judged by an AI to achieve the "best possible story"—again, as judged by an AI.

    Maybe I am an outlier or an idiot, but I don't think you can judge every tool by its utility. People say that AI helps them write stories, I ask to what end? AI helps write code, again to what end? Is the story you're writing adding value to the world? Is the software you're writing adding value to the world? These seem like the important questions if AI does indeed become a dominant economic force over the coming decades.

  26. Garry Tan is inexplicably cringe and annoying
  27. Yeah, I think it makes more sense when you consider that the former (boss complaining to direct) is something that should seem obviously bad to even non-managers, but handling the reverse situation correctly is also critical. It's confusing because the title is written as if I am the direct report, while the article is written as if I am the manager.
  28. I'd argue it is actually more of a broad trend, i.e. the boom in computer science enrollment over the last 20 years has been driven mostly by people chasing a better return on the rising cost of the average four-year degree, since software pays better than the average four-year degree does. I do think that college being cheaper on average would help at least somewhat with CS being such a popular major.
  29. I actually think prestige is a contributing factor for CS as well. People assume you must be smart to be a software engineer, and FAANG companies are prestigious to normal people because they have name recognition. Definitely not on the same level as a Doctor/Surgeon/Lawyer or whatever but certainly could be more than a typical 4-year degree will get you. And I suppose there's also the fact that those companies were viewed very differently 10-15 years ago and now there is a lot more cynicism about big tech in general.
  30. Probably that's just how we were wired. I'm probably often guilty of invoking an appeal to nature [1] when it comes to these things, but it's striking to me how few people who exist in modern society have the capacity to acknowledge that to live in this society is to entirely live within an experiment whose parameters have evolved from generation to generation over the past few centuries. We do not think critically enough about which of the technologies that "enhance" our lives actually enhance them. If personal automobiles are good for us, then to what end are they good? If social media is good for us, then to what end is it good? When you go beyond the first or second question, you start to realize the societal good is dubious at best.

    https://en.wikipedia.org/wiki/Appeal_to_nature
