
atleastoptimal
Joined 3,919 karma

  1. Training on data isn’t stealing the data, in the same way that learning from a textbook doesn’t mean you’re stealing from it.
  2. Those are all expensive because of artificial barriers meant to keep their prices high. Go to any Asian country and houses, healthcare and cars are priced like commodities, not luxuries.

    Tech and AI have taken off in the US partially because they’re in the domain of software, which hasn’t been regulated to the point of deliberate inefficiency like other industries in the US.

  3. >IBM CEO

    might as well ask a magic 8 ball for more prescient tech takes

  4. Boomers in the manager class love AI because it sells the promise of what they've longed for for decades: a perfect servant that produces value with no salary, no need for breaks, no pushback, no workers comp suits, etc.

    The thing is, AI did suck in 2023, and even in 2024, but recently the best AI models have been veering into not-sucking territory. From a distance that makes sense: if you throw the smartest researchers on the planet and billions of dollars at a problem, eventually something will give and the wheels will start turning.

    There is a strange blindness many people have on here, a steadfast belief that AI will just never end up working, or will always be a scam. But the massive capex on AI now is predicated on today's fledgling LLMs eventually turning into self-adaptive systems that can manage any cognitive task better than a human, and the improvements we've seen over the past few years seem to be heading squarely in that direction.

  5. What is inevitably going to happen

    1. AI becomes better, causes more fear, public uproar, arms race between China/US

    2. AI becomes a government project, big labs merge, major push into AI for manipulation, logistics, weapons development/control

    3. ????

    4. Utopia or destruction

  6. Yeah the thing about having principles is that if the principle depends on a qualitative assessment, then the principle has to be flexible as the quality that you are assessing changes. If AI was still at 2023 levels and was improving very gradually every few years like versions of Windows then I'd understand the general sentiment on here, but the rate of improvement in AI models is alarmingly fast, and assumptions about what AI "is good for" have 6-month max expiration dates.
  7. Most of the "low-hanging fruit" has been taken. The thing with AI is that it gets worse in proportion to how new a domain it is working in (not that this is any different from humans). However, apps that utilize AI have exploded in scale and usefulness. What is funny is that some of the ones making a big dent are horrible uses of AI and overpromise its utility (like cal.ai).
  8. I had a software engineering job before AI. I still do, but I can write much more code. I avoid AI in more mission-critical domains and areas where it is more important that I understand the details intimately, but a lot of coding is repetitive busywork, looking for "needles in haystacks", porting libraries, etc. which AI makes 10x easier.
  9. HN loves this "le old school" coder "fighting the good fight" speak, but it seems sillier and sillier the better LLMs get. Maybe in the GPT-4 era this made sense, but Gemini 3 and Opus 4.5 are substantively different, and anyone who can extrapolate a few years out sees the writing on the wall.

    A year ago, no reasonable person would use AI for anything but small-scoped autocomplete. Now it can author entire projects without oversight. Inevitably every failure case for LLMs is corrected; everything people said LLMs "could never do," they start doing within 6 months of that prognostication.

  10. Many people actually are becoming more productive. I know you're using quotes around productive to insulate yourself from the indignity of admitting that AI actually is useful in specific domains.
  11. >Big corps in general have been fucking us over since the industrial revolution

    And yet your computer, all the food you eat, the medicine that keeps you alive if you get sick, etc. all exist thanks to the organizational, industrial, and productive capacity of large corporations. Large corporations are simply a consequence of the enormous demand for goods and services, the advantages of scale, and the need for reliable systems to provide them.

  12. > Should it be able to love?

    We can leave that question to the philosophers, but the whole debate about AGI is about capabilities, not essence, so imo it isn't relevant to the major concerns about AGI.

  13. I feel people were much more sensible about AI back when their thoughts about it weren't mixed with their antipathy for big tech.

    In this article we see a sentiment I've often seen expressed:

    > I doubt the AGI promise, not just because we keep moving the goal posts by redefining what we mean by AGI, but because it was always an abstract science fiction fantasy rather than a coherent, precise and measurable pursuit.

    AGI isn't difficult at all to describe. It is basically a computer system that can do everything a human can. There are many benchmarks that AI systems fail at (especially real life motor control and adaptation to novel challenges over longer time horizons) that humans do better at, but once we run out of tests that humans can do better than AI systems, then I think it's fair to say we've reached AGI.

    Why do authors like OP make it so complicated? Is it an attempt at equivocation, so they can maintain their pessimistic/critical stance with an effusive deftness that confounds easy rebuttal?

    It ultimately seems to come to a more moral/spiritual argument than a real one. What really should be so special about human brains that a computer system, even one devised by a company whose PR/execs you don't like, could never match it in general abilities?

  14. They fund founders with the immutable qualities which correlate with success. These include

    1. FAANG or Ivy/Stanford/Harvard pedigree

    2. Ex-founder with a good exit

    3. Physically attractive/charismatic, a good salesman

    4. Extremely high intelligence, ability, track record of aptitude

    These qualities are more important than the idea of the company, which they expect to pivot or evolve over time. What doesn't change over time is the founders' relative advantage over their peers in these aspects, so it makes sense to prioritize them in selection.

  15. Leaks about Gemini 3's capabilities seem to imply it is a major leap in visual acuity. I have a feeling Gemini 3 is a merger of the capabilities of Genie, Veo3 and Google's other robotics/vision developments.

    However, it is inevitable that people on here will try to find errors, refusing to believe the massive categorical difference between LLMs and previous tech. The HN cycle will repeat until AGI:

    >HN says "AI will never (or not for a long time) be able to do (some arbitrary difficult task)"

    >Model is released that can do that task at near or above human level

    >HN finds some new, even narrower task (which few humans can do) that LLMs can't do yet and says it's only "intelligent" if it can do that

    >Repeat

  16. It's marketing, but if it's the truth, isn't it a public good to release information about this?

    Like, if someone tried to break into your house, would it be "gloating" to say your advanced security system stopped it while warning people about the tactics of the person who tried to break in?

  17. Open-source AI is getting cheaper and cheaper. Model companies run inference at a profit; the lack of profitability from AI companies is just due to them putting all their capital into training the next generation of models.
  18. you know the answer
  19. Hard work is 1000x easier when I know that I'm contributing to something worthwhile and, more importantly, have strong assurance that my work will make a positive impact. Working on something you suspect you're redundant at, or bad at, is demoralizing and a recipe for burnout.
