pedrosorio
1,659 karma

  1. > Is there a human out there who would just magically type all the right things - no errors - first try?

    If they know what they're doing and it's not an exploratory task where the most efficient way to do it is by trial and error? Quite a few. Not always, but often.

    That skill seems to have very little value in today's world though.

  2. Regarding electricity, it depends on what you mean by “we”, I guess

    https://www.voronoiapp.com/energy/-China-Generated-More-Elec...

  3. > Here's the live assignment requirements: [1] https://i.imgur.com/aaiy7QR.png & [2] https://i.imgur.com/aaiy7QR.png.

    These are the same link

  4. You can check all the teams and their members here: https://cphof.org/advanced/icpc/2025

    Looking at school names is rarely useful (for many things in life) when more specific data points are available.

    In this case, without any insider knowledge, just by looking at their profiles, the relevant name would appear to be Benjamin Jeter (https://codeforces.com/profile/BenjaminJ) rather than ASU. Currently 5th active American in the top competitive programming platform, top 200 worldwide (https://codeforces.com/ratings/country/United%20States). That's elite.

    In teams of 3, even one "super player" can make a big difference. Almost certainly carrying that team.

  5. *St. Petersburg

    I guess, like a lot of other sports at the college level, having a reputation that attracts the best competitive programmers (and a great coach to go along with it) doesn't hurt: https://en.wikipedia.org/wiki/Andrey_Stankevich

  6. This one has "Advent of code" vibes
  7. When the driver and the platform are different entities (like Uber) you end up with these weird incentives. How would that happen in the Waymo case?
  8. > The fact that it is needed at all of course highlights a weakness in the language. The import statements themselves should be able to convey all information about dependencies

    What languages convey the version of the dependencies in a script’s import statements?

  9. ^ fyi, this comment reveals you didn't RTFA
  10. > learn at a deeper level and prove it by becoming one of a different crowd of technologists

    The “prove you’re special” motivation is definitely a strong third reason that does not align with the nepotism baby or monk archetypes

  11. An LLM trained on just DNA seems to be useful: https://www.nature.com/articles/s42256-024-00872-0

    These models have proven capable of developing incredible abilities through pattern matching on massive text data, so I wouldn’t be too quick to assume hard limits on what they could do.

    Having them use specialized tools would probably be more effective (e.g. have the reasoning LLM use the DNA LLM), but in the long term with scale… who knows? The bitter lesson keeps biting us every time we think we know better.

  12. > As for the confidence of advice, how different are the rates of mistakes between human lawyers and the latest GPT?

    Notice I am not talking about "rates of mistakes" (i.e. accuracy). I am talking about how confident they are depending on whether they know something.

    It's a fair point that unfortunately many humans sound just as confident regardless of their knowledge, but "good" experts (lawyers or otherwise) are capable of saying "I don't know (let me check)", a feature LLMs still struggle with.

  13. > amount of data you'd need to learn in order to give decent law advice on the spot?

    amount of data you'd need to learn to generate and cite fake court cases, and to give advice that may or may not be correct with equal apparent confidence in both cases

    fixed that for you

  14. > Should we not mimic our biology as closely as possible rather than trying to model how we __think__ it works (i.e. chain of thought, etc.).

    Should we not mimic migrating birds’ biology as closely as possible instead of trying to engineer airplanes for transatlantic flight that are only very loosely inspired by the animals that actually fly?

  15. > My guess would be that a lottery system is actually better for most people currently in the H1-B process because my personal experience

    Regardless of your personal experience, if H1-B visas are currently allocated randomly to less than 50% of the applicants, then this is mathematically true: under any deterministic alternative (e.g. ranking by salary), more than half of the applicants would fall below the cutoff and have zero chance, while the lottery gives every applicant the same positive odds.
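
    A minimal sketch of that arithmetic (the 85,000-visa and 400,000-registration figures below are only illustrative, not sourced):

    ```python
    def win_probabilities(n_applicants, n_visas):
        """Compare per-applicant odds under a lottery vs. a strict ranking.

        Lottery: every applicant has the same n_visas / n_applicants chance.
        Ranking (e.g. by salary): the top n_visas applicants win for sure,
        and everyone below the cutoff has zero chance.
        """
        lottery_p = n_visas / n_applicants
        ranked_p = [1.0] * n_visas + [0.0] * (n_applicants - n_visas)
        # Applicants who do strictly better under the lottery: exactly
        # those who would fall below the ranking cutoff.
        better_off = sum(1 for p in ranked_p if lottery_p > p)
        return lottery_p, better_off

    p, better = win_probabilities(400_000, 85_000)
    print(f"lottery odds: {p:.1%}; better off under lottery: {better:,}")
    ```

    With fewer visas than half the applicants, the below-cutoff group is necessarily a majority, so "most" people in the process do better with the lottery.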

  16. > Such a culture isn't going to be globally competitive

    Globally competitive in what sense, then?

  17. > Start feeding them more than just milk/formula at 6-8 months.

    This feels out of place. What did you do the first time?

  18. Motivation is not the limiting factor. Not knowing what you don’t know is.
  19. > differentiate between someone who picked up Python for a weekend project, and someone with 30 years experience with real-world systems

    Why do you need credentials to differentiate these two?

  20. Followed by

    > In my opinion, this was a clear case of a programmer resisting the system approach because he wanted to spend time fixing the same problems over and over and pretend to be working hard

    Which is funny when the first bullet point under "skills" in the author's resume [0] is:

    > Team player mentality

    I love working with "team players" like this.

    [0] The "About me" page on his website: https://latexonline.cc/compile?git=https://github.com/vitons...

  21. They were talking about 2002, when the Euro was introduced and "it felt like" prices doubled overnight. At the time (as you can see in the chart) inflation was below 4% per year.
  22. Even across users it’s a terrible idea.

    Even in the simplest of applications, where all you’re doing is passing “last user query” + “retrieved articles” into OpenAI (and nothing else that differs between users, like previous queries or user data that may be necessary to answer), this will be a bad experience in many cases.

    Queries A and B may have similar embeddings (similar topic) and it may be correct to retrieve the same articles for context (which you could cache), but they can still be different questions with different correct answers.
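
    A toy illustration of that failure mode, using bag-of-words cosine similarity as a stand-in for a real embedding model (the queries are made up):

    ```python
    import math
    from collections import Counter

    def embed(text):
        # Toy "embedding": raw word counts. Real systems use a learned
        # model, but the caching pitfall is the same.
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[w] * b[w] for w in a)
        norm = lambda c: math.sqrt(sum(v * v for v in c.values()))
        return dot / (norm(a) * norm(b))

    q_a = "when was the eiffel tower built"
    q_b = "when was the eiffel tower renovated"

    sim = cosine(embed(q_a), embed(q_b))
    print(f"similarity: {sim:.2f}")  # high: same topic, same retrieved docs
    ```

    The two queries are close enough that a retrieval cache would (correctly) return the same articles, but serving a cached *answer* for one in response to the other would be wrong: "built" and "renovated" have different correct answers.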

  23. Because the purpose of most homework is not to give you a “real world task”.

    It is to give you simplified toy problems that allow you to test your understanding of key concepts that you can use as building blocks.

    By skipping those, and outsourcing “understanding” of the fundamentals to LLMs, you’re setting yourself up for failure. Unless the goal of the degree is to prepare you for MBA-style management of tools building things you don’t understand.

  24. > Or, fails in the same way any human would, when giving a snap answer to a riddle told to them on the fly

    The point of o1 is that it's good at reasoning because it's not purely operating in the "giving a snap answer on the fly" mode, unlike the previous models released by OpenAI.

  25. > It knows english at or above a level equal to most fluent speakers, and it also can produce output that is not just a likely output, but is a logical output

    This is not an apt description of the system that insists the doctor is the mother of the boy involved in a car accident when elementary understanding of English and very little logic show that answer to be obviously wrong.

    https://x.com/colin_fraser/status/1834336440819614036

  26. > Covid and other things reduced life expectancy in the US.

    It reduced life expectancy for people who were alive in 2019. Those statistics say nothing about the life expectancy of people who are currently alive.

  27. > I mailed a bunch of bills to my insurer, and requested reimbursement (…) Anyway, they mailed me a check

    This is not what I’d describe as “fighting your insurance”.
