
Fargren
2,656 karma

  1. If 1% of indie games are solid, and all AAA games are solid, and there are 100 times more indie games than AAA games, then there would still be the same number of solid indie games as solid AAA games. As it is, I think for every good AAA game, there are somewhere between 50 and 500 great indie games.
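
    To make the arithmetic concrete (these are just the hypothetical ratios from the paragraph above, not real market numbers):

    ```python
    # Hypothetical counts, only to illustrate the proportion argument.
    aaa_games = 100                    # pick any baseline number of AAA games
    indie_games = 100 * aaa_games      # "100 times more indie games"

    solid_aaa = 1.00 * aaa_games       # "all AAA games are solid"
    solid_indie = 0.01 * indie_games   # "1% of indie games are solid"

    print(solid_aaa, solid_indie)      # 100.0 100.0 -> same number of solid games on each side
    ```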

    Finding them is slightly harder, but absolutely worth it.

    In any case, complaining about how many games are out there that are not your thing is a waste of time. Much better to define what you like and look for recommendations from people who like similar games. Who cares how many FPSs are released if you don't like FPSs? If you like RPGs, find RPG gamers and ask them what's good. Substitute any genre; there is no genre out there that's not getting more releases than you could possibly play.

  2. That explains why it happens, but it doesn't really help with the problem. The expectation I have, as a pretty naive user, is that what is in the .md file should be permanently in the context. It's good to understand why that's not the case, but it's unintuitive and can lead to frustration. It's bad UX, if you ask me.

    I'm sure there are workarounds, such as resetting the context, but the point is that good UX would mean such tricks are not needed.

  3. It seems to me that building a recording device that can survive in space, that is very light, and that won't break apart from the impact of an explosive charge strong enough to decelerate it from the speed needed to reach Alpha Centauri is... maybe impossible.

    We're talking about a distance of more than 4 light years. Reaching it on any reasonable timescale means travelling at a significant fraction of the speed of light; call it 1/10th of c to be conservative. The forces needed to shed that kind of speed are enormous.

    I did a quick napkin calculation (assuming the device weighs 1 kg): that works out to close to 3,000 kilonewtons if it has 10 seconds to decelerate. For comparison, the thrust of an F100 jet engine is around 130 kN.
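
    For anyone who wants to redo the napkin math (same assumptions as above: a 1 kg device, 1/10th of the speed of light, 10 seconds to shed it):

    ```python
    # Back-of-the-envelope deceleration force, using the assumptions above.
    c = 3.0e8          # speed of light, m/s
    v = 0.1 * c        # cruise speed: 1/10th of c
    mass = 1.0         # device mass, kg (assumption)
    burn_time = 10.0   # seconds available to decelerate (assumption)

    acceleration = v / burn_time      # ~3e6 m/s^2
    force = mass * acceleration       # F = m * a
    print(f"{force / 1e3:,.0f} kN")   # ~3,000 kN, vs ~130 kN for an F100 jet engine
    ```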

    I am not an aeronautics engineer, so I could be totally wrong.

  4. Any mass it fires starts with the same velocity as the probe, and would need to be given an equal velocity change in the opposite direction. It's a smaller mass, so it would require less fuel than decelerating the whole probe; but it's still a hard problem.
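
    Whatever mass ends up doing the braking, the ideal rocket equation gives a sense of how hard "hard" is. A sketch, assuming a chemical-rocket exhaust velocity of roughly 4.4 km/s (my assumption; nothing in the thread specifies the propulsion):

    ```python
    # Tsiolkovsky rocket equation: delta_v = v_exhaust * ln(m_initial / m_final).
    # Solve for the mass ratio needed to shed 1/10th of the speed of light.
    import math

    delta_v = 0.1 * 3.0e8   # m/s, the velocity change to achieve
    v_exhaust = 4.4e3       # m/s, roughly a hydrogen/oxygen chemical rocket (assumption)

    # m_initial / m_final = exp(delta_v / v_exhaust); far too large for a float,
    # so print its base-10 logarithm instead.
    log10_mass_ratio = (delta_v / v_exhaust) / math.log(10)
    print(f"propellant mass ratio ~ 10^{log10_mass_ratio:,.0f}")   # ~ 10^2,961: hopeless with chemistry
    ```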

    Be careful with the word "just". It often makes something hard sound simple.

  5. It does not. Social media platforms have had massive societal impact. From language, to social movements, to election results, social media has had effects, positive or negative, that impact the lives of even those who do not use them.
  6. LLMs cannot loop (unless you have a counterexample?), and I'm not even sure they can do a lookup in a table with 100% reliability. They also have finite context, while a Turing machine has an unbounded tape.
  7. An LLM is not a universal Turing machine, though. It's a specific family of algorithms.

    You can't build an LLM that will factorize arbitrarily large numbers, even given unlimited time. But a Turing machine can.
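
    To make the contrast concrete: the Turing-machine side of the claim is just an ordinary program with an unbounded loop, which a fixed number of forward passes can't emulate. A deliberately naive sketch (my own, not from the thread):

    ```python
    # Trial-division factorization: loops as long as it needs to, on inputs of any size.
    # A fixed-size forward pass has no equivalent of this unbounded loop.
    def factorize(n: int) -> list[int]:
        factors = []
        d = 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

    print(factorize(600851475143))   # [71, 839, 1471, 6857]
    ```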

  8. You are making a big assumption here, which is that an LLM is the main "algorithm" the human brain uses. The human brain could well be a Turing machine that is "running" something that's not an LLM. If that's the case, then the fact that humans can come up with novel concepts does not imply that LLMs can do the same.
  9. That would be fine, if there were a law that forced every browser to have this setting and every company to respect it.
  10. "rapid, iterative Waterfall" is a contradiction. Waterfall means exactly one iteration: you can't change the requirements once implementation has started, and you can't iterate. If you change the spec mid-implementation, it's not Waterfall.

    Then again, Waterfall was never a real methodology; it was a straw man description of early software development. A hyperbole created only to highlight why we should iterate.

  11. That's not what AGI used to mean a year or two ago. That's a corruption of the term, and using that definition of AGI is the mark of a con artist, in my experience.
  12. Any definition of AGI that doesn't include awareness is wrongly co-opting the term, in my opinion. I do think some people are doing that on purpose. That way they can get people who are passionate about actual-AGI to jump on board working with/for unaware-AGI.
  13. > It makes some sense for an AI trained on human persuasion

    Why?

    > However, results will vary.

    Like in voodoo?

    I'm sorry to be dismissive, but your comment is entirely dismissing the point it's replying to, without any explanation as to why it's wrong. "You are holding it wrong" is not a cogent (or respectful) response to "we need to understand how our tools work to do engineering".

  14. Yes, in the EU. I'm in Spain, and I sign in to several banks as well as government sites on my desktop PC.
  15. Yes. Personal data under the GDPR is "any information relating to an identified or identifiable natural person". If it's data about a specific person, it's personal data; it's a very straightforward definition. Businesses need either informed consent or a legitimate interest to store or process it.
  16. I'm not sure what point you are trying to make. Are you saying that, in order to make LLMs better at learning, the missing piece is to make them capable of interacting with the outside world? Give them actuators and sensors?
  17. You can get this experience on Android (without sideloading) by using Firefox with the right plugins. I don't have an iPhone, so I don't know if the same is true there.
  18. I made two comments in this thread. The one you replied to, and this one I'm using now to respond to you. Do you have me confused with someone else?

    But yeah, I think "within our lifetime" is a critical qualifier, and most people who don't write it down are implicitly assuming it's obvious. I have very limited interest in technologies that will not exist until centuries after I'm gone, other than as entertainment.

    Without that qualifier, almost any practical discussion about technology is moot. It's fun to talk about FTL or whatever, but we certainly should not be investing heavily in it... It might be possible, but most research in that direction would be wasteful.

  19. > Religion is a lie

    Anyone who says "we will have, within this generation, technology to extend your lifetime indefinitely" is lying just as much as the priest who says he knows God exists is lying[1]. I would say it's more likely that the scientist liar is accidentally right than that the priest is; that doesn't make either of them someone you should trust.

    At the current stage of technology, belief in this prospect is based only on hope. Belief in it is essentially religious.

    [1] Possibly they both believe they are telling the truth, so you could argue they are wrong rather than lying. They are still both standing on the same ground.

  20. All of those are arguments for why robots should generally have wheels rather than legs, except when legs are specifically needed.
  21. Some people don't lose weight as easily as others, due to genetics, medical treatment, disability... And some others are simply free to not invest as much effort in losing weight.

    Point being: there are overweight Japanese people, despite the measures Japan takes to avoid it. Those are the people I mean when I say it doesn't work for everyone. And they don't just have to deal with the consequences of being overweight; they also have to deal with being treated very poorly. You can say it's for their own good, and that it incentivizes them to better themselves. Regardless, it still sucks for them.

    Shaming people into losing weight may work, on aggregate. I'm not entirely convinced it's a good way to go about it, at the individual level.

  22. I was not trying to defend him. I'm very annoyed at how these words are being intentionally abused; they chose to recycle the term rather than create a new one, exactly to create this confusion. It's still important to know what the grifters mean.
  23. AGI, like AI before it, has been co-opted into a marketing term. Most of the time, outside of sci-fi, what people mean when they say AGI is "a profitable LLM".

    In the words of OpenAI: “AGI is defined as highly autonomous systems that outperform humans at most economically valuable work”

  24. I don't see how that's worse than user-password authentication. For passwords without 2FA, the attack pattern is:

    1) The user goes to BAD’s website (thinking it's GOOD) and logs in with their username and password. BAD’s website captures the username and password.

    2) BAD’s website shows a fake authentication error and redirects to GOOD’s website. The user is not very likely to notice.

    3) BAD uses the username and password to log in to GOOD’s website as the user. BAD now has full access to the user’s GOOD account.
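
    The crux is step 3: without 2FA, nothing binds the password to who (or what site) is submitting it, so a captured password can simply be replayed. A toy model, with made-up names:

    ```python
    # Toy model of step 3: a static password, once captured, is simply replayable.
    # All names and data here are made up for illustration.

    good_site_passwords = {"victim": "hunter2"}   # GOOD's stored credential (in reality, hashed)

    def good_login(username: str, password: str) -> bool:
        # GOOD has no way to tell whether the password comes from the user or from BAD.
        return good_site_passwords.get(username) == password

    captured = ("victim", "hunter2")   # harvested by BAD in step 1
    print(good_login(*captured))       # True -> BAD is now logged in as the user
    ```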

    OK, with a password manager the user is more likely to notice they are on BAD’s website. Is that the advantage?

  25. I won't argue; there are benefits to the way Japan does it, if the alternative is doing nothing. Obesity is indeed bad.

    But we're talking about a pill that can hopefully prevent both, so we don't need to make that choice.

  26. The way Japan "solves" obesity is quite cruel to those it does not work for. Overweight natives are not treated kindly. I think creating a medicine that makes this easier is a moral good, and a sign of kindness and intelligence. Hopefully the aliens also agree.
  27. > if it would improve the evolutionary fitness of the majority of people

    Evolution led to the intelligence that led to creating Ozempic. Maybe that's the mechanism by which evolution is improving evolutionary fitness. The idea that what was created by man is not part of evolution is a form of the naturalistic fallacy; it's the false belief that the domain of nature stops at the doors of the lab.

  28. Sure. I'm passionate about turning off my computer at 18:00. And I'm passionate about not hurting our customers. These two go hand in hand: I work very hard to avoid doing things that are likely to cause incidents, since incidents hurt customers who are not at fault, and they often mean I have to stay late.

    I guess you could say I'm passionate about testing and observability, though that doesn't really describe how I feel. It just puts me in a sour mood when something breaks and we could have prevented it with better practices from the start.

  29. The number of times I've seen passionate people make bad technical decisions in the name of trying something exciting is too many to count. I obviously agree that passion is valuable, but it's not without faults. I think I'm better in some ways and worse in others because of the way I approach the job.
