- That explains why it happens, but doesn't really help with the problem. My expectation, as a pretty naive user, is that whatever is in the .md file stays permanently in the context. It's good to understand why that's not the case, but it's unintuitive and can lead to frustration. It's bad UX, if you ask me.
I'm sure there are workarounds, such as resetting the context, but the point is that good UX would mean such tricks aren't needed.
- It seems to me that building a recording device that can survive in space, that is very light, and that won't break apart after the impact of an explosive charge strong enough to decelerate it from the speeds needed to reach Alpha Centauri is... maybe impossible.
We're talking about roughly 4.4 light years. Even at 1/10th of the speed of light, the trip takes over 40 years. The forces needed to shed that kind of velocity are enormous.
I did a quick napkin calculation (assuming the device weighs 1 kg): that's close to 3,000 kilonewtons if it has 10 seconds to decelerate. For comparison, the thrust of an F100 jet engine is around 130 kN.
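The napkin calculation above can be sketched directly (same assumptions: a 1 kg device shedding 1/10th of light speed over 10 seconds):

```python
# Napkin check: force needed to decelerate a 1 kg probe from ~0.1c in 10 s.
C = 299_792_458           # speed of light, m/s
mass_kg = 1.0
delta_v = 0.1 * C         # ~3.0e7 m/s of velocity to shed
time_s = 10.0

accel = delta_v / time_s            # a = Δv / t
force_kn = mass_kg * accel / 1000   # F = m·a, expressed in kilonewtons
print(f"{force_kn:,.0f} kN")        # ~3,000 kN, vs ~130 kN for an F100
```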
IANA aeronautics engineer, so I could be totally wrong.
- Any mass that it fires would have a starting velocity equal to that of the probe, and would need to be accelerated to an equal velocity in the opposite direction. Since it's a smaller mass, it would require less fuel than decelerating the whole probe; but it's still a hard problem.
Be careful with the word "just". It often makes something hard sound simple.
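For scale, here's a rough sketch using the Tsiolkovsky rocket equation, with numbers I'm supplying myself (a generous chemical-rocket exhaust velocity of ~4.5 km/s, cancelling 1/10th of light speed), showing why ejecting reaction mass doesn't make this easy:

```python
import math

# Tsiolkovsky rocket equation: m0/m1 = exp(Δv / ve), where m0/m1 is the
# ratio of initial mass (payload + propellant) to final (dry) mass.
delta_v = 0.1 * 299_792_458   # velocity to cancel, ~0.1c in m/s
ve = 4_500.0                  # chemical exhaust velocity, m/s (assumed)

# exp(Δv/ve) overflows a float, so report the base-10 exponent instead.
log10_ratio = (delta_v / ve) / math.log(10)
print(f"mass ratio ~ 10^{log10_ratio:.0f}")  # an absurd amount of propellant
```

In other words, with chemical propulsion the propellant mass dwarfs the payload by thousands of orders of magnitude; "just fire a mass backwards" hides all of that.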
- You are making a big assumption here, which is that an LLM is the main "algorithm" the human brain uses. The human brain could well be a Turing machine that's "running" something other than an LLM. If that's the case, the fact that humans can come up with novel concepts does not imply that LLMs can do the same.
- "rapid, iterative Waterfall" is a contradiction. Waterfall means exactly one iteration: if you change the spec after implementation has started, it's not Waterfall. You can't change the requirements, and you can't iterate.
Then again, Waterfall was never a real methodology; it was a straw-man description of early software development, a hyperbole created only to highlight why we should iterate.
- > It makes some sense for an AI trained on human persuasion
Why?
> However, results will vary.
Like in voodoo?
I'm sorry to be dismissive, but your comment entirely dismisses the point it's replying to, without any explanation of why it's wrong. "You're holding it wrong" is not a cogent (or respectful) response to "we need to understand how our tools work to do engineering".
- Yes. Personal data under the GDPR is "any information relating to an identified or identifiable natural person". If it's data about a specific person, it's personal data; it's a very straightforward definition. Businesses need a lawful basis, such as informed consent or legitimate interest, to store or process it.
- I made two comments in this thread. The one you replied to, and this one I'm using now to respond to you. Do you have me confused with someone else?
But yeah, I think "within our lifetime" is a critical qualifier, and most people who don't write it down are implicitly assuming it's obvious. I have very limited interest in technologies that won't exist until centuries after I'm gone, other than as entertainment.
Without that qualifier, almost any practical discussion about technology is moot. It's fun to talk about FTL or whatever, but we certainly should not be investing heavily in it... It might be possible, but most research in that direction would be wasteful.
- > Religion is a lie
Anyone who says "we will have, within this generation, technology to extend your lifetime indefinitely" is lying just as much as the priest who says he knows God exists[1]. I'd say the lying scientist is more likely to be accidentally right than the priest is; that doesn't make either of them someone you should trust.
At the current stage of technology, belief in this outcome is based almost entirely on hope. Belief in it is essentially religious.
[1] Possibly they both believe they are telling the truth, so you could argue they are wrong rather than lying. They are still both standing on the same ground.
- Some people don't lose weight as easily as others, due to genetics, medical treatments, disability... And others simply have the freedom not to invest as much effort in losing weight.
Point being: there are overweight people in Japan, despite the measures Japan takes to avoid it. Those are the people I mean it doesn't work for. And they don't just have to deal with the consequences of being overweight; they also have to deal with being treated very poorly. You can say it's for their own good, and that it incentivizes them to better themselves. Regardless, it still sucks for them.
Shaming people into losing weight may work, on aggregate. I'm not entirely convinced it's a good way to go about it, at the individual level.
- I don't see how that's worse than username-password authentication. For passwords without 2FA, the attack pattern is:
1) User goes to BAD's website and signs up (with their username and password). BAD captures both.
2) BAD's website shows a fake authentication error and redirects to GOOD's website. The user is not very likely to notice.
3) BAD uses the username and password to log in to GOOD's website as the user. BAD now has full access to the user's GOOD account.
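The replay at the heart of steps 1–3 can be sketched in a few lines (toy names and a plaintext store, purely for illustration): because a password is a bearer secret, whatever BAD captures is exactly what GOOD accepts.

```python
# Toy sketch of the attack above. GOOD's credential check has no way
# to tell whether the password was typed by the user or replayed by BAD.
good_db = {"alice": "hunter2"}   # GOOD's stored credential (toy plaintext)

def good_login(user: str, password: str) -> bool:
    return good_db.get(user) == password

# 1) Alice signs up on BAD's fake form; BAD captures her credentials.
captured = {"user": "alice", "password": "hunter2"}

# 3) BAD replays them at GOOD and gets full access.
print(good_login(captured["user"], captured["password"]))  # True
```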
OK, with a password manager the user is more likely to notice they're on BAD's website. Is that the advantage?
- > if it would improve the evolutionary fitness of the majority of people
Evolution led to the intelligence that led to creating Ozempic. Maybe that's the mechanism by which evolution is improving evolutionary fitness. The idea that what is created by man is not part of evolution is a form of the naturalistic fallacy: the false belief that the domain of nature stops at the doors of the lab.
- Sure. I'm passionate about turning off my computer at 18:00. And I'm passionate about not hurting our customers. These two go hand in hand: I work very hard to avoid doing things that are likely to cause incidents, since those hurt customers who aren't at fault, and they often mean I have to stay late.
I guess you could say I'm passionate about testing and observability, though that doesn't really describe how I feel. It just puts me in a sour mood when something breaks and we could have prevented it with better practices from the start.
- The number of times I've seen passionate people make bad technical decisions in the name of trying something exciting is too many to count. I obviously agree that passion is valuable, but not without faults. I think I'm better in some ways and worse in others due to the way I approach the job.
Finding them is slightly harder, but absolutely worth it.
In any case, complaining about how many games out there are not your thing is a waste of time. Much better to define what you like and look for recommendations from people who like similar games. Who cares how many FPSes are released if you don't like FPSes? If you like RPGs, find RPG gamers and ask them what's good. Substitute any genre; there is no genre out there that isn't getting more releases than you could possibly play.