- Good catch. I didn't realize that there was a longer list of restrictions below the section called "Stricter mode", and it seems like a lot of String functions I use are missing too.
- When reading through the project's list of JS restrictions for "stricter" mode, I was expecting to see that it would limit many different JS concepts. But in fact, none of the things that are impossible in this subset are things I would do in the course of normal programming anyway. I think all of the JS code I've written over the past few years would work out of the box here.
- This provides plenty of value conceptually, but I wish they were able to push the syntax into a more intuitive place. My biggest gripe with Relay is how forced the syntax feels, and this seems better but still confusing. Take for example this component declaration:
```tsx
export const PostCard = ({ post: postRef }: { post: ViewRef<'Post'> }) => { ... }
```
That feels terrible to me! I don't have any suggestion from my side, but I feel like there's got to be a less awkward way.
- Really impressive. Listening to their demos, I was surprised by how poorly their closed-source competitors handled common abbreviations like "Wed. June 23rd". I had assumed that every commercial vendor handled cases like that more elegantly, in line with how a voice actor would read them.
- I don't know what the book describes, but I'd like to hear more specifics. Until then, I'm going to assume that it's sensationalist hogwash.
> The same knowledge that helps us treat neurological disorders could be used to disrupt cognition, induce compliance, or even in the future turn people into unwitting agents.
Disrupting cognition is easy. But as far as I'm aware, we don't have any drugs to "induce compliance" and we're miles away from being able to turn people into "unwitting agents" purely on the basis of neuroscience.
- OpenAI likes to time their announcements alongside major competitor announcements to suck up some of the hype. (See, for instance, the announcement of GPT-4o a single day before Google's I/O conference.)
They were probably sitting on this for a while. That makes me think this is a fairly incremental update for Codex.
- Before the outrage comments pour in, take a look at the conditions he has to meet (per the New York Times):
> ...this 12-step package asks Mr. Musk, the company’s chief executive, to vastly expand Tesla’s stock market valuation — to $8.5 trillion from around $1.4 trillion — while hitting a variety of other goals. Those include selling one million robots with humanlike qualities and 10 million paid subscriptions to the company’s self-driving software.
The headline $1 trillion figure only comes into play if he hits a very lofty goal: roughly 6x-ing the company's valuation while also selling a huge number of expensive robots.
I don't think this is nearly as crazy as people are making it seem. It's a huge reward for a goal that seems unlikely to be achievable in a short time frame.
- Last time these folks were mentioned on HN, there was a lot of skepticism that this is really feasible. The issue is cooling: in space, you can't rely on convection or conduction for passive cooling, so you can only radiate heat away. The radiator would need to be several kilometers across to provide enough cooling, and launching such a large object into space would eat up any cost savings from the "free" solar power.
More discussion: https://www.hackerneue.com/item?id=43977188
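The scale problem can be sketched with the Stefan-Boltzmann law. This is a rough lower-bound estimate; the 1 GW power figure and 300 K radiator temperature are illustrative assumptions of mine, not numbers from the linked thread, and it ignores absorbed solar and Earth flux, which would make a real radiator even larger:

```python
# Back-of-envelope radiator sizing via the Stefan-Boltzmann law:
# P = sides * emissivity * sigma * A * T^4, solved for area A.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(power_w: float, temp_k: float,
                     emissivity: float = 0.9, sides: int = 2) -> float:
    """Panel area needed to radiate `power_w` watts at temperature `temp_k`.

    A flat panel radiates from both faces (sides=2). Absorbed sunlight and
    Earthshine are ignored, so this is a lower bound on the real area.
    """
    return power_w / (sides * emissivity * SIGMA * temp_k ** 4)

# Example: rejecting 1 GW of waste heat at 300 K (room-temperature electronics)
area = radiator_area_m2(1e9, 300.0)
print(f"{area / 1e6:.2f} km^2")  # ~1.21 km^2, a panel over a kilometer on a side
```

Running the radiator hotter shrinks the area with the fourth power of temperature, but then the chips themselves have to run hotter still to push heat into it, which is its own engineering problem.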
- I'm not sure why the author thinks that no one's talking about these problems.
I've heard a lot of people complaining about context loss from agents, whether that's due to context windows, or communication across agents, or agents not paying attention to what the user specified in their prompt in the first place.
- Looking through the meeting notes myself, I don't see MSFT mentioned specifically. That makes sense given Microsoft's position as an OpenAI investor; they might be okay with OpenAI calling out "big tech" in general, but it would be extremely weird if OpenAI made any specific references given that Microsoft owns 49% of their company.*
*technically it's some weird profit sharing thing rather than equity
- > The “realism” of graphics has nothing to do with performance.
Obviously I understand your point that computational complexity is different from the extent to which something is realistic. But it's totally wrong to say it "has nothing to do with" it.
Photorealistic scenes require high-res textures, more detailed geometry, better shadows, better global illumination, etc.
Cartoonish art styles don't necessarily require any of those. They still benefit from them, but with diminishing returns.
It's cool if they want to take advantage of some fancy UE5 features, but the burden to optimize is on them, especially considering that the game's quality settings look like this: https://www.thegamer.com/borderlands-4-optimal-pc-settings/#...