deviation
Joined 838 karma

  1. I think it's important to make a distinction here... These "Space DC" companies are not showing up on some Techy-Shark-Tank (or walking into VC meetings) with a promise to investors that they have an established strategy which will pay off.

    IMO, they are just answering the question: "If we pour $100B into R&D, could it have a reasonable chance at succeeding?".

    For Nvidia (or these other massive companies) the investment is chump change.

  2. Likely a combination of practicality, and the importance of airflow throughout the sand in order to heat it and pull from it effectively.

    Also, water's specific heat capacity is 4.186 J/g°C, while air's is approximately 1.005 J/g°C. It would take much more energy to heat up water than it would to heat up air.

    Also, water boils at 100°C, while the heat is stored in the sand at 600°C.
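
    The specific-heat gap is easy to put in numbers — a quick sketch using Q = m·c·ΔT and the values quoted above:

    ```python
    # Energy needed to heat a medium: Q = m * c * delta_T.
    C_WATER = 4.186  # J/(g*degC)
    C_AIR = 1.005    # J/(g*degC)

    def heat_energy_j(mass_g: float, c: float, delta_t_c: float) -> float:
        return mass_g * c * delta_t_c

    # Heating 1 kg of each medium by the same 1 degC:
    water_j = heat_energy_j(1000, C_WATER, 1)
    air_j = heat_energy_j(1000, C_AIR, 1)
    print(round(water_j / air_j, 2))  # 4.17 -- water takes ~4x the energy
    ```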

  3. Capitalism, at work. Wherever there is a cost, there will be attempts made at cost efficiency. Google understands that hiring designers or artists is expensive, and they want to offer a cheaper, more effective alternative so that they can capture the market.

    In a coffee shop this morning I saw a lady drawing tulips with a paper and pencil. It was beautiful, and I let her know... But as I walked away I felt sad that I don't feel that when browsing online anymore- because I remember how impressive it used to feel to see an epic render, or an oil painting, etc... I've been turned cynical.

  4. I guarantee if there's even a 0.1% chance of this architecture eventually outperforming traditional ones, then Zuckerberg et al are already eating the cost and have teams spinning up experiments doing just that.
  5. Replacing faulty nodes or equipment in space seems totally reasonable... It's not like getting faulty drives replaced in my datacenter racks doesn't already take weeks/months...
  6. It's a hoop to jump through, but I'd recommend checking out Apple's container/containerization services which help accomplish just that.

    https://github.com/apple/containerization/

  7. Anecdotally, I would have thrown "127,271 BTC to USD" into Google. I didn't mind it.
  8. This is also somewhat highlighted in Google's paper "Borg, Omega, and Kubernetes" which they published in 2016.

    https://static.googleusercontent.com/media/research.google.c...

  9. HRW would cover the simple case, but they needed way more-- e.g. per-request balancing, zone affinity, live health checks, spillover, ramp-ups, etc. Once you need all that dynamic behavior, plain hashing just doesn’t cut it IMO. A custom client-side + discovery setup makes more sense.
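
    For reference, the "simple case" really is only a few lines — a minimal HRW (rendezvous hashing) sketch, with hypothetical node names:

    ```python
    import hashlib

    def hrw_pick(key: str, nodes: list[str]) -> str:
        """Rendezvous (HRW) hashing: every node scores hash(key, node); top score wins."""
        def score(node: str) -> int:
            return int(hashlib.sha256(f"{key}:{node}".encode()).hexdigest(), 16)
        return max(nodes, key=score)

    # Removing a node only remaps the keys that lived on it -- but note there is
    # no room here for health checks, zone affinity, or per-request load signals.
    owner = hrw_pick("request-123", ["node-a", "node-b", "node-c"])
    ```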
  10. Interesting. In a thought process while editing a PDF, Claude disclosed the folder hierarchy for its "skills". I didn't know this was available to us:

    > Reading the PDF skill documentation to create the resume PDF

    > Here are the files and directories up to 2 levels deep in /mnt/skills/public/pdf, excluding hidden items and node_modules:

  11. The world is full of useful shapes! No reason that math shouldn't be too :)
  12. Somewhere between 85% and 90% of all countries have some sort of mandated severance pay in the event of a layoff.

    A small percentage of countries also mandate severance even if the employee is fired (with cause).

  13. I feel that companies still misunderstand how to evaluate these metrics they're collecting on the efficacy of RTO.

    IMO, RTO efficacy should be measured on a team-by-team basis. There is doubtless no "one size fits all" approach for entire orgs, or entire companies (and if there is, then the metrics should /strongly/ reflect that).

  14. I feel that the reason RTO is still such a wonderful topic to write about or debate online is that the spectrum of human "experience" is so wide that there will always be a significant number of people on either side of the fence.

    Personally, I can't count the amount of times I've switched sides, and I don't think I'm the only one.

    IMO, mandated RTO is (objectively) an effort by large organizations to make their "systems" more predictable in aggregate. The manner of predictability will largely depend on the size of the organization (e.g. a startup vs. Microsoft) and their needs (productivity/reliability/consistency/etc), and we see this manifest in any number of the RTO announcements we've seen online.

  15. Is the problem here the modern web? Or that this "simple" feature had its dependencies split amongst 3 microservices, instead of 1?

    Seems more a system design failure to me.

  16. This was a great coffee read. Very insightful.
  17. Knew this was coming thanks to their Chrome APIs for on-device gen-AI (summarize, translate, generate, etc.).

    I'm surprised this didn't happen sooner... The amount of data available from Chrome users seems enormous.

  18. Not to hand wave-- but this feels industry standard IMO. I have a dozen PRs sitting unacknowledged and stale across a handful of FAANG (and other) repos, including Apple's.

    I start my first day @ Apple in a few weeks, so I ACK that my opinion might be a little biased here.

  19. It seems to be an import pipeline bug.

    Photos does a lot of extra work on import (merging RAW+JPEG pairs, generating previews, database indexing, optional deletion), so my guess is a concurrency bug where a buffer gets reused or a file handle is closed before the copy finishes.

    Rare, nondeterministic corruption fits the profile.
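
    Purely illustrative (this is not Apple's code — just a sketch of the bug class being speculated about): a staging buffer shared between imports, where a deferred write re-reads the buffer after the next import has already reused it:

    ```python
    # One staging buffer is shared across imports. The actual disk write is
    # deferred; if the next import reuses the buffer before the first write
    # runs, photo A gets written with photo B's bytes.
    shared_buf = bytearray(4)

    def start_import(data: bytes, buf: bytearray):
        buf[:] = data                 # stage the photo's bytes
        return lambda: bytes(buf)     # deferred "write" re-reads the buffer

    finish_a = start_import(b"AAAA", shared_buf)
    finish_b = start_import(b"BBBB", shared_buf)  # clobbers A's staged bytes
    print(finish_a())  # b'BBBB' -- photo A is silently corrupted
    ```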

  20. Their tokenization suggests they're new Qwen models AFAIK. They tokenize input to the exact same # of tokens that Qwen models do.
  21. So this confirms a best-in-class model release within the next few days?

    From a strategic perspective, I can't think of any reason they'd release this unless they were about to announce something which totally eclipses it?

  22. If you have a Raspberry Pi or some device lying around that you're happy to run as an always-on server, you could set it up as a Layer 7 firewall, using something like Nginx as a reverse proxy for SSL/TLS interception.

    Throw this into some LLM on research mode and I'm sure you could get some step-by-step instructions for setting it up.

    I suppose it's not much different to a PiHole but instead of filtering out ads you're filtering out shorts.
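
    A rough sketch of what that Nginx config could look like — assuming you've generated your own CA, installed it on the client devices, and pointed DNS for youtube.com at the Pi (e.g. via dnsmasq); the cert paths are placeholders:

    ```nginx
    # Sketch only: terminate TLS with a cert signed by your own CA,
    # drop Shorts URLs, and proxy everything else to the real site.
    server {
        listen 443 ssl;
        server_name www.youtube.com;

        ssl_certificate     /etc/nginx/certs/youtube.pem;  # signed by your CA
        ssl_certificate_key /etc/nginx/certs/youtube.key;

        location /shorts/ {
            return 403;  # block Shorts outright
        }

        location / {
            proxy_pass https://www.youtube.com;
            proxy_ssl_server_name on;
            proxy_set_header Host www.youtube.com;
        }
    }
    ```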

  23. Not router level, but "Enhancer for YouTube" has "Hide shorts" in its appearance preferences. Available on Chrome, Firefox, and Edge.

    If I was a concerned parent, I'd just install and hide the extension from the bookmarks bar.

    The downside being that it doesn't affect native YouTube apps for mobile devices...

  24. Unless they are transparent with us in detailing why the technology behind this is different from a slightly altered system prompt... I will assume OpenAI is just trying to stay relevant.
  25. This makes sense if we compare compute cost instead of hours.

    Transformer self-attention costs scale roughly quadratically with context window size. Servicing prompts in a 32k-token window uses much more compute per request than in an 8k-token window.

    A Max 5× user on an 8k-token window might exhaust their cap in around 30 hours, while a Max 20× user on a 32k-token window will exhaust theirs in about 35 to 39 hours instead of four times as long.

    If you compact often, keep context windows small, etc., I'd wager that your Opus 4 consumption would approach the expected 4× multiplier... In reality, I assume the majority of users aren't clearing their context windows and are just letting the auto-compact do its thing.

    Visualization: https://codepen.io/Sunsvea/pen/vENyeZe
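
    The quadratic term is easy to sanity-check — a sketch counting only the attention term (real serving cost also has large linear components, so treat this as an upper bound on the gap):

    ```python
    # Self-attention compute scales roughly O(n^2) in context length n.
    def attn_cost(n_tokens: int) -> int:
        return n_tokens ** 2

    per_request_ratio = attn_cost(32_000) / attn_cost(8_000)
    print(per_request_ratio)  # 16.0 -- a 4x larger window costs ~16x per request

    # So a 4x larger token budget, spent on 32k-token requests, buys far
    # fewer than 4x the hours of use.
    ```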

  26. Neat idea. I'm sure this will solve some friction for the neuroscientists/mathematicians out there with ~20+ windows open.

    Personally (as someone with ADHD), this would just relentlessly grind my gears. My thoughts are unpredictable by nature, and so I value the "reliability" of knowing my Chrome is two alt+tabs away, etc.

    If an algorithm started messing with this and changing throughout the day... Damn, I'd go crazy.

  27. A... massive distinction.
  28. Reminds me of "Don't dig for the gold, sell the shovels".

    Could also be read:

    > Meta spends 10% of last year's revenue to acquire 49% of a top AI data company and poach their leadership, to ensure they are a key player in what could be a ~5-trillion dollar industry by 2033.

    Meta has a history of this. Acquiring Oculus (and leaning in on VR), Ray-Ban partnership (and leaning in on AR)... etc.

    These all just seem like decisions to ensure the company's survival (and participation) in whatever this AI revolution will eventually manifest into.

  29. Cool attack chain.

    Regarding the payout-- I'm curious if targeting (in the disclosure video) a CEO/C-Suite exec @ Google would have encouraged a higher amount from the panel.

  30. The domain is banned on my work VPN, gave me a good laugh.
