- Likely a combination of practicality and the need for airflow through the sand, both to heat it and to extract that heat effectively.
Also, water's specific heat capacity is 4.186 J/g°C, while air's is approximately 1.005 J/g°C: per unit mass, water takes roughly four times as much energy to heat as air (quick numbers below).
And water boils at 100 °C, whereas the heat is stored in the sand at 600 °C.
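A quick back-of-envelope using those numbers (the 80 °C delta is arbitrary, just for scale):

```python
# Q = m * c * dT: energy to raise 1 kg of each substance by 80 degrees C,
# using the specific heats quoted above.
c = {"water": 4.186, "air": 1.005}  # J/(g*K)
mass_g, delta_t = 1000, 80
for name, c_p in c.items():
    print(f"{name}: {mass_g * c_p * delta_t / 1000:.0f} kJ")
# water: 335 kJ, air: 80 kJ -> water needs ~4x the energy per kilogram
```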
- Capitalism at work. Wherever there is a cost, there will be attempts at cost efficiency. Google understands that hiring designers or artists is expensive, and they want to offer a cheaper, more effective alternative so that they can capture the market.
In a coffee shop this morning I saw a lady drawing tulips with paper and pencil. It was beautiful, and I let her know... But as I walked away I felt sad that I don't feel that when browsing online anymore, because I remember how impressive it used to feel to see an epic render, an oil painting, etc... I've been turned cynical.
- It's a hoop to jump through, but I'd recommend checking out Apple's container/containerization tooling, which helps accomplish just that.
- This is also somewhat highlighted in Google's paper "Borg, Omega, and Kubernetes" which they published in 2016.
https://static.googleusercontent.com/media/research.google.c...
- HRW would cover the simple case, but they needed way more: per-request balancing, zone affinity, live health checks, spillover, ramp-ups, etc. Once you need all that dynamic behavior, plain hashing just doesn't cut it IMO. A custom client-side + discovery setup makes more sense (minimal HRW sketch below).
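For reference, the simple case really is tiny; a minimal rendezvous-hashing sketch (node names and hash choice are mine, purely illustrative):

```python
# Rendezvous (highest-random-weight) hashing: every (key, node) pair gets a
# deterministic score, and the key routes to the highest-scoring node.
import hashlib

def hrw_pick(key: str, nodes: list[str]) -> str:
    def score(node: str) -> int:
        digest = hashlib.sha256(f"{key}:{node}".encode()).digest()
        return int.from_bytes(digest[:8], "big")
    return max(nodes, key=score)

# Removing a node only remaps the keys that lived on it; everything else stays put.
print(hrw_pick("user-42", ["backend-a", "backend-b", "backend-c"]))
```

Everything on top of this (health checks, spillover, ramp-ups) is exactly the dynamic state a static score function can't express.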
- Interesting. In its thought process while editing a PDF, Claude disclosed the folder hierarchy for its "skills". I didn't know this was available to us:
> Reading the PDF skill documentation to create the resume PDF
> Here are the files and directories up to 2 levels deep in /mnt/skills/public/pdf, excluding hidden items and node_modules:
- I feel that companies still misunderstand how to evaluate the metrics they're collecting on the efficacy of RTO.
IMO, RTO efficacy should be measured on a team-by-team basis. There is almost certainly no "one size fits all" approach for an entire org, let alone an entire company (and if there is, the metrics should /strongly/ reflect that).
- I feel that the reason RTO is still such a wonderful topic to write about or debate online is that the spectrum of human experience is so wide that there will always be a significant number of people on either side of the fence.
Personally, I can't count the number of times I've switched sides, and I don't think I'm the only one.
IMO, mandated RTO is, at its core, an effort by large organizations to make their "systems" more predictable in aggregate. The kind of predictability sought will largely depend on the size of the organization (e.g. a startup vs. Microsoft) and its needs (productivity, reliability, consistency, etc.), and we see this manifest in any number of the RTO announcements we've seen online.
- It seems to be an import pipeline bug.
Photos does a lot of extra work on import (merging RAW+JPEG pairs, generating previews, database indexing, optional deletion of originals), so my guess is a concurrency bug where a buffer gets reused or a file handle is closed before the copy finishes.
Rare, nondeterministic corruption fits that profile (illustrated below).
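Purely illustrative (not Apple's code), this is the shape of that failure mode: a buffer gets recycled while another thread is still copying from it:

```python
# Contrived buffer-reuse race: the main thread zeroes ("reuses") the buffer
# while a worker is mid-copy, so the output mixes old and new bytes.
import threading

buf = bytearray(b"RAW+JPEG image bytes " * 10_000)
out = bytearray()

def slow_copy() -> None:
    for i in range(len(buf)):
        out.append(buf[i])  # may read bytes after they've been overwritten

t = threading.Thread(target=slow_copy)
t.start()
buf[:] = bytes(len(buf))  # reuse the buffer before the copy has finished
t.join()
print(out.count(0), "corrupted bytes out of", len(out))  # varies run to run
```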
- If you have a Raspberry Pi or some device lying around that you're happy to run as an always-on server, you could set it up as a layer-7 filter, using something like Nginx as a reverse proxy with SSL/TLS interception.
Throw this into some LLM in research mode and I'm sure you could get step-by-step setup instructions (or start from the sketch below).
I suppose it's not much different from a Pi-hole, except that instead of filtering out ads you're filtering out Shorts.
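As a concrete sketch, with mitmproxy instead of Nginx (a swap on my part, since mitmproxy addons are plain Python; you'd still need its CA trusted on each device), the filtering rule could look like this. Caveat: native mobile apps generally won't trust a user-installed CA, so this mainly covers browser traffic:

```python
# Hypothetical mitmproxy addon: block YouTube Shorts at the proxy.
# Run with: mitmdump -s block_shorts.py
from mitmproxy import http

def request(flow: http.HTTPFlow) -> None:
    host = flow.request.pretty_host
    if host.endswith("youtube.com") and flow.request.path.startswith("/shorts"):
        # Short-circuit with a 403 instead of forwarding upstream.
        flow.response = http.Response.make(
            403, b"Shorts are blocked on this network.", {"Content-Type": "text/plain"}
        )
```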
- Not router level, but "Enhancer for YouTube" has "Hide shorts" in its appearance preferences. Available on Chrome, Firefox, and Edge.
If I were a concerned parent, I'd just install it and hide the extension's icon from the toolbar.
The downside is that it doesn't affect the native YouTube apps on mobile devices...
- This makes sense if we compare compute cost instead of hours.
Transformer self-attention cost scales roughly quadratically with context length, so servicing prompts that fill a 32k-token window takes far more compute per request than ones in an 8k-token window (back-of-envelope below).
A Max 5× user on an 8k-token window might exhaust their cap in around 30 hours, while a Max 20× user on a 32k-token window would exhaust theirs in about 35 to 39 hours instead of four times as long.
If you compact often and keep context windows small, I'd wager your Opus 4 consumption would approach the expected 4× multiplier... In reality, I assume most users aren't clearing their context windows and are just letting auto-compact do its thing.
Visualization: https://codepen.io/Sunsvea/pen/vENyeZe
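Back-of-envelope for the quadratic term (this ignores the linear FLOP components, so treat it as a rough upper bound):

```python
# Relative self-attention compute for a full 32k window vs a full 8k window,
# assuming cost scales ~O(n^2) in context length n.
small, large = 8_000, 32_000
ratio = (large / small) ** 2
print(f"~{ratio:.0f}x the attention compute per full-window request")  # ~16x
```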
- Neat idea. I'm sure this will remove some friction for the neuroscientists/mathematicians out there with ~20+ windows open.
Personally (as someone with ADHD), this would just relentlessly grind my gears. My thoughts are unpredictable by nature, so I value the "reliability" of knowing my Chrome is two alt+tabs away, etc.
If an algorithm started messing with that, changing things throughout the day... Damn, I'd go crazy.
- Reminds me of "Don't dig for the gold, sell the shovels".
Could also be read:
> Meta spends 10% of last year's revenue to acquire 49% of a top AI data company and poach its leadership, to ensure they're a key player in what could be a ~5-trillion-dollar industry by 2033.
Meta has a history of this: acquiring Oculus (and leaning into VR), partnering with Ray-Ban (and leaning into AR), etc.
These all seem like decisions to ensure the company's survival (and participation) in whatever this AI revolution eventually manifests into.
IMO, they are just answering the question: "If we pour $100B into R&D, does it have a reasonable chance of succeeding?"
For Nvidia (or these other massive companies), the investment is chump change.