- > anyone can visually examine this.
They can't, if the variations are subtle enough. For example, many people are oblivious to the fact that audio can be extracted from objects captured on silent video, just from their tiny vibrations.
Analog is the worse option here. A simple screenshot of a 100% black bar is what a smart, lazy person would do.
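For the digital route, the key is that the bar gets burned into a flattened raster copy, not layered over selectable text. A minimal sketch of that, assuming Pillow and made-up filenames/coordinates:

```python
# Sketch: rasterize the page, overwrite the pixels with an opaque bar,
# and export a new flattened image. Filenames and coordinates are
# placeholders, not taken from the thread.
from PIL import Image, ImageDraw

page = Image.open("page.png").convert("RGB")          # flat raster, no text layer
draw = ImageDraw.Draw(page)
draw.rectangle((120, 300, 480, 330), fill=(0, 0, 0))  # 100% black, fully opaque
page.save("page_redacted.png")                        # covered pixels are gone
```

The point is that the covered pixels are overwritten before export, so there is nothing left to recover from the image itself; metadata and the un-redacted original are separate problems.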
- > made their redactions with actual ink, and then re-scanned every page
That's not very competent.
> going analog is foolproof
Absolutely not. There are many ways to f this up. Even the smallest variation in places that have been inked over twice will reveal the clear text.
- Progressive tax on resource consumption: this is what a tax system for the next millennium looks like.
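In case "progressive" sounds vague here: mechanically it just means marginal brackets applied to consumption instead of income. A toy calculation with invented brackets and rates, only to show the shape:

```python
# Toy marginal-bracket tax over resource consumption.
# Brackets and rates are invented for illustration only.
BRACKETS = [            # (upper bound of bracket, marginal rate)
    (1_000, 0.00),
    (5_000, 0.10),
    (20_000, 0.30),
    (float("inf"), 0.60),
]

def resource_tax(consumption: float) -> float:
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if consumption > lower:
            tax += (min(consumption, upper) - lower) * rate
        lower = upper
    return tax

print(resource_tax(12_000))  # 0 + 4000*0.10 + 7000*0.30 = 2500.0
```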
- I guess it's the opposite of Futur III:
https://www.der-postillon.com/2012/08/neue-zeitform-futur-ii...
- I guess sarcasm.
- Booked a trip yesterday, without knowing this had happened. ~1h off my usual trip time, which I'd gotten accustomed to over the last two decades. It's extremely awesome!
- Not by a long shot.
- >200
- FWIW, we use Slack in that way, and it works.
- The point about the stakes is a good one. But there is an individual factor to it, and maybe exactly because of the stakes you mention: if you perceive your personal stakes to be low, or you might even gain something from redistributing the message, fabricated or not, then your threshold might be low as well.
- Yes, but in that story, parent only has the word of that journalist. I personally don't even have that; I only have a post about it.
My deeper point is that it's arguably very difficult to establish a global, socially acceptable lower threshold of trust. Parent's level is, apparently, the word of a famous journalist in a radio broadcast. For some, the form of a message alone makes it worthy of trust, and AI will mess with this so much.
- Ironically, there is no evidence that woman ever said that.
- A common pattern you'd find in reliable research papers is that authors tend to understate their findings, which in practice strengthens the impact of their conclusions.
- I think the answer is somewhere in the middle: not as restrictive as parent, but also not as wide as AI companies want us to believe. My personal opinion is that hallucinations (random noise) are a fundamental building block of what makes human thinking and creativity possible, but that we have additional modes of neural processing layered on top, which filter and modify the underlying hallucinations so that they become directed at a purpose. We see the opposite when those filters fail in some non-neurotypical individuals, due to a variety of causes. We also use tools to optimize that filter function further by externalizing it.
The flip side of this is that fundamentally, I don't see a reason why machines could not get the same filtering capabilities over time by adjusting their architecture.
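As a toy analogy of that noise-plus-filter architecture (nothing here reflects any real model's internals; the generator and the "purpose" check are both invented): sample freely, then keep only what a separate filter accepts.

```python
import random

# Toy "hallucinate, then filter" loop. The generator is pure noise;
# the filter is what supplies the direction. Purely illustrative.
random.seed(0)

def hallucinate() -> list[int]:
    return [random.randint(0, 9) for _ in range(4)]

def purposeful(candidate: list[int]) -> bool:
    return sum(candidate) == 20        # stand-in for "directed at a purpose"

candidates = (hallucinate() for _ in range(10_000))
kept = [c for c in candidates if purposeful(c)]
print(len(kept), kept[:3])
```

Raw noise on its own produces almost nothing usable; the same noise behind a filter produces only outputs that satisfy the constraint, which is the asymmetry I mean.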
- > "hallucination" ... "behaviour that only sometimes resembles thinking"
I guess you'll find that if you limit the definition of thinking that much, most humans are not capable of thinking either.
- We do have two optimization mechanisms, though, which reduce noise with respect to their optimization functions: evolution and science. They are implicitly part of "standing on the shoulders of giants": you pick the giant to stand on (or it is picked for you).
Whether those optimization functions align with human survival, and thus whether our whole existence is not just slop, is something we're about to find out.
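Stripped down to a toy, "reducing noise with respect to an optimization function" is just mutation plus selection; the target and parameters below are invented:

```python
import random

# Toy selection loop: random mutation (the noise) plus a selection rule
# (the optimization function) steadily shrinks the distance to whatever
# that function rewards. Everything here is illustrative.
random.seed(1)
TARGET = 42.0

def fitness(x: float) -> float:
    return -abs(x - TARGET)               # higher is better

best = random.uniform(-100, 100)
for _ in range(5_000):
    mutant = best + random.gauss(0, 1.0)  # the noise
    if fitness(mutant) > fitness(best):   # the selection
        best = mutant

print(round(best, 2))  # ends up near 42.0
```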
- The way AI (and capitalism really) makes CEOs obsolete is by replacing all companies with just one. So only one CEO needed eventually.
- Local storage, sticky sessions, consistent hashing cache
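Of those three, the consistent-hashing cache is the one worth sketching, since the whole point is that adding or removing a node only remaps a small share of the keys. A minimal hash ring, with placeholder node names and no virtual nodes (real setups add those for balance):

```python
import bisect
import hashlib

def ring_pos(value: str) -> int:
    # Hash anything (node name or key) to a position on the ring.
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes: list[str]):
        self.ring = sorted((ring_pos(n), n) for n in nodes)

    def node_for(self, key: str) -> str:
        # A key belongs to the first node clockwise from its position.
        idx = bisect.bisect(self.ring, (ring_pos(key), "")) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["cache-a", "cache-b", "cache-c"])
print(ring.node_for("session:123"))  # same key always lands on the same node
```

Sticky sessions solve the same problem one layer up, by pinning a client to a node at the load balancer instead of deriving the node from the key.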
- A strong belief of mine: there is no storage, only communication. I've held that thought since I first heard of SRAM, and I think it applies to everything: knowledge, technology, societies, our universe in general.
- You're definitely not overthinking this. Fitting words by length is the attack vector if the blanking itself has been done correctly.
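To make it concrete: if the typeface is known and the bar can be measured, a dictionary collapses to the few candidates whose rendered width fits. A toy sketch with an invented glyph-width table and made-up candidate words (a real attack would measure the actual font):

```python
# Toy width-fitting attack: keep only candidates whose rendered width
# matches the measured redaction bar. Glyph widths, tolerance and the
# candidate list are all invented for illustration.
GLYPH_WIDTH = {c: 7 for c in "abcdefghijklmnopqrstuvwxyz"}
GLYPH_WIDTH.update({"i": 3, "l": 3, "j": 4, "m": 11, "w": 11})

def rendered_width(word: str) -> int:
    return sum(GLYPH_WIDTH[c] for c in word.lower())

def fit_candidates(bar_width: int, words: list[str], tol: int = 2) -> list[str]:
    return [w for w in words if abs(rendered_width(w) - bar_width) <= tol]

names = ["alice", "bob", "carol", "david"]
print(fit_candidates(rendered_width("carol"), names))  # ['carol', 'david']
```

A few candidates surviving the filter is the realistic outcome: the width cuts the search space drastically, and the surrounding context usually does the rest.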