- Politics is just what happens when three or more humans get together. It's an inescapable part of human nature.
- More projects should push back against calls for "governance" and "steering committees" and such. As you noticed, they paralyze projects. It took JavaScript seven years to get a half-baked version of Python context managers, and Python itself has slowed down markedly.
The seemingly irresistible social pressure to committee-ize development is a paper tiger. It disappears if you stand your ground and state firmly "This is MY project".
- Firecracker: so no virtiofs? Shame.
- Using Truffle for elisp is very cool.
> In GNU Emacs, with tagged pointers, you can know an object is a cons simply by looking at the pointer.
Another thing is that these objects don't need type words. In a conventional GC-adaptation of Emacs (e.g. the igc branch, or this article) one models cons cells, floats, and so on as regular objects consisting of a type word followed by the object payload. A cons cell is only two words long, so when you model it as a regular object, the type word makes it 50% larger!
The regular Emacs GC, for all its faults, densely packs cons cells and other small object types in specialized blocks, avoiding the need to pay the per-object type word overhead and thereby getting better space use and cache locality.
It'd be nice to get a modern GC with specialized heaps just for cons cells, floats, and other small objects.
- I wonder whether it'd be possible to augment the CO2 with something that would make it more detectable visually and aromatically, as we do with natural gas.
Natural gas is naturally odorless and colorless, so by default it can accumulate to dangerous levels without anyone noticing until it's too late. We make natural gas safer by making it stink, and we make it stink by adding trace amounts of "odorizers" like thiophane to it.
I wonder whether we could do something similar for the CO2 working fluid this facility uses --- make it visible and/or "smell-able" so that if a leak does happen, people can react immediately, before the threshold of suffocation is reached. Odorizers are also dirt cheap; the natural gas industry goes through tons of the stuff.
- Once social trust (or asabiyyah, to use Ibn Khaldun's term) in a region collapses, it often returns slowly or not at all. It's a sadly common pattern in history. I think one could plausibly argue that, in this way, Calabria never recovered from the collapse of antiquity, the Gothic wars, and generations spent as a Christian-Muslim war zone.
- Universities may have cured us of some forms of indoctrination but exposed us to others: for example, nuclear power was demonized for decades in academia, and our avoiding it has set us back as a civilization.
The "answer" here isn't education per se. A would-be censor might look at the spread of an inconvenient idea, conclude that education isn't working, and decide that harder measures are therefore justified.
The answer is epistemic humility and historical literacy. A good education instills both. They teach us that one can be wrong without shame, that testing ideas makes us stronger, and that no good has ever come of boosting ideas beyond what their merits can support.
Specifically, I want universities to do a much better job of teaching people to argue a perspective with which they disagree. A well-educated person can hold the best version of his opponent's idea in mind and argue it persuasively enough that his opponent agrees that he's been fairly heard. If people can't do that at scale, they're tempted to reach for censorship instead of truth seeking.
Another thing I want from universities (and all schools) is for them to inculcate the idea that the popularity of an idea has nothing to do with its merits. The irrational primate brain up-weights ideas it sees more often. The censor (if we're steelmanning) believes that coordinated influence campaigns can hijack the popularity heuristic and make people believe things they wouldn't if those ideas diffused organically through the information ecosystem.
This idea is internally consistent, sure, but 1) the censorship "cure" is always worse than the disease, and 2) we can invest in bolstering epistemics instead of in beefing up censorship.
We are rational primates. We can override popularity heuristics. Doing so is a skill we must be taught, however, and one of the highest ROI things we can do in education right now is teach it.
- In the history of humanity, it's never been the side attempting to restrict expression and the flow of information that's been in the right.
You don't "solve" the spread of "disinformation" because it's not a real problem in the first place. What you call "disinformation" is merely an idea with which you disagree. It doesn't matter whether any idea comes from the west, from China, from Russia, or Satan's rectum: it stands on its own and competes on its merits with other ideas in the mind of the public.
An idea so weak that it can survive only by murdering alternative ideas in the cradle is too fragile to deserve existing at all.
When you block the expression of disagreement, you wreck the sense-making apparatus that a civilization uses to solve problems and navigate history. You cripple its ability to find effective solutions for real but inconvenient problems. That, not people seeing the wrong words, is the real threat to public safety.
As we've learned painfully over the past decade, it is impossible for a censor to distinguish falsehood from disagreement. Attempts to purify discourse always and everywhere lead to epistemic collapse and crises of legitimacy. The concept is flawed, and any policy intended to "combat the spread of disinformation" is evil.
- Adorable: they've reinvented Emacs markers
- He's arguing that most drivers are mostly event-driven --- which is true, trivially.
- samdoesnothing is making a legitimate point about needing to consider the prevalence of unsafe in a Rust program. That he's being downvoted to hell is everything wrong with HN.
- > I don't want to be that contrarian guy, but I find it energizing to go faster. For example, being able to blast through a list of niggling defects that need to be fixed is no longer a stultifying drag.
Often, just getting started on a task at all is the hardest part. That's why writers often produce a "vomit draft" (https://thewritepractice.com/vomit-first-draft/) just to get into the right frame of mind to do real writing.
Using a coding agent to fix something trivial serves the same purpose.
- Facts are facts and exist independent of who discovers them. If you'd like to learn, the last thing you want to do is stop people poking at contradictions and pressure-testing claims. If Fil-C is really the "incredible achievement" you say it is, it can withstand scrutiny.
- In https://www.hackerneue.com/item?id=46270657, you write
> If you set the index to `((alice - bob) / sizeof(...))` then that will fail under Fil-C’s rules (unless you get lucky with the torn capability and the capability refers to Alice).
In the comment above, you write, referring to a fault on access through a torn capability
> Try it. That’s what happens.
Your position would be clearer if you could resolve this contradiction. Yes or no: does an access through a pointer with an arbitrary offset under a data race that results in that pointer's capability tearing always fault?
> You’re right that the intval is untrusted under Fil-C rules.
Can Fil-C compile C?
You can't argue, simultaneously,
1) that it's the capability, not your "intval", that is the real pointer with respect to execution flow, and
2) that Fil-C compiles normal C, in which the "intval" has semantic meaning.
Your argument is that Fil-C is correct with respect to capabilities even if pointers are transiently incorrect under data races. The trouble is that Fil-C programs can't observe these capabilities, can observe pointers, and so make control flow decisions based on these transiently incorrect (you call them "untrusted") inputs.
- Fil-C lets programs access objects through the wrong pointer under a data race. All over the Internet, you've responded to the tearing critique (and I'm not the only one making it) by alternately 1) asserting that racing code will panic safely on tear, which is factually incorrect, and 2) asserting that a program can access memory only through its loaded capabilities, which is factually correct but a non sequitur for the subject at hand.
You're shredding your credibility for nothing. You can instead just acknowledge Fil-C provides memory safety only for code correctly synchronized under the C memory model. That's still plenty useful and nobody will think less of you for it. They'll think more, honestly.
- [Woman walking on beach at sunset, holding hands with husband]
Voiceover: "Miracurol cures cancer."
[Couple now laughing over dinner with friends]
"Ask your doctor if Miracurol is right for you."
[Same footage continues, voice accelerates]
"In clinical trials, five mice with lymphoma received Miracurol. All five were cured. One exploded. Not tested in humans. Side effects include headache, itchiness, impotence, explosion, and death. Miracurol's cancer-free guarantee applies only to cancers covered under Miracurol's definition of cancer, available at miracurol.org. Manufacturer not responsible for outcomes following improper use. Consult your doctor."
[Couple walking golden retriever, sun flare]
Voiceover: "Miracurol. Because you deserve to live cancer-free."
Patient: "I exploded."
Miracurol: "That's extremely well documented on miracurol.org."
- Exactly. I agree that this specific problem is hard to exploit.
> Seems perhaps fixable by making pointer equality require that capabilities are also equal
You'd need 128-bit atomics or something. You'd ruin performance. I think Fil-C is actually making the right engineering tradeoff here.
My point is that the way Pizlo communicates about this issue and others makes me disinclined to trust his system.
- His incorrect claims about the JVM worry me.
- His schtick about how Fil-C is safer than Rust because the latter has the "unsafe" keyword and the former does not is more definitional shenanigans. Both Fil-C and Rust have unsafe code: it's just that in the Fil-C case, only Pizlo gets to write unsafe code and he calls it a runtime.
What other caveats are hiding behind Pizlo's broadly confident but narrowly true assertions?
I really want to like Fil-C. It's good technology, and something like it could really improve the baseline level of information security in society. But Pizlo is going to have to learn to be less grandiose and to knock it off with the word games. If he doesn't, he'll be remembered not as the guy who finally fixed C security but merely as an inspiration for the guy who does.
- You may define "memory safety" as you like. I will define "trustworthy system" as one in which the author acknowledges and owns limitations instead of iteratively refining private definitions until the limitations disappear. You can define a mathematical notation in which 2+3=9, but I'm under no obligation to accept it, and I'll take the attempt into consideration when evaluating the credibility of proofs in this strange notation.
Nobody is trying to hide the existence of "eval" or "unsafe". You're making a categorical claim of safety that's true only under a tendentious reading of common English words. Users reading your claims will come away with a mistaken faith in your system's guarantees.
Let us each invest according to our definitions.
- > Only if the program was written in a way that allowed for legitimate access to P1. You’re articulating this as if P1 was out of thin air; it’s not.
My program:

    if (p == P2) return p[attacker_controlled_index];

If the return statement can access P1, disjoint from P2, that's a weird execution for any useful definition of "weird". You can't just define the problem away.

Your central claim is that you can take any old C program, compile it with Fil-C, and get a memory-safe C program. Turns out you get memory safety only if you write that C program with Fil-C's memory model and its limits in mind. If someone's going to do that, why not write instead with Rust's memory model in mind and not pay a 4x performance penalty?
- > The kaboom you get is a safety panic
You don't always get a panic. An attacker who can get a program to access an offset he controls relative to P2 can access P1 if P2 is torn such that it's still coupled, at the moment of adversarial access, with P1's capability. That's dangerous if a program has made a control decision based on the pointer bits being P2. IOW, an attacker-controlled offset can transform P2 back into P1 and access memory using P1's capability even if program control flow has proceeded as though only P2 were accessible at the moment of adversarial access.
That can definitely enable a "weird execution" in the sense that it can let an attacker make the program follow an execution path that a plain reading of the source code suggests it can't.
Is it a corner case that'll seldom come up in practice? No. Is it a weakening of memory safety relative to what the JVM and Rust provide? Yes.
You are trying to define the problem away with sleight-of-hand about the pointer "really" being its capability while ignoring that programs make decisions based on pointer identity independent of capability --- because they're C programs and can't even observe these capabilities. The JVM doesn't have this problem, because in the JVM, the pointer is the capability.
It's exactly this refusal to acknowledge limitations that spooks me about your whole system.