- > If a carpenter shows up to put a roof yet their hammer or nail-gun can't actually put in nails, who'd you blame; the tool, the toolmaker or the carpenter?
I would be unhappy with the carpenter, yes. But if the toolmaker was constantly over-promising (lying?), lobbying governments, pushing their tools into the hands of carpenters, and never taking responsibility, then I would also criticize the toolmaker. It’s also a toolmaker’s responsibility to be honest about what the tool should be used for.
I think it’s a bit too simplistic to say «AI is not the problem» with the current state of the industry.
- I think this is a bit unfair. The carpenters are (1) living in a world with an extreme focus on delivering as quickly as possible, (2) being presented with a tool which prominent figures promise is amazing, and (3) getting the tool at a low cost because it’s subsidized.
And yet, we’re not supposed to criticize the tool or its makers? Clearly there are more problems in this world than «lazy carpenters»?
- I’m sorry, but this is such a terribly unscientific approach. You want to make a case for your hypothesis? Follow a structured approach with real arguments.
Saying «I know that correlation doesn’t imply causation», but then only demonstrating correlation isn’t really bringing this discourse any further.
- Blocks are fundamentally different from functions because of control flow: `return` inside a block returns from the enclosing method, not just from the block, and `break` terminates the method call the block was passed to (e.g. the `each` itself).
This adds some complexity to the language, but it makes Ruby far more expressive: with nothing but `Array#each` you can write idiomatic code that reads much like the loops and statements of traditional languages.
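A minimal sketch of that control flow (the method name and values are made up for illustration):

```ruby
# `return` inside a block exits the method that lexically contains the block.
def first_odd(items)
  items.each do |item|
    return item if item.odd? # returns from first_odd, not just from the block
  end
  nil
end

# `break` terminates the method call the block was passed to (the `each`),
# optionally giving that call a value.
tripled = [1, 2, 3, 4].each do |x|
  break x * 3 if x == 4
end

puts first_odd([2, 4, 5, 6]) # => 5
puts tripled                 # => 12
```

This is exactly what lets `each` plus a block emulate the early `return`/`break` behavior of a traditional `for` loop.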
- > This has massive implications. SEC means low latency, because nodes don't need to coordinate to handle reads and writes. It means incredible fault tolerance - every single node in the system bar one could simultaneously crash, and reads and writes could still happen normally. And it means nodes still function properly if they're offline or split from the network for arbitrary time periods.
Well, this all depends on the definition of «function properly». Convergence ensures that everyone observes the same state, not that it’s a useful state. For instance, the Imploding Hashmap is a very easy CRDT to implement: whenever there are concurrent changes to the same key, the final value becomes null. This gives Strong Eventual Consistency, but it isn’t a very useful data structure. All the data would just disappear!
So yes, SEC is a massively useful property which we should strive for, but it’s not going to magically solve all the end-user problems.
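A toy sketch of that Imploding Hashmap (class and merge rule invented to match the description, not from any real CRDT library): the per-key merge keeps equal values and collapses differing ones to nil, which is commutative, associative, and idempotent, so replicas converge while happily destroying data.

```ruby
# Toy "Imploding Hashmap": on conflicting writes to a key, the value becomes nil.
class ImplodingMap
  attr_reader :entries

  def initialize
    @entries = {}
  end

  def set(key, value)
    @entries[key] = value
  end

  # The merge is commutative, associative, and idempotent, so every replica
  # converges to the same state regardless of merge order: that's SEC.
  def merge(other)
    other.entries.each do |key, theirs|
      if !@entries.key?(key)
        @entries[key] = theirs
      elsif @entries[key] != theirs
        @entries[key] = nil # conflicting writes: the value implodes
      end
    end
    self
  end
end

a = ImplodingMap.new
b = ImplodingMap.new
a.set("title", "Draft 1")
b.set("title", "Draft 2") # concurrent edit on another replica

a.merge(b)
b.merge(a)
p a.entries # => {"title"=>nil} — converged, but the data is gone
p b.entries # => {"title"=>nil}
```

(Simplification: with no version vectors, even an ordinary causal overwrite looks like a conflict here, which only makes the structure more useless while staying a perfectly legal CRDT.)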
- > Suggest contrary to that is wrongthink and enough to have one ostracized not only from science, but also society as a whole.
There are many scientists who have published the "contrary". They were not ostracized from science or from society as a whole. They saw next to no negative impact on their positions while they were alive. Other scientists published rebuttals, and some of the original articles were later retracted.
J. Philippe Rushton: 250 published articles, 6 books, the most famous university professor in Canada. Retractions of his work came 8 years after his death.
Arthur Jensen: Wrote a controversial paper in 1969, went on to publish 400 articles, and remained a professor for the rest of his life.
Hans Eysenck: The most cited living psychologist in peer-reviewed scientific journal literature. It took more than 20 years before any of his papers were retracted.
There's a lot of published articles about the "contrary view" that you can read. You can also read the rebuttals by the current scientific consensus (cited above).
> The analogous claim would therefore be that “although height differences have a large hereditary component, it does not follow that disparities in height between families have a genetic basis.” This seems very clearly false to me.
But this is not an analogous claim since you're talking about disparities between families. The analogous claim would be: "although height differences have a large hereditary component, it does not follow that disparities in height between groups have a genetic basis".
A very simple example for height[1]: the Japanese grew 10 cm taller from the mid-20th century to the early 2000s. Originally people thought that the shortness of the Japanese was related to their genetics, but this rapid growth (which also correlates with their improved economy) suggests that the group difference between the Japanese and other groups was not caused by the genetic component of height variance.
[1]: Secular Changes in Relative Height of Children in Japan, South Korea and Taiwan: Is “Genetics” the Key Determinant? https://biomedgrid.com/pdf/AJBSR.MS.ID.000857.pdf
- Your first link (Wikipedia) directly contradicts your examples:
> Although IQ differences between individuals have been shown to have a large hereditary component, it does not follow that disparities in IQ between groups have a genetic basis[18][19][20][21]. The scientific consensus is that genetics does not explain average differences in IQ test performance between racial groups.[22][23][24][25][26][27].
- No, sorry. I was just remembering where I've typically seen sequential consistency being used. For instance, Peterson's algorithm was what I had in mind. Spinlock is indeed a good example (although a terrible algorithm which I hope you haven't seen used in practice) of a mutex algorithm which only requires acquire-release.
- A mutex would be the most trivial example. I don't believe that it is possible to implement, in the general case, with only acquire-release.
Sequential consistency mostly becomes relevant when you have more than two threads interacting with both reads and writes. However, if you only have a single consumer (i.e. only one thread reading) or a single producer (i.e. only one thread writing), then acquire-release semantics end up behaving sequentially, since the single consumer/producer implicitly enforces a sequential ordering. I can potentially see some multi-producer multi-consumer lock-free queues needing sequentially consistent atomics.
I think it's rare to see atomics with sequential consistency in practice since you typically either choose (1) a mutex to simplify the code at the expense of locking or (2) acquire-release (or weaker) to minimize the synchronization.
- Acquire-release ordering provides ordering guarantees for all memory operations. If an acquire load observes a release store, the thread is also guaranteed to see all the previous writes done by the other thread, regardless of the atomicity of those writes. (There still can't be any other data races, though.)
This volatile keyword appears to only consider that specific memory location, whereas the Volatile class seems to implement acquire-release.
- Here's a few workflows that I really enjoy in jj:
- While I'm working on something I can do `jj desc` and start writing the commit message. Every edit is automatically added to this change.
- My work tree is dirty and I quickly want to switch to a clean slate. In Git I either (1) do `git stash`, where I'm definitely going to forget about it, or (2) do `git commit -a -m wip && git switch -c some-random-branch-name`. In jj: `jj new @-`. That's it! If I run `jj log` then my previous change shows up. No need to come up with arbitrary names. It's so refreshing to move changes around.
- I'm working on a stack of changes and sometimes need to make edits to different parts. In Git (1): Each change is its own branch and I need to switch around and do a bunch of rebases to keep them in sync. In Git (2): I have one branch with multiple commits. I make changes towards the final state and then do `git rebase -i` to move them upwards to where they belong. Biggest downside: I'm not actually testing the changes at the point where they end up and I'm not guaranteed it makes sense. In jj: I do `jj new <CHANGE>` to make changes further up in the stack. Once I'm happy with it I do `jj squash` and every dependent change is automatically rebased on top.
- And finally: I can solve merge conflicts when I want to! If any rebasing leads to a merge conflict I don't have to deal with it right away.
- This is one of the reasons I find it so silly when people disregard Zig «because it’s just another memory-unsafe language»: there’s plenty of innovation within Zig, especially related to comptime and metaprogramming. I really hope other languages are paying attention and steal some of these ideas.
«inline else» is also a very powerful tool for abstracting away code with no runtime cost.
- Lock-free data structures do not guarantee higher throughput. They guarantee lower latency, which often comes at the expense of throughput. A typical approach when implementing a lock-free data structure is to allow one thread to "take over" the execution of another by repeating parts of its work. This ensures the system as a whole makes progress even if one thread isn't being scheduled, which is mainly useful when you have CPUs competing for work in parallel.
The performance of high-contention code is really tricky to reason about and depends on a lot of factors. Just replacing a mutex with a lock-free data structure will not magically speed up your code. Eliminating the contention completely is typically a much better approach.
- It’s encoded using the spec that binary data in headers should be enclosed by colons: https://www.rfc-editor.org/rfc/rfc8941.html#name-byte-sequen...
- The opposite of probabilistic is not deterministic in this context. This is not about «drawing a random number», but about the balancing being dependent on the input data. «With high probability» here means «the majority of possible input data leads to a balanced structure».
If it were not probabilistic, the balancing would be guaranteed in all cases. That typically means the structure stores balancing information so it can detect when something is unbalanced and repair it. Here we just hash the content without caring about the current balance, and it turns out that for most inputs the result is fine.
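One way to see this concretely (a toy illustration, not the article’s data structure): insert sorted keys into a plain binary search tree, once directly and once keyed by a hash of the content. The tree stores no balance metadata at all; the hashed version comes out shallow simply because hash values look random, even though the input is adversarially sorted.

```ruby
require 'zlib' # Zlib.crc32 as a cheap, deterministic content hash

Node = Struct.new(:key, :left, :right)

# Plain BST insert: no rotations, no balance bookkeeping of any kind.
def insert(root, key)
  return Node.new(key) if root.nil?
  node = root
  loop do
    child = key < node.key ? :left : :right
    if node[child].nil?
      node[child] = Node.new(key)
      break
    end
    node = node[child]
  end
  root
end

def depth(node)
  return 0 if node.nil?
  1 + [depth(node.left), depth(node.right)].max
end

plain = nil
hashed = nil
(1..1024).each do |i|
  key = format('%08d', i)                  # sorted input: worst case for a BST
  plain  = insert(plain, key)
  hashed = insert(hashed, Zlib.crc32(key)) # hash first: balanced for most inputs
end

puts depth(plain)  # 1024 — degenerates into a linked list
puts depth(hashed) # around log2(1024) times a small constant
```

Same tree code, no repair logic anywhere; only the key distribution changed.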
- > … but it seems like the judge simply doesn't get the objections. And the reasoning is really strange
The full order is linked in the article: https://cdn.arstechnica.net/wp-content/uploads/2025/06/NYT-v.... If you read it, things become clearer: the person who complained filed a specific "motion to intervene", which has a strict set of requirements, and those requirements were not met. IANAL, but it doesn't seem too strange to me.
> Also, rejecting something out of hand simply because a lawyer didn't draft it seems really antithetical to what a judge should be doing. There is no requirement for a lawyer to be utilized.
This is also mentioned in the order: an individual has the right to represent themselves, but a corporation does not, and this was initially filed by a corporation. The judge did exactly what a judge is supposed to do: interpret the law as written.
- In most situations panicking and dereferencing a null pointer lead to the exact same outcome: the binary crashes. You can unwind and catch panics in Rust, but I’m not sure that would have helped in this scenario, as it might have gone straight into the faulty code again.
However, I would assume that the presence of an «unwrap» would have been caught in code review, whereas it’s much harder to be aware of which pointers can be null in Java/C++.
- Here’s a quite recent interesting paper about this: https://dl.acm.org/doi/abs/10.1145/3643027
> In this article, we study the convergence of datalog when it is interpreted over an arbitrary semiring. We consider an ordered semiring, define the semantics of a datalog program as a least fixpoint in this semiring, and study the number of steps required to reach that fixpoint, if ever. We identify algebraic properties of the semiring that correspond to certain convergence properties of datalog programs. Finally, we describe a class of ordered semirings on which one can use the semi-naïve evaluation algorithm on any datalog program.
It’s quite neat, since this allows them to represent linear regression, gradient descent, and shortest path (APSP) within a framework very similar to regular Datalog.
They have a whole section on the necessary condition for convergence (i.e. termination).
- See also: https://www.tumblr.com/accidentallyquadratic
Quadratic complexity sits in an awkward sweet spot: Fast enough for medium-sized n to pass first QA, but doomed to fail eventually as n grows.
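A classic member of that genre, sketched here with made-up helper names: rebuilding a string with `+` on every iteration copies everything accumulated so far, so n appends cost O(n²) in total, while in-place append is linear. Both pass a small test; only one survives large inputs.

```ruby
# O(n^2): `out + item` allocates a new string and copies all of `out` each time.
def join_quadratic(items)
  out = ''
  items.each { |item| out = out + item }
  out
end

# O(n): `<<` appends in place (amortized constant time per appended byte).
def join_linear(items)
  out = ''
  items.each { |item| out << item }
  out
end

items = ['x'] * 20_000
puts join_quadratic(items) == join_linear(items) # => true (same result, very different cost)
```

At QA-sized inputs both finish instantly; the quadratic one only reveals itself once n grows.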
- From the article:
> It marks a piece of code to be treated as data (to be sent to the client).
> This means that whoever imports onClick from the backend code won’t get an actual onClick function—instead, they’ll get '/js/chunk123.js#onClick' or something like that identifying how to load this module. It gives you code-as-data. Eventually this code will make it to the client (as a <script>) and be evaluated there.
The point of quoting in Lisp is that you get the code actually as data: You can introspect it ("how many string literals are there in here"), rewrite it ("unroll every loop once"), serialize it, store it in a database. And more importantly: The code is structured in the same way as any data in a regular program (lists). It's not hard for the developer to do any of these things.
If I get back '/js/chunk123.js#onClick' I simply have a reference, which I can use to invoke it remotely. The code appears to still be sent as bundled JavaScript, evaluated as usual, and then linked together with the reference. There's a small connection to code-as-data in the sense that you need to be able to serialize code in order to share it between a server and a client, but other than that I don't really see much of a connection.
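For contrast, here is what actual code-as-data manipulation looks like, sketched with Ruby's stdlib parser (`Ripper`) rather than Lisp quoting: the source becomes a plain nested array that ordinary code can walk, e.g. to answer "how many string literals are in here".

```ruby
require 'ripper' # stdlib parser: Ruby source -> nested arrays (an S-expression)

src = <<~RUBY
  greeting = "hello"
  puts greeting + " " + "world"
RUBY

tree = Ripper.sexp(src) # the code, now just data

# Ordinary data traversal over the code: count the string literals.
def count_string_literals(node)
  return 0 unless node.is_a?(Array)
  own = node.first == :string_literal ? 1 : 0
  own + node.sum { |child| count_string_literals(child) }
end

puts count_string_literals(tree) # => 3
```

An opaque '/js/chunk123.js#onClick' reference offers none of this: you can invoke it, but you can't inspect or rewrite what it points to.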
- However, looking at the recent commits, it doesn't quite look like the most solid foundation: https://github.com/shuaimu/rusty-cpp/commit/480491121ef9efec...
… which then 30 minutes later is removed again because it turns out to be completely dead code: https://github.com/shuaimu/rusty-cpp/commit/84aae5eff72bb450...
There's also quite a lot of dead code. All of these warnings are about unused variables, functions, structs, and fields: