- The monotonic behavior is not the default, but I would also be happier if it were removed from the spec, or at least marked with all the appropriate warning signs in all the libraries implementing it.
But I don't think UUIDv7 solves the issue by "having less quirks". Just like you'd have to be careful to use the non-monotonic version of ULID, you'd have to be careful to use the right version of UUID. You also have to hope that all of your UUID consumers (which would almost invariably try to parse or validate the UUID, even if they do nothing with it) support UUIDv7 or don't throw on an unknown version.
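To make the consumer problem concrete, here is a rough sketch (a hypothetical helper, not taken from any real UUID library) of the kind of strict validation that breaks on newer versions: the version is just a nibble in the canonical string, and a validator written before UUIDv7 existed may simply reject anything above 5:

```go
package main

import (
	"fmt"
	"strings"
)

// uuidVersion extracts the version nibble from a canonical UUID string.
// In "xxxxxxxx-xxxx-Mxxx-Nxxx-xxxxxxxxxxxx", M is the version.
func uuidVersion(u string) (int, error) {
	parts := strings.Split(u, "-")
	if len(parts) != 5 || len(parts[2]) != 4 {
		return 0, fmt.Errorf("malformed UUID: %q", u)
	}
	v := parts[2][0]
	if v < '0' || v > '9' {
		return 0, fmt.Errorf("non-numeric version nibble: %q", v)
	}
	return int(v - '0'), nil
}

func main() {
	// A made-up UUIDv7: note the '7' at the start of the third group.
	v, err := uuidVersion("0190163d-8694-739b-aea5-966c26f8ad91")
	fmt.Println(v, err)

	// A strict pre-v7 consumer might do this and reject valid v7 IDs:
	if v > 5 {
		fmt.Println("unknown UUID version: rejected")
	}
}
```

A consumer like this works silently for years, then starts rejecting IDs the moment a producer upgrades to UUIDv7.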
- Let's revisit the original article[1]. It was not about arguments, but about the pain of writing callbacks and even async/await compared to writing the same code in Go. It had 5 well-defined claims about languages with colored functions:
1. Every function has a color.
This is true for the new Zig approach: functions that deal with IO are red, functions that do not need to deal with IO are blue.
2. The way you call a function depends on its color.
This is also true for Zig: Red functions require an Io argument. Blue functions do not. Calling a red function means you need to have an Io argument.
3. You can only call a red function from within another red function.
You cannot call a function that requires an Io object in Zig without having an Io in context.
Yes, in theory you can use a global variable or initialize a new Io instance, but this is the same as the workarounds you can do for calling an async function from a non-async function. For instance, in C# you can write 'Task.Run(() => MyAsyncMethod()).Wait()'.
4. Red functions are more painful to call.
This is true in Zig again, since you have to pass down an Io instance.
You might say this is not a big nuisance and almost all functions require some argument or another... But by this measure, async/await is even less troublesome. Compare calling an async function in JavaScript to an Io-colored function in Zig:

    function foo() {
      blueFunction(); // We don't add anything
    }
    async function bar() {
      await redFunction(); // We just add "await"
    }

And in Zig:

    fn foo() void {
      blueFunction();
    }
    fn bar(io: Io) void {
      redFunction(io); // We just add "io".
    }

Zig is more troublesome since you don't just add a fixed keyword: you need to add a variable that is passed along from somewhere.

5. Some core library functions are red.
This is also true in Zig: Some core library functions require an Io instance.
I'm not saying Zig has made the wrong choice here, but this is clearly not colorless I/O. And it's ok, since colorless I/O was always just hype.
---
[1] https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...
- Rust has concurrency issues for sure. Deadlocks are still a problem, as is lock poisoning, and sometimes dealing with the borrow checker in async/await contexts is very troublesome. Rust is great at many things, but safe Rust only eliminates certain classes of bugs, not all of them.
Regarding green threads: Rust originally started with them, but there were many issues. Graydon (the original author) has "grudgingly accepted" that async/await might work better for a language like Rust[1] in the end.
In any case, I think green threads and async/await are completely orthogonal to data race safety. You can have data race safety with green threads (Rust was trying to have data-race safety even in its early green-thread era, as far as I know), and you can also fail to have data-race safety with async/await (C# might have fewer data-race safety footguns than Go, but it's still generally unsafe).
- Runtime borrow checking panics if you use the non-try version, and if you're careful enough to use try_borrow() you don't even have to panic. Unlike Go, this can never result in a data race.
If you're using unsafe blocks you can have data races too, but that's the entire point of unsafe. FWIW, my experience is that most Rust developers never reach for unsafe in their life. Parts of the Rust ecosystem do heavily rely on unsafe blocks, but this still heavily limits their impact to (usually) well-reviewed code. The entire idea is that unsafe is NOT the default in Rust.
- I think the original sin of Go is that it neither allows marking fields or entire structs as immutable (like Rust does) nor does it encourage the use of builder pattern in its standard library (like modern Java does).
If, let's say, http.Client were functionally immutable (with all fields being private), and you had to set everything using a mutable (but inert) http.ClientBuilder, these bugs would not have been possible. You could still share a default client (or a non-default client) efficiently, without ever having to worry about anyone touching a mutable field.
- This is all very nice as an idea or a mythical background story ("Go was designed entirely around CSP"), but Go is not a language that encourages "sharing by communicating". Yes, Go has channels, but many other languages also have channels, and they are less error prone than Go[1]. For many concurrent use cases (e.g. caching), sharing memory is far simpler and less error-prone than using channels.
If you're looking for a language that makes "sharing by communicating" the default for almost every kind of use case, that's Erlang. Yes, it's built around the actor model rather than CSP, but the end result is the same, and with Erlang it's the real deal. Go, on the other hand, is not "built around CSP" and does not "encourage sharing by communicating" any more than Rust or Kotlin are. In fact, Rust and Kotlin are probably a little bit more "CSP-centric", since their channel interface is far less error-prone.
[1] https://www.jtolio.com/2016/03/go-channels-are-bad-and-you-s...
- You can argue about how likely code like that is, but both of these examples would result in a hard compiler error in Rust.
A lot of developers without much (or any) Rust experience get the impression that the Rust borrow checker is there to prevent memory leaks without requiring garbage collection, but that's only 10% of what it does. Most of the actual pain of dealing with borrow checker errors comes from its other job: preventing data races.
And it's not only Rust. The first two examples are far less likely even in modern Java or Kotlin for instance. Modern Java HTTP clients (including the standard library one) are immutable, so you cannot run into the (admittedly obvious) issue you see in the second example. And the error-prone workgroup (where a single typo can get you caught in a data race) is highly unlikely if you're using structured concurrency instead.
These languages are obviously not safe against data races like Rust is, but my main gripe about Go is that it's often touted as THE language that "Gets concurrency right", while parts of its concurrency story (essentially things related to synchronization, structured concurrency and data races) are well behind other languages. It has some amazing features (like a highly optimized preemptive scheduler), but it's not the perfect language for concurrent applications it claims to be.
- Most people in Japan live outside of the Yamanote circle in Tokyo. Rural and suburban supermarkets have parking lots (although in central areas they can still be quite small) and people still use cars for shopping trips, especially in the countryside.
It is true that grocery packages are much smaller than in the US (since Japanese houses, even in the countryside, are smaller, and I guess the average household size is smaller as well). Shopping carts in regular supermarkets are smaller than abroad, and are usually built to hold 1 or 2 shopping baskets you can also carry by hand.
But hey, we still have Costco in Japan, and package sizes and shopping cart sizes are just as big as they are in the US (although the parking lot is probably considerably more crowded). And Costco is extremely popular here. It's far messier than a Japanese supermarket and I do see inconsiderate people sometimes in Costco, but the cars are still parked nicely and most people do return their shopping carts. It would be interesting to compare Costcos in Japan and the US directly though.
- I'm afraid this article kinda fails at its job. It starts out with a very bold claim ("Zig is not only a new programming language, but it’s a totally new way to write programs"), but ends up listing a bunch of features that are not unique to Zig or even introduced by Zig: type inference (invented in the late 60s, first practically implemented in the 80s), anonymous structs (C#, Go, TypeScript, many ML-style languages), labeled breaks, functions that are not globally public by default...
It seems like this is written from the perspective of C/C++ and Java and perhaps a couple of traditional (dynamically typed) languages.
On the other hand, the concept that makes Zig really unique (comptime) is not touched upon at all. I would argue compile-time evaluation is not entirely new (you can look at Lisp macros back in the 60s), but the way Zig implements this feature and how it is used instead of generics is interesting enough to make Zig unique. I still feel like the claim is a bit hyperbolic, but there is a story that you can sell about Zig being unique. I wanted to read this story, but I feel like this is not it.
- This is a pretty in-depth overview of a complex topic, which unfortunately most people tend to dumb down considerably. Commonly cited articles such as "What Color is Your Function?" or "Revisiting Coroutines" by de Moura and Ierusalimschy are insightful, but they tend to pick on a subset of the properties that make up this complex topic of concurrency. Misguided commentators on HN often recommend these articles as reviews, but they are not reviews, and you are guaranteed to learn all the wrong lessons if you approach them this way.
This article looks like a real review. I only have one concern with it: it oversells M:N concurrency with green threads over async/await. If I understand correctly, it claims that async/await (as implemented by Rust, Python, C# and Kotlin - not JavaScript) is less efficient (both in terms of RAM and CPU) than M:N concurrency using green threads, and that its main advantages are that no GC is required, C library calls carry no extra cost, and the cost of using async functions is always explicit. This makes async/await great for a systems language like Rust, but it also pushes a hidden claim that Python, C# and Kotlin all made a mistake by choosing async/await. It's a more nuanced approach than the one people take by misreading the articles I mentioned above, but I think it's still misguided. I might also be reading this incorrectly, but then I think the article is just not being clear enough about the issues of cost.
To put it shortly: both green threads and async/await are significantly costlier than single-threaded code, but their cost manifests in different ways. With async/await, the cost mostly manifests at "suspension points" (wherever you write "await"), which are very explicit. With green threads, the cost is spread everywhere. The CPU cost of green threads includes not only the wrapping of C library calls (which is mentioned), but also the cost of resizing or segmenting the stack (since we cannot just preallocate a 1MiB stack for each coroutine). Go started out with segmented stacks and moved on to allocating a new small stack (2KiB IIRC) for each new goroutine and copying it to a new stack every time it needs to grow[1]. That mechanism alone carries its own overhead.
The other issue that is mentioned with regards to async/await but is portrayed as "resolved" for green threads is memory efficiency, but this couldn't be farther from the truth: when it's implemented as a state machine, async/await is always more efficient than green threads. Async/await allocates memory on every suspension, but it only saves the state that needs to be saved for this suspension (as an oversimplification we can say it only saves the variables already allocated on the stack). Green threads, on the other hand, always allocate extra space on the stack, so there would always be some overhead. Don't get me wrong here: green threads with dynamic stacks are considerably cheaper than real threads and you can comfortably run hundreds of thousands of them on a single machine. But async/await state machines are even cheaper.
I also have a few other nitpicks (maybe these issues come from the languages this article focuses on, mainly Go, Python, Rust and JavaScript):
- If I understand correctly, the article claims async/await doesn't suffer from "multi-threading risks". This is mostly true in Rust, Python with the GIL, and JavaScript, for different reasons that have more to do with each language than with async/await: JavaScript is single-threaded, Python (by default) has a GIL, and Rust doesn't let you write non-thread-safe code even if you're using plain old threads. But that's not the case with C# or Kotlin: you still need to be careful with async/await in these languages, just as you would when writing goroutines in Go. On the other hand, if you write Lua coroutines (which are equivalent to goroutines in Go), you can safely ignore synchronization unless you have a shared memory value that needs to be updated across suspension points.
- Most green thread implementations would block the host thread completely if you call a blocking function from a non-blocking coroutine. Go is an outlier even among the languages that employ green threads, since it supports full preemption of long-running goroutines (even if no C library code is called). But even Go only added full support for preemption in Go 1.14. I'm not quite sure since when long-running Cgo function calls have been preemptible, but this still shows that Go is doing its own thing here. If you have to use green threads in another language like Lua or Erlang, you shouldn't expect this behavior.
[1] https://blog.cloudflare.com/how-stacks-are-handled-in-go/
- ArrowKt is also worth a mention: https://arrow-kt.io/learn/typed-errors/
- I wish I could upvote this more. I can totally understand GP's sentiment, but we need to dispel the myth that results are just checked exceptions with better PR.
I think the first issue is the most important one, and this is not just an implementation issue. Java eschewed generics in its first few versions. This is understandable, because generics were quite a new concept back then, with the only mainstream languages implementing them being Ada and C++ - and the C++ implementation was brand new (in 1991) and quite problematic; it wouldn't have worked for Java. That being said, it was a mistake in hindsight, and it contributed to a lot of pain down the road. In this case, Java wanted to have exception safety, but the only way it could implement this was as another language feature that cannot interact with anything else.
Without the composability provided by the type system, dealing with checked exceptions was always a pain, so most Java programmers just ended up wrapping them with runtime exceptions. Using checked exceptions "correctly" meant extremely verbose error handling with a crushing signal-to-noise ratio. Rust just does this more ergonomically (especially with crates like anyhow and thiserror).
- It only goes through "apt the program", but apt is just serving as a method of installing a package, which is hosted on one of the configured apt sources.
Calling all software installed through apt "first party" is a wild stretch, since you can apply the same logic to git, wget, or a web browser. For instance, it would probably be correct to say that most Windows software is downloaded and installed through Chrome, but nobody in their right mind would claim Google owns the largest first party store for Windows.
- I think you meant to say that tokenization is usually done with UTF-8 and a single Japanese character generally takes 3 or more code units (i.e. bytes). Unicode itself is not the culprit (in fact, even with UTF-16 tokenization, most Japanese characters would fit in a single code unit, and the ones that won't are exceedingly rare).
I have to admit I have not encountered significant mistokenization issues in Japanese, but I'm not using LLMs in Japanese on a daily basis. I'm somewhat doubtful this can be a major issue, since frontier LLMs are absolutely in love with Emoji, and Emoji requires at least 4 UTF-8 bytes, while most Japanese characters are happy with just 3 bytes.
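For reference, a quick check of the byte counts in question (UTF-8 code units, as reported by Go's `len` on a string):

```go
package main

import "fmt"

func main() {
	// len() on a Go string counts UTF-8 bytes (code units).
	fmt.Println(len("a"))  // ASCII: 1 byte
	fmt.Println(len("日")) // typical kanji (U+65E5): 3 bytes
	fmt.Println(len("😀")) // emoji outside the BMP (U+1F600): 4 bytes
}
```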
- I think this sentence should be easily readable to a Flemish speaker: "A shprakh iz a dialekt mit an armey un flot"
https://en.wikipedia.org/wiki/A_language_is_a_dialect_with_a...
- Not sure if I should get bonus points for that, but if mappa means map, the ultimate origin is still Semitic. Latin seems to have taken the word mappa from a Canaanite language. The word mappa (and its older version "manpa") is attested in Mishnaic Hebrew (meaning a napkin or a tablecloth), although you could say Hebrew "re-loaned" the cartographic meaning - which is much newer.
- Yes, this is very weird. But the Go Playground[1] has always used 8 spaces per tab for some reason. I always found that very jarring, particularly since almost every other editor or documentation has settled on 4 spaces per tab.
- But even this statement is incorrect. FRP frameworks with Observables will remain useful in Java (as they have in other languages that already had coroutines). It's only the use of Observables as _an alternative for coroutines_ that is a transitional technology.
Maybe this is what Brian Goetz meant to say, but this is not what he said.
- I feel like Go's concept of readability is very Blub-oriented[1]. Can a Blub programmer read this line? Then it's readable. Sometimes Go fans would say that this code:
    result := []string{}
    for i := 0; i < items.Length(); i++ {
        item := items.Get(i)
        if item < threshold {
            result = append(result, convertToString(item))
        }
    }
    return result

is more readable than this code:

    return items
        .filter { it < threshold }
        .map { convertToString(it) }

The argument is that everybody understands what the loop does, it's just a stupid loop. Bring back an Algol 68, Pascal or C programmer from the 70s and they'd all understand what a for loop is. But my second example requires you to learn about filter and map and closures and implicit parameters like 'it'. Of course, once you do understand these very complicated concepts, the second example is far more readable: it clearly states WHAT the program does (filtering all values below the threshold and converting them to strings) rather than HOW it does that (which nobody cares about). "Readability" here is only counted from the narrow perspective of an imperative programmer who is not familiar with functional declarative data processing.
I feel the same about the "ro" examples in the OP. I don't particularly like the approach that ro takes (mostly because Go forces its hand, I assume), like having to put everything in an explicit pipe, but I find the example far more readable than the pure Go example which combines loops, channels and WaitGroups. That's far worse than the loop example I gave in this reply, to be honest, and I really don't know why people say this example is readable. I guess you can optimize it a little, but I always found both channels and WaitGroups unwieldy, unreadable and error prone. They are only "readable" in the narrow, perverted sense that has somehow become prevalent in the Go community, where "readability" is redefined to mean: no closures, no immutable values, no generics, no type safety and certainly nothing that smells like FP.
- I think there is an issue where reactive frameworks are massively overused in languages that have (or had) weak concurrency patterns. This was true for a while in JavaScript (before async/await became universally supported), but it's especially endemic in the Java world, particularly in the corners of it which refused to use Kotlin.
So yes, in this particular case, most of the usages of RxJava, Reactive Streams and particularly Spring Reactor are just there because programmers wanted pretty simple composition of asynchronous operations (particularly API calls) and all the other alternatives were worse: threads were too expensive (as Brian Goetz mentions), and while Java has had asynchronous I/O since Java 1.4 and proactor-style scheduling (CompletableFuture) since Java 8, using CompletableFutures to do everything with asynchronous I/O was just too messy. So a lot of programmers started using reactive frameworks, since this was the most ergonomic solution they had (unless they wanted to introduce Kotlin).
That's why you'd mostly see Mono<T> types if you look at a typical reactive Spring WebFlux project. These Mono/Single monads are essentially CompletableFutures with better ergonomics. I don't mean to say that hot and cold multi-value observables weren't used at all, but in many cases the operator chaining was pretty simple: gathering multiple inputs, mapping, reducing, potentially splitting. Most of this logic is easier to do with normal control flow when you've got proper coroutines.
But that's not all that reactive frameworks can give you. The cases where I'd choose a reactive solution over plain coroutines are few and pretty niche: to be honest, I've only reached for a reactive solution 3 or 4 times in my career. But they do exist:
1. Some reactive operators are trivially mapped to loops or classic collection operators (map/reduce/filter/flatMap/groupBy/distinct...). But everything that's time-bound is more complicated to implement as a simple loop, and the result is far less readable. Think about sample or debounce, for instance.
2. Complex operation chains do exist, and implementing them as a reactive pattern makes the logic far easier to test and reason about. I've had a case where I needed to implement a multi-step resource fetching logic, where an index is fetched first, and then resources are fetched based on the index and periodically refreshed with some added jitter to avoid a thundering herd effect, as well as retries with exponential backoff and predictable update interval ranges which are NOT affected by the retries (in other words: no, you can't just put a delay in a loop). My first implementation tried to model that with pure coroutines and it was a disaster. My second implementation was RxJava, which was quite decent, and then Kotlin Flow came out, which was a breeze.
3. I'm not sure if we should call this "Reactive" (since it's not the classic observable), but hot and cached single values (like StateFlow in Kotlin) are extremely useful for many UI paradigms. I found myself reaching for StateFlow extensively when I was doing Android programming (which I didn't do a lot of!).
In short, I strongly disagree with Brian Goetz that Functional Reactive Programming is transitional. I think he's seeing this issue from a very Java-centric perspective, where probably over 90% of the usage we've seen for it was transitional, but that's not all FRP is about. FRP will probably lose its status as a serious contender for a general tool for expressing asynchronous I/O logic, and that's fine. It was never designed to be that. But let's keep in mind that there are other languages than Java in the world. Most programming languages nowadays support some concept of coroutines that are not bound to OS kernel threads, and FRP is still very much alive.
- You can do a lot of things. Yes, there are formally verified programs and libraries written in C. But most C programs are not, including the GNU coreutils (although they are battle-tested). It's just that the effort involved is higher and the learning curve for verifying C code correctly is staggering. Rust provides a pretty good degree of verification out-of-the-box for free.
Like any trendy language, you've got some people exaggerating the powers of the borrow checker, but I believe Rust did generally bring a lot of good outcomes. If you're writing a new piece of systems software, Rust is pretty much a no-brainer. You could argue for a language like Zig (or Go, if you're fine with a GC and a bit more boilerplate), but that puts even more spotlight on the fact that C is just not a viable choice for most new programs anymore.
The rewrites-in-Rust are more controversial, and they get criticized just as much as they get hyped here on HN, but I think many of them brought a lot of good to the table. It's not (just?) because the C versions were insecure, but mostly because a lot of these new Rust tools replaced C programs that had become quite stagnant. Think of ripgrep, exa/eza, sd, nushell, delta and difft, dua/dust, the various top clones. And these are just command line utilities. Rewriting something in Rust is not an inherently bad idea if what you are replacing clearly needs a modern makeover, or if the component is security critical and the code you are replacing has a history of security issues.
I was always more skeptical about the coreutils rewrite project, because the only practical advantage it can bring to the table is more theoretical safety, and I'm not convinced that's enough. The Rust versions are guaranteed not to have memory or concurrency related bugs (unless someone used unverified unsafe code, or did something very silly like allocating a huge array and creating their own von Neumann architecture emulator just to prove you can write unsafe code in Rust). That's great, but they are also more likely to have compatibility bugs with the original tools. The value proposition here is quite mixed.
On the other hand, I think that if Ubuntu and other distros persist in trying to integrate these tools the long-term result will be good. We will get a more maintainable codebase for coreutils in the future.
- The Amish approach to technology is completely different from the Luddites, and it doesn't teach us anything about whether we, as a society, should accept or reject a certain technology.
To be more exact, there is no evidence that the historical Luddites were ideologically opposed to machine use in the textile industry. The Luddites seem to have been primarily concerned with wages and labor conditions, but used machine-breaking as an effective tactic. But to the extent that the Luddites did oppose machines - and in the way we came to understand the term Luddite later - this opposition was markedly different from the way the Amish oppose technology.
The Luddites who did oppose the use of industrial textile production machines were opposed to other people using these machines, as it hurt their own livelihood. If it were up to them, nobody would have been allowed to use these machines. Alternatively, they would have been perfectly happy if their livelihood could have been protected in some other manner, because that was their primary goal; but failing that, they took action to deprive other people of the ability to use the machines that affected their livelihood.
The Amish, on the other hand, oppose a much wider breadth of technology for purely ideological reasons. But they only oppose their own use of this technology. The key point here is that the Amish live in a world where everybody around them is using the very technologies they shun, and they do not make any attempt to isolate themselves from this world. The Amish have no qualms about using modern medicines, and although they largely avoid electricity and mechanized transportation, they still make significant use of diesel engine-based machinery, especially for business purposes, and they generally don't avoid chemical fertilizers or pesticides either.
So if we want to say the Amish are commercially successful and their life is pretty good, we have to keep in mind that they aren't a representation of how our society would look if we had collectively banned all the technologies they've personally avoided. Without mass industrialization, there would be no modern healthcare to eliminate child mortality, and there would be no diesel engines, chemical fertilizers or pesticides to boost crop yields and let family farm output shoot way past subsistence level.
In the end, the only lesson the Amish teach us is that you can selectively avoid certain kinds of technologies and carve yourself a successful niche in a wider, technologically advanced community.
- I'm not sure why you would put "protect" in scare quotes here. The protection against fingerprinting is very real. Having any installed fonts that didn't come with the OS (and that includes fonts installed by other programs) makes your computer a lot easier to fingerprint and track. Not everybody is interested in this protection, but it is very real.
It also doesn't seem to be enabled by default, since it tends to break some sites, as explained above. If you want to prevent Firefox from doing that, just don't set "Enhanced Tracking Protection" to Strict. You can even go for full Custom mode and enable "Protection from Suspected Fingerprinters" (which blocks some fonts, as described by GP) only for private windows.
- GPU acceleration has become so common for new terminals nowadays that I can't see the point in making it a headline feature. To be fair, the homepage has a more generic tagline ("A modern terminal for the 21st century") and a list of items. My problem is that these bullet items also don't say very much:
24-bit color is even more ubiquitous than GPU acceleration. Apparently even the builtin macOS Terminal will support 24-bit color in Tahoe, and I think Windows Terminal has been supporting that since it was released (which is more than 5 years ago). Even image support is kinda old news: many terminals support at the very least Sixel or the iTerm image protocol. Ligatures and splits are also quite common.
It would be more interesting if we started comparing terminals by the details of these features, since the devil really is in the details. For instance, not all image protocols are made equal. Sixels are very slow, while the iTerm protocol is quite limited - you have very little control over where the image is placed. The Kitty graphics protocol is the most advanced one, but it has several different image placement methods: direct positioning, Unicode placeholders, and placement relative to other images. Besides that, there are a couple of other features such as animation and communicating back with the terminal to get image IDs. I've seen several terminals claiming Kitty image protocol support, but I've never seen any of them put out a matrix of which features they support (other than Kitty itself, which obviously supports everything).
The Kitty keyboard protocol is another thing that is quite complex. I've run into issues in the past with both iTerm and WezTerm behaving differently from Kitty, and with some programs which expected the Kitty behavior.
- > the processor, which is a glorified, hardware implemented PDP-11 emulator.
This specific claim seems like just gratuitously rewriting history.
I can get how you'd feel C (and certain dialects of C++) is "closer to the metal" in a certain sense: C supports very few abstractions, and with fewer abstractions there are fewer "things" between you and "the metal". But this is as far as it goes. C does not represent - by any stretch of the imagination - an accurate model of the computation or memory of a modern CPU. It does stay close to the PDP-11, but calling modern CPUs "glorified hardware emulators of the PDP-11" is just preposterous.
The PDP-11 was an in-order CISC processor with no virtual memory, cache hierarchy, branch prediction, symmetric multiprocessing or SIMD instructions. Some modern CPUs (namely the x86/x64 family) do emulate a CISC ISA on top of something that is probably more RISC-like, but that's as far as we can say they are trying to behave like a PDP-11 (even though the intention was to behave like a first-gen Intel Pentium).
Edit: even better. It was both. There is a signature type confusion attack going on here. I still haven't watched the entire thing, but it seems that unlike gpg, they do have to specify --cleartext explicitly for Sequoia, so there is no confusion going on in that case.