
Neither of those gives memory safety, which is what the parent comment is about. If you release the temporary allocator while a pointer to some data is live, you get use after free. If you defer freeing a resource, and a pointer to the resource lives on after the scope exit, you get use after free.
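A minimal sketch of that hazard, written in Rust with raw pointers standing in for a Jai/Zig-style deferred free (assumed here purely for illustration): the allocation is released when the scope ends, but a pointer to it escapes, so any later dereference is use-after-free.

```rust
// Sketch only: `Box` drop at scope exit plays the role of a deferred free.
pub fn escapes_scope() -> *const i32 {
    let data = Box::new(42); // freed when this scope ends, like `defer free(data)`
    let p: *const i32 = &*data;
    p // `data` is dropped here; `p` now dangles, and dereferencing it is UAF
}
```

Returning the raw pointer is itself fine; the bug only fires on a later dereference, which is exactly why these errors are easy to miss.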

The dialectic begins with OP, and has pcw's reply and then mine. It does not begin with pcw's comment. The OP complains about Rust not because they imagine Jai is memory safe, but because they feel the rewards of its approach significantly outweigh the costs of Rust.

pcw's comment was about tradeoffs programmers are willing to make -- and it paints the picture more black-and-white than the reality, and more black-and-white than OP does.

While technically true, it still simplifies memory management a lot. The tradeoff is in fact good enough that I would pick it over a borrow checker.
I don't understand this take at all. The borrow checker is automatic and works across all variables. Defer et al. require you to remember to use them, and to use them correctly. It takes more effort to use defer correctly, whereas Rust's borrow checker works for you without needing to do much extra at all! What am I missing?
What you're missing is that Rust's borrowing rules are not the definition of memory safety. They are just one particular approach that works, but with tradeoffs.

Namely, in Rust it is undefined behavior for multiple mutable references to the same data to exist, ever. And it is not enough for your program to avoid creating multiple mutable references - the compiler also has to be able to prove that it can't.

That rule prevents memory corruption, but it outlaws many programs that break the rule yet are actually otherwise memory safe, and it also outlaws programs that follow the rule but where the compiler isn't smart enough to prove that the rule is being followed. That annoyance is the main thing people are talking about when they say they are "fighting the borrow checker" (when comparing Rust with languages like Odin/Zig/Jai).
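A small sketch of the second failure mode (the function names here are made up for illustration): swapping two distinct slice elements is memory safe whenever the indices differ, but the compiler can't prove the indices are disjoint, so it rejects the two simultaneous `&mut` borrows. Restructuring with `split_at_mut` makes the disjointness provable.

```rust
// Rejected: the compiler can't see that i != j, so two `&mut` into one
// slice look like aliasing mutable references.
//
// fn swap_naive(v: &mut [i32], i: usize, j: usize) {
//     let a = &mut v[i];
//     let b = &mut v[j]; // error[E0499]: cannot borrow `*v` as mutable more than once
//     std::mem::swap(a, b);
// }

// Accepted: `split_at_mut` structurally proves the two halves don't alias.
pub fn swap_disjoint(v: &mut [i32], i: usize, j: usize) {
    assert!(i < j, "caller must pass i < j");
    let (left, right) = v.split_at_mut(j);
    std::mem::swap(&mut left[i], &mut right[0]);
}
```

(The standard library's `<[T]>::swap` exists precisely so you don't have to write this by hand; the point is only that the naive version is safe but unprovable.)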

That is true of `&mut T`, but `&mut T` is not the only way to do mutation in Rust. The set of possible safe patterns gets much wider when you include `&Cell<T>`. For example see this language that uses its equivalent of `&Cell<T>` as the primary mutable reference type, and uses its equivalent of `&mut T` more sparingly: https://antelang.org/blog/safe_shared_mutability/
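A minimal sketch of what `&Cell<T>` buys you (function name invented for the example): two aliasing shared references may both mutate the same value, with no UB, because `Cell` only permits whole-value get/set and never hands out interior references.

```rust
use std::cell::Cell;

// Both `c` and `alias` point at the same Cell; mutating through either is sound.
pub fn bump_through_aliases(c: &Cell<i32>) {
    let alias = c; // aliasing a `&Cell` is fine
    c.set(c.get() + 1);
    alias.set(alias.get() + 10);
}
```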
Rust doesn't outlaw them. It just forces you to document where safety workarounds are used.
> The borrow checker is automatic and works across all variables.

Not that I'm such a Rust hater, but this is also a simplification of the reality. The term "fighting the borrow checker" is these days a pretty normal saying, and it implies that the borrow checker may be automatic, but 90% of its work is telling you: no, try again. That is hardly "without needing to do much extra at all".

That's what you're missing.

What's hilarious about "fighting the borrow checker" is that it's about the lexical lifetime borrow checking, which went away many years ago - fixing that is what "Non-lexical lifetimes" is about, which if you picked up Rust in the last 4-5 years you won't even know was a thing. In that era you actually did need to "fight" to get obviously correct code to compile, because the checker was only looking at the lexical structure.
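A sketch of the kind of obviously-correct code that lexical borrow checking rejected and NLL accepts: the shared borrow's last use comes before the mutation, so under NLL the borrow ends there instead of at the end of the scope.

```rust
// Pre-NLL, `first` was considered live until the end of the function, so
// `v.push(value)` was an error. With NLL the borrow ends after `*first`.
pub fn read_then_push(v: &mut Vec<i32>) -> i32 {
    let first = &v[0];  // shared borrow of `v`
    let value = *first; // last use of the borrow
    v.push(value);      // fine today; rejected under lexical lifetimes
    value
}
```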

Because this phrase existed, it became the thing people latch onto as a complaint, often even when there is no borrowck problem with what they were writing.

Yes, of course, when you make lifetime mistakes the borrowck means you have to fix them. It's true that in a sense in a GC language you don't have to fix them (although the consequences can be pretty nasty if you don't) because the GC will handle it - and that in a language like Jai you can just endure the weird crashes (but remember this article: the weird crashes aren't "Undefined Behaviour", apparently, even though that's exactly what they are).

As a Rust programmer I'm comfortable with the statement that it's "without needing to do much extra at all".

I appreciate what you're saying, though doesn't undefined behavior have to do with the semantics of execution as specified by the language? Most languages outright decline to specify multiple threads of execution, and instead provide them as a library. I think C started that trend. I'm not sure if Jai even has a spec, but the behavior you're describing could very well be "unspecified" rather than "undefined", and that's a distinction some folks care about.

This being said, yes Rust is useful to verify those scenarios because it _does_ specify them, and despite his brash takes on Rust, Jon admits its utility in this regard from time to time.

>In that era you actually did need to "fight" to get obviously correct code to compile because the checking is only looking at the lexical structure.

NLL's final implementation (Polonius) hasn't landed yet, and many of the original cases that NLL were meant to allow still don't compile. This doesn't come up very often in practice, but it sure sounds like a hole in your argument.

What does come up in practice is partial borrowing errors. It's one of the most common complaints among Rust programmers, and it definitely qualifies as having to fight/refactor to get obviously correct code to compile.
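A sketch of the partial-borrow complaint (types and method names invented for the example): the commented-out version touches disjoint fields and is obviously fine, but a `&mut self` method call borrows all of `self`, so it conflicts with the outstanding field borrow and forces a refactor.

```rust
pub struct Counter {
    pub count: u32,
    pub log: Vec<String>,
}

impl Counter {
    fn push_log(&mut self, msg: &str) {
        self.log.push(msg.to_string());
    }

    // Rejected: the helper call borrows all of `*self` while `c` still
    // borrows `self.count`, even though the fields are disjoint.
    //
    // pub fn bump(&mut self) {
    //     let c = &mut self.count;
    //     self.push_log("bump"); // error[E0499]: cannot borrow `*self` as mutable more than once
    //     *c += 1;
    // }

    // The usual workaround: inline the helper or borrow fields directly.
    pub fn bump(&mut self) {
        self.count += 1;
        self.log.push("bump".to_string());
    }
}
```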

Maybe I'm spoiled because I work with Rust primarily these days but "fighting the borrow checker" isn't really common once you get it.
A lot has been written about this already, but again I think you're simplifying here by saying "once you get it". There are a bunch of possibilities for what's happening:

1. The borrow checker is indeed a free lunch.
2. Your domain lends itself well to Rust; other domains don't.
3. Your code is more complicated than it would be in other languages in order to please the borrow checker, but you are unaware because it's just the natural process of writing code in Rust.

There are probably more things that could be going on, but I think the point is clear.

I certainly doubt it's #1, given the high volume of very intelligent people who have had negative experiences with the borrow checker.

If your use case can be split into phases you can just allocate memory from an arena, copy out whatever needs to survive the phase at the end and free all the memory at once. That takes care of 90%+ of all allocations I ever need to do in my work.

For the rest you need more granular manual memory management, and defer is just a convenience in that case compared to C.

I can have graphs with pointers all over the place during the phase; I don't have to explain anything to a borrow checker, and it's safe as long as you are careful at the phase boundaries.

Note that I almost never have things that need to survive a phase boundary, so in practice the borrow checker is just a nuisance in my work.

There are other use cases where this doesn't apply, so I'm not "anti borrow checker", but it's a tool, and I don't need it most of the time.

You can explain this sort of pattern to the borrow checker quite trivially: slap a single `'arena` lifetime on all the references that point to something in that arena. This pattern is used all over the place, including rustc itself.

(To be clear I agree that this is an easy pattern to write correctly without a borrow checker as well. It's just not a good example of something that's any harder to do in Rust, either.)
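A minimal sketch of that pattern (type and function names invented; the "arena" here is just a preallocated Vec, but a real bump allocator such as the `typed_arena` crate works the same way): every reference into the arena carries the same `'arena` lifetime, so the graph can be cyclic and the borrow checker only cares that nothing outlives the arena.

```rust
use std::cell::Cell;

pub struct Node<'arena> {
    pub value: i32,
    // `Cell` lets us link nodes after allocation, even into a cycle.
    pub next: Cell<Option<&'arena Node<'arena>>>,
}

// Wire the nodes into a cycle -- pointers "all over the place" within the phase.
pub fn wire_cycle<'arena>(nodes: &'arena [Node<'arena>]) {
    let n = nodes.len();
    for i in 0..n {
        nodes[i].next.set(Some(&nodes[(i + 1) % n]));
    }
}

// Follow `next` pointers for `steps` hops, summing values along the way.
pub fn sum_walk<'arena>(start: &'arena Node<'arena>, steps: usize) -> i32 {
    let mut total = 0;
    let mut cur = start;
    for _ in 0..steps {
        total += cur.value;
        cur = cur.next.get().expect("cycle is wired");
    }
    total
}
```

When the phase ends, dropping the arena frees everything at once, and the `'arena` lifetime guarantees no reference survives it.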

I remember having multiple issues doing this in Rust, but can't recall the details. Are you sure I would just be able to have whatever refs I want and use them without the borrow checker complaining about things that are actually perfectly safe? I don't remember that being the case.

Edit: reading wavemode's comment above ("Namely, in Rust it is undefined behavior for multiple mutable references to the same data to exist, ever. And it is also not enough for your program to not create multiple mut - the compiler also has to be able to prove that it can't."), I think that was at least one of the problems I had.
