Managing FFI will always require cooperation with the GC if there is one. If the GC doesn't expose adequate APIs for that cooperation, that feels like more of a design problem with that GC than a fact of nature. You shouldn't be trying to "trick" the compiler/runtime into keeping your thing live until you've finished using it: you should tell it how long you need it kept live, and it should listen to you.
In other words, managing FFI resources will remain "manual" or otherwise deterministic. The mechanism will cooperate with GC by releasing an object to be collected once its interaction with FFI is done.
In the absence of finalizers, I suspect, FFI resources could also be garbage-collected.
It looks like traditional GC-based languages (like Java or Lisp) are hurt by the absence of static data flow analysis, which would guarantee that a finalizer cannot revive the object being collected (e.g. by creating a new live reference to it elsewhere). Finalizers can likely be made safe enough if their code is more restricted; that would still allow many reasonable finalizers that calmly release external resources.
An FFI resource of this kind needs to be finalized via the FFI, almost by definition. So the problem isn't whether you can do data flow analysis in the host language, it's whether you can do sufficient analysis of the language that you're embedding (assuming what you're embedding isn't an opaque call to some library for which you only have an ABI, which is the usual way to do FFI).
> Finalizers can likely be made safe enough if their code is more restricted; that would still allow many reasonable finalizers that calmly release external resources.
If a finalizer calls external code to release external resources (a not uncommon use case), there’s no way static data flow analysis can determine that external code doesn’t make a call back into the VM that revives objects, is there?
Yes, it's basically a kind of RAII. The FFI needs to add the data as a GC root whilst it's holding a reference to it, and release it when it's done. There are papers discussing this explicitly for the case of Ocaml, though I don't have a formal reference right now.
The LuaJIT example isn't correct though, the lifetime of garbage collected objects is clearly documented: https://luajit.org/ext_ffi_semantics.html#gc
In the example, `blob` will not be collected because it is reachable from the `blob` argument local variable (IOW it is on the Lua stack). `ffi.string()` copies the string data into a new Lua string, and the lifetime of blob is guaranteed until the return of the function. So not sure what the issue is.
function blob_contents(blob) -- <- this ensures liveness until past return
  local len_out = ffi.new('unsigned int')
  local contents = hb.hb_blob_get_data(blob, len_out)
  local len = len_out[0]
  return ffi.string(contents, len)
end
Unfortunately things aren't so simple, as when doing JIT compilation, LuaJIT _will_ try to shorten the lifetimes of local variables. Using the latest available version of LuaJIT (https://github.com/LuaJIT/LuaJIT/commit/0d313b243194a0b8d239...), the following reliably fails for me:
local ffi = require"ffi"
local function collect_lots()
  for i = 1, 20 do collectgarbage() end
end
local function f(s)
  local blob = ffi.new"int[2]"
  local interior = blob + 1
  interior[0] = 13 -- should become the return value
  s:gsub(".", collect_lots) -- runs the GC mid-function once s is non-empty
  return interior[0] -- kept alive by blob?
end
for i = 1, 60 do
  -- the string is empty for i < 60, so the early iterations only warm up
  -- the JIT; the final iteration actually invokes collect_lots
  local str = ("x"):rep(i - 59)
  assert(f(str) == 13) -- can fail!!
end
Well that is from 3 weeks ago. If that remains then it’s a bug or the documentation is wrong.
What are the rules for keeping a GC object alive? What earthly useful meaning can "Lua stack" have in the FFI GC documentation if not local variable bindings, since those are the only user-visible exposure of it in the language?
From the LuaJIT docs:
So e.g. if you assign a cdata array to a pointer, you must keep the cdata object holding the array alive as long as the pointer is still in use:
ffi.cdef[[
typedef struct { int *a; } foo_t;
]]
local s = ffi.new("foo_t", ffi.new("int[10]")) -- WRONG!
local a = ffi.new("int[10]") -- OK
local s = ffi.new("foo_t", a)
-- Now do something with 's', but keep 'a' alive until you're done.
What on earth does "OK" here mean if not the local variable binding? It's the expectation because this is what it says on the tin.
This then isn't a discussion about fundamental issues or "impossibilities" with GC, but about poor language implementations not following their own specifications, or not having them.
Since LuaJIT does not have an explicit pinning interface, the expectation that a local variable binding persists until the end of its scope is pretty basic. If your bug case is expected behavior, then even the line `interior[0] = 13` is undefined, and so would be everything after `local s` in the documentation example, i.e. you can do absolutely nothing with a pointed-to cdata until you pin it in a table. Who would want to use that?
You're absolutely right. I'm not particularly familiar with LuaJIT so when I read the article I got the impression the LuaJIT GC semantics weren't documented. Looks like the LuaJIT behavior is well defined and the implementation isn't keeping its own promises.
The argument is that the JIT might realise that blob is never used beyond that line, and collect it early. In general that would be a desirable feature.
I know it says this: "The semantics of LuaJIT do not prescribe when GC can happen and what values will be live, so the GC and the compiler are not constrained to extend the liveness of blob to, say, the entirety of its lexical scope. "
But it is flat wrong. From the LuaJIT documentation:
"All explicitly (ffi.new(), ffi.cast() etc.) or implicitly (accessors) created cdata objects are garbage collected. You need to ensure to retain valid references to cdata objects somewhere on a Lua stack, an upvalue or in a Lua table while they are still in use. Once the last reference to a cdata object is gone, the garbage collector will automatically free the memory used by it (at the end of the next GC cycle)."
The Lua stack in this case includes all the local variables in that function scope.
It's a non-issue/straw man and is common sense.
If LuaJIT FFI worked the way the author supposed, it would be near impossible to use practically.
“It is perfectly valid to collect blob after its last use”
This is a useless statement. It's perfectly "valid" for LuaJIT to not even read your source code and exit immediately, but that isn't what it does, because it would be useless. What counts as a reference in both PUC Lua and LuaJIT is defined.
As far as the desirability of finer-grained liveness goes, Lua has block scope (do ... end), but in practice LuaJIT does well at inlining, so functions ought to be short anyway.
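For instance, a do ... end block bounds a cdata's scope explicitly:
local ffi = require"ffi"
do
  local scratch = ffi.new("int[10]")
  -- ... use scratch ...
end -- scratch's lexical scope ends here; the array becomes collectible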
> or attempt to manually extend the lifetime of a finalizable object, and then pray the compiler and GC don’t learn new tricks to invalidate your trick.
This is also silly. There is no reason whatsoever that a GC can’t offer an actual API to keep an object alive. From the LuaJIT docs:
> An existing finalizer can be removed by setting a nil finalizer, e.g. right before explicitly deleting a resource:
https://luajit.org/ext_ffi_api.html#ffi_gc
It’s not a trick, it’s a documented interface.
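In LuaJIT that API is ffi.gc. A minimal sketch of the documented idiom (attach a finalizer, later detach it to free deterministically):
local ffi = require"ffi"
ffi.cdef[[
void *malloc(size_t size);
void free(void *ptr);
]]
-- the GC frees the allocation via the finalizer if we never do it ourselves
local p = ffi.gc(ffi.C.malloc(100), ffi.C.free)
-- ... use p ...
ffi.gc(p, nil) -- remove the finalizer,
ffi.C.free(p) -- then release deterministically instead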
GC is usually not specified to happen at particular times, nor to say which values are definitely going to be GCed.
Instead it relies on the language semantics, so that any value which is used later in the program, is not going to be GCed. How and when the runtime system determines that a value is not going to be used again is an optimization problem, not a correctness problem.
So everything you quote Lua as saying here is consistent.
The thing is that it only considers "used later" as "used later by the Lua program".
Or rather, it only considers Lua values as "values". A value stored in non-managed memory is not a value. It's not GCed. The `ffi.new`-created Lua value is, and its finalizer happens to free the native memory that the pointer refers to.
So non-Lua "values" are not GCed, they are freed as side effects of Lua values being GCed.
Okay, LuaJIT is a rather specific example; what about the .NET CLR? Because it absolutely does this optimization, and e.g. an object can get GC'd while one of its instance methods is still running, given that this instance method is statically (or maybe even dynamically?) guaranteed not to access `this`.
Yes, and? It has mechanisms with well-defined semantics such as GCHandle and KeepAlive. That’s literally what they are there for, so it makes this: “or attempt to manually extend the lifetime of a finalizable object, and then pray the compiler and GC don’t learn new tricks to invalidate your trick” a non-starter.
This is the essence of Rust's lifetime analysis; a pointer to an object can't be live for longer than the object itself is.
In this particular example, you'd make an object with a finalizer and hide the raw pointer inside of it. Then you can only touch that pointer by going through a Rust object which participates in lifetime analysis, and it'll clean it up when it's done. Any more attempts to touch that object/pointer will fail to compile.
Expressed that way it makes sense that some people call it "GC at compile time".
Then, you can hand out zero-cost lifetime-checked references to the owned foreign instance like this.
// Assumes OwnedForeignInstance is a tuple struct holding a NonNull pointer
// to the foreign object, and UnownedReference is a #[repr(transparent)]
// wrapper over that same foreign type.
impl Deref for OwnedForeignInstance {
    type Target = UnownedReference;
    fn deref(&self) -> &Self::Target {
        // Reinterpret the owned raw pointer as a borrowed UnownedReference.
        unsafe { &*(self.0.as_ptr() as *mut _) }
    }
}
impl DerefMut for OwnedForeignInstance {
    fn deref_mut(&mut self) -> &mut Self::Target {
        unsafe { &mut *(self.0.as_ptr() as *mut _) }
    }
}
Once you've done that, you expose your FFI functionality on UnownedReference, relying on auto-deref. Unless it consumes the receiver, in which case you put it on the OwnedForeignInstance. This way you can't destroy the object while references to it continue to exist.
It's not perfect, but it's the best way I've found so far for making FFI wrapper objects that look and feel like Rust objects while respecting the FFI contract.
Anything that stops languages from just exposing some functions that solve this exact problem?
function blob_contents(blob)
  ffi.pin(blob)
  -- ...
  ffi.unpin(blob)
end
Where pin disables garbage collection of the given object and unpin re-enables it, forming the exact region where its lifetime is guaranteed, which is apparently what the author needs.
It's manual memory management and the code will have to be written carefully if the language has exceptions or other forms of unwinding. It should work though.
Moving garbage collectors also have a concept of pinning objects since code can save pointers to them. Seems like the same problem to me.
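On the unwinding point, a sketch of guarding the pin region with pcall, so a raised error cannot leak the pin (ffi.pin/ffi.unpin are the hypothetical names proposed above, not a real LuaJIT API, and hb is the article's binding):
function blob_contents(blob)
  ffi.pin(blob) -- hypothetical: disable collection of blob
  local ok, result = pcall(function()
    local len_out = ffi.new('unsigned int')
    local contents = hb.hb_blob_get_data(blob, len_out)
    return ffi.string(contents, len_out[0])
  end)
  ffi.unpin(blob) -- always re-enable collection, even on error
  if not ok then error(result, 0) end -- re-raise after unpinning
  return result
end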
Scheme has Guardians for this. They're available in Guile[1], and have recently been submitted as an SRFI[2] for standardization. The original proposal is from Kent Dybvig et al in 1993[3].
[1]: https://www.gnu.org/software/guile//manual/html_node/Guardia...
[2]: https://srfi.schemers.org/srfi-246/
[3]: https://www.cs.tufts.edu/comp/250RTS/archive/kent-dybvig/gua...
I think I saw this first in MLton, which has a touch function for this purpose: http://mlton.org/MLtonFinalizable
I'm not convinced this is particularly hard-to-use functionality, all things considered. Supporting explicit deallocation in a safe way is much harder, especially if FFI callbacks are involved.
This is often what happens, and this is often what’s fragile. In the blog these are referred to as “lifetime extension”. The code is written as carefully as it ever is and I can confirm the observation that it’s just begging for a segfault or a leak :) Note that finalizers are asynchronous, and there’s an inversion of control/scoping issue with the way you’ve described it.
Haskell's FFI has `withForeignPtr :: ForeignPtr a -> (Ptr a -> IO b) -> IO b` [1].
A ForeignPtr is a GC-managed pointer with an associated finalizer.
The finalizer runs when the ForeignPtr gets GC'd.
`withForeignPtr` creates a scope (accepting a lambda) in which you can inspect the pointer `(Ptr a -> IO b)`.
This works well in practice, so I do not really understand why "among GC implementors, it is a truth universally acknowledged that a program containing finalizers must be in want of a segfault".
[1]: https://hackage.haskell.org/package/base-4.19.1.0/docs/Forei...
I’m deeply familiar with this technique, have used it plenty, have encountered the perils, and so I do not really understand why you think it works well in practice.
It works well only in the case where you have perfectly well scoped small regions that you can model as your lambda. When you need to actually do anything intricate with the lifetime, where you want it to escape (probably into a data structure), the callback won’t cut it, and it’s on you to ensure the foreignptr’s lifetime becomes interlinked with the returned b
> it’s on you to ensure the foreignptr’s lifetime becomes interlinked with the returned b
Yes; the simplest way to do that is to make sure that your data types never contain raw `Ptr`, only `ForeignPtr` -- same as in C++, where seeing `mytype * x` should ring the alarm bells.
You could say "but what if I call another FFI function that needs the Ptr as an argument"? In that case, surely the function needs to document if it takes ownership of the pointer. If it doesn't document that, yes, it'll crash; but that's unrelated to finalizers (the "impossibility of composing" of which is what the post claimed); it would also crash if no finalizers were involved.
The possibility of segfaults is kind of a given though. I mean the whole point of foreign interfaces is to reuse existing C code. The pinning functions just expose the manual C resource management that programmers would have to deal with if they were writing C. You just turn off the automatic resource management for the objects involved so you can do it yourself, running the risk of leaking those resources.
The only viable way to escape all this is to rewrite the software in the host language. A worthy goal but I don't see anyone signing up for that herculean task outside the Rust community.
The pin and unpin could be tied to a reference count in the byte string object that was extracted. When blob's get_data is called to get the byte string, its pin count is bumped up. When the byte string is reclaimed by GC, it bumps down the blob's pin count.
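A sketch of that coupling, again using the hypothetical ffi.pin/ffi.unpin from above, and attaching the unpin to the intermediate cdata pointer via ffi.gc (a real LuaJIT API) rather than to the final Lua string:
local function get_data_pinned(blob, len_out)
  ffi.pin(blob) -- hypothetical: bump blob's pin count
  local contents = hb.hb_blob_get_data(blob, len_out)
  -- when the GC reclaims `contents`, the finalizer drops the pin again
  return ffi.gc(contents, function() ffi.unpin(blob) end)
end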
I don’t dispute the possibility of using pinning correctly; in practice it’s a source of bugs. Fuzzy and loose ownership regimes just don’t compose well; people are bad at running region checkers in their head, and anything beyond the absolute simplest, smallest scope is prone to eventual error.
I don't think it's difficult, if you're working inside the run-time.
The difficulty is getting the behaviors if you're outside of the run-time, writing FFI bindings, where you don't have the option of hacking new ownership behaviors into the target objects, and your FFI may be lacking in expressiveness also.
If it's a bad problem for a certain kind of object, and that object is relatively important (lots of people want to use bindings for it), the way to go may be a lower level extension module rather than FFI, or a wrapper library around it which is more amenable to FFI.
This is unnecessary. The blob argument binding itself is making the object reachable throughout the function.
You can easily test it with collectgarbage('collect')
The author's example is simply mistaken about the relatively straightforward semantics of LuaJIT.
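For instance, inserting a full collection into the article's function (hb is the article's binding; note that this exercises the interpreter, and the repro upthread shows compiled traces can differ):
function blob_contents(blob)
  collectgarbage('collect') -- blob stays reachable via the argument binding
  local len_out = ffi.new('unsigned int')
  local contents = hb.hb_blob_get_data(blob, len_out)
  return ffi.string(contents, len_out[0])
end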
Reading your other comments I've realized you're right about that. I'm not very familiar with LuaJIT so I assumed the garbage collection semantics were undefined. That was the impression I got from the article at least.
In Haskell you would write a newtype that keeps a pointer back to blob along with the data that's being returned. This makes the result perfectly correct. There's nothing impossible here. You could even write yourself a small function to access the blob that ensures the results are always wrapped this way.
If the problem with the second work-around (the reason it's not satisfactory) is that it's not supported as part of the platform, forcing you to use a "trick" to "outsmart the compiler", then C# has this solved: System.GC.KeepAlive [1] is an official part of the .NET platform and documented to do exactly this, so presumably Microsoft would not break it when making changes to the GC.
[1] https://learn.microsoft.com/en-us/dotnet/api/system.gc.keepa...
Given how much research there is into this topic, I am sure that I just don't understand the complexity of it. But to me it seems like you could have a function that associates one value with another in the GC. Something like `gc_borrows_from`. You would then write the problematic code like this:
function blob_contents(blob)
  local len_out = ffi.new('unsigned int')
  local contents = gc_borrows_from(blob, hb.hb_blob_get_data(blob, len_out))
  local len = len_out[0]
  return ffi.string(contents, len)
end
This would tell the GC that the data returned by `hb_blob_get_data` is borrowed from blob, and it can't collect `blob` until `contents` is also unreachable. How to implement that would be up to the runtime, but it seems reasonable to have a wrapper type that holds a traceable reference back to blob.
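A minimal sketch of such a helper in plain Lua, assuming a weak-keyed table is enough to anchor the owner (true ephemeron semantics would be needed if the owner can also reference the dependent):
-- hypothetical gc_borrows_from: keep `owner` alive at least as long as
-- `dependent` is reachable
local anchors = setmetatable({}, { __mode = "k" }) -- weak keys
local function gc_borrows_from(owner, dependent)
  anchors[dependent] = owner -- entry disappears once `dependent` is collected
  return dependent
end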
https://learn.microsoft.com/en-us/dotnet/api/system.runtime....
And there's an older API that wraps them which I've found quite handy: https://learn.microsoft.com/en-us/dotnet/api/system.runtime....
Essentially, if you want to (externally - in code you wrote) associate two objects with each other without being able to modify the code of either of those objects, dependent handles are the ticket. It only creates a one-way liveness relationship, though, which may not be sufficient for every use case...
Eventually you’ll want an object graph cycle that traverses through the non-GC heap and then you’re really screwed.
That object should take care of it. Even if the parent object is reclaimed, the byte string should independently persist for as long as is necessary. Since byte strings don't contain pointers to anything, reference counting could be used.
The refcount could be in the parent blob, such that when it's nonzero, the blob is pinned against reclamation.
Of course you can't just share out the internals of an object, such that GC doesn't know about them. It doesn't matter if it's a foreign object set up with FFI or something built into the run time.
Copying the data and letting the FFI data structures go is the only way to get correct behaviour without inconveniently (or impossibly) pinning objects to elude garbage collection.
When I see all the trouble that async Rust and normal C# have had with finalizers, I must wonder if anything composes with anything else, or it's all just banging rocks and praying
This is my favorite problem! So cool to see it written up.
The only nice way to have a GC language interop with another language is if that other language is also GC’d and that language’s GC interops with yours (what I like to call frankengc).
(Once I add GC to Fil-C and give it proper frankengc hooks, you’ll be able to do destructorless FFI to whatever C code Fil-C can run. It’ll be great I promise.)
Yes. Yet another problem with finalizers is the question of "which thread does actually run the finalizers"? In case of some C libraries, objects must be released exactly by the thread in which they were allocated; this means that e.g. in Steel Bank Common Lisp it is impossible to use finalizers in these cases.
but as long as you are telling the compiler through a first-class API, it's not brittle, because the compiler knows the meaning and intent of the statement, and therefore can avoid optimizing it in invalid ways.
While people spend much more time arguing matters of taste and trading urban legends about performance back and forth when discussing garbage collection, finalizers may be the most potent issue that garbage collection raises, at least from an overall correctness perspective. As near as I can tell, it has proved an intractable problem to get perfectly correct finalization with garbage collection. Many, many implementations have tried it, and all the ones I've ever seen have either ended up with a laundry list of warnings around them along with a warning that the list of warning itself may still be incomplete, or they pull them out entirely.
It may be worth reading that previous sentence again. I've seen it so many times. Even in the comments on the linked article I see people trying to propose solutions to the problem. But it turns out to be one of those problems that looks simple, even super simple ("just" provide a function to do it), but then it turns out that hundreds of smart people have poured years into solving it... and failed, as far as I know, universally.
Fortunately, you need to have a very particular kind of program to get hit hard by the problem, and an even more particular kind of program to not have the ability to mitigate it into an acceptable issue somehow, most often something like being more careful to close files when done with them. I think I brushed the issue once, and it was addressed by exactly that: being more careful to close files when done with them, because we accidentally were leaning on GC to close them and that became a de facto file handle leak.
That said, the problem may be exposed by GC but it isn't entirely caused by it. Sufficiently complicated manually-managed memory programs can encounter things that stem from the same underlying issue. If the lifetime of objects that need a "finalizer" (or destructor, if you prefer) becomes sufficiently complicated, it can also become very difficult to figure out when they need to be cleaned up. But you get more runway, and you have more options to deal with it when you have the ability to more forcibly descope things and deallocate them at a known point in time.
When I was programming with libpurple, the generalized IM library written in C, it used a lot of "closures" (implemented in C), and figuring out when to correctly deallocate and finalize on them was very, very difficult, because their lifetimes were complicated and not terribly well documented.
This is the same underlying problem in a lot of ways, it just manifests differently. But in a manually-managed world, it is at least a solvable problem, however hard the problem may be to solve in some languages (while they had no better option at the time, I gotta say in hindsight libpurple would really have been served by Rust, and I'm not thinking the generalized "rewrite everything in Rust" here but specifically that all those closures would have been much safer to work with, with lifetime annotations enforced by the compiler), and however difficult you may make the problem on yourself with your own programming practices.
struct S { int s; };

int *i;
{
    struct S s;
    i = &s.s; // i points into s, whose lifetime ends with this block
}
*i = 7; // write through a dangling pointer; and in C++ you can even write int& j = *i;
No GC or finalizers involved here.
Hope you have ubsan.
The point being that GC is sort of a red herring in the entire discussion. Easily subverted lifetime and ownership semantics are going to be error prone irrespective.
Alas, no silver bullet.