
One little Deno feature I didn't realize existed until digging in to play with it more is that it can 'compile' your code and the V8 runtime into a single standalone executable. IMHO this is really nifty and feels very Go-like in building tools with zero other dependencies. Obviously the V8 runtime adds a lot of size (a hello world seems to be 90MB or so in quick testing), but I like the potential for building tools that are easy to give to others to install and use.

It seems great for internal use where you have analysts and such using a mish-mash of scripts and one-offs with lots of dependencies and little documentation or time to help people set up and use them. Just hook up your CI to spit out new executables and be done with walking people through how to troubleshoot their broken homebrew Node, Python, etc. environments.
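For anyone who hasn't tried it, a minimal sketch of the workflow looks something like this (file and output names are just examples, and the exact flags may differ by Deno version):

    // hello.ts -- trivial entry point, name is just an example
    // Build a self-contained executable (script + Deno/V8 runtime) with:
    //   deno compile --output hello hello.ts
    // Cross-compiling for another platform is also possible, e.g.:
    //   deno compile --target x86_64-unknown-linux-gnu --output hello hello.ts
    console.log("hello from a standalone executable");

The resulting binary runs on machines with no Deno, Node, or other toolchain installed, which is what makes the CI-built-tools idea above attractive.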


There used to be a '--lite' option that produced smaller binaries, but it looks like they removed it recently [0]. There's also a CLI called pkg that lets you achieve something similar with Node. It actually compiles the source code into V8 bytecode and bundles that with the Node binary [1].

[0] https://github.com/denoland/deno/issues/10507

[1] https://github.com/vercel/pkg
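For comparison, a rough sketch of the pkg workflow (file name, target string, and output name are examples, not from the thread; check pkg's docs for the targets your version supports):

    // index.js -- pkg snapshots this (and whatever it requires) as V8 bytecode
    // Bundle it with a Node runtime into a single executable with something like:
    //   npx pkg index.js --targets node16-linux-x64 --output hello
    // Non-JS assets generally have to be declared in a "pkg" section of package.json.
    console.log("hello from a pkg-built executable");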

You can do this with Node.js as well using: https://github.com/vercel/pkg

Personally I weep for the death of shared libraries. It's great developer convenience to statically bundle everything, and ever so manageable...

But it makes me so sad thinking not just of the storage footprint of dozens of copies of the library floating around (be it V8 for Deno, or Chrome for Electron), but also the memory cost of having that near-duplicate library loaded multiple times.

And then there's the maintenance cost to the user, of needing to update each package independently to get updates. With something like the web, I as a user would far prefer having bleeding edge shared libraries/v8's/browsers underfoot, and carry the expectation that each app can load & run atop this most recent version.

The trends have been very much in the other direction. One of the main things Go champions tout is its static compilation, and I think Rust's too. What are containers but entire static system images, as opposed to just some static libraries? Part of me is willing to acknowledge that the larger footprints aren't that impactful, aren't really a problem, but it contravenes the deliberate & elegant simplicity that something like an OS distribution used to represent: I look at Debian as thousands of different pieces, all wonderfully integrated & interlinked, a cohesive system that one ought to be able to bring additional projects onto to compile & extend. But our appetite for such has waned. The convenience for you & your users of just bundling everything you personally need to ship your stuff is quite high, at least when you're not updating dozens of oversized apps or running janky old versions because of this easy-to-get-started convenience.

Static linking does not bundle the entire library into a binary. It only bundles what's necessary. When you load a shared library, you load an entire copy of the library into memory; when you load a statically linked binary, you load only the relevant bits.

Static linking also enables much better link-time optimization as well as several additional hardening measures; Clang's CFI sanitizer in particular works much better with access to the whole program.

Furthermore, the shared global state between all programs on a traditional *nix box is a limitation, not an ideal. Compartmentalizing through sandboxing is the way forward. Namespace isolation, separate users, and cgroups are already widespread. Static linking makes this much easier as it avoids having duplicate copies of a full library loaded into memory.

Dependency updates should be managed by a package manager. When a library updates, the package manager will handle dependency resolution.

This only leaves two concerns: disk space and network bandwidth of software updates. Depending on your situation, these may or may not be worthwhile tradeoffs. In the face of the benefits to security, portability, and simplicity that come with static linking, I'd argue that it should be our default preference.

Thank you for the thought-out reply. I'm going to write a reply here again, but know that in many ways I agree with you. I'm not convinced either way, but I do definitely see that we are losing much more than your reply has been able to confess to ("disk space and network bandwidth of software updates").

> Static linking does not bundle the entire library into a binary. It only bundles what's necessary.

Fair enough. Static linking is only a partial extra copy.

> When you load a shared library, you load an entire copy of the library into memory; when you load a statically linked binary, you load only the relevant bits.

I doubt that the entire library is loaded into memory. I'm not sure which regions of a library are fully loaded (some most likely are), but my belief is that many regions (including the code) are not automatically paged in; they are simply mapped, and only paged in should they be needed.

I'm not sure that functionally there's any win here for statically built binaries.
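For what it's worth, that matches how demand paging of mmap'd libraries works on Linux: pages are only faulted in when first touched. A rough way to see the gap between what a process has mapped and what is actually resident is to compare VmSize and VmRSS; here's a quick Deno sketch (Linux-only, and the /proc layout is assumed):

    // mapped_vs_resident.ts -- run with: deno run --allow-read mapped_vs_resident.ts
    // VmSize is the total mapped virtual address space; VmRSS is what is actually
    // resident in RAM. The (usually large) gap illustrates that mapping a library
    // does not mean every page of it gets loaded.
    const status = await Deno.readTextFile("/proc/self/status");
    for (const key of ["VmSize", "VmRSS"]) {
      const line = status.split("\n").find((l) => l.startsWith(key + ":"));
      console.log(line ?? `${key}: not found`);
    }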

> Static linking also enables much better link-time optimization as well as several additional hardening measures; Clang's CFI sanitizer in particular works much better with access to the whole program.

This sounds like a good real win. I wonder what it would take to try to claim back some of these wins for dynamically linked libraries, if that is at all possible.

> Furthermore, the shared global state between all programs on a traditional *nix box is a limitation, not an ideal. Compartmentalizing through sandboxing is the way forward. Namespace isolation, separate users, and cgroups are already widespread. Static linking makes this much easier as it avoids having duplicate copies of a full library loaded into memory.

This is opinion stated as fact. Yes, containers & cgroups & isolation are eating all the systems, & it feels like the wheel of destiny is turning, & perhaps it is. But stating it as though this path is 100% equal to or better in every way than what we do now is, to me, harmful to understanding the real nuance & complexities of this change we are undergoing.

To me it feels more like it has been convenient & expedient. I feel my operating system has done a better job & been a viable platform, carrying some inherent complexity but with the benefit of a consistency & legibility that running dozens of unalike containers cannot replicate. And while we may ignore problems like user namespaces in containers at first, those problems have a tendency to creep back in in new forms.

I would have liked to see an alternative world where more companies did things like build Debian packages to deploy their releases. (I've seen a couple, but it has felt exceptional to me. Reaching critical-mass adoption on any OS would have been interesting.) Containers allow everyone to do what they want & not care or think about how to enmesh or integrate; isolation is freeing and liberating that way, and there are huge advantages to that, but I also recognize & appreciate that systems, historically, used to operate with a more sympathetic, coherent & consistent singular nature.

Modern Linux is gaining many of the composability advantages that made/make containers so attractive, with the /usr merge nearly coming to fruition (an upcoming Debian Bullseye objective) and ongoing progress adopting the composed filesystems of "Revisiting How We Put Together Linux Systems" (http://0pointer.net/blog/revisiting-how-we-put-together-linu...), which allows modular system images, runtimes, & frameworks to be composed or plugged together when making Linux systems. This brings many of the management wins of containers, with their aggregates of overlayed layers, to Linux.

> Dependency updates should be managed by a package manager. When a library updates, the package manager will handle dependency resolution.

I'm not sure what this particular opinion is opining, so it's hard for me to discuss.

Right off the bat, I think it's worth pointing out: which dependency manager? Npm, yarn, yarn2, or pnpm? And that's just for node.js. Should we use nvm, nodebrew, n, or asdf to manage node versions? What if two projects don't adopt the same dependency management systems or tools?

My OS has a dependency manager that tackles all these questions very well, even when mixing multiple versions of my OS at once (I run Debian unstable with testing & experimental & other repos pinned on, for example, and can manage packages & their dependencies across all these releases), by taking responsibility for crafting cross-language tools & solutions for how to work with dependencies.

> This only leaves two concerns: disk space and network bandwidth of software updates.

Not sure where memory usage got dropped, but it certainly comes to mind. Along with memory usage, an even (often more) pressing concern springs to mind: cache efficacy. This isn't just a matter of how much memory you need in the system: if there are dozens or hundreds of programs using the same base libraries, the impact of launching one more can be minimal. If each launch requires its own universe, the pressure on the system is higher.

> In the face of the benefits to security, portability, and simplicity

I can convince myself both that these are real wins, and also see reasons why I think static linking or containers have no strong claim to being inherently better in any of these categories. Much of it, to me, boils down to getting-started costs/mentalities versus long-term costs/mentalities.

And that question of viewpoints helps describe what I think of as the biggest, most obvious win that is unaccounted for here: the simplicity of having a well-designed, all-inclusive operating system, with established, well-trodden practices for doing things. Yes, it's "simpler" to not have to think about all that stuff, to be able to build & run whatever container you want without concerning yourself, but when it comes time to run & understand & maintain your systems, I often consider that it was much simpler when there were well-trodden & consistent paradigms that all the software on the system could conform to. Updating one libssl or one cert store was a simplicity & a security. From a higher level, the limitations & constraints of the operating system are what made operating it simpler & more predictable. How things related was planned for & considered. Trying to say things don't need to relate (containers, isolation, static linking) is convenient when it works, but it misses some technical advantages, many of the skipped-over problems tend to rear their heads again over time, and it leaves those forging isolated containers little overarching blueprint or pattern for how to shape their work, making most containers fairly special snowflakes instead of predictable & expectable.

I wasn't necessarily referring to containers; I was referring to sandboxing tools like Bubblewrap and Minijail, which isolate many parts of the host system behind separate namespaces and filter syscalls. This approach, combined with restrictive SELinux policies, should be much more widespread. By default, most *nix systems don't really have any security model besides user accounts.

When programs run with more isolation/sandboxing, they won't be able to access the same copies of shared libs loaded by other programs, so they'll re-load them. In most benchmarks I've seen, a program loading even a single shared library uses a much greater amount of memory than a statically linked program, even excluding the overhead of the ld interpreter.

I was also referring to the OS package manager, not bleeding-edge programming-lang-specific package managers.

> contravenes the deliberate & elegant simplicity that something like an OS distribution used to represent

That simplicity is also the root cause of an incredible number of issues. Want to install a program that needs a different library version than is already installed? Well, too bad, guess you'll have to manually patch it (assuming the source code is even available). And there are countless such stories; 'dependency hell' is still an issue to this very day for dynamically linked libraries. Docker and the like didn't come into existence for no reason, but because the experience of simple package managers is just not adequate.

Now, dynamic libraries on their own aren't at fault of course. If more complex package managers like Nix were common this wouldn't be an issue, but simplicity 'won', so static linking it is.

> Want to install a program that needs a different library version than is already installed?

You can install libthing.so.1.1 and libthing.so.1.2 and link mybinary against v1.1 if you need it?

Sure, but then you're essentially back to manual dependency management, and I don't want to waste my or my users' time with that.

Not necessarily, you can have packages and automatic dependencies using yum or apt?

Honestly the memory cost isn't that much.

Code size has not grown as fast as memory and storage capacity has. One is constrained by manufacturing progress, the other is constrained by programmer output.

Deno is sub 50MB

For any non-trivial app, your assets and working-set data will likely dwarf the code size itself.

EDIT: Was looking at an old bug, deno is closer to 100MB. Starting to feel a bit large but still.

I don’t agree with this comment, because I think these tradeoffs have mostly been shown to be worth it, in most cases.

But I also think those of you downvoting it are Doing HN Wrong.

If it's an internal tool used by tens of people, I really don't care one bit about memory use, the elegance of shared dependencies, etc. The kind of optimizations Debian is going for are great at scale, but it's the kind of last 10% of work that takes 90% of your time. This is optimizing squarely for fastest agility with the lowest developer and maintenance time.
The problem seems to stem from library developers making should-be-stable APIs and later fixing/updating them in a way that shouldn’t break compatibility but does, all without a major version change. While this doesn’t happen to the majority of libraries out there, every developer has encountered this issue at least once, and the easy way to fix this is to statically link all dependencies, thus enabling Uber-compatibility as future dependency updates won’t be automatically introduced in your program. When these dependencies are updated, the app developer can perform a full suite of validation that guarantees the update doesn’t break anything.

The only way to fix this is with either an extremely stable API (eg. libc) or a mostly-stable api that developers know they have to validate often, and that’s going to be the web with Chrome/Firefox PWAs.

The Bytecode Alliance's efforts in this space look super promising, especially for serverless ("nanoprocess") computing. Have you looked into those?
