- I have trouble gauging the effects of `*`, aka "will I cp/mv the dir or all of the dir's contents?". Then shell expansion gets in the way, like expanding to more arguments than the system accepts ("argument list too long"). I try to use rsync when possible, where a `.` placed mid-path changes behavior, as do trailing slashes... and don't forget `-a`! Then I forget whether those same rules apply to cp/mv. Then I cp one directory too deep and need to undo the mess -- no Ctrl+Z!
The widespread use of `rm -rf some-ostensibly-empty-dir/` is also super dangerous, when `rm -r` or even `rmdir` would suffice -- but you need to remember those exist. I find it strange there's no widely available (where applicable) `rm-to-trash`, which would be wildly safer and is table stakes for desktop work otherwise.
Then there's `dd`...
I use terminals a lot, but a GUI approach (potentially with feedback pre-operation) plus undo/redo functionality for file system work is just so much easier to work with. As dumb as drag-and-drop with a mouse is, a single typo can't wreck your day.
- > git on the command line is the power tool. The VS Code plugin is the training wheels version.
I don't disagree with your underlying point, but git is perhaps the worst example. It is a horrible tool in several ways. Powerful yes, but it pays too large a price in UX.
I've only ever used git through the CLI as well, but having switched to jujutsu (also CLI) I am not going back. It is quite eye-opening how much simpler git should be, for the average user (I realize "average user" is doing some heavy-lifting here -- git covers an enormous number of diverse use cases).
What jujutsu-CLI is for me (version control UX that just works) might be VSCode's GUI git integration for other people, or magit, or GitButler, or whatever other GUI or TUI.
Who cares about training wheels? If the real deal is a unicycle with one pedal and no saddle, I will keep using training wheels, thank you very much.
- All IDEs including VSCode have fuzzy-finding for files, and fuzzy-finding for symbols as well. Between these, I never find myself using the file tree, except when it's the best tool for the job: answering "what other files are in this directory?", or manipulating the tree itself (which IDEs recognize you doing, adjusting imports for you!).
I do notice that this pattern is very fast, but I lose the mental map of a code base. Coworkers might take longer to open any individual file, but they have a much better idea of the repo layout as a whole, which makes them effective in other ways.
- No thank you...
One of the primary things shells are supposed to excel at is file system navigation and manipulation, but the experience is horrible. I can never get `cp`, `rsync`, `mv`, `find` right without looking them up, despite trying to learn. It's way too easy to mess up irreversibly and destructively.
One example is flattening a directory. You accidentally git-cloned one level too deep, into `some/dir/dir/`, and want to flatten the contents into `some/dir/`, deleting the then-empty `some/dir/dir/`. Trivial and reversible in any file manager, needlessly difficult in a shell. I get it wrong all the time.
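To make "trivial" concrete, here's roughly the tool I wish existed, sketched in Go (the paths and the `apply` flag are invented for illustration) -- preview first, touch the filesystem only on request:

```go
// Hypothetical "flatten" with a dry run: print the planned moves from
// some/dir/dir into some/dir, and only touch the filesystem on apply.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func flatten(dir string, apply bool) error {
	inner := filepath.Join(dir, filepath.Base(dir)) // e.g. some/dir/dir
	entries, err := os.ReadDir(inner)
	if err != nil {
		return err
	}
	for _, e := range entries {
		src := filepath.Join(inner, e.Name())
		dst := filepath.Join(dir, e.Name())
		if _, err := os.Lstat(dst); err == nil {
			return fmt.Errorf("refusing to overwrite %s", dst)
		}
		fmt.Printf("%s -> %s\n", src, dst)
		if apply {
			if err := os.Rename(src, dst); err != nil {
				return err
			}
		}
	}
	if apply {
		return os.Remove(inner) // only succeeds once inner is empty
	}
	return nil
}

func main() {
	// First run previews; flip to true once the plan looks right.
	if err := flatten("some/dir", false); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

Nothing is destructive until you opt in -- the pre-operation feedback the shell never gives you.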
Similarly, iterating over a list of files (important yet trivial). `find` is arcane in its invocation and differs between Unix flavors; you will want `xargs`, but its `{}` substitution is also very error-prone. And don't forget `-print0`! And don't you even dare `for f in *.pdf`; it will blow up in more ways than you can count. Also prepare to juggle quotes and escapes and pray you get them right. Further, bash defaults are insane (pray you don't forget `set -euo pipefail`).
How can we be failed by our tools so much, for these use cases? Why aren't these things trivial and safe?
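For contrast, a sketch of the "list every `*.pdf`" task in Go (assuming a recursive walk is acceptable) -- no quoting, no word splitting, no `-print0`/`xargs` pairing to get wrong:

```go
// List every *.pdf under the current directory: no word splitting, no
// quoting rules, no -print0/xargs pairing to remember, explicit errors.
package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	err := filepath.WalkDir(".", func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err // e.g. a permission error, surfaced, not swallowed
		}
		if !d.IsDir() && strings.EqualFold(filepath.Ext(path), ".pdf") {
			fmt.Println(path) // spaces and unicode in names are a non-issue
		}
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```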
- > just another kind of complex?
To some extent yes. If all you have is 2 GitHub Actions YAML files you are not going to reap massive benefits.
I'm a big fan of CUE myself. The benefits compound as you need to output more and more artifacts (= YAML config). Think of several k8s manifests, several GitHub Actions files, e.g. for building across several combinations of OSes, settings, etc.
CUE strikes a really nice balance between being primarily data description and not a Turing-complete language (e.g. cdk8s can get arbitrarily complex and abstract), reducing boilerplate (having you spell out the common bits once only, and each non-common bit once only) and being type-safe (validation at build/export time, with native import of Go types, JSON Schema and more).
They recently added an LSP, which helps close the gap to other ecosystems. For example, cdk8s being TS means it naturally has fantastic IDE support, where CUE has been lacking. CUE error messages can also be very verbose and unhelpful.
At work, we generate a couple thousand lines of k8s YAML from ~0.1x of that in CUE. The CUE is commented liberally, with validation imported from native k8s types and sprinkled in where needed otherwise (e.g. we know that for our application the FOO setting needs to be between 5 and 10). The generated YAML is clean, without any indentation and quoting worries. We also generate YAML-in-YAML, i.e. our application takes YAML config, which itself sits in an outer k8s YAML ConfigMap. YAML-in-YAML is normally an enormous pain and easy to get wrong. In CUE it's just `yaml.Marshal`.
You get a lot of benefit for a comparatively simple mental model: all your CUE files form just one large document, and for export to YAML it's merged. Any conflicting values and any missing values fail the export. That's it. The mental model of e.g. cdk8s is massively more complex and has unbounded potential for abstraction footguns (being TypeScript). Not to mention CUE is Go and shipped as a single binary; the CUE v0.15.0 you use today will still compile and work 10 years from now.
You can start very simple, with CUE looking not unlike JSON, and add CUE-specific bits from there. You can always rip out the CUE and just keep the generated YAML, or replace CUE with e.g. cdk8s. It's not a one-way door.
The cherry on top is CUE scripts/tasks. In our case we use a CUE script to split the one large document (tens of thousands of lines) into separate files, according to some criteria. This is all defined in CUE as well, meaning I can write ~40 lines of CUE (this has a bit of a learning curve) instead of ~200 lines of cursed, buggy bash.
- A similar problem applies to Go, just inverted. Take iteration. The vast majority of use cases for iterating over containers are map, filter, reduce. Go doesn't have these functions. That's very simple! All Go developers are aligned here: just use a for loop. There's no room for "10% of concepts corners"; there's just that one corner.
But for loops get tedious, so people write helper functions -- generic ones today, non-generic in the past. The result is a zoo of iteration-related helper functions strewn throughout, which you'll need to learn when onboarding to a new code base as well. Go's readability makes this easier, but by definition it's all entirely non-standard.
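The helpers in question tend to look something like this (hypothetical names; every codebase picks its own):

```go
// Package helpers: the kind of one-off generics that grow in every Go
// codebase precisely because map/filter have no standard home.
package helpers

// Map applies f to every element and collects the results.
func Map[T, U any](xs []T, f func(T) U) []U {
	out := make([]U, 0, len(xs))
	for _, x := range xs {
		out = append(out, f(x))
	}
	return out
}

// Filter keeps the elements for which keep returns true.
func Filter[T any](xs []T, keep func(T) bool) []T {
	var out []T
	for _, x := range xs {
		if keep(x) {
			out = append(out, x)
		}
	}
	return out
}
```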
- > This is a non sequitur. Both Rust and Zig and any other language has the ability to end in an exception state.
There are degrees to this though. A panic + unwind in Rust is clean and _safe_, thus preferable to segfaults.
Java and Go are another similar pair. Only in the latter can races on multi-word data structures lead to "arbitrary memory corruption" [1]. Even among GC languages there are degrees of memory safety.
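To illustrate what "multi-word" means here: a Go slice header is three words (pointer, length, capacity), and an unsynchronized assignment writes them non-atomically. A contrived sketch of the racy pattern:

```go
// A slice header is three words (pointer, length, capacity), assigned
// non-atomically. This compiles and runs, but a torn read can combine
// large's length with small's pointer: an out-of-bounds access, i.e.
// genuine memory corruption, not just a wrong value.
package main

import "time"

func main() {
	small := make([]byte, 1)
	large := make([]byte, 1<<20)
	var s []byte

	go func() {
		for {
			s = small
		}
	}()
	go func() {
		for {
			s = large
		}
	}()
	go func() {
		for {
			if v := s; len(v) > 0 {
				_ = v[len(v)-1] // may read far past small's backing array
			}
		}
	}()
	time.Sleep(time.Second)
}
```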
- A DB action and audit emission have to run transactionally anyway.
So on cancellation, the transaction times out and nothing is written. Bad but safe.
The problem is the same on other platforms. For example, if you're on Python, what happens when the DB write throws an exception? Your app just dies, the transaction times out. Unfortunate but safe.
If it does not run transactionally you have a problem in any execution scenario.
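A sketch of that pattern in Go (table names and SQL invented for the example), where the domain write and its audit record share one transaction:

```go
// Sketch: the domain write and its audit record share one transaction
// (table names and SQL invented for the example).
package audit

import (
	"context"
	"database/sql"
)

func updateWithAudit(ctx context.Context, db *sql.DB) error {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	defer tx.Rollback() // a no-op once Commit has succeeded

	if _, err := tx.ExecContext(ctx, `UPDATE accounts SET balance = balance - 1`); err != nil {
		return err
	}
	if _, err := tx.ExecContext(ctx, `INSERT INTO audit_log (event) VALUES ('debit')`); err != nil {
		return err
	}
	// Cancellation, an error, or a crash anywhere above means neither
	// row is persisted. Bad but safe, as stated.
	return tx.Commit()
}
```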
- Not the OP but I’m a big fan of this pattern as well.
At work we generate both k8s manifests and application config in YAML from a Cue source. Cue gives you both deduplication (being as DRY as one can hope to be) and validation (like validating that a value is a URL, or greater than 1, whatever).
The best part is that we have unit tests that deserialize the application config, so entire classes of problems just disappear. The generated files are committed to VCS and spell out the entire state verbatim - no hopeless Helm junk full of mystery interpolation whose values are unknown until it's too late. No. The entire thing becomes part of the PR workflow. A hook in CI validates that the generated files correspond to the Cue source (run the make target, then check whether the git repo is dirty afterwards).
The source of truth is native Go structs. These native Go types can be imported into Cue and used there. That means config is always up to date with the source of truth. It also means refactoring becomes relatively easy: you rename the thing on the Go side and adjust the Cue side. Very hard to mess up, and most of it is automated via tooling.
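Concretely, the Go side is nothing more than plain structs (types here are hypothetical); `cue get go` then imports them as Cue definitions:

```go
// Package config is the single source of truth (hypothetical types).
// `cue get go` imports these definitions into Cue, so a rename here
// breaks the next export loudly instead of the application at runtime.
package config

type AppConfig struct {
	ListenAddr string `json:"listenAddr"`
	TimeoutSec int    `json:"timeoutSec"`
}
```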
The application takes almost its entire config from the file, and not from CLI arguments or env vars (shudder…). That means most things are covered by this scheme.
One downside is that the Cue tooling is rough around the edges and error messages can be useless. Other than that, I fully intend to never build applications differently anymore.
- I have not, thanks for the suggestion though.
A second challenge with the particular setup I’m trying is peer authentication with Postgres, running bare metal on the host. I mount the Unix socket into the container, and on the host Postgres sees the Podman user and permits access to the corresponding DB.
Works really well, but only if the container user is root, so it maps natively to my host user. I ended up patching the container image, which was the path of least resistance.
- One challenge I have come across is mapping multi-UID containers to a single host user.
By default, root in the container maps to the user running the podman container on the host. Over the years, applications have adopted patterns where containers run as non-root users, for example www-data aka UID 33 (Debian), or just 1000. Those no longer map to your own user on the host, but to subordinate IDs. I wish there were an easy way to just say "ALL container UIDs map to a single host user". The uidmap and userns options did not work for me (crun failed executing those containers).
I don't see the use case for mapping to subordinate IDs. When used with volume mapping, doesn't it mean those files end up orphaned on the host, owned by no real user?
- `golangci-lint linters | wc -l` reports 107. That is a long list of linters (only a few are enabled by default, however). I much prefer less fragmented approaches such as clippy or ruff. It makes for a more coherent experience and surely much higher performance (parsing the AST once instead of dozens of times).
- > Now if only Go was consistent about methods vs functions
This also hurts discoverability. `slices`, `maps`, `iter`, `sort` are all top-level packages you simply need to know about to work efficiently with iteration. You cannot just `items.sort().map(foo)`, guided and discoverable by auto-completion.
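A small example of the difference (nothing here is exotic, which is rather the point):

```go
package main

import (
	"fmt"
	"slices"
	"strings"
)

func main() {
	items := []string{"b", "a", "c"}

	// You must already know that sorting lives in the slices package;
	// typing `items.` surfaces nothing useful in auto-completion.
	slices.Sort(items) // not items.Sort()

	for i, s := range items {
		items[i] = strings.ToUpper(s) // not items.Map(strings.ToUpper)
	}
	fmt.Println(items) // [A B C]
}
```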
- Yes, Python is massively ahead there. The largest wart is that types can be out of sync with the actual implementation, with things blowing up at runtime -- but Go can do the same with `any` and reflection.
Python, for a number of years at this point, has had structural (!) pattern matching with unpacking and type checking baked in, plus exhaustiveness checking (depending on the type checker you use). And all that works at "type-check time".
It can also facilitate type-state programming through class methods.
Libraries like Pydantic are fantastic in their combination of ergonomics and type safety.
The prime missing piece is sum types, which need language-level support to work well.
Go is simplistic in comparison.
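For example, the closest Go gets to a sum type is a sealed interface plus a type switch, and nothing checks the switch for exhaustiveness (a sketch with made-up types):

```go
package main

import (
	"fmt"
	"math"
)

// A sealed-interface "sum type": the closest Go equivalent.
type Shape interface{ isShape() }

type Circle struct{ R float64 }
type Square struct{ Side float64 }

func (Circle) isShape() {}
func (Square) isShape() {}

func Area(s Shape) float64 {
	switch s := s.(type) {
	case Circle:
		return math.Pi * s.R * s.R
	// case Square is missing: this still compiles. There is no
	// exhaustiveness check of the kind Python type checkers perform.
	default:
		panic("unhandled shape")
	}
}

func main() {
	fmt.Println(Area(Circle{R: 1}))
	fmt.Println(Area(Square{Side: 2})) // panics at runtime, not compile time
}
```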
- > Rust has multiple ways of handling Strings
No, not really -- none outside the stdlib, anyway, in the way you're probably thinking of.
There are specialized constructs which live in third-party crates, such as rope implementations and stack-to-heap growable Strings, but those would have to exist as external modules in Go as well.
- Go is not thread safe. It even allows data races in its own runtime. Go 1.25 fixed a nil-checking error that had been sitting in the runtime for two years.
Generally, Go will let you compile just about anything when it comes to channels, mutexes (or the lack thereof..), WaitGroups, atomics, and so on. It usually compiles, as there are no static checks, and then just fails at runtime.
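The classic example -- this compiles without a murmur and typically dies at runtime with "fatal error: concurrent map writes":

```go
package main

import "sync"

func main() {
	m := map[int]int{}
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				m[j] = j // unsynchronized writes from 8 goroutines
			}
		}()
	}
	wg.Wait() // the runtime detects the race and aborts; no static check
}
```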
- One of my favorite features of jj is file watching. Once set up, jj will snapshot the repo on every filesystem event. This means on every file save, you get a git commit! It provides arbitrarily fine-grained commit history, and works across all tools (not just your IDE). I set it up in the config and it has "just worked" ever since.
The result is that `jj evolog -p` will show a detailed history of your edits. All but the most recent snapshot are hidden, though, so everything stays neatly tucked away behind the same change as usual.
Another favorite is git no longer yelling at me and having meltdowns about switching branches - "files would be overwritten, please stash". This never happens in jj, by design. It's nicer than "auto-stash" options of recent git versions.
- In colocated repos, running both git and jj commands is supported. I use it in release workflows which require git tags, which jj does not support creating. However, jj will pick up ("import") git-created tags just fine afterwards.
AFAIK, jj runs "import" before and "export" (to git) after every invocation. That means it always has a consistent view.
jj can also handle concurrent edits by itself -- think of a repo shared across a network. That said, I wouldn't assume concurrent git commands are safe.
- The things I want to get done on my computer are so much richer than moving files back and forth in a terminal all day. Simple things should be simple. Tools should enable us. Moving files is a means to an end, and these commands having so many sharp edges makes me unhappy indeed.
But yes, of course I am an IDE toddler and cannot even tie my shoes without help from an LLM, thanks for reminding me.