515 points | 185 comments | github.com
Devbox is a command-line tool that lets you easily create isolated shells and containers. You start by defining the list of packages required by your development environment, and devbox uses that definition to create an isolated environment just for your application.
In practice, Devbox works similarly to a package manager like yarn – except the packages it manages are at the operating-system level (the sort of thing you would normally install with brew or apt-get).
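Roughly, per the README (the package names here are just illustrative), you declare what you need in a devbox.json and then drop into the isolated shell:
```
$ cat devbox.json
{
  "packages": ["go_1_19", "python310"]
}
$ devbox shell
```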
See it in action: https://youtu.be/WMBaXQZmDoA
Congrats to Daniel and the team! Excited to see what’s next after this.
When you think about bringing it to production (e.g., getting dev teams to migrate to it), Nix goes from a genuinely interesting idea to an "oh, that's cute" experimental toy, because no one is going to spend hours learning Nix's weird DSL. It's simply not approachable in its base form.
I spent hours converting my devboxes to NixOS and managing my dev environments with home-manager and I still don't have a clue how any of it works. Errors are opaque and annoying to debug. Dev environments constantly break and change in ways that belie the "reproducible" nature of Nix. If someone actively interested in Nix can't easily grasp it, how does anyone expect it to catch on in the real world?
Here's an opposite opinion:
> My hot take is that Nix actually has great syntax
> In particular, the way Nix handles record syntax is exemplary and more languages should copy what Nix does in this regard
And even more related to this discussion:
> A lot of the times, when people say they hate "Nix's syntax", what they more likely mean is that they hate the domain-specific languages of the Nixpkgs overlay system and/or the NixOS module system
https://mobile.twitter.com/GabriellaG439/status/156300116656...
The issue is essentially that configuration options all get merged into a global namespace, but there are no facilities to track where they came from. So when configuration mismatches of certain kinds occur, you get an error in some library code that's trying to merge or coerce two incompatible values, and nothing pointing you to the two places where the conflicting values are originally set.
(This kind of error, the most common mostly-useless error message, is typically easily debugged by searching for the relevant options in your configuration and in the source code of that collection of modules. But that is still backwards and a chore, and deserves a real solution.)
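Concretely, that backwards search is usually a couple of greps (the option name `services.foo.enable` here is hypothetical, standing in for whichever option is conflicting):
```
# where do I set the conflicting option in my own config?
$ grep -rn 'services.foo.enable' /etc/nixos/
# and where does the module collection define or merge it?
$ grep -rn 'services.foo.enable' ~/src/nixpkgs/nixos/modules/
```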
Anyway, the package definitions and builds in Nixpkgs don't use any such module system in any way. So this tool is not wrapping the functionality that is associated with opaque error messages. :)
(Also Nix and Nix-flakes are two different things, like Javascript and React.)
"Nix" isn't really a thing, no more than "Linux" is. It's a collection of tools and languages and frameworks people use to solve various very different problems.
Doesn't devbox depend on Docker, though? I figure any performance losses from Docker would happen with this too.
When writing javascript there's often a desire to have "isomorphic" or "universal" applications. Write the code once and run it in _either_ the client _or_ the server.
Devbox is taking a similar approach to the development environment: declare it once, run it locally as a shell, and when you're ready, turn it into a container without having to re-declare it. It's only the latter functionality that has a Docker dependency.
I got that from the README btw, no guarantees :)
They are definitely faster and great improvements.
But when I can use Linux through qemu and compile my company's Haskell application in 45s rather than 3m30s...
The choice is obvious to use Linux.
What changed?
Out of interest, what in your dev process makes using containers so impactful?
Using the dodgy file share volume mounting, maybe for particular file access patterns.
Not in my experience.
If you set up volumes for node_modules or any folders where dependencies are stored, you get the same performance on Mac as anywhere else.
For rust, which I use, I'm able to get better performance on Mac under docker than using rust tools natively. See https://purton.tech/blog/faster-rust-incremental-builds/
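For the node_modules case, the trick is a bind mount for the source plus a named volume shadowing the dependency folder, so dependency I/O never crosses the slow macOS file share. A minimal sketch (image tag and volume name are illustrative):
```
$ docker run --rm -w /app \
    -v "$PWD":/app \
    -v app_node_modules:/app/node_modules \
    node:18 npm install
```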
> Inside a container the file system is very slow
Because you are not using "a container".
You are using a container that happens to be running in a Linux VM on your OS X laptop. It's not the container's fault; it's the entire virtual machine that you are running because you're trying to run technology built on top of one operating system while running a completely different one.
I find it sad that many people use Docker, but the majority of them run Linux in a VM inside an expensive proprietary platform to do so, sometimes without even realizing it.
Meanwhile, the Linux desktop ecosystem is deteriorating and should be used more and receive more financial support.
it seems like everybody here is using Macs for development, to the point that if you don't say you aren't, you are assumed to be on a Mac. Windows with WSL2 is actually pleasant to use; I can recommend trying it out. while WSL2 is technically a VM, the level of integration makes it basically native (if you use the VM disk for your workspace, which you should).
Yep, if you try to build something that is stored on /mnt/c you'll have the same horrible performance.
It also has some early functionality where you can turn that shell into a Docker container, so you can run the same shell + program in other environments or in the cloud
I wonder if it takes this approach because there's some issue with using Nixpkgs' dockerTools on macOS— those tools let you create Docker/OCI images without even having Docker installed.
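For reference, the dockerTools route looks roughly like this (a minimal sketch adapted from the usual Nixpkgs example, with illustrative image contents):
```
$ nix-build -E 'with import <nixpkgs> {};
    dockerTools.buildImage {
      name = "hello";
      config.Cmd = [ "${hello}/bin/hello" ];
    }'
$ docker load < result
```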
I tried to experiment with nix-shell myself, but I think it doesn't provide separation on the machine you run it on; it's not a chroot nor a Docker container. If you are interested in some level of separation to make sure that a dev environment is "safe" to run on your machine without making side effects to things external to the project, then I'm not sure nix-shell would be able to help, but I would be happy to learn there is an option to do otherwise.
I build ISOs from nix all the time to run special compute nodes. The machines boot from the ISO, so on every boot they get the same, sane environment.
> [I] read somewhere that [nix-shell]'s not really made for this purpose originally, rather for just building packages, bash being only one of the limitations.
Yeah, nix-shell was originally made for debugging Nix builds. The first capability it gained was setting up the build environment of a given package, which equips you with the same compiler, linker configuration, etc., as the package would get if you ran nix-build, or when the package is built on CI/CD or whatever. It even loads the bash functions that the build system invokes so that you can experiment with manually adding steps before and after them and things like that.
But it's gained other capabilities since then, like `nix-shell -p`, whose purpose is a more general try-before-you-buy CLI and magic shebangs. It also has two descendants, `nix shell` which is just about letting you set up software, and `nix develop` which is more oriented toward setting up whole development environments and all the env vars associated with it. Anyway I think that's mostly trivia; it doesn't pose any problems for devbox afaict.
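Concretely, the three flavors look like this (package names illustrative; the `nix shell`/`nix develop` commands still require enabling the experimental flakes features on most installs):
```
# ad-hoc try-before-you-buy
$ nix-shell -p ripgrep jq
# newer CLI: just put some packages on PATH
$ nix shell nixpkgs#ripgrep nixpkgs#jq
# newer CLI: a full development environment from a flake
$ nix develop
```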
> I tried to experiment myself with nix-shell, but I think it doesn't provide separation on the machine on which you run, it's not a chroot nor a docker container
That's true, and that's really the beauty of it: you can set up complex toolchains and use them as if they were simply part of your normal system, but without worrying that they unwittingly depend on parts of your base system that may be unique to you. Likewise, they don't require any permanent changes to your normal, global environment at all. If you've used Python before, you can think of nix-shell like a generalized venv.
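A quick illustration of that venv-like feel (store path abbreviated, and assuming node isn't otherwise installed):
```
$ nix-shell -p nodejs
[nix-shell]$ which node
/nix/store/...-nodejs-18.x/bin/node
[nix-shell]$ exit
$ which node    # gone again
```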
> If you are interested in some level of separation to make sure that a dev environment is "safe" to run on your machine without making side effects to things external to the project
Nix can provide sandboxing for builds of proper packages. So if you want to make sure your environment is complete, promoting your Nix shell development environment into a proper package may help you.
But the purpose of shells like this isn't to protect you from running `rm -rf /`, if that's what you're after. It doesn't protect you from dogecoin miners in your `npm install` hooks, if you're just using Nix to provide `nodejs` and then running `npm install` as usual.
What something like this does do is allow you to use all that software without installing it. So if you open up a new session, none of that stuff will be loaded.
Nix can generate container images and VMs for you, though, and that is also one of the things `devbox` can do, if your isolation concerns have more to do with security (disallowing access to your system) than 'purity' (disallowing dependency on your system, not installing things to your system).
I hope that makes sense :)
> But the purpose of shells like this isn't to protect you from running `rm -rf /`, if that's what you're after. It doesn't protect you from dogecoin miners in your `npm install` hooks, if you're just using Nix to provide `nodejs` and then running `npm install` as usual.
This is absolutely fair. I was mostly saying what I wish I could have: isolation (as in can't write outside of the current directory) together with the ease of getting packaged without installing them that nix-shell provides, without the overhead of docker or a vm. I don't think it's impossible to build although I appreciate that it may be out of scope for this particular project.
It seems easy to start using, but will I run into some issues a few months down the line like a package that's not available through Nix, or some Nix issue, and then I'm back to dealing with Nix's complexity only that I got there while staying clueless about Nix.
For instance, my Nix installation's `locale` was not `UTF-8` (after the first `devbox shell` run), but all my tools and code need the locale to be UTF-8. When you quickly check `locale` after `devbox shell`, it keeps returning your host shell's locale instead of the Nix one, so you can easily be fooled about which layer you're really running in. The `which` command also loses its purpose.
Then you need an extra step to fix the locale: https://nixos.wiki/wiki/Locales
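On non-NixOS systems, the fix from that wiki page boils down to pointing the locale archive at Nix's glibcLocales build (a sketch; pick whatever UTF-8 locale you need):
```
$ export LOCALE_ARCHIVE="$(nix-build --no-out-link '<nixpkgs>' -A glibcLocales)/lib/locale/locale-archive"
$ export LANG=en_US.UTF-8
```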
Then you find other similar issues that need to be taken care of, like conflicts with `rbenv`, `poetry` and so on.
It's an awesome idea, but with `docker`, `podman` or Fedora's `toolbox` I have more certainty about what's running.
With `docker` you can `docker inspect` or something like that. `devbox shell` tries to merge the nix layer with my own shell in an unpredictable way.
Anyway, nice project indeed.
Docker's ecosystem aims more for repeatable, not reproducible.
> At the end of the day we all need to consider versioning (i.e., the image version) the versions (i.e., package versions) anyway.
The granularity of pinned versions, and the feasibility of a culture where everything is pinned versus needing to know a crazy number of things that need pinning, make a big difference.
https://grahamc.com/blog/nix-and-layered-docker-images
Nix gives reproducibility; Docker gives repeatability but not reproducibility.
Also see the video "Use flake.nix, not Dockerfile": https://www.youtube.com/watch?v=0uixRE8xlbY
you can swim upstream and make your own docker image reproducible, but that doesn't change an ecosystem of images that aren't
> Devbox was originally developed by jetpack.io and is internally powered by nix.
If this wasn't mentioned, I'd have easily mistaken devbox to be an escaped implementation of Brazil VersionSets and Apollo VersionFilters (used internally at Amazon).
If I may, in the near-term does jetpack.io plan to continue to use and sponsor devbox's development? And, in the long-term, does jetpack plan to monetize this project, or donate it to a foundation, or some such?
Will definitely keep an eye on this.
I know what you're gonna say. You'll say, "It's 2022. We developers don't have internet connections; even when we do, our internet is off most of the time, and we can barely transfer 1.25 megabytes per second. How could we possibly use an IDE and filesystem monitor to copy a 10kB source file when we write to it, to a remote container + open shell with our app running? The technology is simply too complicated for us. We will never see such wonders in our lifetime."
And I would say to you: take courage. If you believe in yourself, and maybe with some luck, it might be possible for you to do development remotely, like every server admin has been doing over telnet and ssh for over 30 years.
Maybe in another 30 years, we will have gained the technological insight to be able to figure out how to have a team full of people to work on a single server at the same time, all connected to a network over an internet connection, with some crazy interface that keeps their shell session alive even if their internet disconnects. A man can dream...
Dictating to people how their workflow should look - _that_ is the antipattern. Developers should choose their OS, their editor, and virtually everything about their workflow. Forcing them to work remotely robs them of the opportunity - you can work graphically over SSH, and you can use SSH from Windows (if that is your preferred OS), but it's a hack that isn't always going to work quite right and is going to create unnecessary friction for people with different workflows than you or I. That's to say nothing of auxiliary tools, like debuggers, that people may want to bring to bear. Don't get me started on getting debuggers to work over SSH; yes it can be done, yes I have done it, no I don't ever want to do it again.
The infrastructure should bend to conform to the developer's needs, and not the other way around. Generally technology should conform to the needs of humans, and humans should not be asked to contort themselves for the benefits of technology.
For instance, at my last company, I was asked to use a Mac laptop so that the team could standardize on Mac. I complied to be a team player, and this turned out to be a mistake. It robbed me of so much efficiency in the first few months, and it never worked as well for me as Debian running on my personal laptop, which cost a _third_ of the price of that Mac. I had so many frustrating issues and things that just never worked properly. But for other people, it was a great choice; I have no interest in bashing the Mac - my point is that people should be allowed to choose and customize their own tools.
I'm not convinced it is. Back in the day, each developer had their workstation (not laptop) configured just so, with their own pile of `doit.sh` scripts to tickle the system in just the right way, to get things to compile and render and send it off to production. But we're no longer there. Developer velocity is a thing companies take seriously, so developer workflow is actually important to them, and standardizing on one true editor as the supported editor means that all the developers get improvements whenever the internal tooling team does a release. And I'm saying this as a vim person who's tried to move over to VSCode (and failed so far).
Of course, smaller companies don't have an internal tooling or developer productivity team, and thus allowing people to choose their own tools is optimal. But I've also seen the big-picture efficiency loss that results from every developer having a bespoke configuration that's intelligible only to themselves, and the inability to unilaterally improve people's tooling and integrations with the various systems they interact with. Once the team gets debugging over SSH working or whatever, they can just deploy that, in a working state, to everyone.
There's a world where Slack is the one true communication method and everyone is on a Mac. Unfortunately for me, I'm set in my ways and would rather use Ubuntu on a ThinkPad with vim, but that doesn't mean I haven't seen that other world.
When it comes to everyone building things the same way - that's what containers and CI/CD are for. If we didn't have those tools, then I might be more inclined to agree, but we do and they work. Do they work perfectly? No. But they ought to surpass this fairly low bar. If they didn't, then I don't see how we'd have confidence in a centralized solution either.
I literally cannot use an editor other than neovim without hurting myself due to an RSI problem. Believe me, I have tried them all. If I was told that I must adopt VSCode, I would literally have to quit for my health. That being said, VSCode is an excellent tool, and some people I know with RSI issues swear by it. More power to them.
Developer velocity is definitely not going to be helped by handicapping your developers and forcing them to use tools that don't quite fit their workflow. I'm reminded of an anecdote about fighter pilots, it may be apocryphal, I don't know. But it goes like this; the air force was designing the seat for their fighter planes. So they took measurements of a statistically significant number of fighter pilots (all or most of whom were men), averaged them, and designed a seat to accommodate that figure. But pilots complained the seats were uncomfortable, and as the demographics changed and more women became fighter pilots, they were very poorly served by the chairs. Eventually they realized that they had designed the seat for a body that no one had. So when they redesigned it, they designed a seat which was customizable, and allowed each pilot to get a seat which was comfortable for them.
Anointing a "one true workflow" is a similar mistake. It will chafe at your developers and you'll lose velocity to the chafing. Over time requirements will change and the workflow will cease to meet anyone's needs, and if your developers remain productive it will be because they are going behind your back to use what actually works for them. Your developers will leave you because they're tired of being patronized, and you'll lose velocity and institutional knowledge onboarding new people.
The internal tooling team can't be all things to all developers, that's true. Let them ship what they ship, and let other people figure out how to adapt it into their workflows. If someone is performing well and getting stuff done, do you care if they're using the internal tools? Or if they've wrapped the internal tool in some macros or something to better suit them? If someone is using the anointed workflow but can't work for more than an hour at once because it's too mouse heavy and it hurts their wrist, is anyone in the situation happy?
Single repository, yes. Single development environment, no.
Development environments should be isolated, so developers don't step on each other's toes...
Also, centralised cloud development machines are, by definition, a single point of failure, with small gains in consistency of development experience...
> all connected to a network over an internet connection, with some crazy interface that keeps their shell session alive even if their internet disconnects.
You know what's better than this? Not relying on the internet connection at all!...
Decentralized development on local machines is simply a better experience and relies less on a giant cloud infrastructure
Why is this desirable?
As for disadvantages:
- no work offline
- no freedom for different OS, IDE, tools
- Single Point of Failure (that server goes down, connection drops and nobody can do any work).
IMO the cons are much worse than the pros. I am curious what other advantages you would see in this approach.
EDIT: grammar, format
I never saw a company developing on a shared server. Do you work for one doing that?
In that environment I expect developers to need separate environments so that one person's mistake doesn't stop the whole team. Let's say: Docker containers running on the server instead of pulling an image locally. I don't see much of a gain.
Personally I could use my emacs to edit files of the server, my terminal to ssh on the server and my browser getting pages from there. For people using IDEs, those IDEs should either work in a different way or be in a remote connection (RDP, VNC, X11.) I remember Citrix thin terminals but I don't remember developers using them. They were for end users.
It was a lovely environment to work in, in part because sharing our work was a matter of "yeah, I've stood that up on port 6001, can you take a look?" Or "take a look at /home/foo/whatever.py, I think the bug's in there but I can't spot it".
The other part was that it was an absolute beast of a machine for the time. RAM for days, and more cores than hot dinners. And, critically, a very close match to our production machines. That matters more than you'd think, for a large set of problems.
Can you believe the nerve of that guy? Keeps trying to convince us that a single centrally managed system is easier or more reliable (or whatever) than 30 randomly configured ones on multiple operating systems! I think he's nuts, personally. I have half a mind to tell my manager that he's trying to disrupt our ideal workflows.
And really, what's the problem with testing my app in an environment that isn't the same as production? Yeah, sure, I might save some time not having to maintain a local environment that everyone on my team can replicate. And, sure, developing remotely would let me change laptops without spending a week to set everything up again; but it's only a week, I have a ton of those left. And yeah, maybe the cloud network has a bunch of services that I need proxies and VPNs and other things to test from my laptop. And granted, doing development on a server with 40 CPU cores and the same network as the database and webserver is faster than my laptop.
...But if all that was better, I'd be doing it already. If it's on my laptop, I know how it works. What's better than what you already know?
It sounds like something really frustrating is going on though, and I hope it works out.
You sound really frustrated because you're being asked to do things that you feel aren't or shouldn't be part of your job, and perhaps it's taking time from your other responsibilities and creating a larger workload for you. It seems like maybe you've come to resent your coworkers and their individual preferences for the complexity it creates and the stress it causes you. It seems like what you want to do is erase that complexity, take away their agency, and centralize on a solution that works well for you.
There's a couple of things to consider. For one, what you're asking is for your coworkers to have a degraded work experience so that your work experience can be better, which is totally understandable but not reasonable.
Another one is that this is a pretty well understood problem with known solutions - why is it you can't get them implemented? Is there something going on in the politics or culture of your workplace that is stopping you? Is there some kind of elephant in the room, and would things get better if you called it out?
The last one is, are you getting burned out? Were you always this frustrated? Should you take some time off? Should you start looking for other jobs?
I can really feel the stress and frustration in your tone, and that sucks. I hope things get better for you.
Granted, we don’t run any node, python or Ruby, because yeah I don’t want to spend all of my time debugging monkey patching bugs.
This is a solved problem, see eg mosh. Not so crazy.
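For example (hostname hypothetical), mosh plus tmux gives you roaming and reconnect in one line:
```
$ mosh dev.example.com -- tmux new -A -s work
```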
But, I don't want my development environment running on the same box as other people's environments, it just doesn't make sense: people stepping on each other's toes, the whole box breaking... I really don't want that.
Give me an environment that's possible to build from scratch with a push of a button, and I'll be happy. Anyway in a larger project there needs to be a person/team responsible for keeping the dev env up - either they can keep the shared one up with whatever black magic, or they can make sure the button to create the environment will work for people wherever and whenever they want.
But developing on a monolithic machine may not be. The development environment should be clean and isolated, and products like Gitpod and Coder are promising.
Besides this, maybe you can have a look at https://github.com/tensorchord/envd and https://github.com/okteto/okteto
Curious engineers with oddball configurations greatly contribute to the overall health of a codebase. Forcing these folks to use a standardized configuration is a missed opportunity at best, and disgruntling at worst.
A cool option is give everyone the same laptop and set them up from a disk image. Keep data you want to keep on another partition. Reimage every so often with the latest required tooling. New starters will be thankful.
Today, getting a server more performant than my laptop (an M1 MacBook Pro) is not economically viable.
I can't even imagine how expensive it would be to run like this. Sharing an environment is a _terrible_ idea, you need isolated environments.
this is of course extremely hated on HN, which understandably loves independence and self reliance.
Is it logically consistent with other things I don't own? Like Chrome, within which all my apps run?
No. But is there something that feels right about having all my code running on the CPU on my lap? Unaccountably, yes.
still, your feelings matter. imo, totally welcome to them.
some people also like keeping gold in their safes, cash under their mattress, running on their own power and food off grid. i will fight for their right to do that.
but also i observe that most people have a demonstrated tendency towards centralization for convenience, and the technology is coming along for mass availability of this tech.
- security. the most common vector of attack is through dev boxes. centralizing creates a single point of failure.
- vendor capture. right now, it's cheap to "do a dev". if you put everything in the cloud, the FAANG that be can start charging rent.
- vscode or (insert cloud IDE here) might work great for some, but my local emacs is better. fight me.
- the longer the distance between your devs and the metal, the dumber they get. you learn best by working with visibility into what your code is doing. many of the efficiencies mentioned will lead to educational drift, which is bad.
- connection speed. he gets real hand-wavy about "developing on a plane", but even developing over a satellite connection makes latency-dependent tasks super slow. try running Ansible from your Starlink and get back to me.
How does this guarantee consistency across Docker, Kubernetes and the local environment?
How are base images selected? Where do the packages come from? Are the packages all Alpine/x86_64 or from Nix directly? Who builds them? Who signs them? Who deploys them?
Following the demo, this seems unlikely to have been made with reproducible builds in mind, because it shows Go's version to be darwin on the host system, which probably isn't what's going to be used in Docker!?
I believe, though, that something has to restart from scratch rather than rely on Nix.
Nix ushered in the idea but needs a new face. Building on top of it may be too problematic long-term.
https://github.com/devboxup/devbox
I'm curious if you attempted to support macOS by doing this with Nix's dockerTools and cross-compiling (there may be better sources, but it's at least hinted at in https://nix.dev/tutorials/building-and-running-docker-images...)? If so, I'm wondering where that failed or bogged down?
---
Background: I build a tool (https://github.com/abathur/resholve) for ~packaging Bash/Shell (i.e., for demanding all dependencies be present). The tool's technically agnostic, but I built it specifically to fix Shell packaging in Nix.
I think it could benefit a lot of other Shell projects, since one of Shell's big tribulations is dealing with heterogeneous environments, but most Shell projects wouldn't see much reason to endure the pain of adopting Nix if they still had to support the heterogeneous environments.
Much like you're doing here, I've been hoping to figure out how to build a Nix-based packaging flow that can generate deployable standalone bundles or containers. It'd be a heavy way to bundle Shell, but I imagine some projects would take the tradeoff for predictability and reduced support load. But since it would need to take place within a Nix build, I'd need to cross-compile for it to work on macOS. Hoping you know if it's a dead-end or not :)
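To make the question concrete, the flow I have in mind looks something like this (the flake output name is hypothetical, and it assumes a remote x86_64-linux builder is configured, since the image contents have to be built for Linux):
```
# from macOS, build the Linux image output on a remote builder
$ nix build .#packages.x86_64-linux.containerImage
$ docker load < result
```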
That said, I do want to experiment with building the container directly through nix and seeing if there’s advantages to doing that. I just haven’t had the time yet.
Usually, I use VS Code development containers, where system programs are installed via Docker. However, it's quite tedious to manage installation and versioning of these programs.
I can currently add a .tool-versions file in my project folder and run `asdf install` to get everything on the same versions for all our devs.
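That is, something like this (versions illustrative):
```
$ cat .tool-versions
nodejs 18.9.0
terraform 1.2.9
$ asdf install
```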
nix goes the next step and has packages defined as an install script plus all their dependencies (all the way down to the basic C library, compilers, etc.). It caches everything based on a hash and provides a public repository where it will just download prebuilt versions of things instead of compiling them from scratch every time. It can do this because it has extremely strict hermeticity and reproducibility guarantees for all of its packages--asdf has none of this and you'll almost certainly just be compiling tools over and over or pulling down pre-built versions that will probably work (as long as you carefully read the package readme and installed all its dependencies).
Don't get me wrong, asdf is nice and great for simple things. If it works for you keep using it. If you start to run into trouble with the quality of its packages or you start writing your own packages, you might want to look at a more comprehensive system like nix.
Also devbox can dump Docker containers for you
This installs certain tools like helm, terraform or kubectl in the specified version, as well as python dependencies or ansible roles and collections, in a container to use on your laptop or in CI/CD pipelines.
Another nice addition for this is https://github.com/upciti/wakemeops, which provides an apt repository for many tools from the cloud landscape.
https://www.dev-box.app/
https://developer.nvidia.com/devbox
https://devbox.ewave.com/#/
https://azure.microsoft.com/en-us/blog/announcing-microsoft-...
A better option is a direnv plugin that modifies environment variables inside of your editor to match your nix shell.
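For example, with direnv's built-in `use nix` (or the faster nix-direnv variant), assuming the project has a shell.nix:
```
$ echo 'use nix' > .envrc
$ direnv allow
```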
https://github.com/containers/toolbox/
https://github.com/89luca89/distrobox
With the downside of not having the option to select a Linux distro and being locked to the nixpkgs repositories?
WSL/Nix should work.
And it would be great if users could build and deliver devbox images with CI/CD systems like GitHub Actions.
Docker is only lightweight in Linux.
Great to see it being repurposed and presented with a much nicer UX this time.
That said, we'll improve the error message so that when nix is not installed it tells you you should install it.
EDIT: I installed Nix with the old-fashioned "pipe shit to bash" method and it seems to work now. When I say "work", I mean it in the sense of "doesn't fail", because it's been stuck at `devbox shell` for a good five minutes now with no indication.
Not to be too negative, I think this is a great idea and is going to be fantastic, but I guess, like any pre-release software, there are some teething problems.
The Nix package for Debian has some other deviations in the way it's plugged into the system and the initial setup. The default channel (source of packages) doesn't get set up for you, the PATH ordering is different, and NIX_PATH and the NIX_PROFILES_PATH (and maybe PATH?) aren't configured for `env_keep` with PAM's `sudo` configuration so interactions with `sudo` are different. Anyway failure to find any packages is probably due to the lack of enabled channels or the setup not being completed (needing that systemd-setup package).
(All of this stuff is up to the maintainers of the Debian package and Debian policy. It's fine, but it violates the assumptions of some third-party Nix tooling.)
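For completeness, restoring the missing default channel by hand is just the standard two commands from the Nix manual:
```
$ nix-channel --add https://nixos.org/channels/nixpkgs-unstable nixpkgs
$ nix-channel --update
```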
> For the best experience, we recommend installing Nix via the official installer, i.e., via this one-liner:
> > curl | bash, blah blah blah
> (Existing Nix users who have or prefer an alternative setup can still use devbox! See this page about compatibility: [link to a 1/2 page reference doc])
(Happily, the experimental features now experiencing widespread uptake in the community will make some of those differences I outlined less relevant.)
Sorry for the hiccups – it is our first pre-release version, but feedback like yours helps us harden it. It should definitely be considered beta software right now, but we'll have it hardened before announcing a GA/1.0 version.
https://docs.microsoft.com/en-us/azure/dev-box/overview-what...
I am not sure if they would send a cease-and-desist, but you might want to consider a new name for this tool.
However, Microsoft has a team of lawyers and has been known to go after other tech products to protect its trademarks and copyrights.
Devbox is an awesome tool. If it becomes popular enough, I would not be surprised if it attracts unwanted legal attention. (Sadly, it has happened to other tools and products.)
The container export functionality is based on BuildKit via the plain `docker buildx` CLI: https://github.com/jetpack-io/devbox/blob/main/docker/docker...
and it uses CUE to validate its configuration, which is JSON.
All-in-all it actually looks extremely simple. I guess the basic idea is to give you access to a subset of the power of Nix and all the goodies in Nixpkgs without exposing you to the Nix language or the Nix CLI.
Longstanding Nix users will probably not be super excited about this (though they might as well try it! it does look very nice to use). However, for folks who are put off by Nix's reputation for difficulty but might be tempted to enjoy freely drawing upon the 80,000+ software packages in Nixpkgs, this might be a way to have your cake and eat it, too.
What you describe is also a sensible use of CUE. I suppose if one of your goals is 'avoid asking users to use a language they don't already know', it might be for the best. :)
But personally I wouldn't mind seeing a bit of CUE usage to work with devboxes, hehe.
For continuity, here is my aside on CUE from that comment, apropos of nothing but my lack of reading comprehension :D
----
CUE is, like Nix, a simple, configuration-oriented DSL. But unlike Nix, it's really just extended JSON (a JSON file can be thought of as a CUE file where all of the values are concrete), it has a different kind of type system where values are types, and the language is decidedly not Turing complete. The type system is pretty neat, and the basic idea is that you can put constraints in the place of values, and write data structures that contain mixes of each. Then later you can apply those constraints to a configuration/specification and CUE will tell you whether they're compatible. It's cool because you can write your specification and your data in the same format.
While CUE is not a full programming language, it does have a small stdlib and basic 'comprehensions' (like list comprehensions in Python) for generating data structures from other data structures. This gives at least a little flexibility and some hope of concision when defining repetitive data structures.
Having used Nix a lot and CUE a little, I'm not sure which set of tradeoffs in terms of power and simplicity is the right one, but I do think that CUE's choices are interesting and reasonable, and its type system is clever and easy to work with.
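As a tiny taste of the values-are-types idea (the schema and file names are made up for illustration):
```
$ cat schema.cue
#Config: {
    packages: [...string]
}
$ cue vet -d '#Config' schema.cue devbox.json
```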
```
$ nix-shell -p python27
```
If you are comfortable with nix itself, then it allows you to do a lot more than devbox does, but at the cost of extra complexity.
Depends on what level of abstraction you're looking for.
If you wanted you could totally use this most of the time and then write custom Nix expressions only if you felt like that would serve you better. For many use cases, this might be sufficient.