Dependencies, env vars, dev databases, ports: everything written up in code and ready to use once you open the workspace, all without messing up your host PC. It also works over SSH.
You can also connect via JetBrains Gateway.
Isn’t it just like remote development? I’d expect you to be able to e.g. run Vim inside the container, use Emacs TRAMP to connect to it, or `rmate` to edit the files inside in e.g. Sublime.
It's not ergonomic at all though, because now you need two IDEs running: the one that manages the container and the one inside the container.
even emacs supports it: https://happihacking.com/blog/posts/2023/dev-containers-emac...
I've seen devcontainers used in various projects, and how they further widen the divide between dev and ops. Your team should standardize on a single OS and software stack and life will be easier; developers will also learn something about ops and the systems they deploy to. I know, unpopular opinion, given the pervasiveness of Kubernetes and backend systems developed on Macs.
[0] https://plugins.jetbrains.com/plugin/21962-dev-containers
Now, for every project I start, I quickly create a Docker container with all the dependencies the project needs, and then I develop inside the container using VS Code. If I need to develop on another machine (a laptop instead of my desktop, for example), I just pull the Docker image, and everything is already set up to start coding.
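As a sketch, the per-project setup described above fits in a `.devcontainer/devcontainer.json`; the name, Dockerfile reference, port, and command here are all illustrative:

```json
{
  "name": "my-project",
  "build": { "dockerfile": "Dockerfile" },
  "forwardPorts": [8080],
  "postCreateCommand": "make deps"
}
```

VS Code's "Reopen in Container" command builds the image and drops the editor inside it; on a second machine, the same file reproduces the environment.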
> for the fun of needing to compile everything myself
> I had to compile VSCode server myself
You kind of did that to yourself
My next goal was to use Docker for this, but since I don't know Docker yet, I'm not sure whether it would be a good idea for having a ready prod env right after a commit.
Yesterday, though, I ran into an issue where Bun was occasionally not completing HTTP requests. I thought it was a bug in Bun (it might be), but the issue went away when I ran it outside of a container.
Containers do add layers which sometimes cause issues, or at least make diagnosis trickier.
Does the container approach bring something more than using Nix?
You get process isolation, remote execution, the ability to cache and restore your environment… a long list of added benefits of using a container.
Most folks who would be attracted to this solution are already using containers.
Personally, for me, this is basically the "cloud Vagrant" that HashiCorp should have launched over a decade ago but couldn't get out of their own way to ship. Whether it's containers or VMs, it just makes sense.
It's made sense for a long time. We started using VMs for dev work when I was deploying servers on Windows back in the early 2000s. Registry chaos, dealing with patches, and having to sit through multi-DVD installs of Visual Studio tooling just made this make sense.
Personally, I don't understand people who do development work without at least a VM. If not a VM, then a netboot remote host that you can restore to a known state really quickly without interfering with your "productivity" desktop (email, calendar, chat, videoconf, document editing, browser, &c.).
It is easy to hype things if you do not consider the additional complexity, and the corresponding downsides of each benefit in that "... long list".
Note that while I do not use Dev Containers, I use Docker extensively in my development process. The point I am making is that it is not a simple cost/benefit analysis.
It's great to see products like GitHub start to bake it in. In my context, the cloud hosted solution makes perfect sense. I'm a GCP user, and Google has a competing offering that I would use in a heartbeat if GitHub would allow for direct integration, or if Google would get off their ass and launch a GitHub competitor / purchase GitLab and make it a first-party offering integrated directly with Cloud Workstations.
I've used VMs on Windows, netboot on *bsd without VMs, Vagrant on Mac, and random wrappers around Docker. Personally, I like that MS has integrated VS code with the remote execution part. For developers that want that style of IDE, it's freaking magic. For the rest of us, you can just continue using vim/emacs like you always did.
My other personal favorite is using the same base VM/container image for your dev environments and your CI/build environment. You're guaranteed to have the exact same tools/versions available in both places. It lets you easily do testing/validation before updating or changing things, makes onboarding new developers easier, makes recovering from a lost workstation trivial, allows you to scale your dev environment on demand instead of over-provisioning your dev workstations and barely using them, and makes isolating your dev environments behind a firewall/VPN much easier.
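As one hedged example of sharing the image between dev and CI: GitHub Actions can run a job's steps inside an arbitrary container via the `container` key, so pointing it at the same tag your dev environment uses (the image name below is made up) keeps the toolchains identical:

```yaml
# .github/workflows/ci.yml
jobs:
  test:
    runs-on: ubuntu-latest
    container:
      image: ghcr.io/example/dev-base:1.4  # same tag the dev environment references
    steps:
      - uses: actions/checkout@v4
      - run: make test
```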
It's not a _simple_ cost/benefit analysis. But it's still just a cost/benefit analysis.
Personally, I wish the GitHub Desktop application on Mac/Windows had the ability to launch local workstations using a cache of your repo's devcontainers, in the event you want to try working offline, or if you actually have headroom on your local machine that you want to use.
Even if you are doing native app development, a lot of folks still use VMs because testing on different OS versions is such a PITA. The nice thing about this approach is that you aren't just walling off the version of your test environment; you're also walling off the version of your dev tooling and all the supporting bits.
Oh, and last but not least, I can do my work assuming a Linux environment regardless of whether I'm hosting on Windows/Mac/Linux.
If I'm here to hype anything, it's to convince developers who use Macs to stop trying to treat them like a Unix. Put your Unix in a VM and treat your Mac like a Mac. Life is so much easier.
> Does the container approach bring something more than using Nix?
This came out in 2019: https://github.com/microsoft/vscode-docs/blob/3bafb9f610bd6b...
A form of this which mixes devcontainer and nix might be the holy grail
https://www.youtube.com/watch?v=kpBXrsVg83Y&t=941s
Positives
- You lean on the power of dev containers while still using a portable spec
- You get access to nixpkgs which is arguably the most exhaustive package library
- You get the "true reproducibility" of nix over docker
Negatives
- Docker remains the king for ephemeral databases, more convoluted to manage with nix
In your case, isn't maintaining two dev environments annoying?
(nobody on my team uses VS Code or devcontainers, but everyone enjoys using Docker)
https://github.com/microsoft/vscode-docs/blob/3bafb9f610bd6b...
Hacker Olds, amirite.
All of this to say, I wish articles were upfront about their original publication date.
- Re-opening the project in a dev container felt like overhead
- I sometimes struggled with setting the work directory
- Restarting the development container for various reasons interrupts development
I've been doing this for years, it seems to me like the absolutely hands-down easiest way to keep projects encapsulated and easily shareable with coworkers.
Set up the env in a docker container, mount a volume to a local fs where all the sources live, ssh into the container if necessary, interact with app(s) within the container over http.
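That workflow can be sketched in a small `docker-compose.yml` (paths and ports are illustrative); `docker compose exec dev bash` then stands in for the ssh step:

```yaml
services:
  dev:
    build: .                     # env is defined in the Dockerfile
    volumes:
      - ./src:/workspace/src     # sources stay on the host fs
    ports:
      - "8080:8080"              # talk to the app over http
    command: sleep infinity      # keep the container alive to exec into
```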
What's MS adding here, what am I missing? They streamline the process and integrate scripts with VS?
But anything with VS(Code) is hyped up, since half of the OSS ecosystem is now on the take, directly or indirectly, from MSFT. Until MSFT pulls the plug.
There are other goodies, but those are very VS Code specific, like the ability to declare which extensions are used in the dev container. That way everyone on your team can very quickly end up with the same extensions, and you avoid globally installing an extension that you only need for a single one of your projects.
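For reference, that extension list lives under `customizations.vscode` in `devcontainer.json`; the image and extension IDs below are just examples:

```json
{
  "image": "mcr.microsoft.com/devcontainers/typescript-node",
  "customizations": {
    "vscode": {
      "extensions": [
        "dbaeumer.vscode-eslint",
        "esbenp.prettier-vscode"
      ]
    }
  }
}
```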
--
0: not the GUI part, a daemon.
DevContainers are good. I find them especially nice for working on k8s stuff. For those comparing them against Nix, it's likely that most developers are already familiar with Docker, while Nix's language and tools have a pretty steep learning curve.
If you are doing a k8s deployment anyway, then Tilt is really quite nice once properly set up.
The one thing that isn't so nice is local package caching. That's a hard problem to solve, and containers make it annoying to get right.
I wanted to use it via ssh, but the port forwarding flow (that I figured out) was a little wonky:
- spin up pod
- adjust .devcontainer to open port
- --recreate
now my port is open. Do you happen to know if there is a smoother way to do that?
I expected to be able to do it in the `devpod up` command with a simple --port-forward=3000, but I didn't find that documented.
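Not sure about a flag either, but the devcontainer spec's `forwardPorts` field at least keeps the port list declarative, so the adjust-and-recreate step becomes a one-line edit (port and label here are illustrative):

```json
{
  "forwardPorts": [3000],
  "portsAttributes": {
    "3000": { "label": "app" }
  }
}
```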
Docker Compose also has a "watch" command that can do a lot of the things devcontainers do, and I use it for simpler setups.
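For example, a `develop.watch` section in the compose file (paths here are illustrative) syncs edited sources into the running container and rebuilds the image when dependency manifests change; you start it with `docker compose watch`:

```yaml
services:
  web:
    build: .
    develop:
      watch:
        - action: sync        # copy changed files into the container
          path: ./src
          target: /app/src
        - action: rebuild     # rebuild the image when deps change
          path: package.json
```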
(I'm also a biased person as I'm a Nix person at heart)