• lewantmontreal 2 days ago |
    Don’t fall for this. It’s too good and I can’t get out now since nothing else than vscode supports it.

    Dependencies, env vars, dev databases, ports: everything written up in code and ready to use once you open the workspace, all without messing up your host PC. It also works via ssh.
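
    For anyone who hasn't seen it: all of that lives in a `.devcontainer/devcontainer.json` checked into the repo. A minimal hypothetical sketch (the image tag, ports, and env values here are illustrative, not from the parent comment):

    ```json
    {
      "name": "my-app",
      "image": "mcr.microsoft.com/devcontainers/python:3.12",
      "forwardPorts": [8000, 5432],
      "containerEnv": { "DATABASE_URL": "postgres://localhost:5432/dev" },
      "postCreateCommand": "pip install -r requirements.txt"
    }
    ```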

    • numbsafari 2 days ago |
      You can SSH in and use vim or emacs the way god intended.

      You can also connect via JetBrains Gateway.

    • setopt 2 days ago |
      > nothing else than vscode supports it

      Isn’t it just like remote development? I’d expect you to be able to e.g. run Vim inside the container, use Emacs TRAMP to connect to it, or `rmate` to edit the files inside in e.g. Sublime.
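
      (With Emacs 29's built-in Docker TRAMP method, for example, opening a file inside a running container is just a path like the one below; the container name and file path are illustrative. Older Emacs versions need the docker-tramp package.)

      ```
      /docker:my-dev-container:/workspace/src/app.py
      ```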

      • osigurdson 2 days ago |
        If using vim, you don't really need any of this. kubectl exec is enough. Too bad I can never seem to get good enough at vim for this workflow (even though I really want to).
      • LelouBil 2 days ago |
        IntelliJ supports it (a bit) via remote development. It starts up a container and runs the IDE backend inside of it.

        It's not ergonomic at all though, because now you need to have two IDEs running: the one that manages the container and the one inside the container.

    • bananapub 2 days ago |
      huh? are you doing a bit or just didn't look into it at all?

      even emacs supports it: https://happihacking.com/blog/posts/2023/dev-containers-emac...

      • jayemar 2 days ago |
        Thanks for the link, this definitely didn't seem to be supported a few years ago when I last checked.
    • mhitza 2 days ago |
      JetBrains has had an official Dev Containers extension for a while now [0], though I can't vouch for how well it works.

      I've seen devcontainers used in various projects, and how they lead to an ever-bigger divide between dev and ops. Your team should standardize on a single OS and software stack and life will be easier; developers will also learn something about ops and the systems they deploy to. I know, unpopular opinion, given the pervasiveness of Kubernetes and backend systems developed on Macs.

      [0] https://plugins.jetbrains.com/plugin/21962-dev-containers

      • shepherdjerred 2 days ago |
        JetBrains' implementation is far worse, though I hope Fleet will be their saving grace.
      • number6 2 days ago |
        I tried JetBrains version first. Sadly it doesn't really work well.
  • gabrielteixeira 2 days ago |
    I recently started using this workflow and I'm very pleased by it.

    Now, for every project I start, I quickly create a Docker container with all the dependencies the project needs, and then I develop inside the container using vscode. If I need to develop on another machine (a laptop instead of my desktop, for example), I just download the Docker image and everything is already set up to start coding.
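
    The nice part is that this per-project setup is just a couple of files checked into the repo. A hypothetical scaffold (base image, project name, and port are placeholders, not from the comment above):

    ```shell
    # Scaffold a minimal dev container config; all names are illustrative.
    mkdir -p .devcontainer

    # A Dockerfile pinning the project's toolchain.
    printf '%s\n' \
      'FROM node:20-bookworm' \
      'RUN apt-get update && apt-get install -y git' \
      > .devcontainer/Dockerfile

    # devcontainer.json tells VS Code how to build and enter the container.
    printf '%s\n' \
      '{' \
      '  "name": "my-project",' \
      '  "build": { "dockerfile": "Dockerfile" },' \
      '  "forwardPorts": [3000]' \
      '}' \
      > .devcontainer/devcontainer.json
    ```

    Opening the folder in VS Code then offers to reopen it inside the container.
    
    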

  • Neywiny 2 days ago |
    I like this but I use an x86 32-bit box for my "home lab" (for the fun of needing to compile everything myself) and it'd be nice if VSCode supported that natively. As is, I had to compile the VSCode server myself, which isn't exactly what I wanted. Unsure if I can even add 32-bit x86 to this functionality.
    • wongarsu 2 days ago |
      > 32 bit

      > for the fun of needing to compile everything myself

      > I had to compile VSCode server myself

      You kind of did that to yourself

  • spaniard89277 2 days ago |
    Mmm, I already do this with LXC, but could be better integrated, I guess?

    My next goal was to use Docker for this, but since I don't know Docker yet, I'm not sure if that would be a good idea for having a ready prod env right after a commit.

  • darkxanthos 2 days ago |
    We've been using this where I work for over a year and I'm never going back.
  • topaztee 2 days ago |
    is this like rookout?
  • christophilus 2 days ago |
    I’ve been developing inside of Podman containers using Neovim. It’s mostly great. I love that my machine doesn’t get cluttered with random dev dependencies.

    Yesterday, though, I ran into an issue where Bun was occasionally not completing http requests. I thought it was a bug with Bun (it might be), but the issue went away when I ran it outside of a container.

    Containers do add layers which sometimes cause issues, or at least make diagnosis trickier.

  • humanfromearth9 2 days ago |
    Somehow I still prefer using Nix to have consistent dev env...

    Does the container approach bring something more than using Nix?

    • numbsafari 2 days ago |
      Use nix to build your container.

      You get process isolation, remote execution, the ability to cache and restore your environment… a long list of added benefits of using a container.

      • ninetyninenine 2 days ago |
        Using nix and containers just ups the complexity of a project by so much.
        • numbsafari 2 days ago |
          Using nix ups the complexity of a project by so much.

          Most folks who would be attracted to this solution are already using containers.

          Personally, for me, this solution is basically the "Cloud Vagrant" that Hashicorp should have launched over a decade ago, but couldn't get out of their own way to do. Whether it's containers or VMs, it just makes sense.

          It's made sense for a long time. We started using VMs for dev work when I was deploying servers on Windows back in the early 2000s. Registry chaos, dealing with patches, and having to sit through multi-DVD installs of Visual Studio tooling just made this make sense.

          Personally, I don't understand people who do development work without at least a VM. If not a VM, a netboot remote host that you can restore to a known state really quickly without interfering with your "productivity" desktop (email, calendar, chat, videoconf, document editing, browser, &c.).

          • ninetyninenine 2 days ago |
            Containers make sense. Nix plus containers is unnecessary complexity imo
      • talkingtab 2 days ago |
        Cost-benefit analysis. The benefits: you get process isolation and remote execution. The cost: additional complexity. More costs: processes are isolated, so integration is harder; execution is remote instead of local; etc.

        It is easy to hype things if you do not consider the additional complexity, and the corresponding downsides of the benefits in the "... long list".

        Note that while I do not use Dev Containers, I use docker extensively in development process. The point I am making is that it is not a simple cost/benefit analysis.

        • numbsafari 2 days ago |
          I'm not hyping anything. This is how I've worked for over two decades now. Every time I've deviated from this path, it's been annoying and painful.

          It's great to see products like GitHub start to bake it in. In my context, the cloud hosted solution makes perfect sense. I'm a GCP user, and Google has a competing offering that I would use in a heartbeat if GitHub would allow for direct integration, or if Google would get off their ass and launch a GitHub competitor / purchase GitLab and make it a first-party offering integrated directly with Cloud Workstations.

          I've used VMs on Windows, netboot on *bsd without VMs, Vagrant on Mac, and random wrappers around Docker. Personally, I like that MS has integrated VS code with the remote execution part. For developers that want that style of IDE, it's freaking magic. For the rest of us, you can just continue using vim/emacs like you always did.

          My other personal favorite is using the same base VM/container image you use for dev environments for your CI/build environment. Guaranteed to have the exact same tools/versions available in both places. Lets you easily do testing/validation before updating/changing things. Makes on-boarding new developers easier. Makes recovering from a lost workstation trivial. Allows you to scale your dev environment on-demand vs. over-provisioning your dev workstations and barely using them. Makes isolating your dev environments behind a firewall/VPN much easier.

          It's not a _simple_ cost/benefit analysis. But it's still just a cost/benefit analysis.

          Personally, I wish the GitHub Desktop application on Mac/Windows had the ability to launch local workstations using a cache of your repo's devcontainers in the event you want to try and work off-line, or if you actually have headroom on your local machine that you want to use.

          Even if you are doing native app development, a lot of folks still use VMs because testing on different OS versions is such a PITA. The nice thing about this approach, is you aren't just walling off the version of your test environment, you're also walling off the version of your dev tooling and all the supporting bits.

          Oh, and last but not least, I can do my work assuming a Linux environment regardless of whether I'm hosting on Windows/Mac/Linux.

          If I'm here to hype anything, it's to convince developers who use Macs to stop trying to treat them like a Unix. Put your Unix in a VM and treat your Mac like a Mac. Life is so much easier.

      • throw156754228 2 days ago |
        Sorry, `docker run foo` is already running isolated; what do you mean by process isolation in the context of Docker?

  • remlov 2 days ago |
    I fail to see how this is any better than IntelliJ/PyCharm's remote interpreter feature, the Services tab, and a proper docker-compose file. It feels like VSCode is finally catching up.
  • Yanael 2 days ago |
    I started to develop in containers before VSCode introduced the dev container to keep my local machine clean. A few years ago, I switched to the VSCode dev container, and the integration is very good. Having the ability to have a dev environment ready per project is very neat. We started to adopt it in my company. As a team it saves a lot of hassle, and onboarding is much faster. However, we have encountered some issues, mainly when we want to work with GPU and PyTorch dependencies, and that is the opposite of pleasant! Otherwise, now each new project/repo I create comes with a dev container.
    • synergy20 2 days ago |
      Same here; went from dev containers to just Docker, works well.
    • ikety 2 days ago |
      Recently went from using devcontainers to nix-shell. But in every new project I include instructions for both nix-shell + direnv as well as dev containers, so people can fully ignore nix if they choose.

      A form of this which mixes devcontainer and nix might be the holy grail

      https://www.youtube.com/watch?v=kpBXrsVg83Y&t=941s

      Positives

      - You lean on the power of dev containers while still using a portable spec

      - You get access to nixpkgs which is arguably the most exhaustive package library

      - You get the "true reproducibility" of nix over docker

      Negatives

      - Docker remains the king for ephemeral databases, more convoluted to manage with nix
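
      One concrete way to get that mix is the Nix "feature" for dev containers; a hypothetical sketch (the feature id, its version, and the image tag are assumptions, so check the current registry):

      ```json
      {
        "name": "nix-in-devcontainer",
        "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
        "features": {
          "ghcr.io/devcontainers/features/nix:1": {}
        },
        "postCreateCommand": "nix --version"
      }
      ```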

      • Yanael 2 days ago |
        Nix sounds like a great fit as well, but my experience with it has not been that great, and I feel it is not ideal for Python. I recently gave Flox a try, which simplifies the use of Nix. I use it from time to time, but it has not yet replaced my development environment.

        In your case, is maintaining two dev envs not annoying?

      • ossusermivami 2 days ago |
        I use devenv.sh and generate OCI containers out of it, which gives me kinda the best of both worlds!

        (nobody on my team uses vscode or devcontainers, but they enjoy using docker)

    • ronef 2 days ago |
      We were building something very similar to this in collaboration with the VSCode teams a while back at Meta. The goal was to get development on demand via hooking into some beefy Linux server farm while having VSCode stay as the local experience. Though one of the problems even back then was similar to what you are mentioning. It did, however, remove all last-mile network reliance during lockdown, which was a plus.
  • taspeotis 2 days ago |
    This feature is at least 5 years old, why isn't (2019) in the title?

    https://github.com/microsoft/vscode-docs/blob/3bafb9f610bd6b...

    Hacker Olds, amirite.

  • jFriedensreich 2 days ago |
    I much prefer using docker "compose watch"; I never fully managed to wrap my head around the relationship and proper setup of VSCode workspace, project, and container.
  • janpmz 2 days ago |
    I like the concept. What I didn't like was:

    - Re-opening the project in a dev container felt like overhead

    - I sometimes struggled with setting the work directory

    - Restarting the development container for various reasons interrupts the development

  • btbuildem 2 days ago |
    I'm not sure I follow -- do they suggest running VS Code locally, and the work/project inside the container?

    I've been doing this for years, it seems to me like the absolutely hands-down easiest way to keep projects encapsulated and easily shareable with coworkers.

    Set up the env in a docker container, mount a volume to a local fs where all the sources live, ssh into the container if necessary, interact with app(s) within the container over http.

    What's MS adding here, what am I missing? They streamline the process and integrate scripts with VS?

    • akfrt 2 days ago |
      They are not adding much. It is still Docker, the feature is from 5 years ago.

      But anything with VS(Code) is hyped up, since half of the OSS ecosystem is now on the take, directly or indirectly, from MSFT. Until MSFT pulls the plug.

    • emmanueloga_ 2 days ago |
      The missing piece is that the IDE runs in the same environment as your code [0]. This is useful, for instance, to communicate with the various LSP services [1] VS Code may need in order to provide proper support (auto-completion and all sorts of other IDE features). I mean, even something as simple as "Run this file" may not work if you just mount the folder somehow.

      There are other goodies but those are very VS Code specific, like the ability to describe which extensions are used on the DevContainer, that way you can very quickly have everyone in your team use the same extensions and also avoid having a globally installed extension that you only need for a single one of your projects.
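
      That part of the spec looks roughly like this (the extension ids here are just examples):

      ```json
      {
        "customizations": {
          "vscode": {
            "extensions": [
              "ms-python.python",
              "esbenp.prettier-vscode"
            ]
          }
        }
      }
      ```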

      --

      0: not the GUI part, a daemon.

      1: https://en.wikipedia.org/wiki/Language_Server_Protocol

      • btbuildem a day ago |
        Oh interesting... I definitely host the build chain in the container, but the integration between the elements of that and the IDE itself is cool. Who needs globally-installed Prolog extensions when it's just that one legacy project that needs them!
  • emmanueloga_ 2 days ago |
    I guess I've been living in SF for too long... upon seeing the headline, my first thought was "a developer working from his container home, makes sense".

    DevContainers are good. I find them especially nice for working on k8s stuff. For those comparing them against Nix, it's likely that most developers are already familiar with Docker, while Nix's language and tools have a pretty steep learning curve.

  • Goofy_Coyote 2 days ago |
    My biggest problem with DevContainers has been getting the debugger to work. I need to walk through the code for my job, and it's been a pain to get it to work in a container. Never understood what I'm doing wrong, though; theoretically it shouldn't be any different from how I set it up on my local machine.
  • cogman10 2 days ago |
    We've been using tilt [1] for a similar process.

    If you are doing a k8s deployment anyway, then tilt is really quite nice once properly set up.

    The one thing that isn't so nice is local package caching. That's a hard problem to solve, and containers make it annoying to get right.

    [1] https://tilt.dev/

    • throw156754228 2 days ago |
      What's so hard about local package caching? Just bind mount each container's npm package cache location to a location on the host.
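
      In compose terms that's one extra volume per service; a sketch assuming npm's default cache locations (service name and image are placeholders):

      ```yaml
      # docker-compose.yml fragment; cache paths are npm defaults and may vary
      services:
        app:
          image: node:20
          volumes:
            - ./:/workspace
            - ~/.npm:/root/.npm   # share the host's npm cache with the container
      ```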
      • cogman10 a day ago |
        The biggest issue I run into is building the initial container. I'd like to take advantage of Docker caching so that while doing dev you can just start up the container with mostly everything built. However, that requires working with non-Docker caches. Unfortunately there's no way to bind the caches while building; you can only do that with the running container image.
        • throw156754228 a day ago |
          Yeah, that's a one-off cost though... it's the equivalent of running npm install or whatever for the first time on a new PC.
  • cjr 2 days ago |
    https://devspace.sh also gives a similar workflow but where the container is running in a remote k8s cluster
  • ErneX 2 days ago |
    I’ve been using DevPod (https://devpod.sh) for this and it’s been pretty great.
    • lbotos 2 days ago |
      I literally just set up DevPod 2 days ago -- are you using it via the CLI?

      I wanted to use it via ssh, but the port forwarding flow (that I figured out) was a little wonky:

      - spin up pod

      - adjust .devcontainer to open port

      - --recreate

      now my port is open. Do you happen to know if there is a smoother way to do that?

      I expected to be able to do it in the devpod up command with a simple --port-forward=3000, but I didn't find that documented.

      • ErneX 2 days ago |
        I use the GUI; the port forwarding I do from vscode.
  • mikedelago 2 days ago |
    I like tilt [https://tilt.dev] if the dev infra has any significant complexity to it.

    Docker compose also has a "watch" command that can do lots of the things devcontainers do, and I use it for simpler setups.

    https://docs.docker.com/compose/file-watch/

  • ronef 2 days ago |
    While I personally don't love developing inside a container at all times, when I need to, I've found the flow of setting up with Nix and then containerizing to be the most ideal today. Is there something new recently that folks are using?

    (I'm also a biased person as I'm a Nix person at heart)

  • throw156754228 2 days ago |
    Isn't this just a glorified bind mount? I skimmed the article but could not see what this does other than let you write files in your docker container through VS Code. Why not just set up a bind mount and run any old editor?