Another little addendum: you can trivially create bootable custom NixOS installer images with whatever configuration you want pre-applied[0].
[0] https://nix.dev/tutorials/nixos/building-bootable-iso-image
- didn't check if NixOS uses it, but coreutils has a single-binary mode (like busybox: a single binary is built and symlinks or hardlinks are used to provide all the commands); that might save some space
- instead of trying to strip down the system, maybe go the other way around: only include the command you need, with its closure? Closure computation is already done in a few places (the AppArmor profiles or systemd.confinement come to mind), and it should be possible to just copy whatever your server binary needs, plus your kernel (since it's a microVM and not a container), and run the binary directly as init (maybe with a simple wrapper that hardcodes networking and whatnot)
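The closure computation mentioned here is exposed directly in nixpkgs as `closureInfo`; here's a sketch, with `pkgs.hello` standing in as a placeholder for the actual server package:

```nix
{ pkgs ? import <nixpkgs> {} }:
let
  myServer = pkgs.hello;   # placeholder for your real server package
in
# produces a derivation whose store-paths file lists every runtime dependency
pkgs.closureInfo { rootPaths = [ myServer ]; }
```

Running nix-build on this yields a directory containing `store-paths`; copying exactly those paths (plus a kernel) into an image is essentially the whole "minimal closure" approach.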
good luck!
It does. If you have coreutils from Nixpkgs installed you can check this with
basename "$(realpath "$(which ls)")"
If you see 'ls', you're looking at separate binaries. If you see 'coreutils', it's a symlink to a single 'coreutils' binary.

I'm not sure about shipping just the one closure, since the result needs to boot, but more generally that looks reasonable if your use case fits. There are a couple of efforts, IIRC, mostly centered around router firmware where "real" NixOS doesn't fit, e.g. https://gti.telent.net/dan/liminix
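If memory serves, the Nixpkgs coreutils derivation exposes this as a `singleBinary` argument (a sketch, treat the argument name as an assumption):

```nix
# hedged sketch: nixpkgs builds coreutils in single-binary mode by default;
# if the derivation's singleBinary argument is still present, this asks for
# separate per-command binaries instead
{ pkgs ? import <nixpkgs> {} }:
pkgs.coreutils.override { singleBinary = false; }
```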
It's for running AppFS inside a VM on QEMU. It uses a statically linked Tcl (which AppFS is written in) to bring the system up.
I spend a weekend every so often defining the core of what I want next time I upgrade, but just find it so annoying I'm sure I won't use anything I've written until there's a major change in the ecosystem.
See the 'RATIONALE' document: https://github.com/tweag/nickel/blob/378ece30b3e3c0ab488f659...
Putting aside the typing (the lack of proper typing is a shame, so that's valid criticism), I actually really like the language - it's genuinely a great DSL for the particular problems it's supposed to handle.
It does take a bit of use for it to click, though. A lot of it has to do not with Nixlang itself but about learning nixpkgs' idioms.
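For instance, one idiom that takes a while to click (sketched here with a hypothetical package and a placeholder hash) is callPackage, which fills in a file's function arguments from the package set:

```nix
# my-tool.nix - in nixpkgs, a package is a function over its dependencies
{ lib, stdenv, fetchurl }:

stdenv.mkDerivation {
  pname = "my-tool";          # hypothetical package
  version = "0.1";
  src = fetchurl {
    url = "https://example.com/my-tool-0.1.tar.gz";
    sha256 = lib.fakeSha256;  # placeholder hash
  };
}
```

A consumer then writes `pkgs.callPackage ./my-tool.nix { }`; the lib, stdenv, and fetchurl arguments are injected automatically.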
I think a good IDE integration could solve this, but not sure how much is possible.
That said, I strive to structure my nix source so that portions of it can easily be pasted into a repl. ReadTree goes a long way in that regard: https://github.com/tvlfyi/kit/tree/canon/readTree
More to your point, though: I think a lot is possible. Although Nix is very dynamic, it is also, for all intents and purposes, side-effect free. I've had this idea that a sufficiently advanced IDE should be able to evaluate your Nix code and tell you exactly what the possible values (not just types, but values!) are for any particular variable.
Similarly to the REPL, I'm often using `nix-instantiate --eval -E 'somethingsomething'` so it should definitely be possible.
It has jump-to-definition and autocomplete, which is very nice.
It's not perfect, but it's pretty good.
But I think this also stems from the fact that the default state of nixos is "a general purpose linux system" and so instead of just starting at 0 and adding the things you need, you have to mix adding and removing things which IMO makes things much more complicated (except maybe for newbies to linux who don't know what's necessary for a running system).
With a default config you start with a console, systemd, dbus and some things to make it boot. There is barely anything.
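For what it's worth, a few options (names as of recent NixOS releases; treat this as a sketch) let you trim even that default further:

```nix
# configuration.nix fragment: removing defaults instead of adding
{ ... }: {
  documentation.enable = false;        # no man pages or doc outputs
  environment.defaultPackages = [ ];   # drop the default extras (perl, rsync, ...)
  boot.enableContainers = false;       # no nixos-container machinery
}
```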
NixOS is much better because you can inspect the changes after the fact. You also know which code to look for, which is a luxury. If the code seems too much, there's the repl to help. Changes are also much easier to revert.
`nix-instantiate --eval '<nixpkgs/nixos>' -A config.services.resolved.enable`
This is way better than a stateful package manager making a non-revertible change without even telling me.
For "what symbols are available", the nil LSP implementation[1] works for anything in scope that doesn't require evaluation. It also includes completions for the stdlib and NixOS options (in certain contexts).
Another LSP implementation is nixd[2], which is trying to tackle the problem of evaluations for completion.
$ nix repl
nix-repl> :l <nixpkgs>
nix-repl> {press tab for auto-complete}
Unlike most languages, the symbols available are completely determined by the scope. Just look at the let expressions in effect. There's no magic.
As for nested expressions, that's a typing problem, which was already mentioned above as a pain point (although there are several efforts to fix this).
1. At the end of local variables:

    let
      a = 1;
      b = 2;
    in
    a + b

result: 3

2. At the end of each attribute in an attribute set (a.k.a. dictionaries or key-value pairs):

    {
      a = 1;
      b = 2;
    }

result: { a = 1; b = 2; }

3. In with expressions:

    with pkgs;
    coreutils

result: (the coreutils attribute in the pkgs attribute set)

4. In assertions:

    assert a != null;
    a

result: (the value of a)
Now, you'll never be confused again.

Replacing the language requires duplicating all the work that went into Nix, to reach parity, so it is not easy.
That seems like a design flaw in Nix, there's no reason the data model should be so tightly coupled to the scripting implementation that you can't reuse packages written in a different language.
For example, see zb: https://www.zombiezen.com/blog/2024/09/zb-early-stage-build-...
Using a different language to depend on packages derived from .nix would be very much akin to depending on a Docker image whose Dockerfile you cannot inspect.
Speaking of Docker images and Dockerfiles, that's actually a real-world example of how you can achieve this kind of effect without relying on a specific language. Ironically, you can use Nix to build Docker images; there's a bunch of other alternative builders (e.g. Kaniko, Buildah); you can also just stitch together some files and metadata into tarballs, and then 'docker import' the result.
Nix or Guix are of course much more powerful and expressive than Docker images, but there's always a cost to complexity.
If there is something that can be done in the nix language that can't be expressed in the underlying model that needs to be used by another frontend then it should be represented in the underlying model so another frontend can use it.
To put it another way, if you're designing a client-server model where there may be multiple client implementations you don't bake big chunks of the implementation into the clients, you provide it in the server interfaces and data types.
Not having functions as values (true of pretty much any serialization scheme I've ever seen) makes serialized data structures strictly less powerful than data structures in code.
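A toy illustration of that point, using the standard `.override` mechanism: a nixpkgs package is literally a function you can call again with different arguments, which no serialized package description could carry.

```nix
# re-invoke hello's package function with a different toolchain;
# the function-ness is the part a pure-data model can't express
{ pkgs ? import <nixpkgs> {} }:
pkgs.hello.override { stdenv = pkgs.clangStdenv; }
```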
> that needs to be used by another frontend
I don't think this was ever a goal of Nix. But if it was, well, you would end up with something considerably less powerful for the reasons I stated.
If nothing has changed, they also have a strong ideological drive and won't support any non-free software.
Also, Guix supports proprietary software just fine. It's just not in the main official repo. But there are other repos that have it, e.g. nonguix.
https://www.gnu.org/software/guile/manual/html_node/Macros.h...
Racket is IMO a pretty compelling environment for prototyping DSLs because of how malleable it can be, so I think the ceiling for ergonomics can be pretty high.
Guix is conceptually similar to Nix but uses scheme.
With that said, Tweag has been working on a kind of Nix 2.0 / Nix with types for a while, with the aim (I think) of it being usable in nixpkgs: https://github.com/tweag/nickel
Part of that just comes from lazy evaluation, which makes debugging a lot harder in general (you feel this in Haskell...), but also just from nix not being a big popular language that gets lots of polish, and being completely dynamically typed.
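A tiny example of the laziness in question (evaluate it with `nix-instantiate --eval`): bindings are only forced when something actually uses them, which is also part of why errors can surface far from their source.

```nix
# the throw is never forced, so evaluation succeeds with "fine"
let
  boom = throw "this only explodes if something forces it";
in
if true then "fine" else boom
```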
It seems inevitable to me that some of the design choices around immutability and isolation are going to result in a larger server image (both on disk and in memory) than if you are prepared to forgo those things. For most people that tradeoff is probably worth it but if you want something to run in an embedded server or with a very low disk footprint it's probably not right for you.
Around 20 years ago people who wanted to do this[2] used to make tiny immutable redhat servers by remounting /usr and a few other things read-only after boot so it's certainly doable but it's a lot more of a pain than what nix does and there is no process isolation and no rollback etc when things go wrong.
[1] ...or generally in fact but that's a matter of opinion and I know people feel differently about this.
[2] me for one, but others also.
The language won't go away, and you should try to look at it as more than "I don't like it."
Personally I have nothing against the Nix language, and use it without issue, but it's untrue to suggest that the language itself requires uncommon support for this kind of thing.
Terraform et al, despite not being my favorite, have much simpler semantics than Pulumi. It's not always a good idea to write DSLs into languages with huge paradigm mismatches.
> Why should I care about writing 'new' in front of all my declarative configuration?
Because that’s how your choice of language instantiates an object. Try F# or Swift or Go if it’s that annoying to you.
> What happens when an if statement depends on a concrete value?
What do you think “count = var.concrete_value ? 1 : 0” is doing in Terraform, exactly?
> The leakiness of the abstraction is too terrible to even consider.
While you are entitled to your opinion, I'd suggest you are very much mistaken, and would implore you to actually consider it for a minute.
The problem isn't the language, the problem is that nixpkgs (and NixOS) are just huge.
> But doing it on top of NixOS currently feels like a bad path to take.
The author of this blog post might be interested in playing with not-os, another, much smaller OS built with Nix: https://github.com/cleverca22/not-os
Could be a decent source of inspiration!
Thanks! I have to admit that I've had the itch to build my own NixOS-inspired system more than once, and I haven't done that because I just don't have time to dedicate to this among all the other projects I'm working on. I wasn't aware of not-os before, but I'll definitely dig into the code!
The author does not activate the new config on the host machine, but deploys new host machines as needed.
He evals on the build/dev/CI machine only.
Granted, if your local machine is low on RAM, or isn't Linux, then you will be in trouble.
As others said, I've moved away from doing nix builds on servers and into a less wasteful (if you're running multiple servers) approach of building once, deploying the bits into a cache, and making the servers fetch them from a cache. I've been slowly working on my own tool to make this workflow easier (nixless-agent, you can find an early version on GitHub).
Caveat: it is a GNU project, so no proprietary stuff like firmware and drivers included out of the box (but there is a community Guix nonfree project available [3]). I believe that isn't a problem for virtual machine servers anyway.
[1] guix cookbook: https://guix.gnu.org/cookbook/en/html_node/index.html#Top
[3] guix manual: https://guix.gnu.org/manual/en/html_node/index.html#Top
Also it’s very famous and loved in embedded software circles.
Like you would with Yocto, I just build my systems on a proper host then remotely deploy them.
Then on boot, my other systems boot into a minimalist NixOS image (via netboot) that (1) looks up the hostname assigned to the system, (2) uses `nix copy` to move the closure of the current system from the main host where I store my builds to the local one, (3) switches into the system (explained below), and (4) then uses NixOS's kexec support to boot into the proper kernel for the new system.
How to switch into a system: every NixOS closure has a top-level `bin/switch-to-configuration` script that nixos-rebuild uses. Just run `/nix/store/my-hash/bin/switch-to-configuration switch` and your system will silently be replaced by the new NixOS configuration. Very easy!
Having just installed the entirety of NetBSD on an i386 system (a 200 MHz Pentium), I see it weighs in at around 1 gig. But that's everything, including X11 with a WM and the toolchain (gcc 10). That's not bad, but it's really amazing how much of that isn't necessary for running the OS on a server. Particularly where you might want tiny VM images.
I'm not sure how exploitable a read-only virtiofs share is, so this is perhaps not appropriate in some circumstances.
For multiple guests, you should rely instead on:

* A snapshot-able filesystem with the option to create clones (like ZFS). I think this is a great idea, actually.

* Exporting /nix/store via NFS, so you can have multiple writers (but this creates some tight coupling, in that accidentally deleting stuff there may disrupt all guests).
The problem with that is that the VM can see all the needless software, so if your goal is isolation, having a smaller closure is much better from a security point of view: if there's no coreutils, bash, etc., then there's no risk of a shell getting spawned by an attack...
NixOS gives you a month of time to move to a new release. Most LTS distros give you 5 to 10 years.
Deterministic systems are a cool idea, but we're just not there yet. The headaches and pain involved in maintaining these systems and warping the software to obey are too great.
Everything in NixOS works, until it doesn't. And when it doesn't, woe be unto you.
And yes, I have put a lot of blood sweat and tears into making things work in nix/NixOS. The thing that keeps me invested is once I get something working, it is far easier to keep it working. If nixpkgs updates break my things, I'm one git bisect away from figuring out what happened.
I basically build up Proxmox container templates, and then build upon those similar to how Docker does it (I don't use Docker because they don't allow you to specify your MAC address, so you can't control them from a separate LAN-based DHCP server - instead you have to map a bunch of ports on your host and then configure all external clients to match... so dumb).
I've basically gone full circle at this point:
- Docker
- LXD with Bash scripts
- LXD with a ton of Python
- NixOS
- Proxmox with Ansible
- Proxmox with Bash scripts (albeit much simpler and flatter than last time)
Everything is containerized and has its own IP address on the physical LAN, the templates can be regenerated with a simple script, important data is mapped to a host directory (/home/data/my-container, which gets backed up), and destroying and rebuilding an instance container is a cinch.
One really nice feature of this setup is that I can tear down and rebuild a template, launch a test container from that, copy the instance data in /home/data to the new container, make sure it works with the new stuff, and then launch it for real.
Now it doesn't matter what technology (container or VM) I use. Everything is a completely separate machine as far as the LAN is concerned, which greatly simplifies things.
Everything, from host to software to containers & VMs is built "deterministically" (i.e. deterministically enough) from the scripts. Rebuilding the whole thing (server and all) from scratch takes about an hour and a half. I just use the same set of scripts on all of my servers to make management easier. Hosts have minimal software and configuration, and guests do all the real work. Migrating is an rsync /home/data away.
Do you have any tips on how to get started? Do you simply make the change and then paste the commands needed into a script?
Also I assume you have a script to set up the (Proxmox) host machine?
I also have quite a few Proxmox specific things that I had to change (e.g. GPU pass-through) which seems to break your "Now it doesn't matter what technology (container or VM) I use" advantage.
Here's one of my host setup files (run this immediately after installing Proxmox from the iso):
DATA_SIZE=600GiB
# Switch to free mode
rm /etc/apt/sources.list.d/ceph.list
rm /etc/apt/sources.list.d/pve-enterprise.list
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" >/etc/apt/sources.list.d/pve-no-subscription.list
# Setup locale
sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen
locale-gen
update-locale LANG=en_US.UTF-8 LANGUAGE=en_US LC_ALL=en_US.UTF-8
apt update
apt dist-upgrade -y
apt install -y software-properties-common
add-apt-repository -y non-free
apt install -y vainfo intel-media-va-driver-non-free lm-sensors
apt install -y ansible pip python3-proxmoxer htop strace git
# Setup user data area
lvcreate -V ${DATA_SIZE} -T pve/data -n home_data
mke2fs -t ext4 -O ^has_journal /dev/pve/home_data
mkdir /home/data
echo "/dev/pve/home_data /home/data ext4 rw,discard,relatime 0 2" >>/etc/fstab
mkdir -p /mnt/containers/media
echo "//media/media /mnt/containers/media cifs uid=100000,gid=100000,ro,guest,x-systemd.automount 0 0" >>/etc/fstab
mkdir -p /mnt/containers/files
echo "//files/files /mnt/containers/files cifs uid=100000,gid=100000,ro,credentials=/etc/samba/credentials,x-systemd.automount 0 0" >>/etc/fstab
systemctl daemon-reload
mount -a
The host is kept VERY simple.

Here's my basic Ubuntu container template:
# ============
# Local config
# ============
TEMPLATE_IMAGE="ubuntu-24.04-standard_24.04-2_amd64.tar.zst"
INSTANCE_CT=200
INSTANCE_NAME=template-ubuntu
INSTANCE_MEMORY=2048
# ======
# Script
# ======
pveam download local ${TEMPLATE_IMAGE} || true
pct create $INSTANCE_CT local:vztmpl/${TEMPLATE_IMAGE} \
--hostname ${INSTANCE_NAME} \
--memory ${INSTANCE_MEMORY} \
--rootfs local-lvm:1 \
--net0 name=eth0,ip=dhcp,ip6=dhcp,bridge=vmbr0 \
--ostype ubuntu \
--start 1 \
--timezone host \
--features nesting=1 \
--unprivileged 1
# Use a fast mirror
pct exec $INSTANCE_CT -- sed -i 's/archive.ubuntu.com/ftp.uni-stuttgart.de/g' /etc/apt/sources.list
# Set up locale
pct exec $INSTANCE_CT -- sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen
pct exec $INSTANCE_CT -- locale-gen
pct exec $INSTANCE_CT -- update-locale LANG=en_US.UTF-8 LANGUAGE=en_US LC_ALL=en_US.UTF-8
# Bring everything up to date
pct exec $INSTANCE_CT -- apt clean
pct exec $INSTANCE_CT -- apt update
pct exec $INSTANCE_CT -- apt dist-upgrade -y
pct exec $INSTANCE_CT -- apt install -y software-properties-common curl
# Turn this into a template
pct stop $INSTANCE_CT
pct template $INSTANCE_CT
Here's my Plex template (with GPU passthrough):

# ============
# Local config
# ============
TEMPLATE_CT=200
INSTANCE_CT=201
INSTANCE_NAME=template-plex
INSTANCE_MEMORY=2048
# ======
# Script
# ======
passthrough_gpu() {
local instance_id="$1"
echo 'lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
lxc.hook.pre-start: sh -c "chown 0:108 /dev/dri/renderD128"
' >> /etc/pve/lxc/${instance_id}.conf
}
pct clone $TEMPLATE_CT $INSTANCE_CT --full 1
pct resize $INSTANCE_CT rootfs 2G
pct set $INSTANCE_CT \
--hostname ${INSTANCE_NAME} \
--memory ${INSTANCE_MEMORY} \
--net0 name=eth0,ip=dhcp,ip6=dhcp,bridge=vmbr0
passthrough_gpu $INSTANCE_CT
pct start $INSTANCE_CT
pct exec $INSTANCE_CT -- apt update
pct exec $INSTANCE_CT -- apt dist-upgrade -y
# Install Plex (apt_add_key and apt_add_repo are helper functions defined elsewhere)
apt_add_key $INSTANCE_CT plexmediaserver https://downloads.plex.tv/plex-keys/PlexSign.key CD665CBA0E2F88B7373F7CB997203C7B3ADCA79D
apt_add_repo $INSTANCE_CT plexmediaserver "https://downloads.plex.tv/repo/deb public main"
pct exec $INSTANCE_CT -- apt update
pct exec $INSTANCE_CT -- apt install -y plexmediaserver
# Turn this into a template
pct stop $INSTANCE_CT
pct template $INSTANCE_CT
Here's the Plex instance script (starts a Plex instance from the Plex template):

# ============
# Local config
# ============
TEMPLATE_CT=201
INSTANCE_CT=20001
INSTANCE_NAME=plex
INSTANCE_ADDRESS=99
INSTANCE_MEMORY=2048
# ===========
# Host config
# ===========
HOST_DATA=/home/data
HOST_BASE_UID=100000
HOST_BASE_GID=100000
# ======
# Script
# ======
pct clone $TEMPLATE_CT $INSTANCE_CT --full 1
pct set $INSTANCE_CT \
--onboot 1 \
--hostname ${INSTANCE_NAME} \
--memory ${INSTANCE_MEMORY} \
--net0 name=eth0,hwaddr=12:4B:53:00:00:${INSTANCE_ADDRESS},ip=dhcp,ip6=dhcp,bridge=vmbr0
# Mount the media dir
pct set $INSTANCE_CT -mp0 /mnt/containers/media,mp=/media
# Get the UID and GID of the plex user
pct start $INSTANCE_CT
PLEX_UID=$(pct exec $INSTANCE_CT -- id -u plex)
PLEX_GID=$(pct exec $INSTANCE_CT -- id -g plex)
PLEX_HOST_UID=$(($HOST_BASE_UID+$PLEX_UID))
PLEX_HOST_GID=$(($HOST_BASE_GID+$PLEX_GID))
pct stop $INSTANCE_CT
# Mount the Plex config dir
mkdir -p ${HOST_DATA}/${INSTANCE_NAME}/var-lib-plexmediaserver
chown -R ${PLEX_HOST_UID}:${PLEX_HOST_GID} ${HOST_DATA}/${INSTANCE_NAME}
pct set $INSTANCE_CT -mp1 ${HOST_DATA}/${INSTANCE_NAME}/var-lib-plexmediaserver,mp=/var/lib/plexmediaserver
pct start $INSTANCE_CT
pct exec $INSTANCE_CT -- apt update
pct exec $INSTANCE_CT -- apt dist-upgrade -y
IP_ADDR=$(pct exec $INSTANCE_CT -- hostname -I | xargs)
set +x
echo "==========================================================="
echo "FOR FIRST TIME SETUP (no existing config):"
echo
echo "Go to https://www.plex.tv/claim/"
echo "pct exec $INSTANCE_CT -- curl -X POST 'http://127.0.0.1:32400/myplex/claim?token=PASTE_TOKEN_HERE'"
echo "Then, go to ${IP_ADDR}:32400"
echo "==========================================================="
There wasn't really a good middle ground between bash and Nix.
The last NixOS stable release I deployed in ~3 hours onto ~30 VMs with only minor issues and everything continued to just work.
The Proxmox team ensures that the hypervisor stuff works, and now I don't have to worry about a basic change to a nix file resulting in "Unknown entity flibblefrazzle" coming from some random place 18 levels deep in the bowels of the package system/os. It really got to the point where I was afraid to touch anything anymore.
Now I can run an up-to-date Plex! And Chrome Remote Desktop! Without spending 3 whole weekends knee deep in nix guts!
Proxmox has its own issues (they all do), but it's a much more inviting experience and nicer community.
So I'm jumping through all these hoops, stepping so far out of the norm that I become the effective maintainer and debugger of all of this, and ... Why was I doing this again?
Oh yeah, deterministic builds and immutability. And I needed these ... why?
Turns out I can get a similar effect on a mainstream platform with some scripts and a little bit of discipline.
I thought that I could just power through it, but there's just no end to the edge cases, bizarre magic, lack of useful error feedback (if any at all - the most common result is: nothing happens), and situations where you simply cannot do what you want to do.
So you either conform to their model and live with the limitations (and spend countless weekends debugging your builds), or you give up and move on.
I just got tired of it all. I want to spend my time USING the computer, not setting it up. So now I use Debian, because everything includes a build for Debian and drivers for Debian. And the best part is: SOMEONE ELSE is maintaining it and keeping it current with the latest security fixes.
I was really surprised you were able to replace systemdMinimal with systemd in dbus though.
I thought it was there to break the cyclic dependency between systemd and dbus.
Personally I believe systems that start simple (e.g. Alpine) are easier to mess with. Plus you don't have to give up all the benefits of declarative configuration; for example, apk has a single file (/etc/apk/world) that defines the exact package set that needs to remain installed. You can edit it and run "apk fix", much like you can edit /etc/nixos/configuration.nix and rerun "nixos-rebuild switch". It's not as powerful as NixOS, but power (and complexity) always has a price.
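For comparison, the NixOS counterpart of that single apk world file is the package list in configuration.nix (packages chosen here just as examples):

```nix
# configuration.nix: the declared package set, applied with nixos-rebuild switch
{ pkgs, ... }: {
  environment.systemPackages = with pkgs; [ vim git curl ];
}
```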
I find the premise of a carefully re-compilable/re-creatable system very appealing, but not having a stable, LTS-style release is rather incongruous. It takes a huge effort to get all the pieces working together - and if it's rolling and the sand is shifting/breaking underneath you, it feels like you never reach a meaningful stable system. Sure, you can recreate your well-tested working configuration, but that configuration is effectively out of date and unmaintained the moment any packages are updated.
I think this is why they effectively only target x64. I'm not a "distro guy" so maybe I'm missing something. It seems it'd be sensible to just 1-to-1 copy Ubuntu LTS package versions (+ patches) and build a NixOS "stable" version that can be patched and updated as people find issues
Nix and Nixpkgs are best in class when it comes to cross-platform and cross-architecture support. There's good support for x86_64 / aarch64 / macOS / Linux. Getting musl or static variants of existing packages just works for many packages. There's even some work on BSD / Windows support. Cross compiling is far easier to set up compared to other package managers. If anything, other projects should be copying what Nix is doing.
I'm not sure how feasible it would be to compare nixpkgs and pkgsrc given how different they are, but I'd encourage people who need that to poke around at both and see which one feels like a better fit for their use case.
Nothing will break when the package gets updated as long as you keep to your specific release - backported changes are backwards compatible.
I know I can revert easily if there are problems when upgrading, but that doesn't really apply if security fixes only land in the new branch
This would imply only 9 months of security patches before I would need to upgrade the server. That is of course a far less risky process with NixOS, so perhaps that is ok, but it is a lot more work than the 5 years you get (free) with Ubuntu/Debian
And since a release happens every 6 months, while you do have an extra month's window, you still have to upgrade... every 6 months.
Can I define VM images with Nix? Can those VM images be loaded into VirtualBox? Also, is it possible to do something similar to build AMIs or other cloud VMs, such as on Hetzner, where my "Dockerfile" is a Nix file which defines the system to be built, and then it has everything in place once built, including tools, libraries, configuration and such?
Thoughts?
EDIT: Typos
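(For what it's worth: a QEMU-runnable VM falls out of the standard <nixpkgs/nixos> entry point, sketched below, and the nixos-generators project wraps the same machinery to emit VirtualBox, Amazon, and other image formats.)

```nix
# sketch: save as vm.nix, then `nix-build vm.nix -A vm` and run result/bin/run-*-vm
import <nixpkgs/nixos> {
  configuration = { pkgs, ... }: {
    services.openssh.enable = true;   # example service baked into the image
    system.stateVersion = "24.05";
  };
}
```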
(There's also a Terraform module.) And for state changes: https://github.com/serokell/deploy-rs - or Colmena, NixOps, and any number of others.
For secret handling perhaps https://github.com/ryantm/agenix/ + https://github.com/oddlama/agenix-rekey
The ecosystem is, in my experience (7 years of use), very well fleshed out, as long as you don't require a knowledge base/wiki/up-to-date documentation. That's not been an issue for me, since I could always fall back on general Linux knowledge, look at how other distributions do x or how the thing itself is configured, and look at how an existing Nix module wraps it.
- a language about as friendly as Haskell, so while definitely fit for purpose, it's not well digested by most, including various longtime NixOS users;
- an unclear direction: there are countless "side projects" and no clear path, and most are not even indexed in a wiki page, so you only discover them by accident, interacting with someone else or after a search;
- terrible documentation, probably due to the lack of a clear direction stated above.
The bigger mean install size is true, but it's not that impactful in the real world. NixOS's real purpose is AVOIDING containers when designing an infra, not being wrapped by them or wrapping them, and true zero-overhead x86 virtualization does not exist. So far only IBM Power Systems with AIX seem to have something nearly zero-overhead built into the (big) iron.
IMVO that's the main point: most people, NixOS devs included, fail to see a world different from the current one. A possible answer could be keeping up the evolution of ZFS and mirroring some illumos features, so we can have light paravirtualization thanks to zones on ZFS clones. But as with NixOS, most people fail to see a storage model different from the most common today, a relic from the '80s (does anyone remember the infamous "ZFS is a rampant layering violation" phrase?). A truly modern system should be a SINGLE application: yes, the OS as a framework, a development environment that produces a running system live out of itself. A coupled package manager/installer/storage, because those are effectively one thing; then we do not need a network of symlinks or containers, we have a storage underneath that simply exposes the needed software pieces together, and a system that manages in-memory stuff the same way. ZFS was the first step in this direction, with boot environments, clones, and zones glued together by the Image Packaging System, lacking only the language for proper system integration; unfortunately almost nobody has taken care of that. NixOS and Guix System offer another piece, the language to integrate package management and installers, but they lack the storage integration to generate a genuinely new system model.
Rediscovering illumos (OpenSolaris) would bridge the gap, providing all the pieces needed to start a new kind of distro and infra management for a FLOSS world where there is no need for monsters to deploy simple infra, and where those simple tools could scale to monster level, killing the commercial IT model of the giants and giving humanity back the desktop model, the "pioneering internet" model of interconnected personal systems, a new start.
The lack of independent universities and big labs is probably the root cause, but as always, good things tend to happen anyway sooner or later; it's the interim that's the bad part.
So a vibrant community is now bad? Also, there were big improvements in the last few years, like freeform settings.
Try reading Debian's Postgres documentation and you get a sense of what terrible docs are. Not only do the pages point to each other without instructions, but they also cover about 9.5, which is stone-age old.
Experimenting is not bad, but it's one thing to experiment while having a mainline with a clear path; it's another to have only experiments, mostly undocumented, hard to discover, and so on.
What do most people want from a distro? Being rock solid and functional with minimum effort. NixOS offers that formally - well, while stating that the classic way is legacy because Flakes are the future, except Flakes are still not there; then NixOps/Disnix and so on, where this one doesn't play well with that other one, etc. etc. Essentially a generic user, not a developer, has a hard time crafting a stable infra with stable tech, so most fear the change, reducing the community to a nearly devs-only and enterprise-player show.
Debian docs in general aren't excellent, but Debian is a well-known dinosaur, so most of its users already know it; there is no need to teach anything to most, and those who do not know simply ask a friend. NixOS, while not new, is still unknown to many, so it has to teach various people well from zero - people who aren't interested in NixOS development itself but only in its use and its model, who will contribute return of experience, translations, casual patches, and no more. They might be seen as a burden by devs, but they are "the base" that grants any distro enough popularity to really thrive.
Ubuntu back then succeeded over Debian because of that. They gave a sane, ready-to-work base. There is no need for a NixOS GUI installer or the like - that's not a potential target - but there is a damn need to tell anyone "start with this, learn this; in 5 and 10 years it will be the same, properly evolved". No one wants to learn and relearn things.
NixOS as a general-purpose headless server OS with Nix-defined services, or as a desktop system with declarative and portable config, is pretty neat.
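As a sketch of what Nix-defined services mean in practice, a whole headless service can be one stanza of configuration.nix (example.com and the paths are placeholders):

```nix
{ config, pkgs, ... }: {
  services.nginx = {
    enable = true;
    virtualHosts."example.com".root = "/var/www";  # placeholder vhost
  };
  networking.firewall.allowedTCPPorts = [ 80 ];
}
```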
As a language it's plagued by very sparse documentation. As a package manager, it still has the same issues with documentation (compounded by tools providing the same functionality superseding each other), and then it requires maintaining EVERY software package ever created.
I think the base idea is great, the language is a little weird (but manageable), the DX is unintuitive and badly documented.
The killer reason that made me abandon NixOS is packages not being up to date. I can't fix that, and I don't want to spend my life writing my own derivations when the alternative is to use the Arch repos + AUR for exotic stuff.
- creating minimal OCI images from Nix packages
- creating microvms from OCI images
There are certainly some tradeoffs with this approach, but given that the author is trying to optimize for size, in addition to one of the primary benefits to this approach being a really clean, structured build/deploy loop, it seems like it could be worth exploring.
The OCI images generated are super lightweight.
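For reference, the nixpkgs side of that pipeline is `dockerTools`; a minimal sketch of an image containing only one package's closure (using `pkgs.hello` as the example payload):

```nix
{ pkgs ? import <nixpkgs> {} }:
pkgs.dockerTools.buildImage {
  name = "hello";
  tag = "latest";
  copyToRoot = [ pkgs.hello ];                 # only hello's closure lands in the image
  config.Cmd = [ "${pkgs.hello}/bin/hello" ];  # entrypoint command
}
```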