Suppose you have a web app/service you want to run in the cloud, not “just because”, but because you are a startup just trying things out and don’t want to commit to building your own data center, or because you have some dedicated hardware of your own but want to use on-demand resources for load balancing. (And, by the way, you should store at least one backup of your data yourself, locally; I have heard a bunch of horror stories about people who did not, most recently about InfluxDB deleting customers’ data when decommissioning instances on AWS.) And, of course, you want it all on the cheap.
NixOS updates are atomic and can be rolled back easily, and NixOS’ configuration.nix
is declarative and easy to replicate. Thus, theoretically, by using NixOS you could select your cloud/VPS provider based purely on price, uptime, network speed, and latency, and ignore advanced features like OS disk snapshots, backups, and the like, since on NixOS they are very easy to set up yourself (and incremental xdeltas with something like bup are also much more efficient, and thus much cheaper, than disk snapshots).
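For illustration, a minimal sketch of such an incremental backup with bup (the /mnt/backup destination and the /var/lib source here are, of course, placeholders):
# one-time: initialize the backup repository
BUP_DIR=/mnt/backup/bup bup init
# each run: index changed files, then store them incrementally, deduplicated
BUP_DIR=/mnt/backup/bup bup index /var/lib
BUP_DIR=/mnt/backup/bup bup save -n my-server /var/lib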
In practice, however, there are some problems.
Nixpkgs evaluations take tons of memory, and build outputs of even minimal NixOS configs are rather large now.
Evaluating the whole Nixpkgs package list à la OfBorg of the NixOS GitHub org, with nix 2.15 (stable) on nixpkgs/master (rev 2f393e7d42bd6fef8ab171a14563b4d03061c614), by running
NIX_SHOW_STATS=1 HOME=/homeless-shelter NIXPKGS_ALLOW_UNFREE=1 command time -v nix-env -f . -qaP --argstr system "x86_64-linux" --drv-path > /dev/null
consumes 12GiB of RAM (which is actually pretty good compared to the 14.5GiB it takes on nix 2.3).
Searching Nixpkgs with nix search by running
NIX_SHOW_STATS=1 NIXPKGS_ALLOW_UNFREE=1 command time -v nix --extra-experimental-features nix-command --extra-experimental-features flakes search . > /dev/null
takes 3.3GiB of RAM the first time, and then about 197MiB of RAM for each subsequent run (it caches evaluation results).
Evaluating the GNU Hello package takes 127MiB of RAM.
Evaluating Mozilla Firefox takes 498MiB of RAM.
Evaluating Wine takes 595MiB of RAM.
Evaluating the minimal NixOS config takes 622MiB of RAM.
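(For reference, a sketch of how the per-package numbers above can be reproduced from a nixpkgs checkout, hello being the attribute for GNU Hello:
NIX_SHOW_STATS=1 command time -v nix-instantiate . -A hello > /dev/null
with “Maximum resident set size” in the output being the number in question.)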
This is a long-standing Nixpkgs issue (https://github.com/NixOS/nixpkgs/issues/38635) that seems to only be getting worse with time.
Moreover, the NixOS minimal ISO (https://nixos.org/download.html#nixos-iso) is over 900MiB at the moment, and a reasonable NixOS install with all the compilers and other development tools takes at least 30GiB on disk (and you need at least twice that for the worst-case upgrade, and about twice that again if you keep all the sources and build-only outputs).
Basically, the above means that if you don’t have a dedicated development machine running NixOS, it is pretty expensive to do DevOps on NixOS.
For instance, at the moment of writing, for 8 vCPUs, 32GiB RAM, and 128GB SSD (which is the minimum you would want for your Hydra machine) under 100% load on AWS, you are going to pay $0.08 per GB-month for storage and about $0.27 per vCPU-hour for compute, i.e. up to ~$1600 a month (0.08 * 128 + 0.2688 * 8 * 24 * 30). (See https://aws.amazon.com/ebs/pricing/ and https://aws.amazon.com/ec2/pricing/on-demand/.) That also does not include data transfer costs. Ridiculous.
On DigitalOcean, a VPS with 16 vCPUs, 32 GiB RAM, and 200 GB SSD costs $336/month. (See https://www.digitalocean.com/pricing/droplets.)
On Linode, a VPS with 16 vCPUs, 32 GiB RAM, 640 GB disk (not SSD) costs $288/month. (See https://www.linode.com/pricing/.)
On VirMach, a VPS with 8 vCPUs, 32 GiB RAM, 1TB disk costs $120/month. (See https://virmach.com/pricing/.)
Similarly, for deployment, the cheapest cloud instances usually get 512MiB of RAM or less. This means that even if you use vanilla Nixpkgs, don’t add any overlays, use the https://hydra.nixos.org public binary cache, and don’t change anything that would force a rebuild, you still must have a dedicated evaluation machine and a remote deployment tool (like NixOps, discussed below), because you won’t even be able to run nix-env --install -A package or nixos-rebuild on the deployment machine without paying for a swap storage volume (and cheap AWS instances don’t allow temporary local volumes, and having swap on EBS is a bit crazy, and slow).
Which, actually, means that you wouldn’t even be able to install NixOS on such a cheap cloud instance in the first place, since that requires running nixos-install, which runs nixos-rebuild internally.
Which, actually, holds true even when you use a tool like NixOps, since NixOps requires that the deployment machines already run NixOS.
Sure, for AWS, NixOS provides pre-installed AMIs (https://nixos.org/download.html#nixos-amazon), and on AWS and elsewhere, where it is supported, you could rent a bigger cloud instance, install there, shut it down, destroy the instance but keep the root volume, and then run a smaller instance off the result (and then do the reverse, and repeat on each update, if you don’t have a dedicated development machine with NixOps or similar), but this is not always possible, and even when possible it is inconvenient.
If you do have an evaluation machine with enough RAM (the price calculations above show it is much cheaper to simply buy an old PC/laptop on eBay for $100-$200 for this, upgrade its memory for $50 and its SSD to 2TB for $100; it will pay for itself in a month or two; in fact, a 2013-era laptop with 16GiB of RAM would suffice, since you won’t actually be building anything on it, only editing code and evaluating Nix expressions), as I expect most people who use NixOS/Nixpkgs in the cloud do, then Consequence 1 does not apply to you.
In which case, NixOps solves Consequence 2 by allowing you to describe your configuration à la the following (this is basically taken from the NixOps documentation, but made a bit more realistic, though not too realistic: a realistic setup would have at least one fixed dedicated machine for scaling down, and a lot of conditionals; this is just an illustration):
{ nrProxyInstances ? 100
, nrAppInstances ? 20
, nrDBInstances ? 2
, nrNSInstances ? 2
, underDDOS ? false
}:

with import <nixpkgs/lib>; # for nameValuePair, listToAttrs, range

let
  makeProxy = n: nameValuePair "proxy-${toString n}"
    ({ config, pkgs, ... }:
    {
      # internet-facing reverse-proxy instance (for caching and load balancing);
      # this does the TLS stripping and anti-DDOS mitigation, like
      # making users mine Monero to get a response when underDDOS is
      # true :)
      services.nginx = { ... };
      # this does reverse-proxying/caching
      services.varnish = { ... };
    });
  # obviously, this assumes you can't just use a CDN instead of makeProxy;
  # say, your app has a search function and you partially invalidate
  # search caches from app instances when users edit pages or something.
  # otherwise, makeProxy can and probably should be replaced by a CDN.
  makeApp = n: nameValuePair "app-${toString n}"
    ({ config, pkgs, ... }:
    {
      # for FCGI, routing, etc
      services.nginx = { ... };
      # the actual web app back-end
      services.myapp = { ... };
    });
  makeDB = n: nameValuePair "db-${toString n}"
    ({ config, pkgs, ... }:
    {
      # database with replication across instances
      services.postgresql = { ... };
    });
  makeNS = n: nameValuePair "nameserver-${toString n}"
    ({ config, pkgs, ... }:
    {
      # DNS-based load balancing goes here
      services.bind = { ... };
    });
in {
  network.description = "My awesome App deployment";
} // listToAttrs (map makeProxy (range 1 nrProxyInstances))
  // listToAttrs (map makeApp (range 1 nrAppInstances))
  // listToAttrs (map makeDB (range 1 nrDBInstances))
  // listToAttrs (map makeNS (range 1 nrNSInstances))
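Assuming the above is saved as network.nix, and the per-machine deployment targets (deployment.targetEnv and friends) live in a separate, here omitted, network-targets.nix, you would then create and deploy the whole thing with something like:
nixops create ./network.nix ./network-targets.nix -d myapp
nixops deploy -d myapp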
Now, theoretically, all you need to do to respond to a change in load is just a couple of commands, like
# a ton of new read-only users (Slashdot/Reddit/whatever effect)
nixops set-args --arg nrProxyInstances 200
nixops deploy
# lots of new actual users
nixops set-args --arg nrProxyInstances 110
nixops set-args --arg nrAppInstances 40
nixops deploy
# we are getting DDOSed
nixops set-args --arg underDDOS true
nixops deploy
which you could then automate with a bit of scripting (though it would be nice if there were a tool for this already, but whatever).
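For example, a minimal sketch of such automation, with current-rps being a hypothetical command that reports the request rate across your proxies:
#!/bin/sh -e
# scale the number of proxies with the observed request rate
rps=$(current-rps) # hypothetical
if [ "$rps" -gt 100000 ]; then
  nixops set-args --arg nrProxyInstances 200
else
  nixops set-args --arg nrProxyInstances 100
fi
nixops deploy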
Now, having solved the deployment problem, Consequence 3 is what is left.
nixos-anywhere is a tool that evaluates NixOS configuration.nix’es locally and installs them remotely. It is clearly based off of nixos-assimilate, nixos-infect, and the NixOS installer from kexec, though its authors don’t give any credit to those anywhere. For disk partitioning it uses disko.
So, in theory, all the pieces to install and deploy your NixOS service to the cloud are here.
In practice, this is incredibly frustrating to work with:
you write your configuration.nix,
then you repeat all the disk setup in a different configuration format for disko,
then you make your configuration.nix into a flake, because nixos-anywhere wants a flake,
then you boot your target host (cloud machine/VPS/VM) and run nixos-anywhere,
then, most likely, since you screwed up the first time and now have to start over, you fix your configuration.nix, repeat the fix in the disko config, wipe your target host, and run nixos-anywhere again, possibly multiple times,
then you convert your configuration.nix into a NixOps network file,
and finally, you can deploy properly.
… and then you have to repeat all of that for each target host in your NixOps network file, unless you are using AWS EC2 or similar, where you can just create new instances by cloning some generic configuration.nix install and then deploying the real configuration.nix’es as needed. (AWS is fine for storage and CDN, but kinda crazy expensive for actual compute, so you probably want to deploy your compute targets somewhere else.)
I propose to basically integrate all of that into vanilla NixOS and NixOps:
NixOS has most of the disk layout options already; it only lacks a way to describe DOS, GPT, and LVM partitions, so I’m thinking something like
# this is new
disk = {
  "/dev/vda" = {
    device = "/dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_-XXXXXXXXXXXX"; # primary device path
    deviceAliases = [ "/dev/sda" ]; # aliases
    format.dos = {
      "boot" = {
        index = 1; # /dev/sda1
        start = 63; # starting sector
        size = "256M";
      };
      "swap" = {
        index = 2; # /dev/sda2
        # starting sector computed automatically
        size = "32G";
      };
      "cryptoroot" = {
        index = 3; # /dev/sda3
        size = "100%"; # the rest of the available space
      };
    };
  };
  "/dev/vdb" = {
    device = "/dev/disk/by-id/ata-Hitachi_HDD_XXXXXXXXXXXXXX";
    format.lvmPV = {
      metadatacopies = 2;
    };
  };
};
# also new
lvm = {
  "vg1" = {
    pvs = [ "/dev/vdb" ];
    lvs."data" = {
      size = "2T";
    };
  };
};
boot.initrd.luks.devices = [
  { name = "u-root";
    device = "/vda/cryptoroot";
    allowDiscards = true;
  }
];
fileSystems = {
  "/boot" = {
    device = "/vda/boot";
    label = "boot"; # the label used while creating the FS and mounting
    fsType = "ext4";
    options = [ "errors=remount-ro" "noatime" "discard" ];
  };
  "/" = {
    device = "/dev/mapper/u-root";
    fsType = "xfs";
    options = [ "noatime" ];
  };
  "/srv" = {
    device = "/dev/vg1/data";
    fsType = "zfs";
  };
};
swapDevices = [
  {
    device = "/vda/swap";
    label = "swap";
  }
];
should suffice.
This is much cleaner than what disko wants you to do, and there’s no need to repeat yourself. It could also generate udev rules to name the devices properly, which disko can’t do: disko’s configuration format is hierarchical, but block devices form a DAG. For instance, dm-cache block devices don’t have a single “main device” they are attached to; they are a function over several block devices.
NixOS could also properly do dependency and sanity-checking on all these things, which it currently does not.
The generated script should also be idempotent, which disko’s is not.
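To illustrate the DAG point, in a hypothetical extension of the options proposed above, a dm-cache device would have to reference several underlying devices at once (all the option names here are made up):
# a dm-cache device is a function over three block devices,
# so it can't hang off any single one of them in a strictly
# hierarchical configuration format
dmcache."cached-data" = {
  origin = "/dev/vg1/data"; # slow backing device
  cache = "/dev/vda/cache"; # fast cache device
  meta = "/dev/vda/cachemeta"; # cache metadata device
};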
Allow nixos-install to install from a pre-evaluated derivation, and allow nixops install to use that instead of nix copy’ing and then nixos-install’ing the build output of the evaluated NixOS system. This way is much more efficient if your configuration.nix does not override any core packages, since it allows the target host to fetch everything from the Hydra binary cache; most users will probably want this as the default.
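A minimal sketch of what this could look like, with the --derivation flag being hypothetical (the real nixos-install only accepts a pre-built system closure, via --system):
# evaluate on the development machine...
drv=$(nix-instantiate '<nixpkgs/nixos>' -A system -I nixos-config=./configuration.nix)
# ...copy the .drv closure, not the build outputs
# (flag semantics vary across nix versions)...
nix copy --derivation --to ssh://root@target "$drv"
# ...and let the target realise it, substituting from the binary cache
ssh root@target nixos-install --derivation "$drv" # hypothetical flag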
Add a nixops install command that would basically do the same thing nixos-anywhere does.
Note that most of the above does not even really need NixOps all that much.
In principle, NixOS with the disk layout options added could just generate a /run/current-system/bin/install-via-ssh script that would do the latter half of what nixos-anywhere does (the actual install); the kexec part could be a separate script distributed as a Nixpkgs package, and NixOps would just call the two in sequence.
This would also save some bandwidth and disk space, since the NixOS installer kexec image would share nix with the system it is installing (at least 86MiB at the moment of writing).
Moreover, NixOS with the disk layout options added could generate other similar scripts, like /run/current-system/bin/install-to-dir and /run/current-system/bin/install-to-disk, which would simply install the configuration into a given directory or onto a given disk. With some simple additions, this would allow you to clone/back up your system to another disk, or make a bootable USB drive from a system config, in a couple of commands.
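For instance (both scripts being, of course, hypothetical, since they are the thing proposed here):
# clone the currently running system onto a freshly attached disk
/run/current-system/bin/install-to-disk /dev/sdb
# or materialize it under a directory, e.g. for a backup or a chroot
/run/current-system/bin/install-to-dir /mnt/clone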
Moving even further in this direction, why not separate the after-nix-is-installed part of those scripts into /run/current-system/bin/deploy-via-ssh and deploy-to-dir scripts? Then you could use NixOS systems like Docker containers, but without the inefficiencies associated with re-creating and moving them around: you just incrementally nix copy a NixOS system over ssh to another machine, and run it in a chroot (like nixos-enter does) or in a container there.
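A minimal sketch of that workflow, with deploy-via-ssh being hypothetical and the target directory chosen arbitrarily:
# incrementally copy the system closure (only missing store paths get sent)
nix copy --to ssh://root@target /run/current-system
# hypothetical: register and activate it under a directory on the target
/run/current-system/bin/deploy-via-ssh root@target /srv/myos
# then run it in a chroot there
ssh root@target nixos-enter --root /srv/myos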