You can use Cachix (I haven't tried it yet, but people are happy with it), or spend time, as I do, crawling through notes like these.
I have to put my notes back in order because of this tweet: https://twitter.com/noteed/status/1285875859468029958
It turns out I did use a cache on Digital Ocean Spaces in the past, but I didn't have many notes.
- https://nixos.wiki/wiki/Binary_Cache: In particular this shows how to use nix-serve to serve a local (i.e. local to the server) Nix store. In addition to what is said there, an environment variable can be used to provide the secret key: https://discourse.nixos.org/t/setting-up-a-binary-cache/4338 (a sketch is given after this list).
- https://nixos.org/nix/manual/#ssec-ssh-substituter: If you have SSH access to a machine, it can serve as a cache.
- https://nixos.org/nix/manual/#ssec-s3-substituter
- The documentation has some steps to create a signing key and a post-build hook to upload to an S3 cache: https://nixos.org/nix/manual/#chap-post-build-hook. Using a post-build hook is nice because users (or CI scripts) don't have to remember to upload to the cache on each build (the documentation also lists some caveats about the provided script; see the sketch after this list).
- https://www.tweag.io/blog/2019-11-21-untrusted-ci/: Untrusted CI: Using Nix to get automatic trusted caching of untrusted builds
- https://www.tweag.io/blog/2020-07-08-buildkite-for-nix-ci/: Uses an HTTP-writable store, which is actually a proxy to Google Storage: https://github.com/tweag/nix-store-gcs-proxy
- http://www.lpenz.org/articles/nixchannel/index.html: A nice addition to a binary cache is a channel, i.e. a "list" of available packages (I'm not sure channels are intended to be anything other than nixpkgs, or whether they can be made private).
- https://fzakaria.com/2020/07/15/setting-up-a-nix-s3-binary-cache.html: Simple tutorial to use S3 as a binary cache.
- https://nixos.wiki/wiki/Distributed_build
- Clarify wording: "cache" and "substituter". The documentation says "Deprecated: binary-caches is now an alias to substituters." Also, they seem quite similar to "store".
- A NAR, or Nix archive, is a set of store paths exported out of the Nix store as a standalone file. A NAR can then be imported into the store. (This reminds me of how a Docker image can be docker saved and docker loaded.)
- Can I configure an SSH-accessible machine as a cache (instead of specifying it with --substituters)? Yes, see e.g. http://softwaresimply.blogspot.com/2018/07/setting-up-private-nix-cache.html
- How can I list store paths that are not yet uploaded to the cache? How can I make the example upload-to-cache.sh script better (e.g. when there is no network)?
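For the nix-serve setup mentioned in the first bullet, the invocation looks roughly like this (a sketch only; the port and the key file path are placeholders of mine, not taken from the wiki page):
NIX_SECRET_KEY_FILE=/path/to/cache-priv-key.pem nix-serve -p 8080
Clients then list http://the-server:8080 in their substituters and add the matching public key to trusted-public-keys.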
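And for the post-build hook bullet, the script shown in the manual is essentially this (reproduced from memory, so defer to the linked chapter; example-nix-cache is their placeholder bucket). It is registered with post-build-hook = /etc/nix/upload-to-cache.sh in nix.conf, with secret-key-files pointing at the signing key so the paths are signed before upload:
#!/bin/sh
set -eu
set -f # disable globbing
export IFS=' '
echo "Uploading paths" $OUT_PATHS
exec nix copy --to 's3://example-nix-cache' $OUT_PATHS
As far as I remember, the caveats are mostly about the hook running synchronously (blocking the build loop) and about what happens when the upload fails (e.g. no network), which is exactly the upload-to-cache.sh question above.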
A cache is a location containing (optionally signed) store paths that can be downloaded (the documentation says "fetched") instead of actually built when using e.g. nix-build. Caches can be local directories, directories served through HTTP(S), or S3 buckets.
An HTTP cache URL can look like cache.nixos.org, my-cache.cachix.org, cache.ams3.digitaloceanspaces.com, ... (i.e. an S3 bucket can naturally be accessed through HTTP too).
When configuring a cache on a client, in addition to the URL, a matching public key can be set to verify the downloaded store paths.
A cache public key can look like gravity.cs.illinois.edu-1:yymmNS/WMf0iTj2NnD0nrVV8cBOXM9ivAkEdO1Lro3U= (I forgot what this specific key is about). Here gravity.cs.illinois.edu-1 is the key name and probably matches a hostname (that is the recommended practice), but it can actually be anything.
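Concretely, this gives two settings on the client, either in nix.conf or via --option on the command line. The pairing below is purely illustrative: cache.nixos.org and its key are the defaults anyway, and the second key is just the example key from above, not the actual key of that Spaces URL:
substituters = https://cache.nixos.org https://cache.ams3.digitaloceanspaces.com
trusted-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY= gravity.cs.illinois.edu-1:yymmNS/WMf0iTj2NnD0nrVV8cBOXM9ivAkEdO1Lro3U=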
Generating a signing key: https://nixos.org/nix/manual/#operation-generate-binary-cache-key
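The command behind that link is nix-store --generate-binary-cache-key; the key name and file names below are placeholders:
nix-store --generate-binary-cache-key my-cache.example-1 cache-priv-key.pem cache-pub-key.pem
The secret key stays on the machine doing the signing (or goes into NIX_SECRET_KEY_FILE for nix-serve), and the content of the public key file is what ends up in trusted-public-keys on the clients.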
Uploading to a local cache:
nix copy --to "file://$(pwd)/cache" $(nix-build ... --no-out-link)
Uploading to an S3 cache can be done with:
nix copy --to 's3://example-nix-cache?profile=cache-upload&region=eu-west-2' nixpkgs.hello
nix copy --to 's3://example-nix-cache?profile=cache-upload&scheme=https&endpoint=minio.example.com' nixpkgs.hello
See https://nixos.org/nix/manual/#ssec-s3-substituter-authenticated-writes
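About the profile=cache-upload part: as far as I understand, the S3 store goes through the standard AWS SDK credential lookup, so the profile name refers to a section of ~/.aws/credentials (the values below are obviously placeholders):
[cache-upload]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
The usual AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables should also work, which is convenient in CI.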
In my old notes, I have
find /path/to/cache/ -maxdepth 1 -not -name cache | xargs -I {} s3cmd put {} s3://cache --acl-public --recursive
(I guess I didn't know yet about the s3cmd sync command.)
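With s3cmd sync, that whole find/xargs dance presumably collapses to a single command (untested; --acl-public only if the cache is meant to be world-readable):
s3cmd sync /path/to/cache/ s3://cache/ --acl-public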
When using Backblaze B2, this worked (passing the profile as above didn't work; I don't know if Backblaze has such profiles. The scheme parameter is not needed either):
$ nix copy --to 's3://noteed-actions?endpoint=s3.eu-central-003.backblazeb2.com' nixpkgs.hello
noteed-actions was the bucket name, and the endpoint was given when I created the application key and is repeated on the bucket in the Backblaze web interface.
If you don't want all the files at the root of the bucket, a directory name can be specified, e.g. s3://noteed-actions/cache.
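Presumably the same URL works on the read side too, as a substituter (untested; the matching public key still has to be added to trusted-public-keys):
nix-build ... --option substituters 's3://noteed-actions?endpoint=s3.eu-central-003.backblazeb2.com'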
- It seems the simplest way to make a cache private is to use an SSH-accessible cache, thus controlling access with SSH keys (a short example follows at the end of these notes).
- There is a netrc-file option for Nix.
- For S3: https://nixos.org/nix/manual/#ssec-s3-substituter-authenticated-reads, but I haven't seen how to configure the credentials. I guess environment variables can be set. As is often the case with Digital Ocean, it seems that Spaces access keys have access to all buckets instead of just one. It seems Backblaze supports limiting a key to a specific bucket:
If an Application Key is restricted to a bucket, the listAllBucketNames permission is required for compatibility with SDKs and integrations. The listAllBucketNames permission can be enabled upon creation in the web UI or using the b2_create_key API call. More: https://www.backblaze.com/b2/docs/s3_compatible_api.html
(Backblaze, Digital Ocean and Packet are members of the Bandwidth Alliance.)
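For the SSH-accessible cache mentioned above, the manual's SSH substituter section boils down to something like this (the user and host are placeholders; access control is then just ordinary SSH keys):
nix copy --to ssh://nix@build.example.com $(nix-build ... --no-out-link)
nix-build ... --substituters ssh://nix@build.example.com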