feat: move things around

eric
2026-03-18 17:41:10 +01:00
parent f558ab4ba9
commit 1de34c1869
22 changed files with 54 additions and 2692 deletions

README.md

@@ -1,414 +1,35 @@
# nix-nodeiwest

Employee and workstation flake for NodeiWest.
Server deployment moved to the sibling repo `../nix-deployment`.

This repo now owns:

- shared Home Manager modules
- employee shell packages and environment variables
- workstation-side access to the `nodeiwest` helper by consuming it from `../nix-deployment`

This repo no longer owns:

- NixOS server host definitions
- Colmena deployment state
- Tailscale server bootstrap
- k3s bootstrap
- OpenBao server or Kubernetes infra manifests

## Helper Consumption

The helper package is re-exported from the deployment repo:

```bash
nix run .#nodeiwest-helper -- --help
```

The sections below are from the previous README, removed by this commit; they documented provisioning and deployment directly from this repo.

NixOS flake for NodeiWest VPS provisioning and ongoing deployment.
This repo currently provisions NixOS hosts with:

- the `nodeiwest` employee helper CLI for safe provisioning
- shared base config in `modules/nixos/common.nix`
- Tailscale bootstrap via OpenBao AppRole in `modules/nixos/tailscale-init.nix`
- Home Manager profile in `modules/home.nix`
- disk partitioning via `disko`
- deployment via `colmena`

## Current Model

- Employees should use `nodeiwest` as the supported provisioning interface
- New machines are installed with `nixos-anywhere`
- Ongoing changes are deployed with `colmena`
- Hosts authenticate to OpenBao as clients
- Tailscale auth keys are fetched from OpenBao namespace `it`, KV mount `kv`, path `tailscale`, field `CLIENT_SECRET`
- Public SSH must work independently of Tailscale for first access and recovery

## Repo Layout

```text
flake.nix
hosts/
  vps[X]/
    configuration.nix
    disko.nix
    hardware-configuration.nix
modules/
  home.nix
  helpers/
    home.nix
  nixos/
    common.nix
    tailscale-init.nix
pkgs/
  helpers/
    cli.py
    templates/
```

## Recommended Workflow

The supported employee path is the `nodeiwest` CLI.
It is exported from the root flake as `.#nodeiwest-helper` and installed by the shared Home Manager profile. You can also run it ad hoc with:

```bash
nix run .#nodeiwest-helper -- --help
```
Recommended sequence for a new VPS:
### 1. Probe The Live Host
```bash
nodeiwest host probe --ip <ip>
```
This validates SSH reachability and derives the boot mode, root device, primary disk candidate, and swap facts from the live machine.
### 2. Scaffold The Host Files
Dry-run first:
```bash
nodeiwest host init --name <name> --ip <ip>
```
Write after reviewing the plan:
```bash
nodeiwest host init --name <name> --ip <ip> --apply
```
This command:
- probes the host unless you override disk or boot mode
- creates or updates `hosts/<name>/configuration.nix`
- creates or updates `hosts/<name>/disko.nix`
- creates `hosts/<name>/hardware-configuration.nix` as a placeholder if needed
- prints the exact `flake.nix` snippets still required for `nixosConfigurations` and `colmena`
### 3. Create The OpenBao Bootstrap Material
Dry-run first:
```bash
nodeiwest openbao init-host --name <name>
```
Apply after reviewing the policy and AppRole plan:
```bash
nodeiwest openbao init-host --name <name> --apply
```
This verifies your existing `bao` login, creates the host policy and AppRole, and writes:
- `bootstrap/var/lib/nodeiwest/openbao-approle-role-id`
- `bootstrap/var/lib/nodeiwest/openbao-approle-secret-id`
### 4. Plan Or Run The Install
```bash
nodeiwest install plan --name <name>
nodeiwest install run --name <name> --apply
```
`install plan` validates the generated host files and bootstrap files, then prints the exact `nixos-anywhere` command. `install run` re-validates, asks for confirmation, and executes that command.
### 5. Verify First Boot And Colmena Readiness
```bash
nodeiwest verify host --name <name> --ip <ip>
nodeiwest colmena plan --name <name>
```
`verify host` summarizes the first-boot OpenBao and Tailscale services over SSH. `colmena plan` confirms the deploy target or prints the exact missing host stanza.
## Manual Flow (Fallback / Advanced)
This is the underlying sequence that `nodeiwest` automates. Keep it as the fallback path for unsupported host layouts or when you intentionally want to run the raw commands yourself.
### 1. Prepare The Host Entry
Create a new directory under `hosts/<name>/` with:
- `configuration.nix`
- `disko.nix`
- `hardware-configuration.nix`
`configuration.nix` should import both `disko.nix` and `hardware-configuration.nix`.
Example:
```nix
{ lib, ... }:
{
  imports = [
    ./disko.nix
    ./hardware-configuration.nix
  ];

  networking.hostName = "vps1";
  networking.useDHCP = lib.mkDefault true;
  time.timeZone = "UTC";

  boot.loader.efi.canTouchEfiVariables = true;
  boot.loader.grub = {
    enable = true;
    efiSupport = true;
    device = "nodev";
  };

  nodeiwest.ssh.userCAPublicKeys = [
    "ssh-ed25519 AAAA... openbao-user-ca"
  ];

  nodeiwest.tailscale.openbao.enable = true;

  system.stateVersion = "25.05";
}
```

If you import `modules/helpers/home.nix` directly, pass the deployment flake as a special arg:

```nix
extraSpecialArgs = {
  deployment = inputs.deployment;
};
```
### 2. Add The Host To `flake.nix`
Add the host to:
- `nixosConfigurations`
- `colmena`
For `colmena`, set:
- `deployment.targetHost`
- `deployment.targetUser = "root"`
- tags as needed
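The two stanzas printed by `nodeiwest host init` have roughly this shape (the host name `vps2`, IP, and tags here are illustrative, modeled on the existing `vps1` entry):

```nix
nixosConfigurations.vps2 = mkHost "vps2";

colmena.vps2 = {
  deployment = {
    targetHost = "203.0.113.10";
    targetUser = "root";
    tags = [ "company" ];
  };
  imports = [ ./hosts/vps2/configuration.nix ];
};
```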
## Discover Disk And Boot Facts
Before writing `disko.nix`, inspect the current VPS over SSH:
```bash
ssh root@<ip> 'lsblk -o NAME,SIZE,TYPE,MODEL,FSTYPE,PTTYPE,MOUNTPOINTS'
ssh root@<ip> 'test -d /sys/firmware/efi && echo UEFI || echo BIOS'
ssh root@<ip> 'findmnt -no SOURCE /'
ssh root@<ip> 'cat /proc/swaps'
```
Use that output to decide:
- disk device name: `/dev/sda`, `/dev/vda`, `/dev/nvme0n1`, etc.
- boot mode: UEFI or BIOS
- partition layout you want `disko` to create
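The disk device is usually the root partition reported by `findmnt` with its partition suffix stripped. A small Python sketch of that derivation (an illustration of the rule, not the helper's actual implementation):

```python
import re

def disk_from_device(device: str) -> str:
    """Strip a partition suffix: /dev/sda2 -> /dev/sda, /dev/nvme0n1p2 -> /dev/nvme0n1."""
    # NVMe partitions are named <disk>p<N>; drop the trailing pN.
    m = re.fullmatch(r"(/dev/nvme\d+n\d+)p\d+", device)
    if m:
        return m.group(1)
    # SCSI/virtio partitions end in a bare number; drop the trailing digits.
    # (Only meaningful for partition paths, not whole-disk paths.)
    return re.sub(r"\d+$", "", device)
```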
`hosts/vps1/disko.nix` currently assumes:
- GPT
- `/dev/sda`
- UEFI
- ext4 root
- swap partition
Do not install blindly if those assumptions are wrong.
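Because the shipped `disko.nix` wraps the device in `lib.mkDefault`, a host with a different primary disk can override it from its own configuration instead of editing `disko.nix` (`/dev/vda` is an example value):

```nix
disko.devices.disk.main.device = "/dev/vda";
```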
## Generate `hardware-configuration.nix`
`hardware-configuration.nix` is generated during install with `nixos-anywhere`.
The repo path is passed directly to the install command:
```bash
--generate-hardware-config nixos-generate-config ./hosts/<name>/hardware-configuration.nix
```
That generated file should remain tracked in Git after install.
## OpenBao Setup For Tailscale
Each host gets its own AppRole.
The host uses:
- OpenBao address: `https://secrets.api.nodeiwest.se`
- namespace: `it`
- KV mount: `kv`
- auth mount: `auth/approle`
- secret path: `tailscale`
- field: `CLIENT_SECRET`
The host stores:
- `/var/lib/nodeiwest/openbao-approle-role-id`
- `/var/lib/nodeiwest/openbao-approle-secret-id`
The rendered Tailscale auth key lives at:
- `/run/nodeiwest/tailscale-auth-key`
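These locations are defaults of the `nodeiwest.tailscale.openbao` module options and can be overridden per host; shown here with their default values:

```nix
nodeiwest.tailscale.openbao = {
  namespace = "it";
  authPath = "auth/approle";
  secretPath = "tailscale";
  field = "CLIENT_SECRET";
  renderedAuthKeyFile = "/run/nodeiwest/tailscale-auth-key";
  approle.roleIdFile = "/var/lib/nodeiwest/openbao-approle-role-id";
  approle.secretIdFile = "/var/lib/nodeiwest/openbao-approle-secret-id";
};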
### Create A Policy
Create a minimal read-only policy for the Tailscale secret.
If the secret is accessible as:
```bash
BAO_NAMESPACE=it bao kv get -mount=kv tailscale
```
then create the matching read policy for that mount.
Example shape for the KV v2 mount `kv`:
```hcl
path "kv/data/tailscale" {
  capabilities = ["read"]
}
```
Write it from your machine:
```bash
export BAO_ADDR=https://secrets.api.nodeiwest.se
export BAO_NAMESPACE=it
bao policy write tailscale-vps1 ./tailscale-vps1-policy.hcl
```
Adjust the path to match your actual OpenBao KV mount.
### Create The AppRole
Create one AppRole per host.
Example for `vps1`:
```bash
bao write auth/approle/role/tailscale-vps1 \
token_policies=tailscale-vps1 \
token_ttl=1h \
token_max_ttl=24h \
token_num_uses=0 \
secret_id_num_uses=0
```
### Generate Bootstrap Credentials
Create a temporary bootstrap directory on your machine:
```bash
mkdir -p bootstrap/var/lib/nodeiwest
```
Write the AppRole credentials into it:
```bash
bao read -field=role_id auth/approle/role/tailscale-vps1/role-id \
> bootstrap/var/lib/nodeiwest/openbao-approle-role-id
bao write -f -field=secret_id auth/approle/role/tailscale-vps1/secret-id \
> bootstrap/var/lib/nodeiwest/openbao-approle-secret-id
chmod 0400 bootstrap/var/lib/nodeiwest/openbao-approle-role-id
chmod 0400 bootstrap/var/lib/nodeiwest/openbao-approle-secret-id
```
These files are install-time bootstrap material. They are not stored in Git.
## Install With `nixos-anywhere`
Install from your machine:
```bash
nix run github:nix-community/nixos-anywhere -- \
--extra-files ./bootstrap \
--copy-host-keys \
--generate-hardware-config nixos-generate-config ./hosts/vps1/hardware-configuration.nix \
--flake .#vps1 \
root@100.101.167.118
```
What this does:
- wipes the target disk according to `hosts/vps1/disko.nix`
- installs NixOS with `.#vps1`
- copies the AppRole bootstrap files into `/var/lib/nodeiwest`
- generates `hosts/vps1/hardware-configuration.nix`
Important:
- this destroys the existing OS on the target
- take provider snapshots and application backups first
- the target SSH host keys may change after install
## First Boot Behavior
On first boot:
1. `vault-agent-tailscale.service` starts using `pkgs.openbao`
2. it authenticates to OpenBao with AppRole
3. it renders `CLIENT_SECRET` from namespace `it`, KV mount `kv`, path `tailscale` to `/run/nodeiwest/tailscale-auth-key`
4. `nodeiwest-tailscale-authkey-ready.service` waits until that file exists
5. `tailscaled-autoconnect.service` uses that file and runs `tailscale up --ssh`
Public SSH remains the recovery path if OpenBao or Tailscale bootstrap fails.
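This ordering is plain systemd dependency wiring; in module terms, the readiness gate between the agent and autoconnect is expressed as:

```nix
systemd.services.nodeiwest-tailscale-authkey-ready = {
  after = [ "vault-agent-tailscale.service" ];
  requires = [ "vault-agent-tailscale.service" ];
  before = [ "tailscaled-autoconnect.service" ];
  requiredBy = [ "tailscaled-autoconnect.service" ];
};
```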
## Verify After Install
SSH to the host over the public IP first.
Check:
```bash
systemctl status vault-agent-tailscale
systemctl status nodeiwest-tailscale-authkey-ready
systemctl status tailscaled-autoconnect
ls -l /var/lib/nodeiwest
ls -l /run/nodeiwest/tailscale-auth-key
tailscale status
```
If Tailscale bootstrap fails, inspect logs:
```bash
journalctl -u vault-agent-tailscale -b
journalctl -u nodeiwest-tailscale-authkey-ready -b
journalctl -u tailscaled-autoconnect -b
```
Typical causes:
- wrong AppRole credentials
- wrong OpenBao policy
- wrong secret path
- wrong KV mount path
- `CLIENT_SECRET` field missing in the secret
## Deploy Changes After Install
Once the host is installed and reachable, use Colmena:
```bash
nix run .#colmena -- apply --on vps1
```
## Rotating The AppRole SecretID
To rotate the machine credential:
1. generate a new `secret_id` from your machine
2. replace `/var/lib/nodeiwest/openbao-approle-secret-id` on the host
3. restart the agent
Example:
```bash
bao write -f -field=secret_id auth/approle/role/tailscale-vps1/secret-id > new-secret-id
scp new-secret-id root@100.101.167.118:/var/lib/nodeiwest/openbao-approle-secret-id
ssh root@100.101.167.118 'chmod 0400 /var/lib/nodeiwest/openbao-approle-secret-id && systemctl restart vault-agent-tailscale tailscaled-autoconnect'
rm -f new-secret-id
```
## Recovery Notes
- Tailscale is additive. It should not be your only access path.
- Public SSH on port `22` must remain available for first access and recovery.
- OpenBao SSH CA auth is separate from Tailscale bootstrap.
- If a machine fails to join the tailnet, recover via public SSH or provider console.

flake.lock (generated)

@@ -22,9 +22,29 @@
"type": "github"
}
},
"deployment": {
"inputs": {
"colmena": "colmena",
"disko": "disko",
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 0,
"narHash": "sha256-BW+YgPQb2t5davyiQ6gb4sIbBdIL72jCaLGiehkGT9U=",
"type": "git",
"url": "file:../nix-deployment"
},
"original": {
"type": "git",
"url": "file:../nix-deployment"
}
},
"disko": {
"inputs": {
"nixpkgs": [
"deployment",
"nixpkgs"
]
},
@@ -96,6 +116,7 @@
"nix-github-actions": {
"inputs": {
"nixpkgs": [
"deployment",
"colmena",
"nixpkgs"
]
@@ -148,8 +169,7 @@
},
"root": {
"inputs": {
"colmena": "colmena",
"disko": "disko",
"deployment": "deployment",
"home-manager": "home-manager",
"nixpkgs": "nixpkgs_2"
}

flake.nix

@@ -1,26 +1,23 @@
{
description = "NodeiWest company flake";
description = "NodeiWest employee and workstation flake";
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
colmena.url = "github:zhaofengli/colmena";
disko = {
url = "github:nix-community/disko";
inputs.nixpkgs.follows = "nixpkgs";
};
home-manager = {
url = "github:nix-community/home-manager";
inputs.nixpkgs.follows = "nixpkgs";
};
deployment = {
url = "git+file:../nix-deployment";
inputs.nixpkgs.follows = "nixpkgs";
};
};
outputs =
inputs@{
self,
nixpkgs,
colmena,
disko,
home-manager,
deployment,
...
}:
let
@@ -31,111 +28,22 @@
"x86_64-linux"
];
forAllSystems = lib.genAttrs supportedSystems;
mkPkgs =
system:
import nixpkgs {
inherit system;
};
mkHost =
name:
nixpkgs.lib.nixosSystem {
system = "x86_64-linux";
specialArgs = {
inherit inputs self;
};
modules = [
disko.nixosModules.disko
home-manager.nixosModules.home-manager
self.nixosModules.common
./hosts/${name}/configuration.nix
];
};
in
{
homeManagerModules.default = ./modules/home.nix;
homeManagerModules.helpers = ./modules/helpers/home.nix;
nixosModules.common = ./modules/nixos/common.nix;
packages = forAllSystems (
system:
let
pkgs = mkPkgs system;
nodeiwestHelper = pkgs.callPackage ./pkgs/helpers { };
in
{
colmena = colmena.packages.${system}.colmena;
nodeiwest-helper = nodeiwestHelper;
default = colmena.packages.${system}.colmena;
}
);
packages = forAllSystems (system: {
nodeiwest-helper = deployment.packages.${system}.nodeiwest-helper;
default = self.packages.${system}.nodeiwest-helper;
});
apps = forAllSystems (system: {
colmena = {
type = "app";
program = "${colmena.packages.${system}.colmena}/bin/colmena";
};
nodeiwest-helper = {
type = "app";
program = "${self.packages.${system}.nodeiwest-helper}/bin/nodeiwest";
};
default = self.apps.${system}.colmena;
default = self.apps.${system}.nodeiwest-helper;
});
nixosConfigurations = {
vps1 = mkHost "vps1";
lab = mkHost "lab";
};
colmena = {
meta = {
nixpkgs = mkPkgs "x86_64-linux";
specialArgs = {
inherit inputs self;
};
};
defaults =
{ name, ... }:
{
networking.hostName = name;
imports = [
disko.nixosModules.disko
home-manager.nixosModules.home-manager
self.nixosModules.common
];
};
vps1 = {
deployment = {
targetHost = "100.101.167.118";
targetUser = "root";
tags = [
"company"
"edge"
];
};
imports = [ ./hosts/vps1/configuration.nix ];
};
lab = {
deployment = {
targetHost = "100.101.167.118";
targetUser = "root";
tags = [
"company"
"manager"
];
};
imports = [ ./hosts/lab/configuration.nix ];
};
};
colmenaHive = colmena.lib.makeHive self.outputs.colmena;
};
}


@@ -1,30 +0,0 @@
{ lib, ... }:
{
  # Generated by nodeiwest host init.
  imports = [
    ./disko.nix
    ./hardware-configuration.nix
  ];

  networking.hostName = "lab";
  networking.useDHCP = lib.mkDefault true;
  time.timeZone = "UTC";

  boot.loader.efi.canTouchEfiVariables = true;
  boot.loader.grub = {
    enable = true;
    efiSupport = true;
    device = "nodev";
  };

  nodeiwest.ssh.userCAPublicKeys = [
    "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE6c2oMkM7lLg9qWHVgbrFaFBDrrFyynFlPviiydQdFi openbao-user-ca"
  ];

  nodeiwest.tailscale.openbao = {
    enable = true;
  };

  system.stateVersion = "25.05";
}


@@ -1,47 +0,0 @@
{
  lib,
  ...
}:
{
  # Generated by nodeiwest host init.
  # Replace the disk only if the provider exposes a different primary device.
  disko.devices = {
    disk.main = {
      type = "disk";
      device = lib.mkDefault "/dev/sda";
      content = {
        type = "gpt";
        partitions = {
          ESP = {
            priority = 1;
            name = "ESP";
            start = "1MiB";
            end = "512MiB";
            type = "EF00";
            content = {
              type = "filesystem";
              format = "vfat";
              mountpoint = "/boot";
              mountOptions = [ "umask=0077" ];
            };
          };
          swap = {
            size = "4G";
            content = {
              type = "swap";
              resumeDevice = true;
            };
          };
          root = {
            size = "100%";
            content = {
              type = "filesystem";
              format = "ext4";
              mountpoint = "/";
            };
          };
        };
      };
    };
  };
}


@@ -1,5 +0,0 @@
{ ... }:
{
  # Placeholder generated by nodeiwest host init.
  # nixos-anywhere will replace this with the generated hardware config.
}


@@ -1,28 +0,0 @@
{ lib, ... }:
{
  imports = [
    ./disko.nix
    ./hardware-configuration.nix
  ];

  networking.hostName = "vps1";
  networking.useDHCP = lib.mkDefault true;
  time.timeZone = "UTC";

  boot.loader.efi.canTouchEfiVariables = true;
  boot.loader.grub = {
    enable = true;
    efiSupport = true;
    device = "nodev";
  };

  nodeiwest.ssh.userCAPublicKeys = [
    "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE6c2oMkM7lLg9qWHVgbrFaFBDrrFyynFlPviiydQdFi openbao-user-ca"
  ];

  nodeiwest.tailscale.openbao = {
    enable = true;
  };

  system.stateVersion = "25.05";
}


@@ -1,46 +0,0 @@
{
  lib,
  ...
}:
{
  # Replace /dev/sda if the VPS exposes a different disk, e.g. /dev/vda or /dev/nvme0n1.
  disko.devices = {
    disk.main = {
      type = "disk";
      device = lib.mkDefault "/dev/sda";
      content = {
        type = "gpt";
        partitions = {
          ESP = {
            priority = 1;
            name = "ESP";
            start = "1MiB";
            end = "512MiB";
            type = "EF00";
            content = {
              type = "filesystem";
              format = "vfat";
              mountpoint = "/boot";
              mountOptions = [ "umask=0077" ];
            };
          };
          swap = {
            size = "4G";
            content = {
              type = "swap";
              resumeDevice = true;
            };
          };
          root = {
            size = "100%";
            content = {
              type = "filesystem";
              format = "ext4";
              mountpoint = "/";
            };
          };
        };
      };
    };
  };
}


@@ -1,10 +0,0 @@
{ lib, ... }:
{
  # Replace this file with the generated hardware config from the target host.
  fileSystems."/" = lib.mkDefault {
    device = "/dev/disk/by-label/nixos";
    fsType = "ext4";
  };

  swapDevices = [ ];
}


@@ -1,10 +1,7 @@
Before:

```nix
{ pkgs, ... }:
let
  nodeiwestHelper = pkgs.callPackage ../../pkgs/helpers { };
in
{
  home.packages = [
    pkgs.python3
    nodeiwestHelper
  ];
}
```

After:

```nix
{ pkgs, deployment, ... }:
{
  home.packages = [
    pkgs.python3
    deployment.packages.${pkgs.system}.nodeiwest-helper
  ];
}
```


@@ -14,5 +14,6 @@
    openbao
    colmena
    # etc.
    sops
  ];
}


@@ -1,101 +0,0 @@
{
  config,
  lib,
  self,
  ...
}:
let
  cfg = config.nodeiwest;
  trustedUserCAKeysPath = "/etc/ssh/trusted-user-ca-keys.pem";
in
{
  imports = [ ./tailscale-init.nix ];

  options.nodeiwest = {
    openbao.address = lib.mkOption {
      type = lib.types.str;
      default = "https://secrets.api.nodeiwest.se";
      description = "Remote OpenBao address that hosts should use as clients.";
      example = "https://secrets.api.nodeiwest.se";
    };

    homeManagerUsers = lib.mkOption {
      type = lib.types.listOf lib.types.str;
      default = [
        "root"
        "deploy"
      ];
      description = "Users that should receive the shared Home Manager company profile.";
      example = [
        "root"
        "deploy"
      ];
    };

    ssh.userCAPublicKeys = lib.mkOption {
      type = lib.types.listOf lib.types.singleLineStr;
      default = [ ];
      description = "OpenBao SSH user CA public keys trusted by sshd for user certificate authentication.";
      example = [
        "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBExampleOpenBaoUserCA openbao-user-ca"
      ];
    };
  };

  config = {
    networking.firewall.allowedTCPPorts = [
      22
      80
      443
    ];

    services.openssh = {
      enable = true;
      settings = {
        PasswordAuthentication = false;
        KbdInteractiveAuthentication = false;
        PubkeyAuthentication = true;
        PermitRootLogin = "prohibit-password";
      }
      // lib.optionalAttrs (cfg.ssh.userCAPublicKeys != [ ]) {
        TrustedUserCAKeys = trustedUserCAKeysPath;
      };
    };

    users.groups.deploy = { };
    users.users.deploy = {
      isNormalUser = true;
      group = "deploy";
      createHome = true;
      extraGroups = [ "wheel" ];
    };

    services.traefik = {
      enable = true;
      staticConfigOptions = {
        api.dashboard = true;
        entryPoints.web.address = ":80";
        entryPoints.websecure.address = ":443";
        ping = { };
      };
      dynamicConfigOptions = lib.mkMerge [ ];
    };

    home-manager = {
      useGlobalPkgs = true;
      useUserPackages = true;
      users = lib.genAttrs cfg.homeManagerUsers (_: {
        imports = [ self.homeManagerModules.default ];
        home.stateVersion = config.system.stateVersion;
      });
    };

    environment.etc = lib.mkIf (cfg.ssh.userCAPublicKeys != [ ]) {
      "ssh/trusted-user-ca-keys.pem".text = lib.concatStringsSep "\n" cfg.ssh.userCAPublicKeys + "\n";
    };

    environment.variables = {
      BAO_ADDR = cfg.openbao.address;
    };
  };
}


@@ -1,155 +0,0 @@
{
  config,
  lib,
  pkgs,
  ...
}:
let
  cfg = config.nodeiwest;
  tailscaleOpenbaoCfg = cfg.tailscale.openbao;
in
{
  options.nodeiwest.tailscale.openbao = {
    enable = lib.mkEnableOption "fetching the Tailscale auth key from OpenBao";

    namespace = lib.mkOption {
      type = lib.types.str;
      default = "it";
      description = "OpenBao namespace used when fetching the Tailscale auth key.";
    };

    authPath = lib.mkOption {
      type = lib.types.str;
      default = "auth/approle";
      description = "OpenBao auth mount path used by the AppRole login.";
    };

    secretPath = lib.mkOption {
      type = lib.types.str;
      default = "tailscale";
      description = "OpenBao secret path containing the Tailscale auth key.";
    };

    field = lib.mkOption {
      type = lib.types.str;
      default = "CLIENT_SECRET";
      description = "Field in the OpenBao secret that contains the Tailscale auth key.";
    };

    renderedAuthKeyFile = lib.mkOption {
      type = lib.types.str;
      default = "/run/nodeiwest/tailscale-auth-key";
      description = "Runtime file rendered by OpenBao Agent and consumed by Tailscale autoconnect.";
    };

    approle = {
      roleIdFile = lib.mkOption {
        type = lib.types.str;
        default = "/var/lib/nodeiwest/openbao-approle-role-id";
        description = "Root-only file containing the OpenBao AppRole role_id.";
      };
      secretIdFile = lib.mkOption {
        type = lib.types.str;
        default = "/var/lib/nodeiwest/openbao-approle-secret-id";
        description = "Root-only file containing the OpenBao AppRole secret_id.";
      };
    };
  };

  config = {
    systemd.tmpfiles.rules = [
      "d /var/lib/nodeiwest 0700 root root - -"
      "d /run/nodeiwest 0700 root root - -"
    ];

    services.tailscale = {
      enable = true;
      openFirewall = true;
      extraUpFlags = lib.optionals tailscaleOpenbaoCfg.enable [ "--ssh" ];
      authKeyFile = if tailscaleOpenbaoCfg.enable then tailscaleOpenbaoCfg.renderedAuthKeyFile else null;
    };

    services.vault-agent.instances.tailscale = lib.mkIf tailscaleOpenbaoCfg.enable {
      package = pkgs.openbao;
      settings = {
        vault.address = cfg.openbao.address;
        auto_auth = {
          method = [
            {
              type = "approle";
              mount_path = tailscaleOpenbaoCfg.authPath;
              namespace = tailscaleOpenbaoCfg.namespace;
              config = {
                role_id_file_path = tailscaleOpenbaoCfg.approle.roleIdFile;
                secret_id_file_path = tailscaleOpenbaoCfg.approle.secretIdFile;
                remove_secret_id_file_after_reading = false;
              };
            }
          ];
        };
        template = [
          {
            contents = ''{{- with secret "${tailscaleOpenbaoCfg.secretPath}" -}}{{- if .Data.data -}}{{ index .Data.data "${tailscaleOpenbaoCfg.field}" }}{{- else -}}{{ index .Data "${tailscaleOpenbaoCfg.field}" }}{{- end -}}{{- end -}}'';
            destination = tailscaleOpenbaoCfg.renderedAuthKeyFile;
            perms = "0400";
          }
        ];
      };
    };

    systemd.services.vault-agent-tailscale = lib.mkIf tailscaleOpenbaoCfg.enable {
      wants = [ "network-online.target" ];
      after = [ "network-online.target" ];
      serviceConfig.Environment = [ "BAO_NAMESPACE=${tailscaleOpenbaoCfg.namespace}" ];
    };

    systemd.services.nodeiwest-tailscale-authkey-ready = lib.mkIf tailscaleOpenbaoCfg.enable {
      description = "Wait for the Tailscale auth key rendered by OpenBao Agent";
      after = [ "vault-agent-tailscale.service" ];
      requires = [ "vault-agent-tailscale.service" ];
      before = [ "tailscaled-autoconnect.service" ];
      requiredBy = [ "tailscaled-autoconnect.service" ];
      path = [ pkgs.coreutils ];
      serviceConfig = {
        Type = "oneshot";
      };
      script = ''
        set -euo pipefail
        for _ in $(seq 1 60); do
          if [ -s ${lib.escapeShellArg tailscaleOpenbaoCfg.renderedAuthKeyFile} ]; then
            exit 0
          fi
          sleep 1
        done
        echo "Timed out waiting for rendered Tailscale auth key at ${tailscaleOpenbaoCfg.renderedAuthKeyFile}" >&2
        exit 1
      '';
    };

    systemd.services.tailscaled-autoconnect = lib.mkIf tailscaleOpenbaoCfg.enable {
      after = [
        "vault-agent-tailscale.service"
        "nodeiwest-tailscale-authkey-ready.service"
      ];
      requires = [
        "vault-agent-tailscale.service"
        "nodeiwest-tailscale-authkey-ready.service"
      ];
      serviceConfig.ExecStartPre = [
        "${lib.getExe' pkgs.coreutils "test"} -s ${tailscaleOpenbaoCfg.renderedAuthKeyFile}"
      ];
    };

    assertions = [
      {
        assertion =
          (!tailscaleOpenbaoCfg.enable)
          || (tailscaleOpenbaoCfg.approle.roleIdFile != "" && tailscaleOpenbaoCfg.approle.secretIdFile != "");
        message = "AppRole roleIdFile and secretIdFile must be set when OpenBao-backed Tailscale enrollment is enabled.";
      }
    ];
  };
}

File diff suppressed because it is too large.


@@ -1,32 +0,0 @@
{
  lib,
  writeShellApplication,
  python3,
  openbao,
  openssh,
  gitMinimal,
  nix,
}:
writeShellApplication {
  name = "nodeiwest";
  runtimeInputs = [
    python3
    openbao
    openssh
    gitMinimal
    nix
  ];
  text = ''
    export NODEIWEST_HELPER_TEMPLATES=${./templates}
    exec ${python3}/bin/python ${./cli.py} "$@"
  '';
  meta = with lib; {
    description = "Safe VPS provisioning helper for the NodeiWest NixOS flake";
    license = licenses.mit;
    mainProgram = "nodeiwest";
    platforms = platforms.unix;
  };
}


@@ -1,23 +0,0 @@
{ lib, ... }:
{
  # Generated by nodeiwest host init.
  imports = [
    ./disko.nix
    ./hardware-configuration.nix
  ];

  networking.hostName = "@@HOST_NAME@@";
  networking.useDHCP = lib.mkDefault true;
  time.timeZone = "@@TIMEZONE@@";

@@BOOT_LOADER_BLOCK@@

  nodeiwest.ssh.userCAPublicKeys = @@SSH_CA_KEYS@@;
  nodeiwest.tailscale.openbao = {
    enable = @@TAILSCALE_OPENBAO_ENABLE@@;
  };

  system.stateVersion = "@@STATE_VERSION@@";
}


@@ -1,41 +0,0 @@
{
  lib,
  ...
}:
{
  # Generated by nodeiwest host init.
  # Replace the disk only if the provider exposes a different primary device.
  disko.devices = {
    disk.main = {
      type = "disk";
      device = lib.mkDefault "@@DISK_DEVICE@@";
      content = {
        type = "gpt";
        partitions = {
          BIOS = {
            priority = 1;
            name = "BIOS";
            start = "1MiB";
            end = "2MiB";
            type = "EF02";
          };
          swap = {
            size = "@@SWAP_SIZE@@";
            content = {
              type = "swap";
              resumeDevice = true;
            };
          };
          root = {
            size = "100%";
            content = {
              type = "filesystem";
              format = "ext4";
              mountpoint = "/";
            };
          };
        };
      };
    };
  };
}


@@ -1,47 +0,0 @@
{
  lib,
  ...
}:
{
  # Generated by nodeiwest host init.
  # Replace the disk only if the provider exposes a different primary device.
  disko.devices = {
    disk.main = {
      type = "disk";
      device = lib.mkDefault "@@DISK_DEVICE@@";
      content = {
        type = "gpt";
        partitions = {
          ESP = {
            priority = 1;
            name = "ESP";
            start = "1MiB";
            end = "512MiB";
            type = "EF00";
            content = {
              type = "filesystem";
              format = "vfat";
              mountpoint = "/boot";
              mountOptions = [ "umask=0077" ];
            };
          };
          swap = {
            size = "@@SWAP_SIZE@@";
            content = {
              type = "swap";
              resumeDevice = true;
            };
          };
          root = {
            size = "100%";
            content = {
              type = "filesystem";
              format = "ext4";
              mountpoint = "/";
            };
          };
        };
      };
    };
  };
}


@@ -1,5 +0,0 @@
{ ... }:
{
  # Placeholder generated by nodeiwest host init.
  # nixos-anywhere will replace this with the generated hardware config.
}


@@ -1,3 +0,0 @@
path "@@POLICY_PATH@@" {
  capabilities = ["read"]
}


@@ -1,114 +0,0 @@
from __future__ import annotations
import importlib.util
import sys
import unittest
from unittest import mock
from pathlib import Path
REPO_ROOT = Path(__file__).resolve().parents[3]
CLI_PATH = REPO_ROOT / "pkgs" / "helpers" / "cli.py"
spec = importlib.util.spec_from_file_location("nodeiwest_cli", CLI_PATH)
cli = importlib.util.module_from_spec(spec)
assert spec.loader is not None
sys.modules[spec.name] = cli
spec.loader.exec_module(cli)
class HelperCliTests(unittest.TestCase):
    def test_format_activity_frame_highlights_one_block_and_keeps_label(self) -> None:
        frame = cli.format_activity_frame("Executing install", 2)
        self.assertIn("Executing install", frame)
        self.assertEqual(frame.count("█"), 4)
        self.assertEqual(frame.count("\x1b[38;5;220m"), 1)
        self.assertEqual(frame.count("\x1b[38;5;208m"), 3)

    def test_supports_ansi_status_requires_tty_and_real_term(self) -> None:
        tty_stream = mock.Mock()
        tty_stream.isatty.return_value = True
        dumb_stream = mock.Mock()
        dumb_stream.isatty.return_value = True
        pipe_stream = mock.Mock()
        pipe_stream.isatty.return_value = False
        with mock.patch.dict(cli.os.environ, {"TERM": "xterm-256color"}, clear=False):
            self.assertTrue(cli.supports_ansi_status(tty_stream))
            self.assertFalse(cli.supports_ansi_status(pipe_stream))
        with mock.patch.dict(cli.os.environ, {"TERM": "dumb"}, clear=False):
            self.assertFalse(cli.supports_ansi_status(dumb_stream))

    def test_disk_from_device_supports_sd_and_nvme(self) -> None:
        self.assertEqual(cli.disk_from_device("/dev/sda2"), "/dev/sda")
        self.assertEqual(cli.disk_from_device("/dev/nvme0n1p2"), "/dev/nvme0n1")

    def test_lookup_colmena_target_host_reads_existing_inventory(self) -> None:
        flake_text = (REPO_ROOT / "flake.nix").read_text()
        self.assertEqual(cli.lookup_colmena_target_host(flake_text, "vps1"), "100.101.167.118")

    def test_parse_existing_vps1_configuration(self) -> None:
        configuration = cli.parse_existing_configuration(REPO_ROOT / "hosts" / "vps1" / "configuration.nix")
        self.assertEqual(configuration.host_name, "vps1")
        self.assertEqual(configuration.boot_mode, "uefi")
        self.assertTrue(configuration.tailscale_openbao)
        self.assertEqual(configuration.state_version, "25.05")
        self.assertTrue(configuration.user_ca_public_keys)

    def test_parse_existing_vps1_disko(self) -> None:
        disko = cli.parse_existing_disko(REPO_ROOT / "hosts" / "vps1" / "disko.nix")
        self.assertEqual(disko.disk_device, "/dev/sda")
        self.assertEqual(disko.boot_mode, "uefi")
        self.assertEqual(disko.swap_size, "4G")

    def test_render_bios_disko_uses_bios_partition(self) -> None:
        rendered = cli.render_disko(boot_mode="bios", disk_device="/dev/vda", swap_size="8G")
        self.assertIn('type = "EF02";', rendered)
        self.assertIn('device = lib.mkDefault "/dev/vda";', rendered)
        self.assertIn('size = "8G";', rendered)

    def test_parse_lsblk_output_reads_pairs_without_smearing_columns(self) -> None:
        output = (
            'NAME="sda" SIZE="11G" TYPE="disk" MODEL="QEMU HARDDISK" FSTYPE="" PTTYPE="gpt" MOUNTPOINTS=""\n'
            'NAME="sda1" SIZE="512M" TYPE="part" MODEL="" FSTYPE="vfat" PTTYPE="" MOUNTPOINTS="/boot"\n'
        )
        rows = cli.parse_lsblk_output(output)
        self.assertEqual(rows[0]["NAME"], "sda")
        self.assertEqual(rows[0]["SIZE"], "11G")
        self.assertEqual(rows[0]["MODEL"], "QEMU HARDDISK")
        self.assertEqual(rows[1]["NAME"], "sda1")
        self.assertEqual(rows[1]["MOUNTPOINTS"], "/boot")

    def test_normalize_swap_size_accepts_gib_suffix(self) -> None:
        self.assertEqual(cli.normalize_swap_size("4GiB"), "4G")
        self.assertEqual(cli.normalize_swap_size("512MiB"), "512M")
        self.assertEqual(cli.normalize_swap_size("8G"), "8G")

    def test_bao_kv_get_uses_explicit_kv_mount(self) -> None:
        completed = mock.Mock()
        completed.stdout = '{"data": {"data": {"CLIENT_ID": "x"}}}'
        with mock.patch.object(cli, "run_command", return_value=completed) as run_command:
            data = cli.bao_kv_get("it", "kv", "tailscale")
            self.assertEqual(data["data"]["data"]["CLIENT_ID"], "x")
            command = run_command.call_args.args[0]
            self.assertEqual(command, ["bao", "kv", "get", "-mount=kv", "-format=json", "tailscale"])
            self.assertEqual(run_command.call_args.kwargs["env"], {"BAO_NAMESPACE": "it"})

    def test_derive_openbao_policy_uses_explicit_kv_mount(self) -> None:
        completed = mock.Mock()
        completed.stdout = 'path "kv/data/tailscale" { capabilities = ["read"] }\n'
        with mock.patch.object(cli, "run_command", return_value=completed) as run_command:
            policy = cli.derive_openbao_policy("it", "kv", "tailscale")
            self.assertIn('path "kv/data/tailscale"', policy)
            command = run_command.call_args.args[0]
            self.assertEqual(command, ["bao", "kv", "get", "-mount=kv", "-output-policy", "tailscale"])
            self.assertEqual(run_command.call_args.kwargs["env"], {"BAO_NAMESPACE": "it"})


if __name__ == "__main__":
    unittest.main()