- I removed the Minio check and just added a sleep timer for now:
```hcl
# Check if Minio has bootstrapped yet
resource "null_resource" "check_minio" {
  provisioner "local-exec" {
    command = <<-EOL
      # until curl -sf ${var.s3_providers.minio.endpoint_external}/minio/health/ready; do
      #   echo "Waiting for Minio to become reachable..."
      #   sleep 1
      # done
      sleep 10
    EOL
  }
}
```
- Needed to run `mount --make-rshared /` inside the `k3d-rivet-dev-server-0` container for the Prometheus node exporter, which was failing with the error below (a sketch of the fix follows the pod description):

```
Error: failed to generate container "b6507df08c7ad69b4c306fbbf9099b0c5721fed01a5a498bddabfa4fcabbbe52" spec: failed to generate spec: path "/" is mounted on "/" but it is not a shared or slave mount
```

`kubectl describe` on the pod:
```
Name:                 prometheus-prometheus-node-exporter-rl6lh
Namespace:            prometheus
Priority:             90
Priority Class Name:  node-exporter-priority
Service Account:      prometheus-prometheus-node-exporter
Node:                 k3d-rivet-dev-server-0/172.19.0.2
Start Time:           Wed, 15 Nov 2023 18:23:16 +0000
Labels:               app.kubernetes.io/component=metrics
                      app.kubernetes.io/instance=prometheus
                      app.kubernetes.io/managed-by=Helm
                      app.kubernetes.io/name=prometheus-node-exporter
                      app.kubernetes.io/part-of=prometheus-node-exporter
                      app.kubernetes.io/version=1.6.1
                      controller-revision-hash=784988f944
                      helm.sh/chart=prometheus-node-exporter-4.23.2
                      jobLabel=node-exporter
                      pod-template-generation=1
                      release=prometheus
Annotations:          cluster-autoscaler.kubernetes.io/safe-to-evict: true
Status:               Pending
IP:                   172.19.0.2
IPs:
  IP:  172.19.0.2
Controlled By:  DaemonSet/prometheus-prometheus-node-exporter
Containers:
  node-exporter:
    Container ID:
    Image:          quay.io/prometheus/node-exporter:v1.6.1
    Image ID:
    Port:           9100/TCP
    Host Port:      9100/TCP
    Args:
      --path.procfs=/host/proc
      --path.sysfs=/host/sys
      --path.rootfs=/host/root
      --path.udev.data=/host/root/run/udev/data
      --web.listen-address=[$(HOST_IP)]:9100
      --collector.filesystem.mount-points-exclude=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/.+)($|/)
      --collector.filesystem.fs-types-exclude=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
    State:          Waiting
      Reason:       CreateContainerError
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9100/ delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:9100/ delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      HOST_IP:  0.0.0.0
    Mounts:
      /host/proc from proc (ro)
      /host/root from root (ro)
      /host/sys from sys (ro)
Conditions:
  Type             Status
  Initialized      True
  Ready            False
  ContainersReady  False
  PodScheduled     True
Volumes:
  proc:
    Type:          HostPath (bare host directory volume)
    Path:          /proc
    HostPathType:
  sys:
    Type:          HostPath (bare host directory volume)
    Path:          /sys
    HostPathType:
  root:
    Type:          HostPath (bare host directory volume)
    Path:          /
    HostPathType:
QoS Class:       BestEffort
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     :NoSchedule op=Exists
                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                 node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoExecute op=Exists
                 node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                 node.kubernetes.io/unreachable:NoExecute op=Exists
                 node.kubernetes.io/unschedulable:NoSchedule op=Exists
```
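For reference, a sketch of how the fix can be applied from the host, assuming the k3d node is the Docker container named in the error above:

```sh
# Make / a shared mount inside the k3d node container so node-exporter's
# hostPath mount of the root filesystem can use mount propagation.
docker exec k3d-rivet-dev-server-0 mount --make-rshared /
```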
- On the Codespace, I've gotten into a state where Bolt panics. It can probably be fixed by a reset, but I want to look into it more later:
```
   Compiling bolt v0.1.0 (/tmp/nix-build-bolt.drv-0/source/lib/bolt/cli)
    Finished release [optimized] target(s) in 3m 12s
Executing cargoInstallPostBuildHook
Finished cargoInstallPostBuildHook
Finished cargoBuildHook
buildPhase completed in 3 minutes 13 seconds
installing
Executing cargoInstallHook
Finished cargoInstallHook
post-installation fixup
shrinking RPATHs of ELF executables and libraries in /nix/store/7pc7pn02avfgmzqyg51hrwmcvjsfvr7c-bolt
shrinking /nix/store/7pc7pn02avfgmzqyg51hrwmcvjsfvr7c-bolt/bin/bolt
checking for references to /tmp/nix-build-bolt.drv-0/ in /nix/store/7pc7pn02avfgmzqyg51hrwmcvjsfvr7c-bolt...
patching script interpreter paths in /nix/store/7pc7pn02avfgmzqyg51hrwmcvjsfvr7c-bolt
stripping (with command strip and flags -S -p) in /nix/store/7pc7pn02avfgmzqyg51hrwmcvjsfvr7c-bolt/bin
Generated config namespaces/dev.toml & secrets/dev.toml
Updated namespace in Bolt.local.toml dev
$ cd "/workspaces/rivet/infra/tf/k8s_infra" && "terraform" "state" "list"
$ cd "/workspaces/rivet/infra/tf/k8s_cluster_k3d" && "terraform" "state" "list"
FATA[0000] No nodes found for given cluster
[core/src/tasks/gen.rs:29] a = Custom {
    kind: Other,
    error: "command [\"k3d\", \"kubeconfig\", \"get\", \"rivet-dev\"] exited with code 1",
}
Executing step (1/8) k8s-cluster-k3d
$ cd "/workspaces/rivet/infra/tf/k8s_infra" && "terraform" "state" "list"
$ cd "/workspaces/rivet/infra/tf/k8s_cluster_k3d" && "terraform" "state" "list"
FATA[0000] No nodes found for given cluster
[core/src/tasks/gen.rs:29] a = Custom {
    kind: Other,
    error: "command [\"k3d\", \"kubeconfig\", \"get\", \"rivet-dev\"] exited with code 1",
}
$ cd "/workspaces/rivet/infra/tf/k8s_cluster_k3d" && "terraform" "apply" "-var-file=/workspaces/rivet/gen/tf/env/dev.tfvars.json" "-parallelism=16" "-auto-approve"
k3d_cluster.main: Refreshing state... [id=rivet-dev]
Planning failed. Terraform encountered an error while generating this plan.
```
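If it does come down to a reset, it would presumably look something like this; `rivet-dev` is the cluster name from the error above, and I'm assuming `bolt dev init --yes` (mentioned at the end of these notes) recreates the cluster:

```sh
# Tear down the broken k3d cluster so it can be recreated from scratch.
k3d cluster delete rivet-dev
# Re-bootstrap the dev environment.
bolt dev init --yes
```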
- Buckets are getting stuck at connecting (a manual probe sketch follows the log below):
aws_s3_bucket.bucket["dev-bucket-build"]: Still creating... [3m30s elapsed]
aws_s3_bucket.bucket["dev-bucket-team-avatar"]: Still creating... [3m30s elapsed]
aws_s3_bucket.bucket["dev-bucket-user-avatar"]: Still creating... [3m30s elapsed]
aws_s3_bucket.bucket["dev-bucket-job-log"]: Still creating... [3m30s elapsed]
aws_s3_bucket.bucket["dev-bucket-imagor-result-storage"]: Still creating... [3m30s elapsed]
aws_s3_bucket.bucket["dev-bucket-svc-build"]: Still creating... [3m30s elapsed]
aws_s3_bucket.bucket["dev-bucket-team-billing"]: Still creating... [3m30s elapsed]
aws_s3_bucket.bucket["dev-bucket-game-banner"]: Still creating... [3m30s elapsed]
aws_s3_bucket.bucket["dev-bucket-lobby-history-export"]: Still creating... [3m30s elapsed]
- Note: It looks like Traefik is sending data to its collector. Should this be turned off if Rivet has data collection turned off? Or is it already? (An override sketch follows the log.)
```
2023-11-17T14:56:03Z DBG github.com/traefik/traefik/v3/pkg/collector/collector.go:54 > Anonymous stats sent to https://collect.traefik.io/yYaUej3P42cziRVzv6T5w2aYy9po2Mrn: {"global":{"checkNewVersion":true,"sendAnonymousUsage":true},"serversTransport":{"maxIdleConnsPerHost":200},"tcpServersTransport":{"dialKeepAlive":"15s","dialTimeout":"30s"},"entryPoints":{"metrics":{"address":"xxxx","transport":{"lifeCycle":{"graceTimeOut":"10s"},"respondingTimeouts":{"idleTimeout":"3m0s"}},"forwardedHeaders":{},"http":{},"http2":{"maxConcurrentStreams":250}},"traefik":{"address":"xxxx","transport":{"lifeCycle":{"graceTimeOut":"10s"},"respondingTimeouts":{"idleTimeout":"3m0s"}},"forwardedHeaders":{},"http":{},"http2":{"maxConcurrentStreams":250}},"tunnel":{"address":"xxxx","transport":{"lifeCycle":{"graceTimeOut":"10s"},"respondingTimeouts":{"idleTimeout":"3m0s"}},"forwardedHeaders":{},"http":{"tls":{"options":"ingress-tunnel"}},"http2":{"maxConcurrentStreams":250}},"web":{"address":"xxxx","transport":{"lifeCycle":{"graceTimeOut":"10s"},"respondingTimeouts":{"idleTimeout":"3m0s"}},"forwardedHeaders":{},"http":{},"http2":{"maxConcurrentStreams":250}},"websecure":{"address":"xxxx","transport":{"lifeCycle":{"graceTimeOut":"10s"},"respondingTimeouts":{"idleTimeout":"3m0s"}},"forwardedHeaders":{},"http":{"tls":{}},"http2":{"maxConcurrentStreams":250}}},"providers":{"providersThrottleDuration":"2s","kubernetesIngress":{},"kubernetesCRD":{"allowCrossNamespace":true,"labelSelector":"traefik-instance=tunnel"}},"api":{"dashboard":true},"metrics":{"prometheus":{"buckets":[0.001,0.0025,0.005,0.01,0.025,0.05,0.1,0.25,0.5,1,2.5,5,10,25,50,100],"addRoutersLabels":true,"addServicesLabels":true,"entryPoint":"metrics"}},"ping":{"entryPoint":"traefik","terminatingStatusCode":503},"log":{"level":"DEBUG","format":"common"},"accessLog":{"format":"common","filters":{},"fields":{"defaultMode":"keep","headers":{"defaultMode":"drop"}}}}
```
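A sketch of what turning this off could look like, assuming Traefik is installed via the official Helm chart (here it's behind Terraform, so these values would really belong in the `helm_release` config); the release name and namespace are assumptions:

```sh
# Override the chart's default globalArguments, which enable
# --global.sendanonymoususage (visible as "sendAnonymousUsage":true above).
cat > traefik-values.yaml <<EOF
globalArguments:
  - "--global.checknewversion=false"
  - "--global.sendanonymoususage=false"
EOF
helm upgrade traefik traefik/traefik -n traefik --reuse-values -f traefik-values.yaml
```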
- In the devcontainer, I can get access to `kubectl` from inside the container by copying the config out with `docker cp k3d-rivet-dev-server-0:/output/kubeconfig.yaml .`, changing its API server address from `127.0.0.1:6443` to `127.0.0.1:41833`, and finally running `KUBECONFIG=kubeconfig.yaml kubectl get pods --all-namespaces` (sketch below).
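As a one-shot sketch of those steps (`41833` is just the host port k3d happened to map the API server to here):

```sh
# Copy the kubeconfig out of the k3d node container.
docker cp k3d-rivet-dev-server-0:/output/kubeconfig.yaml .
# Point it at the host-mapped API server port.
sed -i 's/127\.0\.0\.1:6443/127.0.0.1:41833/' kubeconfig.yaml
KUBECONFIG=kubeconfig.yaml kubectl get pods --all-namespaces
```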
- When I want to redeploy a single service, I should use `bolt up build-default-create`.
- Debian 11 uses glibc 2.31, while nix-shell uses 2.37, so if Bolt builds code before rust-analyzer (RA) has run first on Debian, there will be issues (a quick check follows).
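A quick way to confirm the version mismatch, assuming `nix-shell` drops into the repo's dev shell:

```sh
# Compare the host's glibc against the one inside nix-shell.
ldd --version | head -n 1                    # Debian 11 -> 2.31
nix-shell --run 'ldd --version | head -n 1'  # dev shell -> 2.37
```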
- Is port-forwarding services, like with Grafana, the only way to access them manually? (Sketch below.)
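If so, the pattern would look like this for any service; the Grafana service name and namespace here are guesses based on the `prometheus` Helm release above:

```sh
# Forward the Grafana service to localhost:3000.
kubectl port-forward -n prometheus svc/prometheus-grafana 3000:80
```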
- When updating Terraform files/services, do I always need to run `bolt dev init --yes`?