- Export all the required environment variables:
export $(grep -v '^#' /etc/etcd.env | xargs -d '\n')
- Start interacting with etcdctl:
etcdctl endpoint health --cluster -w table && etcdctl endpoint status --cluster -w table
from instagrapi import Client
from instagrapi.exceptions import ClientLoginRequired, ClientError
import os

# Instagram account credentials
acc1_username = os.environ.get('IM_FROM_USERNAME')
acc1_password = os.environ.get('IM_FROM_PASSWORD')
acc2_username = os.environ.get('IM_TO_USERNAME')
acc2_password = os.environ.get('IM_TO_PASSWORD')
#!/bin/bash
set -e
set -u
set -o pipefail
command -v benchstat >/dev/null 2>&1 || { echo >&2 "I need Benchstat!"; exit 1; }
command -v git >/dev/null 2>&1 || { echo >&2 "I need Git!"; exit 1; }
BRANCH_TARGET="master"
Host github.com
  User git
  Hostname github.com
  AddKeysToAgent yes
  IgnoreUnknown UseKeychain
  UseKeychain yes
  PreferredAuthentications publickey
  IdentityFile /Users/USERNAME/.ssh/id_rsa
containerd ships a garbage collector, documented here: https://github.com/containerd/containerd/blob/master/docs/garbage-collection.md. In the cleanup phase, only objects that are no longer referenced (i.e. have no image reference) are removed; objects marked as "dirty" are kept. This can be used to clean up unused images and containers.
While not yet production-ready, https://github.com/Azure/eraser could be used to achieve this, although running it on all nodes may be difficult and complex. Descheduler cannot solve this problem because it does not run as a daemonset, but kubelet garbage collection can be used instead (worth checking whether it is enabled in the current configs): https://kubernetes.io/docs/concepts/architecture/garbage-collection/#containers-images.
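For reference, kubelet image GC is driven by two thresholds in the kubelet config; a minimal sketch (the values shown are the upstream defaults, not something taken from our clusters):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# GC starts deleting unused images when disk usage exceeds this percentage...
imageGCHighThresholdPercent: 85
# ...and stops once usage drops below this percentage.
imageGCLowThresholdPercent: 80
```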
containerd itself does not seem to support log rotation. A workaround is to let the kubelet handle rotation (as described in containerd/containerd#3351 (comment), also pr: https
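Since the kubelet approach came up here, a sketch of the relevant kubelet config fields for log rotation (values are illustrative, not verified against any cluster):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Rotate a container's log file once it reaches this size...
containerLogMaxSize: 10Mi
# ...keeping at most this many rotated files per container.
containerLogMaxFiles: 5
```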
Bolt operations are copy-on-write. When a page is updated, it is copied to a completely new page. The old page is added to a "freelist", which Bolt refers to when it needs a new page. This means that deleting large amounts of data will not actually free up space on disk, as the pages are instead kept on Bolt's freelist for future use. To release this space back to the file system, you will need to perform a defrag.
The process of defragmentation releases this storage space back to the file system. Defragmentation is issued on a per-member basis so that cluster-wide latency spikes may be avoided.
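A minimal sketch of defragging one member at a time (the endpoints below are placeholders; on a real cluster they would come from `etcdctl member list`):

```sh
for ep in https://10.0.0.1:2379 https://10.0.0.2:2379 https://10.0.0.3:2379; do
  etcdctl --endpoints="$ep" defrag
done
```

Defragging members sequentially keeps the latency impact local to a single member at any given moment.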
Got some answers from #sig-auth: https://kubernetes.slack.com/archives/C0EN96KUY/p1667201299188199
{
  "iss": "https://idp.example",
  "aud": "some-audience"
}
#!/bin/bash
set -e
function usage(){
  echo "$(basename "$0") --registry registry.gitlab.com/images --platform linux/amd64 --chart fluent/fluent-bit --version 0.19.10" >&2
}
function teardown {
  rm -rf "./tmp"
}
package main

import (
	"archive/tar"
	"bytes"
	"fmt"

	"github.com/google/go-containerregistry/pkg/crane"
	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/mutate"