- Download and put the clash premium core under /etc/clash (a sketch of the resulting layout follows this list)
- Modify your config.yaml based on config.yaml.example
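A minimal sketch of what that looks like, assuming the premium core was downloaded as ./clash-linux-amd64; the binary name and the final run command are assumptions, not part of the original steps:

# Assumed file names; only the /etc/clash location comes from the steps above
sudo mkdir -p /etc/clash
sudo install -m 755 ./clash-linux-amd64 /etc/clash/clash   # the downloaded premium core
sudo cp config.yaml.example /etc/clash/config.yaml         # then edit it for your setup
sudo /etc/clash/clash -d /etc/clash                        # -d points clash at its config directory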
Please consult the following script:
# Shut down the Kubernetes cluster first (on every node)
systemctl stop kubelet
# Stop all docker containers (on every node)
docker stop $(docker ps -aq)
# Unmount all iSCSI disks (on every node)
mount | grep iqn                 # list the iSCSI-backed mounts
umount --all-targets /dev/sdxx   # replace sdxx with each disk listed above
# Load the ip6tables mangle module so bridged IPv6 traffic can be filtered
modprobe ip6table_mangle
# In the broute table, DROP means "route instead of bridge": non-IPv6 frames
# arriving on eth2.2 are handed to the local stack rather than bridged
ebtables -t broute -A BROUTING -i eth2.2 -p ! ipv6 -j DROP
brctl addif br0 eth2.2
# Make bridged traffic traverse ip6tables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
# Default-deny new connections entering via eth2.2, then allow selected ports
ip6tables -I FORWARD 1 -m physdev --physdev-in eth2.2 -m state --state NEW -j DROP
ip6tables -I FORWARD 1 -m physdev --physdev-in eth2.2 -p udp --dport 6881 -m state --state NEW -j ACCEPT
ip6tables -I FORWARD 1 -m physdev --physdev-in eth2.2 -p tcp --dport 5000 -m state --state NEW -j ACCEPT
ip6tables -I FORWARD 1 -m physdev --physdev-in eth2.2 -p tcp --dport 6443 -m state --state NEW -j ACCEPT
ip6tables -I FORWARD 1 -m physdev --physdev-in eth2.2 -p tcp --dport 8096 -m state --state NEW -j ACCEPT
# Each later insert at position 1 lands above the DROP rule, so the ACCEPTs take precedence
|example.com:t|example.com:t|005i.com:t|01-123.com:t|01-800.cn:t|024ksm.com:t|025pc.cn:t|029jiakang.com:t|054wan.com:t|0551.us:t|069.net:t|079.254560.top:t|086wl.com:t|0937673.info:t|100860.com:t|103.hk:t|10615.com.cn:t|1141.net:t|114saige.com:t|123.125.114.18:t|1233win.com:t|125135.com:t|14oo.cc:t|151.com.tw:t|15300.cn:t|1587555.com:t|15meili.com:t|163081.com:t|1666yl.com:t|16885518.com:t|173-fc.com:t|17xuexi.org:t|187801.com:t|19kaifu.com:t|1pyy.com:t|20122011.org:t|2996299.com:t|2a2d75.dcmir4f.com:t|30333.loan:t|32xyj.pw:t|360adsolutions.com:t|360du.net.cn:t|392623.com:t|3jwz.com:t|3li.cc:t|3ssq8z.bmspzs.pw:t|3v4.net:t|400839.com:t|4382.loan:t|4659866.com:t|4743.loan:t|4blc.com:t|4r8.nfjhn.com:t|520bdy.com:t|5467.com:t|55118885.com:t|567516.com:t|56weiyu.com:t|57c69d.bb6zz.com:t|57zhuan.cn:t|58pan.cn:t|59177.net:t|5azy.com:t|5eso.com:t|5pk.ah.2j0.iyu.y7.l99.m.7d6y.com:t|611fu.com:t|666so.cn:t|68suan.com:t|68xi.com:t|710dnuv.com:t|732722.com:t|75q.net:t|75yy.com:t|77tui.cn:t|793985.com:t|7dianying.com:t|7gt |
#!/bin/bash
# USAGE: bash <(curl -sL https://gist.githubusercontent.com/w1ndy/1c484c8bfafa06b5b42cca0591b026fb/raw/vite-bootstrap.sh) <PROJECT_NAME>
set -e

if [ -z "$1" ]; then
  echo "No project name supplied"
  exit 1
fi
# This is the network config written by 'subiquity'
network:
  version: 2
  ethernets:
    enp2s0:
      accept-ra: no
      addresses:
        - 192.168.1.2/24
        - $IPV6_ADDR
      gateway4: 192.168.1.1
      gateway6: fe80::8ede:f9ff:feb7:2dbc
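As a usage note, the config can be validated and applied with netplan; the file path below is subiquity's usual default and an assumption here, not taken from the snippet above:

# Assumed path: subiquity normally writes /etc/netplan/00-installer-config.yaml
sudo netplan try     # apply with automatic rollback unless confirmed
sudo netplan apply   # apply the configuration for good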
# Enable object-map (which requires exclusive-lock) on a single image; rbd du can then use it
IMAGE=pool_name/image_name
rbd feature enable $IMAGE exclusive-lock
rbd feature enable $IMAGE object-map
rbd object-map rebuild $IMAGE
rbd du $IMAGE

# Enabling for all images in a pool
POOL=pool_name
for IMAGE in $(rbd ls $POOL); do
  if rbd info $POOL/$IMAGE | grep -q object-map; then
    echo "$POOL/$IMAGE has been processed"
  else
    echo "processing $POOL/$IMAGE" && rbd feature enable $POOL/$IMAGE exclusive-lock && rbd feature enable $POOL/$IMAGE object-map && rbd object-map rebuild $POOL/$IMAGE
  fi
done
# For each PV, print its CSI volume handle and the PVC (namespace/name) bound to it
kubectl get pv -o=jsonpath='{range .items[*]}{.spec.csi.volumeHandle}{"\t"}{.spec.claimRef.namespace}{"/"}{.spec.claimRef.name}{"\n"}{end}'
This guide outlines the procedure for debugging NVIDIA graphics cards on a Talos Linux node. Due to Talos Linux's immutable and secure nature, direct driver installation and typical debugging steps are not possible. Instead, we leverage Kubernetes features and NVIDIA's containerized drivers.
Procedure:
Run the Debug Pod:
Execute the following kubectl command to launch a privileged debug pod on the target Talos Linux node. This pod will contain the NVIDIA driver binaries.
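A minimal sketch of such a launch, assuming a recent kubectl that supports debug profiles; the node name and the containerized driver image are placeholders, not values from this guide:

# <node-name> and <nvidia-driver-image> are placeholders; --profile=sysadmin runs the
# debug container privileged, and the node's filesystem is mounted at /host inside it
kubectl debug node/<node-name> -it \
  --image=<nvidia-driver-image> \
  --profile=sysadmin

From inside the pod, tools shipped with the driver image (for example nvidia-smi) can then be run against the node's GPU.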