Fix for https://github.com/ovrclk/awesome-akash/blob/b720cd57784f9/chia-madmax/deploy.yaml#L7-L12

```yaml
expose:
  - port: 8080
    as: 80
    http_options:
      read_timeout: 3600000
    to:
      - global: true
```
```yaml
# app-stunnel-client.yml
# https://github.com/ovrclk/stunnel-proxy
---
version: "2.0"
services:
  stunnel-client:
    image: andrey01/stunnel-proxy:v0.0.1
    env:
      - PSK=RHFVUmtrQ0EyVFhVanlmVTJZZXgK
```
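The PSK above is only an example value; you should generate your own random pre-shared key and use the same value on both the client and the server side. A minimal sketch (the generation command is my suggestion, not part of the stunnel-proxy docs):

```shell
# Generate a random base64-encoded pre-shared key for stunnel.
# 16 random bytes encode to a 24-character base64 string.
PSK=$(openssl rand -base64 16)
echo "PSK=${PSK}"
```

Put the printed `PSK=...` value into the `env` section of both SDL files.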
```yaml
# app-stunnel-server.yml
# https://github.com/ovrclk/stunnel-proxy
---
version: "2.0"
services:
  app:
    image: traefik/whoami
    expose:
      - port: 80
```
Make sure it is enabled in the config.

Get the admin password:
This procedure removes all OSDs from a selected storage node, in an environment with more than one storage node in the cluster and enough free disk space.
If you have only a single storage node, you will have to remove the disks one by one.
If you have only a single disk, you will have to remove the OSDs one by one, reclaiming the freed disk space in the VG (if that is your case) using this and that hints, or waiting until rook-ceph supports this.
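The per-OSD removal described above can be sketched as a dry-run helper that only prints the commands to run (the `drain_osd` name and the dry-run approach are mine; it assumes the standard rook-ceph-tools toolbox deployment). Run the printed commands one OSD at a time, waiting for the cluster to rebalance back to `HEALTH_OK` between steps:

```shell
#!/bin/sh
# Dry-run sketch: print the ceph commands needed to take one OSD out and purge it.
# Assumes the rook-ceph-tools deployment exists in the rook-ceph namespace.
TOOLS="kubectl -n rook-ceph exec deploy/rook-ceph-tools --"

drain_osd() {
  id="$1"
  # Mark the OSD out so Ceph migrates its data elsewhere.
  echo "$TOOLS ceph osd out osd.${id}"
  # Wait for rebalancing to finish before purging (check with: ceph status).
  echo "$TOOLS ceph osd purge ${id} --yes-i-really-mean-it"
}

drain_osd 3
```

Nothing is executed against the cluster by this helper; it only prints the command lines, so you can review them before running each step manually.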
Change `osdsPerDevice` from 3 to 1 and apply the rook-ceph-cluster helm chart.

I've been playing with Rook Ceph: I was able to helm uninstall it (all the K8s bits, including the Ceph CRDs) and install it back again without data loss, while Pods kept using the persistent storage (the RBD).
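For reference, the `osdsPerDevice` knob sits under the storage config of the CephCluster spec in the rook-ceph-cluster helm values. A minimal fragment (the surrounding keys follow the upstream chart layout; verify against your own values file):

```yaml
cephClusterSpec:
  storage:
    useAllNodes: true
    useAllDevices: true
    config:
      osdsPerDevice: "1"   # was "3"
```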
The impact: Akash deployments using persistent storage disks will hang until the Ceph services are restored.
The key locations which need to be preserved are:
- `/var/lib/rook/*` (this is not removed when you uninstall the akash-rook helm chart)
- `/var/lib/rook/mon-a`
- `/var/lib/rook/rook-ceph`
- the `rook-ceph-mon` secret

Impact: Akash deployments using persistent storage will temporarily stall due to having their I/O stuck to the RBD-mounted devices.
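Backing those locations up before the uninstall can be sketched like this (the paths are the ones listed above; the backup destination and the `kubectl` guard are my own choices, not from any official procedure):

```shell
#!/bin/sh
# Sketch: preserve the key Rook state before uninstalling the helm chart.
ROOK_DIR="${ROOK_DIR:-/var/lib/rook}"
BACKUP_DIR="${BACKUP_DIR:-/tmp/rook-backup}"
mkdir -p "$BACKUP_DIR"

# 1. Archive /var/lib/rook (mon-a, rook-ceph, ...) from the storage node.
if [ -d "$ROOK_DIR" ]; then
  tar czf "$BACKUP_DIR/var-lib-rook.tar.gz" \
    -C "$(dirname "$ROOK_DIR")" "$(basename "$ROOK_DIR")"
fi

# 2. Save the rook-ceph-mon secret (skipped if kubectl is unavailable here).
if command -v kubectl >/dev/null 2>&1; then
  kubectl -n rook-ceph get secret rook-ceph-mon -o yaml \
    > "$BACKUP_DIR/rook-ceph-mon.secret.yaml"
fi
```

Run the archive step on the storage node itself; the secret export can be done from any machine with cluster access.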
This will be needed in later steps.
```shell
kubectl -n rook-ceph get pods -l "app=rook-ceph-mon" -o wide
```
```js
import WebSocket from 'ws';

const ws = new WebSocket('wss://rpc-akash-ia.notional.ventures/websocket');

ws.on('open', function open() {
  console.log('Connected on Akash blockchain from WebSocket');
  ws.send(JSON.stringify({
    "method": "subscribe",
    "params": ["tm.event='NewBlock'"],
    "id": "1",
    "jsonrpc": "2.0"
  }));
});

// Log each NewBlock event as it arrives.
ws.on('message', function message(data) {
  console.log(data.toString());
});
```
Make sure you are running your archival node with `pruning = nothing` since height 0, to keep all historic states (i.e. an archiving node).

With akash 0.18.0 (aka mainnet4) you HAVE TO start the chain with `AKASH_PRUNING=nothing` set. (This is fixed in akash 0.20.0.)

Do NOT change `pruning` between restarts, since this can corrupt the chain data (IAVL): cosmos/cosmos-sdk#6370 (comment)
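Concretely, the setting lives in the node's `app.toml` (or can be supplied through the environment). A minimal fragment, assuming the default home directory:

```toml
# ~/.akash/config/app.toml
pruning = "nothing"

# Equivalent environment variable, set before `akash start`
# (required on akash 0.18.0):
#   AKASH_PRUNING=nothing
```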
```shell
sudo snap remove --purge firefox
sudo snap remove --purge snap-store
sudo snap remove --purge snapd-desktop-integration
sudo snap remove --purge gtk-common-themes
sudo snap remove --purge gnome-3-38-2004
sudo snap remove --purge core20
sudo snap remove --purge bare
sudo snap remove --purge snapd
```