Tarran Benson-West (tbensonwest)
@MohamedAlaa
MohamedAlaa / tmux-cheatsheet.markdown
Last active November 12, 2025 14:58
tmux shortcuts & cheatsheet


start new:

tmux

start new with session name:

tmux new -s myname
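
The preview truncates here; the rest of the cheatsheet follows the same pattern. A few more standard tmux commands, shown for completeness (these are stock tmux, not specific to this gist):

attach to a named session:

tmux a -t myname

list sessions:

tmux ls

kill a session:

tmux kill-session -t myname

Inside tmux, shortcuts are prefixed with Ctrl-b (for example, Ctrl-b d detaches from the current session).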
@MrDrMcCoy
MrDrMcCoy / README.md
Last active November 3, 2024 12:54
How to handle automatic bind mounting for shared directories with Proxmox/LXC

The Problem

Proxmox has a neat UI for adding extra storage to containers. However, if that storage already exists somewhere or needs to exist for more than one container, you're SOL. Bind mounting is an easy way to take a mount and make it exist in more than one place. However, bind mounting has to be done in a particular order with Proxmox due to how it creates device nodes and pre-populates directories. This is very frustrating, but not unsolvable.

The Solution

ZFS needs to come up first

If you are using ZFS for your storage (which you should), you need to ensure that all of ZFS's mounts are fully online before Proxmox does anything. You will need to add After=zfs.target to the [Unit] section of the Proxmox systemd service files. The change needs to be applied to the following files:
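
The list of unit files is cut off in this preview. Whichever units are affected on your system, a safe way to apply the change is a systemd drop-in override rather than editing the shipped unit files directly; a minimal sketch, assuming pve-container@.service is one of them (substitute your own list):

# Create a drop-in that adds the ordering dependency (unit name is an assumed example)
mkdir -p /etc/systemd/system/pve-container@.service.d
cat > /etc/systemd/system/pve-container@.service.d/after-zfs.conf <<'EOF'
[Unit]
After=zfs.target
EOF
systemctl daemon-reload

Drop-ins survive package upgrades, which is why they are preferable to editing the files under /lib/systemd/system.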

@jjvillavicencio
jjvillavicencio / setup.sh
Last active November 13, 2025 01:10
Install Android SDK on Windows Bash (WSL)
# Download and unpack the Android SDK command-line tools into ~/Android
cd /home/<user>/
sudo apt-get install unzip
wget https://dl.google.com/android/repository/sdk-tools-linux-4333796.zip
unzip sdk-tools-linux-4333796.zip -d Android
rm sdk-tools-linux-4333796.zip
# The SDK tools need 32-bit zlib and a JDK (Java 8 here)
sudo apt-get install -y lib32z1 openjdk-8-jdk
# Point JAVA_HOME at the JDK for this shell, and persist both exports in ~/.bashrc
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH=$PATH:$JAVA_HOME/bin
printf "\n\nexport JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64\nexport PATH=\$PATH:\$JAVA_HOME/bin" >> ~/.bashrc
# sdkmanager lives in Android/tools/bin
cd Android/tools/bin
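
The script ends in the directory containing sdkmanager; the usual next steps (not part of this preview, but standard sdkmanager usage) are to accept the licenses and install the basic packages:

# Assumed continuation: accept licenses, then pull platform tools and an SDK platform
yes | ./sdkmanager --licenses
./sdkmanager "platform-tools" "build-tools;28.0.3" "platforms;android-28"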
@basoro
basoro / proxmox-proxy
Created May 25, 2019 20:45
Running Proxmox behind a single IP address
I ran into the challenge of running all of my VMs and the host node under a single public IP address. Luckily, the host is just pure Debian, and ships with iptables.
What needs to be done is essentially to run all the VMs on a private internal network. Outbound internet access is done via NAT. Inbound access is via port forwarding.
Network configuration
Here’s how it’s done:
Create a virtual interface that serves as the gateway for your VMs:
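
The interface definition itself is missing from this preview; a sketch of what such a setup typically looks like in /etc/network/interfaces, assuming vmbr0 is the public-facing bridge and 10.10.10.0/24 is the private VM network (names and addresses are illustrative):

auto vmbr1
iface vmbr1 inet static
    address 10.10.10.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    # NAT outbound traffic from the private VM network out the public interface
    post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o vmbr0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o vmbr0 -j MASQUERADE

Inbound access is then one DNAT rule per forwarded service, for example: iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 2222 -j DNAT --to 10.10.10.2:22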
@bikcrum
bikcrum / Connect Google Colab+Drive with SSH.ipynb
Last active October 16, 2024 03:23
This shows how to connect to Google Colab (with Google Drive mounted) over SSH. This is useful when you want to download data directly to your Google Drive, especially for machine learning purposes, since it makes it easy to mount Drive and work with its files from your code.
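
The notebook itself fails to render in this preview. As a sketch only, this technique usually takes the following shape inside a Colab notebook (the password and the tunnelling service are placeholders you must choose yourself):

# 1. Mount Google Drive (Python cell):
#      from google.colab import drive
#      drive.mount('/content/drive')

# 2. Install and start an SSH server (shell cells; '!' is Colab's shell prefix):
!apt-get install -y openssh-server
!mkdir -p /var/run/sshd
!echo "root:<choose-a-password>" | chpasswd
!echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
!/usr/sbin/sshd

# 3. Colab has no public IP, so expose port 22 through a TCP tunnel
#    (e.g. ngrok or a reverse SSH tunnel) and connect from your machine.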
import boto3
import pandas as pd
s3_bucket = "annageller"
html_file = "sales_report.html"
df = pd.DataFrame([{'year': 2021, 'month': 8, 'order_status': 'unavailable', 'order_count': 48},
                   {'year': 2021, 'month': 8, 'order_status': 'delivered', 'order_count': 7069},
                   {'year': 2021, 'month': 8, 'order_status': 'invoiced', 'order_count': 15},
                   {'year': 2021, 'month': 8, 'order_status': 'shipped', 'order_count': 74}])
# The preview cuts off here; assumed continuation matching the file name above:
# render the frame to HTML and upload it to S3.
s3 = boto3.client("s3")
s3.put_object(Bucket=s3_bucket, Key=html_file, Body=df.to_html(), ContentType="text/html")
import boto3
import pandas as pd

# Read a CSV from S3 straight into a DataFrame, without saving it to disk first
s3 = boto3.client("s3")
bucket_name = "annageller"
s3_object = "sales/customers.csv"
obj = s3.get_object(Bucket=bucket_name, Key=s3_object)
df = pd.read_csv(obj["Body"])
import boto3

# Upload a small in-memory string as an S3 object and verify the response code
bucket_name = "annageller"
s3_object = "ted_lasso.txt"
s3_object_body = "Be curious, not judgemental"
s3_resource = boto3.resource("s3")
upload_result = s3_resource.Object(bucket_name, s3_object).put(Body=s3_object_body)
assert upload_result["ResponseMetadata"]["HTTPStatusCode"] == 200
import os
import boto3
import tempfile

S3_BUCKET = "annageller"
S3_PREFIX = "sales/"

with tempfile.TemporaryDirectory() as tempdir:
    s3 = boto3.client("s3")
    response = s3.list_objects_v2(Bucket=S3_BUCKET, Prefix=S3_PREFIX)
    # The preview ends here; assumed continuation: download each listed
    # object into the temporary directory.
    for obj in response.get("Contents", []):
        target = os.path.join(tempdir, os.path.basename(obj["Key"]))
        s3.download_file(S3_BUCKET, obj["Key"], target)