Lecture 1: Introduction to Research — [📝Lecture Notebooks]
Lecture 2: Introduction to Python — [📝Lecture Notebooks]
Lecture 3: Introduction to NumPy — [📝Lecture Notebooks]
Lecture 4: Introduction to pandas — [📝Lecture Notebooks]
Lecture 5: Plotting Data — [📝Lecture Notebooks]
import asyncio

import psycopg2

# dbname should be the same for the notifying process
conn = psycopg2.connect(host="localhost", dbname="example", user="example", password="example")
# Autocommit so that LISTEN takes effect immediately, outside of a transaction
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
cursor = conn.cursor()
cursor.execute("LISTEN match_updates;")
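The connection's socket can then be watched from the asyncio event loop. Below is a minimal sketch of consuming the notifications on a Unix selector event loop; the callback name and the plain print are illustrative, not part of the original snippet.

def handle_notify():
    conn.poll()  # read whatever arrived on the socket
    while conn.notifies:
        notify = conn.notifies.pop(0)
        print(notify.channel, notify.payload)

loop = asyncio.new_event_loop()
# psycopg2 connections expose fileno(), so the loop can watch the underlying socket
loop.add_reader(conn.fileno(), handle_notify)
loop.run_forever()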
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject", "s3:GetObjectVersion"],
      "Resource": ["arn:aws:s3:::YOUR_BUCKET_NAME_GOES_HERE/*"]
    }
  ]
}
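If you prefer to apply the policy programmatically rather than through the console, here is a minimal boto3 sketch; the policy.json file name is an assumption, and the bucket name is the same placeholder as above (aws s3api put-bucket-policy does the same from the CLI).

import json

import boto3

# Load the policy document above, saved locally as policy.json (name is illustrative)
with open("policy.json") as f:
    policy = json.load(f)

s3 = boto3.client("s3")
s3.put_bucket_policy(
    Bucket="YOUR_BUCKET_NAME_GOES_HERE",
    Policy=json.dumps(policy),
)

Note that buckets created with the default Block Public Access settings will reject a public-read policy until those settings are relaxed.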
Add yourself to the `docker` group to be able to run containers as non-root (see Post-install steps for Linux).
# Resize the disk in the UI, under VM -> Hardware -> click on the disk to resize, click the "Resize disk" button
# Confirm increase in disk space (1TB in my case)
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 1T 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 1G 0 part /boot
└─sda3 8:3 0 1T 0 part
# Solution based on https://stackoverflow.com/a/51785735/278836
import tensorflow as tf

def extract_patches(images):
    # Extract every 3x3 patch (stride 1, no dilation) from a batch of NHWC images
    return tf.image.extract_patches(
        images,
        sizes=(1, 3, 3, 1),
        strides=(1, 1, 1, 1),
        rates=(1, 1, 1, 1),
        padding="VALID")
This is a short description of how to host services on a machine behind CGNAT (or an otherwise restrictive firewall) by forwarding packets through a WireGuard endpoint on a relatively fast nearby VPS, using a Storj node as an example. This is not specific to Storj and can be adapted to hosting other services.
As an example we will use an Oracle Cloud instance. The free tier still provides 10 TB of monthly traffic, which is sufficient for most node operators. Just make sure to create the account in the datacenter closest to you to minimize the extra latency.
- Create the Oracle compute instance (ideally Ampere, because they are awesome, but if that is not available, any other shape will do too).
- Pick any OS you prefer; here we'll describe Ubuntu, as the most popular one.
Moved to nostr-resources.com
Docker Compose has nice support for GPUs, and K8s has moved its cluster-wide GPU scheduler from experimental to stable status. Docker Swarm has yet to support the `device` option used in Docker Compose, so the mechanisms for supporting GPUs on Swarm are a bit more open-ended.
- NVIDIA container runtime for Docker. The runtime is no longer required for GPU support with the Docker CLI or Compose; however, it appears necessary so that one can set `Default Runtime: nvidia` for Swarm mode.
- Docker Compose GPU support
- A good GitHub Gist reference for an overview of Swarm with GPUs. It is a bit dated, but has good links and conversation.
- [Miscellaneous Opt