mhdzli / stdin
Created May 13, 2021 07:29 — forked from sochotnicky/stdin
diff --git a/main.c b/main.c
index f979e24..2f54043 100644
--- a/main.c
+++ b/main.c
@@ -628,6 +628,7 @@ int main(int argc, char *argv[]) {
     zwlr_layer_surface_v1_add_listener(
             state.layer_surface, &layer_surface_listener, &state);
     zwlr_layer_surface_v1_set_anchor(state.layer_surface, anchor);
+    zwlr_layer_surface_v1_set_size(state.layer_surface, 2560, 1440);
     zwlr_layer_surface_v1_set_margin(state.layer_surface,
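
For context, this patches what looks like a wlr-layer-shell client so that it requests an explicit surface size. In the wlr-layer-shell-unstable-v1 protocol, passing 0 for a dimension of `set_size` asks the compositor to assign that dimension, which is only valid when the surface is anchored to both opposite edges on that axis; otherwise the client must pick a size itself, as the patch does. A minimal sketch of both approaches, assuming the client headers generated by wayland-scanner (the 2560x1440 figure matches one specific display and is an assumption anywhere else):

```c
/* Sketch only (not the gist author's code): two ways to size a
 * zwlr_layer_surface_v1, using the wlr-layer-shell-unstable-v1 protocol. */
#include "wlr-layer-shell-unstable-v1-client-protocol.h"

/* Fixed size, as in the patch above. 2560x1440 assumes one particular
 * monitor; other outputs would need different values. */
static void use_fixed_size(struct zwlr_layer_surface_v1 *ls) {
    zwlr_layer_surface_v1_set_size(ls, 2560, 1440);
}

/* Alternative: anchor to all four edges and pass 0x0, so the compositor
 * assigns the size and reports it in the configure event. */
static void use_output_size(struct zwlr_layer_surface_v1 *ls) {
    zwlr_layer_surface_v1_set_anchor(ls,
            ZWLR_LAYER_SURFACE_V1_ANCHOR_TOP |
            ZWLR_LAYER_SURFACE_V1_ANCHOR_BOTTOM |
            ZWLR_LAYER_SURFACE_V1_ANCHOR_LEFT |
            ZWLR_LAYER_SURFACE_V1_ANCHOR_RIGHT);
    zwlr_layer_surface_v1_set_size(ls, 0, 0);
}
```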
mhdzli / instructions.md
Created November 4, 2022 13:08 — forked from jdmar3/instructions.md
Simple Slurm configuration on Debian-based systems

Slurm Configuration for a Debian-based Cluster

Here I will describe a simple configuration of the Slurm management tool for launching jobs on a very simple cluster. I will assume the following setup: a main node (for me, an Arch Linux machine) and 3 compute nodes (for me, Debian VMs). I also assume there is ping access between the nodes and some mechanism for you to know the IP of each node at all times (the most basic being a local NAT with static IPs).
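
As a minimal sketch of that last assumption, the simplest mechanism is an identical `/etc/hosts` on every node mapping static IPs to names (the hostnames and 192.168.1.x addresses below are hypothetical placeholders):

```
# /etc/hosts (same file on the main node and all compute nodes)
# Hypothetical static IPs behind a local NAT; adjust to your network.
192.168.1.10   mainnode
192.168.1.11   compute01
192.168.1.12   compute02
192.168.1.13   compute03
```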

Basic Structure

The Slurm management tool works on a set of nodes, one of which is considered the master node and runs the slurmctld daemon; all other compute nodes run the slurmd daemon. All communication is authenticated via the munge service, and all nodes need to share the same authentication key. By default, Slurm keeps a journal of activities in a directory configured in the slurm.conf file, though a database management system can be used instead.
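
To make these roles concrete, here is a minimal slurm.conf sketch under the assumptions above; the hostnames, CPU count, and paths are hypothetical placeholders, not the configuration developed in the steps below:

```
# /etc/slurm/slurm.conf (minimal sketch; the same file goes on every node)
ClusterName=mycluster
SlurmctldHost=mainnode            # the master node, running slurmctld
AuthType=auth/munge               # every node shares the same munge key
StateSaveLocation=/var/spool/slurmctld
SlurmdSpoolDir=/var/spool/slurmd

# One entry per compute node running slurmd.
NodeName=compute[01-03] CPUs=1 State=UNKNOWN
PartitionName=debug Nodes=compute[01-03] Default=YES MaxTime=INFINITE State=UP
```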

All in all, what we will try to do is:

  • Install `munge`