/.../ : start and end regex delimiters    | : alternation    () : grouping
Byobu Commands
==============
byobu is a screen (terminal multiplexer) manager.

Level 0 Commands (Quick Start)
------------------------------
<F2>    Create a new window
The Doubling Algorithm for Suffix Array Construction
I'll use x = "banana", m = 6 as an example. The suffixes are:
0: banana
1: anana
2: nana
3: ana
4: na
5: a
First, we'll work out R1, which is the rank of each suffix according to its first letter only. Because we're just comparing single letters, we can do it in Theta(m) with a bucket sort. There will be some duplicates, and that's fine. We only have three distinct letters, so we give all of the a's a rank of 0, all of the b's a rank of 1, and all of the n's a rank of 2. We end up with this:

    x:  b a n a n a
    R1: 1 0 2 0 2 0
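The first-letter ranking step above can be sketched in Python. The names `x` and `R1` follow the text; the helper function and its implementation details are my own illustration, not a prescribed interface:

```python
def first_letter_ranks(x: str) -> list[int]:
    """Rank each suffix of x by its first character only.

    Equal letters get equal ranks, so duplicate ranks are expected
    at this stage; later doubling rounds will break the ties.
    """
    # Enumerate the distinct letters in sorted order and assign each a
    # bucket number. With a fixed alphabet this whole step is Theta(m).
    rank_of = {ch: r for r, ch in enumerate(sorted(set(x)))}
    return [rank_of[ch] for ch in x]

x = "banana"
R1 = first_letter_ranks(x)
print(R1)  # a -> 0, b -> 1, n -> 2, giving [1, 0, 2, 0, 2, 0]
```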
#!/bin/bash
# ofed-ubuntu.sh
# Install Mellanox OFED on Ubuntu machines
#
# Author: Nilson Lopes (06/17/2021)

ubuntu_release=$(awk -F '"' '/VERSION_ID/ {print $2}' /etc/os-release)
mofed_package_version='latest'
mofed_repo_base_url="https://linux.mellanox.com/public/repo/mlnx_ofed"
Q1. The metric node_cpu_temp_celsius reports the current temperature of a node's CPU in Celsius. What query will return the average temperature across all CPUs, on a per-node basis? The query should return:

{instance="node1"} 23.5  // average temp across all CPUs on node1
{instance="node2"} 33.5  // average temp across all CPUs on node2

given the sample series:
node_cpu_temp_celsius{instance="node1", cpu="0"} 28
node_cpu_temp_celsius{instance="node1", cpu="1"} 19
node_cpu_temp_celsius{instance="node2", cpu="0"} 36
node_cpu_temp_celsius{instance="node2", cpu="1"} 31
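One query that produces the requested per-node averages is a PromQL `avg` aggregation grouped by the `instance` label (equivalently, `avg(...) without (cpu)`):

```promql
avg by (instance) (node_cpu_temp_celsius)
```

With the sample series above this yields (28 + 19) / 2 = 23.5 for node1 and (36 + 31) / 2 = 33.5 for node2, dropping the `cpu` label from the result.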
Finally used OpenAI's Deep Research for the first time
This report outlines a model-independent framework for fine-tuning large AI models on a cluster of AMD Ryzen AI Max+ 395 nodes. The design supports a minimum of two nodes and scales to much larger deployments. We focus on optimizing fine-tuning efficiency using the XDNA 2 neural processing unit (NPU) in these chips, while keeping the setup accessible to developers of open-source AI models. Key areas include architecture and low-level optimizations, model splitting strategies, network and data throughput tuning, alternative computation models, and continuous benchmarking for improvements.
XDNA 2 NPU vs CPU/GPU: AMD’s XDNA 2 NPU (built into Ryzen AI Max chips) is a specialized spatial dataflow engine optimized for AI workloads. It consists of a 2D array of compute tiles with a flexible interconnect and on-chip SRAM