I wrote these instructions as part of "installing PyTorch with CUDA 12.1.1".
Anyway, if you still need to compile from source… here's how:
This is a dependency of PyTorch, which is sensitive to CUDA version.
Clone Magma:
""" | |
Extract the contents of a `run-{id}.wandb` database file. | |
These database files are stored in a custom binary format. Namely, the database | |
is a sequence of wandb 'records' (each representing a logging event of some | |
type, including compute stats, program outputs, experimental metrics, wandb | |
telemetry, and various other things). Within these records, some data values | |
are encoded with json. Each record is encoded with protobuf and stored over one | |
or more blocks in a LevelDB log. The result is the binary .wandb database file. |
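The record framing described above can be sketched with a small hypothetical helper. This is a minimal reading of the LevelDB log format (32 KiB blocks; each chunk has a 7-byte header of CRC, length, and type, where a record may be a FULL chunk or split into FIRST/MIDDLE/LAST fragments). It skips CRC validation, and the returned payloads would still need wandb's protobuf record schema to decode; none of the names here come from wandb itself.

```python
import struct

BLOCK_SIZE = 32768  # LevelDB log block size
FULL, FIRST, MIDDLE, LAST = 1, 2, 3, 4  # chunk types

def read_records(data: bytes) -> list:
    """Return raw record payloads from LevelDB-log-framed bytes.

    CRC checks are skipped for brevity; payloads are still protobuf-encoded.
    """
    records, fragment, offset = [], b"", 0
    while offset < len(data):
        block_remaining = BLOCK_SIZE - (offset % BLOCK_SIZE)
        if block_remaining < 7:
            offset += block_remaining  # zero-padded block trailer
            continue
        header = data[offset:offset + 7]
        if len(header) < 7:
            break
        crc, length, rtype = struct.unpack("<IHB", header)
        payload = data[offset + 7:offset + 7 + length]
        offset += 7 + length
        if rtype == FULL:
            records.append(payload)
        elif rtype == FIRST:
            fragment = payload
        elif rtype == MIDDLE:
            fragment += payload
        elif rtype == LAST:
            records.append(fragment + payload)
            fragment = b""
        # type 0 is zero padding: fall through and keep scanning
    return records
```

Each payload could then be fed to a protobuf `Record` parser from wandb's generated schema to recover the logged events.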
#!/bin/bash
# Downloads and applies a patch from Drupal.org.
if [ -z "$1" ]
then
    echo "You need to supply a URL to a patch file."
    exit 1
fi
URL=$1
wget "$URL"
patch -p1 < "$(basename "$URL")"
; /usr/share/pulseaudio/alsa-mixer/profile-sets/astro-a50-gen4.conf

[General]
auto-profiles = yes

[Mapping analog-voice]
description = Voice
device-strings = hw:%f,0,0
channel-map = left,right
paths-output = steelseries-arctis-output-chat-common
# Modify apt sources lists
cd /etc/apt/sources.list.d/
sudo rm gds-11-7.conf cuda-12-3.conf cuda-12-2.conf cuda-12-1.conf 989_cuda-11.conf cuda-ubuntu2004-11-7-local.list

# Modify apt preferences
cd /etc/apt/preferences.d
sudo rm cuda-repository-pin-600 nvidia-fabricmanager

# Startup shell environment variables
sudo vim /etc/profile.d/dlami.sh # comment out both lines
git clone https://git.videolan.org/git/ffmpeg/nv-codec-headers.git
cd nv-codec-headers
vi Makefile # change the first line to PREFIX = ${CONDA_PREFIX}
make install
cd ..
git clone https://git.ffmpeg.org/ffmpeg.git
cd ffmpeg
git checkout n4.2.2
conda install nasm
""" | |
Creates an HDF5 file with a single dataset of shape (channels, n), | |
filled with random numbers. | |
Writing to the different channels (rows) is parallelized using MPI. | |
Usage: | |
mpirun -np 8 python demo.py | |
Small shell script to run timings with different numbers of MPI processes: |
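The demo described in the docstring above might be sketched as follows. This is a hedged sketch, not the original `demo.py`: it assumes an MPI-enabled build of h5py (the `driver="mpio"` file driver) plus mpi4py, and the row-partitioning helper `channel_slice` is an invented name for illustration.

```python
# Sketch of a parallel HDF5 write: each MPI rank fills its own
# contiguous band of rows in a shared (channels, n) dataset.

def channel_slice(rank: int, nprocs: int, channels: int):
    """Half-open row range [start, stop) owned by this rank.

    Distributes `channels` rows as evenly as possible across `nprocs`
    ranks, giving the first `channels % nprocs` ranks one extra row.
    """
    per, extra = divmod(channels, nprocs)
    start = rank * per + min(rank, extra)
    stop = start + per + (1 if rank < extra else 0)
    return start, stop

def main(path="demo.h5", channels=8, n=1_000_000):
    # Heavy imports kept local so the helper above stays importable
    # without an MPI stack installed.
    import h5py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    # All ranks open the same file collectively with the MPI-IO driver.
    with h5py.File(path, "w", driver="mpio", comm=comm) as f:
        dset = f.create_dataset("data", (channels, n), dtype="f8")
        start, stop = channel_slice(comm.rank, comm.size, channels)
        rng = np.random.default_rng(comm.rank)
        for row in range(start, stop):
            dset[row, :] = rng.random(n)  # independent per-row write

# Run under MPI, e.g.: mpirun -np 8 python demo.py (then call main()).
```

Dataset creation must be collective (every rank calls `create_dataset` with identical arguments), while the per-row writes are independent since each rank touches disjoint rows.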
# Install HDF5 1.13.1 with parallel I/O in ~/local/hdf5,
# using ~/local/build/hdf5 as the build directory.
# https://github.com/HDFGroup/hdf5/blob/hdf5-1_13_1/release_docs/INSTALL_parallel
# https://docs.olcf.ornl.gov/software/python/parallel_h5py.html
# https://www.pism.io/docs/installation/parallel-io-libraries.html
version=1.13.1
prefix=$HOME/local/hdf5
build_dir=~/local/build/hdf5
hdf5_site=https://support.hdfgroup.org/ftp/HDF5/releases/hdf5-1.13
# One workaround is to create a clone of the environment and then remove the original one
# (remember to deactivate the current environment first: `deactivate` on Windows,
# `source deactivate` on macOS/Linux):
conda create --name new_name --clone old_name --offline # --offline skips redownloading all your packages
conda remove --name old_name --all # or its alias: `conda env remove --name old_name`
# There are several drawbacks to this method:
#  - time spent copying the environment's files,
#  - temporary double disk usage.
# -*- coding: utf-8 -*-
import os
import sys
import logging
from flask import Flask, request, jsonify
from flask_cors import CORS, cross_origin
from translate_client import Server