Flashing Jetson AGX Orin

Work in progress; will move to a repo once the demo is complete.

BSP/sample rootfs downloads: https://developer.nvidia.com/embedded/jetson-linux-r3640

  1. Get the board into recovery mode (power off, hold the reset button, power on, and let go after 3 s)

My kit refused to go into recovery mode that way, so I had to go into the BIOS and toggle it manually there. There is also a way to do it via the micro-USB UART serial port (untested): https://docs.nvidia.com/jetson/archives/r34.1/DeveloperGuide/text/AT/BoardAutomation.html
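
A quick way to confirm the board actually made it into recovery mode is to look for the NVIDIA recovery USB device from the flashing host (a sanity check, not from the original notes; the exact product string varies by module):

# on the flashing host, with the USB-C cable connected
lsusb | grep -i "nvidia corp"
# in recovery mode the module shows up as an NVIDIA "APX" USB device;
# if nothing appears, the board is not in recovery mode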

  2. Flashing quickstart (flash.sh): https://docs.nvidia.com/jetson/archives/r36.4/DeveloperGuide/IN/QuickStart.html#to-flash-the-jetson-developer-kit-operating-software
mkdir -p ~/jetson-flash && cd ~/jetson-flash
wget https://developer.nvidia.com/downloads/embedded/l4t/r36_release_v4.0/release/Jetson_Linux_R36.4.0_aarch64.tbz2 
wget https://developer.nvidia.com/downloads/embedded/l4t/r36_release_v4.0/release/Tegra_Linux_Sample-Root-Filesystem_R36.4.0_aarch64.tbz2

tar -jxf Jetson_Linux_R36.4.0_aarch64.tbz2 
cd Linux_for_Tegra/rootfs/
# sudo is very important for perms, otherwise flashing will reject
sudo tar -jxpf ../../Tegra_Linux_Sample-Root-Filesystem_R36.4.0_aarch64.tbz2
cd ..
sudo ./tools/l4t_flash_prerequisites.sh
sudo ./apply_binaries.sh

# `sudo ./nvautoflash.sh` did not work for me, YMMV
# args are <board config> <rootdev>; mmcblk0p1 targets the internal eMMC
sudo ./flash.sh jetson-agx-orin-devkit mmcblk0p1

Flashing script full docs: https://docs.nvidia.com/jetson/archives/r34.1/DeveloperGuide/text/SD/FlashingSupport.html#flashing-support
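
Once flash.sh finishes and you have completed the first-boot setup on the board, it is usually also reachable over the USB device-mode network (192.168.55.1 is the common Jetson default; the username is whatever you created during setup, not from the original notes):

# from the host, over the USB-C cable (assumes default device-mode networking)
ssh <your-user>@192.168.55.1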

  3. Install Docker / a container runtime (rough command sketches below): https://docs.docker.com/engine/install/ubuntu/#install-using-the-convenience-script
  4. Ensure the NVIDIA Container Toolkit is configured: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
  5. Test a container: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/sample-workload.html
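
Rough command sketches for steps 3-5, pulled from the linked docs rather than the original notes (JetPack 6 images often ship Docker and the container toolkit already, so check before reinstalling):

# 3. Docker via the convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker "$USER"   # optional: run docker without sudo (log out/in after)

# 4. point Docker at the NVIDIA runtime (the toolkit ships with JetPack;
#    otherwise install nvidia-container-toolkit per the linked guide first)
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# 5. sample workload from the toolkit docs (generic dGPU example; on Jetson
#    an l4t-base image may be a better smoke test)
sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi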

If you have a better test to verify that the CUDA/NVIDIA drivers are solid, feel free to use that instead.
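
One option for a more direct check is building deviceQuery from the cuda-samples repo on the board (paths and build system vary by release, so treat this as a sketch):

git clone --depth 1 https://github.com/NVIDIA/cuda-samples.git
cd cuda-samples/Samples/1_Utilities/deviceQuery
make              # newer releases build with cmake instead; check the repo README
./deviceQuery     # should list the Orin iGPU and finish with "Result = PASS"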

You may need to edit your ~/.bashrc for CUDA env vars https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#post-installation-actions
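
Typical additions, assuming CUDA landed under /usr/local/cuda (adjust the path for your JetPack/CUDA version):

echo 'export PATH=/usr/local/cuda/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc
nvcc --version    # sanity check that the toolkit is on PATH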


Post-flash / initial setup

  • Swapping power mode to max (will reboot):

    sudo nvpmodel -m 0
    # answer yes at the interactive prompt (the board will reboot)
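    # not in the original notes: query the active mode to confirm after the reboot
    sudo nvpmodel -q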
    
  • Setting up pip / uv

    sudo apt install python3-pip
    echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
    # restart shell or `source ~/.bashrc`
    pip install uv
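    # not in the original notes: confirm uv is on PATH after restarting the shell
    uv --version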
  • Setting up the NVMe drive (copied from a Slack convo, modify as needed)

    sudo fdisk /dev/nvme0n1 # follow the prompts: n for a new partition (I accepted the defaults), then w to write
    sudo mkfs.ext4 /dev/nvme0n1p1
    sudo mkdir /mnt/data
    sudo mount /dev/nvme0n1p1 /mnt/data
    echo "/dev/nvme0n1p1 /mnt/data ext4 defaults 0 2" | sudo tee -a /etc/fstab
    sudo systemctl stop docker
    sudo mv /var/lib/docker /mnt/data/docker
    sudo ln -s /mnt/data/docker /var/lib/docker
    sudo systemctl start docker
    
    sudo mkdir /mnt/data/home
    sudo mv "$HOME" /mnt/data/home/
    echo "/mnt/data/home /home none bind 0 0" | sudo tee -a /etc/fstab
    sudo reboot
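
After the reboot, a quick sanity check that everything landed on the NVMe (not in the original notes; paths assume the layout above):

df -h /mnt/data /home /var/lib/docker   # all three should be backed by /dev/nvme0n1p1
findmnt /home                           # should show the bind mount from /mnt/data/home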

Set up gh and Git

WIP
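
Until then, a minimal sketch (gh is in the Ubuntu repos, though often an older build; GitHub's own apt repo is the alternative, and the name/email below are placeholders):

sudo apt install gh git
gh auth login                                      # interactive; pick HTTPS or SSH
git config --global user.name  "Your Name"         # placeholder
git config --global user.email "you@example.com"   # placeholder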
