@fideloper
Last active August 4, 2024 18:56
Find, format, and mount an AWS Ephemeral NVMe disk within ec2 in user data
#!/usr/bin/env bash
###
## This mounts a (single) ephemeral NVMe drive in an EC2 server.
## It's meant to be run once, within user-data
## For EBS drives (non-ephemeral storage), see: https://gist.github.com/jalaziz/c22c8464cb602bc2b8d0a339b013a9c4
#
# Install the "nvme" command
# See: https://github.com/linux-nvme/nvme-cli
sudo apt-get install -y nvme-cli
# Create a mount point (directory)
sudo mkdir -p /some/mount
# Find ephemeral storage (assumes a single ephemeral disk)
# and format it (assumes this is run on first-boot in user-data, so the disk is not formatted)
EPHEMERAL_DISK=$(sudo nvme list | grep 'Amazon EC2 NVMe Instance Storage' | awk '{ print $1 }')
sudo mkfs.ext4 "$EPHEMERAL_DISK"
sudo mount -t ext4 "$EPHEMERAL_DISK" /some/mount
### For some crazy reason, add the ephemeral disk mount to /etc/fstab,
## even though you lose its data on EC2 stop/starts (I believe the data survives regular reboots?)
#
# Find the mounted drive UUID so we can mount by UUID
EPHEMERAL_UUID=$(sudo blkid -s UUID -o value "$EPHEMERAL_DISK")
echo "UUID=$EPHEMERAL_UUID /opt/nomad ext4 defaults 0 0" | sudo tee -a /etc/fstab
@fideloper
Author

fideloper commented Oct 18, 2020

The Issue

NVMe drives in AWS have a few fun factors:

  1. AWS EC2 has you attach drives at device names such as /dev/sda1, but within the EC2 instance you'll only see NVMe device names such as /dev/nvme0n1.
  2. Drive re-ordering means:
    1. The drive names (e.g. /dev/nvme0n1) can change across reboots.
    2. Drives are named inconsistently (root drives vs secondary drives). For example, a new server's secondary drive might be /dev/nvme0n1 or /dev/nvme1n1.

This means we need a programmatic way to decipher which drive is the root drive vs a secondary drive to correctly mount secondary EBS disks or ephemeral storage.
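
For example, one way to do that programmatically is to read each NVMe device's model string, which names instance storage and EBS volumes differently. A rough sketch, assuming nvme-cli is installed and at least one NVMe disk is present:

# Print each NVMe disk with its model string. Ephemeral instance-store disks
# report "Amazon EC2 NVMe Instance Storage"; EBS volumes report
# "Amazon Elastic Block Store".
for dev in /dev/nvme*n1; do
  model=$(sudo nvme id-ctrl "$dev" | awk -F: '/^mn /{ gsub(/^ +| +$/, "", $2); print $2 }')
  echo "$dev -> $model"
done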

Ephemeral NVMe Drives

The above script will find, format, and mount an AWS Ephemeral NVMe disk within ec2.

It's meant to be run within a user-data script.

I've tried this on Ubuntu 18.04 and 20.04.
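
For context, such a script is normally handed to the instance at launch. A hypothetical AWS CLI example (the AMI ID, key name, and script file name are placeholders; m5d.large is simply an instance type that comes with NVMe instance storage):

# Hypothetical launch command: pass the script above as user data so it runs on
# first boot. AMI ID, key name, and script file name are placeholders.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type m5d.large \
  --key-name my-key \
  --user-data file://mount-ephemeral.sh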

EBS NVMe Drives

For EBS drives (non-ephemeral storage), you'll want to use this gist as a guide to help you:

  1. Create symlinks (e.g. /dev/foo) that match the device names you give the drives when attaching them within AWS (instead of the drive names you see inside the server, such as /dev/nvme0n1)
  2. Use the symlinks, which give you known device names, to format/mount the EBS drives as needed

The 70-ec2-nvme-devices.rules file in the gist above goes into the /usr/lib/udev/rules.d directory (possibly /etc/udev/rules.d), and the ebsnvme-id command goes in /sbin/ebsnvme-id.
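
Once those two files are in place, the attach-time device name shows up as a stable symlink, so user-data can format and mount by a known name. A rough sketch, assuming the volume was attached as /dev/sdf (both /dev/sdf and /some/ebs-mount are placeholders):

# With the udev rule and /sbin/ebsnvme-id from the linked gist installed, the
# attach-time name (e.g. /dev/sdf) is a symlink to the real /dev/nvmeXnY device.
EBS_DISK=/dev/sdf                  # the device name chosen when attaching the volume
sudo mkfs.ext4 "$EBS_DISK"         # first boot only: the volume is still blank
sudo mkdir -p /some/ebs-mount
sudo mount -t ext4 "$EBS_DISK" /some/ebs-mount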

@robgero

robgero commented Mar 2, 2023

Thanks, really convenient script! I'm just wondering, though: where did /opt/nomad on line 32 come from?
echo "UUID=$EPHEMERAL_UUID /opt/nomad ext4 defaults 0 0" | sudo tee -a /etc/fstab

I assumed you should use your mount point there, /some/mount?

@fideloper
Author

fideloper commented Mar 2, 2023 via email

@deus93

deus93 commented Apr 25, 2024

The EC2 instance will get stuck at boot if you use the UUID in /etc/fstab after a stop/start (the ephemeral disk comes back blank, so that UUID no longer exists).
I added some checks: if I need to keep the data across a reboot, I check whether the disk is already formatted.

MOUNT="/opt/data"
EPHEMERAL_DISK=$(sudo nvme list | grep 'Amazon EC2 NVMe Instance Storage' | awk '{ print $1 }')
# Check if the mount point exists in /proc/mounts
if grep -qs "${MOUNT}" /proc/mounts; then
    echo "It's mounted."
else
    # Check if the block device exists and is not formatted as ext4
    if [ -b "${EPHEMERAL_DISK}" ] && ! blkid "${EPHEMERAL_DISK}" | grep -qs ext4; then
        echo "It's not mounted and not formatted, formatting..."
        # Format the block device as ext4
        if mkfs.ext4 "${EPHEMERAL_DISK}"; then
            echo "Formatting ${EPHEMERAL_DISK} succeeded!"
        else
            echo "Failed to format ${EPHEMERAL_DISK}."
            exit 1
        fi
    fi

    # Attempt to mount the block device to the specified mount point
    echo "It's not mounted, trying to mount..."
    if mkdir -p "${MOUNT}" && mount -t ext4 "${EPHEMERAL_DISK}" "${MOUNT}"; then
        echo "Mounted ${MOUNT} successfully!"
    else
        echo "Something went wrong with the mount - ${EPHEMERAL_DISK}"
        exit 1
    fi
fi
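
On the fstab hang: if you do keep an fstab entry for the ephemeral disk, marking it nofail (plus a short device timeout) lets boot continue when the disk comes back blank or missing after a stop/start. A sketch, reusing the UUID variable from the original script and /opt/data as an example mount point:

# Hypothetical fstab entry: "nofail" and a short device timeout keep boot from
# blocking if the ephemeral disk is missing or unformatted after a stop/start.
echo "UUID=$EPHEMERAL_UUID /opt/data ext4 defaults,nofail,x-systemd.device-timeout=5s 0 0" | sudo tee -a /etc/fstab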

@fideloper
Author

Thanks for sharing!

@YusDyr

YusDyr commented Jun 28, 2024

This is my version.
It checks whether there are local NVMe disks and, if so, tries to build a RAID0 array from them. If there are no such disks, it searches for an EBS volume and mounts that instead.
It also checks whether the mount is protected by the Vormetric Encryption third-party solution (because that overlays the mount).

#!/bin/bash
#
# Script to mount all attached EBS disks with tag "mountPoint"
# Checks first for local NVMe disks and, if there are any, builds an MD RAID0 array and mounts it at $PATH_TO_DIR_FOR_MD0_MOUNT
# Required:
#    aws-cli utility
#    nvme utility
#    Be authorized in AWS to run "aws ec2 describe-volumes"
#

set -exuo pipefail

declare -A AWS_EBS_Volumes
declare -A Linux_Nvme_Volumes
declare -r RES_CODE_ON_EBS_ABSENT=20
declare -r RES_CODE_ON_MD0_EXISTS=30
declare -r PATH_TO_DIR_FOR_MD0_MOUNT="/data"
declare -r DATADIR="/data"

function Get_AWS_Volumes_Info() {
  ## Get local volume device name corresponding to log volume block device name
  Instance_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

  aws ec2 describe-volumes \
    --filters Name=attachment.instance-id,Values="${Instance_ID}" \
    --query "Volumes[?not_null(Tags[?Key == 'mountPoint'].Value)].{ID:VolumeId,Mount:Tags[?Key=='mountPoint'].Value | [0]}" \
    --output text

  return $?
}

function Populate_AWS_EBS_Volumes() {
  local Info=""
  Info=$(Get_AWS_Volumes_Info)
  local res_code=$?
  [[ $res_code != 0 ]] && return $res_code
  [ -z "$Info" ] && return $RES_CODE_ON_EBS_ABSENT
  while read -r volumeID mountPoint; do
    AWS_EBS_Volumes["$volumeID"]=$mountPoint
  done <<<"$Info"
}

function Get_Nvme_List() {
  # Get /dev/nvme list and their serial number (which are volume id for AWS)
  # Return pairs of VolumeID and appropriate /dev/nvmeX DeviceName
  /sbin/nvme list |
    grep -F '/dev/' |
    awk '{ if ( $2 ~ /^vol[A-Za-z0-9]+$/) {
            gsub("vol","vol-",$2);
            print $2" "$1
         }
    }'
}

function Get_Local_NVMe() {
  ls /dev/disk/by-id/nvme-Amazon_EC2_NVMe_Instance_Storage_AWS*
}

function Populate_Linux_Nvme_Volumes() {
  while read -r volumeID mountpoint; do
    Linux_Nvme_Volumes[$volumeID]=$mountpoint
  done < <(Get_Nvme_List)
}

function Make_Local_Nvme_Mdraid() {
  echo "Creating md0 RAID0 array..."
  if mdadm -D /dev/md0; then
    echo "md0 is already existing, mount it..."
    Mount_Disk /dev/md0 "$PATH_TO_DIR_FOR_MD0_MOUNT"
    return $RES_CODE_ON_MD0_EXISTS
  fi
  mdadm --create /dev/md0 \
      --raid-devices="$(ls /dev/disk/by-id/nvme-Amazon_EC2_NVMe_Instance_Storage_AWS* | wc -l)" \
      --level=0 \
      /dev/disk/by-id/nvme-Amazon_EC2_NVMe_Instance_Storage_AWS*
  mdadm --detail --scan | tee /etc/mdadm.conf
  mdadm -D /dev/md0
  Mount_Disk /dev/md0 "$PATH_TO_DIR_FOR_MD0_MOUNT"
}

function Get_DevName_By_Volume_ID() {
  local volumeID=$1
  local mountPoint=${Linux_Nvme_Volumes[$volumeID]}
  echo "$mountPoint"
}

function Get_lsblk_By_DevName() {
  local DevName=$1
  lsblk --list --path --noheadings -o NAME,MOUNTPOINT,UUID "$DevName"
}

function Mount_Disk() {
  local DevName=$1
  local MountPoint=$2
  local WriteToFstab=${3:-""}
  mkdir -p "$MountPoint"
  # If only this device is listed (i.e. it has no partitions on it)
  if [[ $(Get_lsblk_By_DevName "$DevName" | wc -l) == 1 ]]; then
    # If this device is unformatted and has no filesystem
    if [[ $(file -b -s "$DevName" | grep -Ec '^data$') == 1 ]]; then
      # ...then make it
      mkfs -t xfs "$DevName"
    fi
    sleep 1
    local DevUUID="$(lsblk --noheadings --output UUID "$DevName")"
    if [[ -n $WriteToFstab ]]; then
      local OldFstabRecord="$(grep -E '^\s*UUID='"$DevUUID" /etc/fstab)"
      local NewFstabRecord="UUID=$DevUUID $MountPoint xfs defaults,noatime,nofail 0 2"
      # If there is no record in /etc/fstab
      if [ -z "$OldFstabRecord" ]; then
        # ...add it
        echo "$NewFstabRecord" >>/etc/fstab
      # If there is a record with that UUID but another mount point, replace it and prepare to remount
      elif [ -z "$(echo "$OldFstabRecord" | grep -w "$MountPoint")" ]; then
        echo "Remount $DevName to a new path"
        umount --verbose "UUID=$DevUUID"
        sed -i "s|$OldFstabRecord|$NewFstabRecord|g" /etc/fstab
      fi
    fi
    # If not mounted yet, mount it
    # Example:
    # /dev/nvme2n1 on /data type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
    # /data on /data type secfs2 (rw,relatime,seclabel) << correct
    # or
    # /dev/nvme2n1 on /data type xfs (rw,relatime,seclabel,attr2,inode64,noquota) << correct
    # or
    # /data on /data type secfs2 (rw,relatime,seclabel) << should be remounted over separate partition
    # or
    # /dev/nvmeXXXn1 on /data type xfs (rw,relatime,seclabel,attr2,inode64,noquota) << should be remounted by over partition
    #
    set +e
    curr_mount=$(mount -v | grep -F -w "on $MountPoint")
    set -e
    if [ $(echo -n "$curr_mount"| grep -A2 -B2 -F 'secfs2' | wc -l) -eq 1 ]; then
      echo "$MountPoint mounted as vormetric partition, but not as physical! Stopping vormetric client before mount ${DevUUID}"
      /etc/vormetric/secfs stop
      echo "Mounting $DevUUID to $MountPoint..."
      mkdir -p "$MountPoint"
      mount -v "UUID=$DevUUID" "$MountPoint"
      /etc/vormetric/secfs start
      exit 1
    elif [[ ($curr_mount) && ($(echo "$curr_mount"| wc -l) -ne 0) ]]; then
      echo "$MountPoint is already/still mounted."
      echo "$curr_mount"
    else
      echo "Mounting $DevUUID to $MountPoint..."
      mkdir -p "$MountPoint"
      mount -v "UUID=$DevUUID" "$MountPoint"
    fi
  fi
  chown -Rv mongod:mongod "${DATADIR}"
  chmod 770 "${DATADIR}"
  semanage fcontext -a -t mongod_var_lib_t "${DATADIR}(/.*)?"
  chcon -R system_u:object_r:mongod_var_lib_t:s0 "${DATADIR}"
  restorecon -R -v "${DATADIR}"
}

#    if [ -z "$Info" ]; then
#      if Get_Local_NVMe; then
#        # If there is local NVMe disks, make MD RAID and use it for /data
#        Make_Local_Nvme_Mdraid
#      else
#        return $RES_CODE_ON_EBS_ABSENT
#      fi
#    fi

main() {
  if ! Populate_AWS_EBS_Volumes; then
    if Get_Local_NVMe; then
      Make_Local_Nvme_Mdraid
    fi
  fi
  Populate_Linux_Nvme_Volumes

  # Debug info
  declare -p AWS_EBS_Volumes
  declare -p Linux_Nvme_Volumes

  for volumeID in "${!AWS_EBS_Volumes[@]}"; do
    local DevName=$(Get_DevName_By_Volume_ID $volumeID)
    local MountPoint="${AWS_EBS_Volumes[$volumeID]}"
    if [ "$MountPoint" ]; then
      Mount_Disk "$DevName" "$MountPoint" "WriteToFstab"
    else
      warn "No MountPoint specified for $DevName!"
    fi
    # printf "%s\n" "$volumeID=${AWS_EBS_Volumes[$volumeID]}"
  done
}

main
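
Note that the EBS path of this script depends on each volume carrying a mountPoint tag. For illustration, tagging could look like this (the volume ID is a placeholder):

# Hypothetical example: tag an attached EBS volume so the script above knows
# where to mount it. The volume ID is a placeholder.
aws ec2 create-tags \
  --resources vol-0123456789abcdef0 \
  --tags Key=mountPoint,Value=/data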

@YusDyr

YusDyr commented Jun 28, 2024

My extended version
