Common symptoms of storage performance-related issues:
- The application is slow to complete its work.
- Performance issues coincide with high levels of disk I/O, including IOPS, throughput, and latency (await) problems.
- %iowait is high and/or %system is high, the latter particularly if the storage uses LUKS encryption (see the quick check after this list).
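If LUKS is suspected of driving %system, one quick way to confirm that a dm-crypt layer sits in the I/O path (a minimal check; device names and output vary by system) is:
lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT    # a TYPE of "crypt" indicates a dm-crypt/LUKS layer
sudo dmsetup table | grep crypt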
Troubleshooting:
- Check the sar data for higher-than-normal latency (await) on the device-mapper and NVMe devices.
- Check the sar data for high %iowait, which indicates that applications are likely being impacted (see the example commands after this list).
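As a sketch (the one-second interval and five-sample count are arbitrary), sar from the sysstat package reports both metrics, and historical data can be read back from the daily files:
sar -d -p 1 5    # per-device statistics, including await
sar -u 1 5       # CPU utilization, including %iowait and %system
sar -d -p -f /var/log/sa/sa15    # example: history for the 15th of the month; file naming varies by distribution and sysstat version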
Install the tools used below.
SLES:
sudo zypper in -y lvm2 nvme-cli sysstat fio
RHEL:
sudo dnf install -y lvm2 nvme-cli sysstat fio
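Note that sar can only report data the sysstat collector actually recorded; if the history is empty, the collector typically needs to be enabled first (unit name may vary by distribution):
sudo systemctl enable --now sysstat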
Gather block device, LVM, filesystem, and mount details using the following commands:
Mounts:
findmnt -l
Block devices:
lsblk -f -t -m -o +SERIAL | sed 's/vol/vol-/'
(the sed expression assumes a cloud environment such as EC2, where an NVMe serial like vol0abc123 corresponds to the volume ID vol-0abc123)
blockdev --report
sudo nvme list
LVM:
sudo dmsetup info -c
sudo dmsetup table
sudo dmsetup ls --tree
sudo pvs
sudo vgs
sudo lvs -a -o+lv_layout,lv_role,stripes,devices
sudo vgdisplay -v <volume_group>
sudo lvdisplay -m /dev/<volume_group>/<logical_volume>
sudo dmsetup deps /dev/<volume_group>/<logical_volume>
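For example, with a hypothetical volume group named data_vg containing a logical volume named app_lv:
sudo vgdisplay -v data_vg
sudo lvdisplay -m /dev/data_vg/app_lv
sudo dmsetup deps /dev/data_vg/app_lv
The -m flag on lvdisplay prints the segment-to-device mapping, which shows whether the volume is linear or striped and which physical volumes serve each segment.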
cat /proc/diskstats
iostat command(s) to monitor disk I/O activity:
iostat -h -y -t -N -c -d -x -p ALL 1
iostat -x 2 | grep -E 'sd|nvme|Device'
In the extended (-x) output, high r_await/w_await values point to per-I/O latency, and %util approaching 100% points to device saturation (column names vary slightly between sysstat versions).
fio command(s) for testing performance:
fio --name TEST --eta-newline=5s --filename=/mnt/fio-tempfile.dat --rw=write --size=50G --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
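The command above is a sequential-write test. A companion random-read latency test (same hypothetical file path; size and runtime are examples to adjust) might look like:
fio --name READTEST --eta-newline=5s --filename=/mnt/fio-tempfile.dat --rw=randread --size=10G --blocksize=4k --ioengine=libaio --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --time_based --group_reporting
Remove /mnt/fio-tempfile.dat afterwards to reclaim the space, and never point --filename at a device or file holding live data, since write tests will overwrite its contents.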