There aren't many tutorials about this; the only ones I found were about passing through entire PCIe cards to VMs, or referred to old ESXi versions (below 6.5) that used the more comprehensive desktop client instead of the web app. The web app was introduced in v6.5 and the desktop client was deprecated. You used to be able to set up RDMs in the desktop client, but with the introduction of the web console this is no longer the case. This tutorial shows you how to pass SATA HDDs through to a virtual machine on VMware ESXi 6.5. It is partially based on VMware's own KB and the now-deprecated Forza IT blog post.
There is now a `New raw disk` option when you click `Add hard disk` while editing your VM's settings. I have not personally tried it yet, but the comments below have confirmed it works.
We attach the SATA HDDs as RDMs (Raw Device Mappings): on the command line, we create the RDM pointer file alongside an existing virtual disk on the datastore; then, in the web app, we attach a new SCSI controller to the VM and attach the newly created RDM as an "existing hard disk" on that controller.
If you're new to ESXi, you should know a few things:
- A datastore is a separate layer that sits between a physical device/disk and virtual disks. A datastore may or may not use the full capacity of a physical disk, and one datastore can hold many virtual disks. Virtual disks are what a virtual machine stores its files on. In other words:

  Physical Disk > Datastore > Virtual Disks > Your filesystem, e.g. EXT4
- RDM, aka Raw Device Mapping, is a pointer file that acts like a virtual disk but maps directly to a physical disk. On ESXi 6.5, RDMs have to be created on the command line.
1. Before you get started, make sure you've got a VM set up and running. In my example, I have a virtual machine and virtual disk both conveniently named `Ubuntu Storage`.
2. In ESXi's web interface, log in and go to the home page. Click `Actions` > `Services` > `Enable Secure Shell (SSH)`.
3. Open an SSH session to the ESXi/ESX host:

   ```
   # ssh root@YOUR_ESXI_IP -p 22
   ```
4. Run this command to list the disks that are attached to the ESXi host:

   ```
   # ls -l /vmfs/devices/disks
   ```

   It should look like the listing below. We care about the physical disks with the `t10.` prefix. As mentioned in the comments, the name may not necessarily start with `t10.`; you can determine the name/path by going to the web console, then `Storage` > `Devices`, clicking on the device you're setting up as an RDM, and copying the path.

   ```
   [root@localhost:~] ls -l /vmfs/devices/disks
   total 12221184373
   -rw------- 1 root root 5000981078016 Feb 1 10:04 t10.ATA_____HGST_HDN726050ALE610____________________NAG4P4YX____________
   ```

   Here I have an HGST 5TB disk attached via SATA that I'm trying to pass through to my VM. There will be other entries here for datastores and virtual disks that are not relevant.
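   If it's hard to tell which entry corresponds to which physical disk, `esxcli` can also list the devices; this is a cross-check I find useful, and its output includes the device's display name, size, and `/vmfs/devices` path (exact fields can vary between ESXi builds):

   ```
   # Lists every storage device with fields including Display Name,
   # Devfs Path (the /vmfs/devices/disks/... path), and Size
   esxcli storage core device list
   ```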
5. Attach a physical disk as an RDM:

   ```
   # vmkfstools -z /vmfs/devices/disks/t10.ATA_____HGST_HDN726050ALE610____________________NAG4P4YX____________ "/vmfs/volumes/Samsung 850 Pro/Ubuntu Storage/HGST_RDM_1.vmdk"
   ```
   - Make sure you read the prerequisite knowledge section about datastores and virtual disks first.
   - `vmkfstools` attaches the physical disk and maps it to a `vmdk` file. `vmdk` files can only be created inside virtual disk folders, e.g. `/vmfs/volumes/Samsung 850 Pro/Ubuntu Storage/`.
   - The folder structure is in this format: `/vmfs/volumes/DATASTORE/VIRTUAL_DISK/`. The `VIRTUAL_DISK` should be the name of the virtual disk you're using in your VM; for me it's called `Ubuntu Storage`. You should `cd /vmfs/volumes/` and `ls` around to find the right virtual disk. In my case, `Samsung 850 Pro` is simply the datastore name I wrote to be self-descriptive. NB: it has to be an existing datastore or else you get the error `Failed to create virtual disk: The system cannot find the file specified (25)`. See https://gist.github.com/Hengjie/1520114890bebe8f805d337af4b3a064#gistcomment-2834063
   - Do not name your RDMs with `-rdm` at the end of the name or you'll receive the error `The value was rejected by rule "Virtual SCSI or NVMe Devices"`. See https://gist.github.com/Hengjie/1520114890bebe8f805d337af4b3a064#gistcomment-2845874 and https://communities.vmware.com/thread/491979
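   To sanity-check the mapping before attaching it, you can list what `vmkfstools -z` created; in physical compatibility mode there should be both a descriptor file and a `-rdmp` pointer file (paths here match my example above):

   ```
   # The descriptor (HGST_RDM_1.vmdk) and the raw device mapping
   # pointer (HGST_RDM_1-rdmp.vmdk) should both be present
   ls -lh "/vmfs/volumes/Samsung 850 Pro/Ubuntu Storage/" | grep HGST_RDM
   ```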
6. Attach the newly created RDM to the VM:
   - Go back to the web app, go to the virtual machine, and click `Edit`.
   - Under `Add other device`, click `SCSI controller`. (If you already have an existing SCSI controller for RDMs, you don't need to create another one; in other words, you can reuse the controllers.)
   - Under `Add hard disk`, click `Existing hard disk` and select the newly created vmdk file. You'll have to use the datastore file browser to select the `HGST_RDM_1.vmdk` file.
   - Once the HDD is created, expand the new disk (e.g. `Hard disk 2`) and make sure it's using the newly created SCSI controller. You'll have to click on the dropdowns and select e.g. `SCSI controller 1` and `SCSI (1:0)`.
   - Change `Disk Mode` to `Independent - persistent`. If `Disk Mode` is greyed out, change `Disk compatibility` from `Physical` to `Virtual`.
   Note: We use `Independent - persistent` mode because RDMs won't work with VMware disk snapshots. More thorough explanation here.
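   Once the VM is booted, the disk should be visible inside the guest. A quick check from an Ubuntu guest like my `Ubuntu Storage` VM (the device name, e.g. `/dev/sdb`, will vary on your system):

   ```
   # The RDM should show up as a new block device with the
   # physical drive's capacity and model string
   lsblk -o NAME,SIZE,MODEL
   ```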
- What if I want to change the RDM to another VM?
  Simply `rm` the vmdk file and go through steps 5 and 6 again with the new virtual disk for the new VM; see the sketch below.
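  A minimal sketch, assuming the same names as my example; `vmkfstools -U` removes the descriptor and its `-rdmp` pointer together, which is tidier than a bare `rm`, and the `New VM` folder is hypothetical:

  ```
  # Remove the old mapping files
  vmkfstools -U "/vmfs/volumes/Samsung 850 Pro/Ubuntu Storage/HGST_RDM_1.vmdk"

  # Re-create the RDM in the new VM's folder, then attach it via step 6
  vmkfstools -z /vmfs/devices/disks/t10.ATA_____HGST_HDN726050ALE610____________________NAG4P4YX____________ "/vmfs/volumes/Samsung 850 Pro/New VM/HGST_RDM_1.vmdk"
  ```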
- How can I pass through multiple disks to the same VM?

  Simply follow steps 4 to 6 with the new disk. You can attach the RDM drives to the same SCSI controller; in other words, the first drive could be on `0:0` and the next drive on `0:1`. There is a limit of 4 SCSI controllers per VM. An example follows below.

  NB: Thanks to the commenters for pointing out that you don't actually need a separate SCSI controller per RDM. Personally, I'd keep a separate SCSI controller just for RDMs, purely for cleanliness.
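  For example, mapping a second disk could look like this (the second device name here is hypothetical; use the path you copied from `Storage` > `Devices`):

  ```
  # Map a second physical disk into the same VM folder, then attach it
  # to the existing SCSI controller at the next free slot, e.g. 0:1
  vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_SECOND_DISK_SERIAL________ "/vmfs/volumes/Samsung 850 Pro/Ubuntu Storage/HGST_RDM_2.vmdk"
  ```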
> First of all, thank you for that tutorial; it helped with my "virtual NAS". Now for the bad news... It worked fine for me up to ESXi 7.0U2d. I don't know if there's a bug in 7.0U3, but my VM won't load the VMDK files made using this method. It looks like ESXi can't get a lock on the VMDK even if there's no lock in place; I checked with `vmfsfilelockinfo -p` and rebooted the ESXi server, and nothing helped. Reverting back to ESXi 7.0U2d (SHIFT+R during the ESXi boot process) solved my problem.