Building a VMware vSphere Virtual Lab with VMware Fusion - Part 5
This is the fifth part in a series of tutorials on how to build a VMware vSphere Virtual Lab on a Mac with VMware Fusion. In this tutorial, we’ll create an Ubuntu iSCSI storage server and connect our ESXi hosts to it.
Overview
In the last tutorial, we created a three-node cluster, but we couldn’t enable DRS or HA because they require shared storage. In this tutorial, we’ll create a storage server with Ubuntu 18.04 and configure it so that our ESXi hosts can access it over iSCSI (with multipathing).
Prerequisites
Ideally you should have read the previous four tutorials before following the procedure in this tutorial.
- Part 1: Installing ESXi
- Part 2: Deploy and Configure a pfSense VM
- Part 3: Deploying vCenter Server Appliance
- Part 4: Adding ESXi Hosts to a Cluster in vCenter
After completing the steps in the previous tutorials, you will be at a point where you have:
- Three ESXi 6.7 VMs running on VMware Fusion.
- The first ESXi VM contains a pfSense firewall VM with built in DNS Resolver.
- The first ESXi VM also contains the vCenter Server Appliance.
- The ability to access the hosts and vCenter from the Mac using domain names.
- A cluster with three ESXi 6.7 hosts added to it.
For this tutorial, we need to download the Ubuntu Server 18.04 ISO image from the Ubuntu releases download page.
Once you’ve followed the procedures in the previous tutorials and downloaded the Ubuntu Server ISO, we’ll begin the first step: creating the virtual machine in VMware Fusion.
Step 1: Create Ubuntu Server 18.04 VM with VMware Fusion
First of all, open VMware Fusion, and make sure the first ESXi host containing the virtual pfSense firewall is running. We’ll need this to act as a gateway so the storage server has internet access.

Open the new virtual machine wizard by clicking + and then New…

Double click the Install from disc or image button.

Click Use another disc or disc image… then browse your Mac for the Ubuntu Server ISO that you downloaded at the start of this tutorial.
Once opened, the ISO will be added to the list. Select it, then click Continue.

Tick Use Easy Install, enter the login credentials for the machine, then click Continue.

Click Customize Settings.

Save the VM to the same folder as your ESXi VMs, giving it a name of us01.

Once saved, the VM Settings screen will load. We have a few config changes to make here. We need to add two extra network adapters, lower the memory and increase the hard disk space.
Start by clicking on Processors & Memory.

Change the memory to 512 MB, then click Show All to go back to the settings screen.

Click on Hard Disk (SCSI), change it to 80 GB, click Apply then click Show All.

Click Add Device…, choose Network Adapter then click Add…

Connect the network adapter to the vSphere network.

Repeat the above step to add another network adapter and change the first network adapter so that it’s connected to the vSphere network as well.
Close the settings windows.

Click play to start the VM and wait for the installer to stop at the configure network section.

Select the first network adapter and make a note of the interface names because we’ll need them later when configuring the iSCSI networks.

The network will time out because the vSphere network doesn’t have DHCP. We want this to happen because we’re going to configure a static IP.
Press Enter to continue.

Choose Configure network manually.

Type in 10.1.1.201 for the IP address (remember, this is the management IP address of us01 in the diagram of Part 1).

Type in 10.1.1.251 for the gateway IP address (the IP of the virtual pfSense firewall) and press Enter to continue. Use the same address for the nameservers.

Once the network is configured, the installation will continue and the machine will reboot. On first boot, VMware tools will be installed and the authentication service will start. This might pause for a few minutes.

After waiting patiently for the OS to boot, you’re now ready to login. Enter your login credentials.

After logging in for the first time, install the SSH server, your preferred text editor and net-tools:
sudo apt install openssh-server vim net-tools
You should also be able to ping the VM from your Mac (using the hostname us01 if you followed the hosts file editing steps in the first tutorial):
$ ping us01
PING us01.graspingtech.com (10.1.1.201): 56 data bytes
64 bytes from 10.1.1.201: icmp_seq=0 ttl=64 time=0.313 ms
64 bytes from 10.1.1.201: icmp_seq=1 ttl=64 time=0.498 ms
^C
--- us01.graspingtech.com ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.313/0.405/0.498/0.093 ms
SSH onto the VM; in the next step, we’ll configure the remaining network adapters.
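If you haven’t connected before, a plain session from the Mac is enough; the username below is just a placeholder for whatever you entered during Easy Install.
ssh youruser@us01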
Step 2: Configure the Network
Open the netplan configuration file.
sudo vim /etc/netplan/01-netcfg.yaml
To begin with it should look like this:
network:
  version: 2
  renderer: networkd
  ethernets:
    ens33:
      addresses: [ 10.1.1.201/24 ]
      gateway4: 10.1.1.251
      nameservers:
        addresses:
          - "10.1.1.251"
We’re going to extend this config file so that the two remaining network adapters are enabled, each with its own IP address on its own VLAN.
Remember the diagram in Part 1 shows the addresses we want to assign are:
10.10.1.201 (VLAN 101)
10.10.2.201 (VLAN 102)
Let’s do this by modifying the configuration file so that it now looks like the following:
network:
  version: 2
  renderer: networkd
  ethernets:
    ens33:
      addresses: [ 10.1.1.201/24 ]
      gateway4: 10.1.1.251
      nameservers:
        addresses:
          - "10.1.1.251"
    ens34:
      dhcp4: no
    ens35:
      dhcp4: no
  vlans:
    ens34.101:
      id: 101
      addresses: [ 10.10.1.201/24 ]
      link: ens34
    ens35.102:
      id: 102
      addresses: [ 10.10.2.201/24 ]
      link: ens35
Apply the configuration by running the following command.
sudo netplan apply
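As an aside, netplan also has a safer apply mode that reverts the change automatically after a short timeout unless you confirm it, which is handy if you ever edit this file over SSH later on.
sudo netplan try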
Now if you run ifconfig, you should see the IP addresses assigned to the adapters we just configured.
ens34.101: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.10.1.201  netmask 255.255.255.0  broadcast 10.10.1.255
        inet6 fe80::20c:29ff:fef1:f3ab  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:f1:f3:ab  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 9  bytes 746 (746.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
ens35.102: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.10.2.201  netmask 255.255.255.0  broadcast 10.10.2.255
        inet6 fe80::20c:29ff:fef1:f3b5  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:f1:f3:b5  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 656 (656.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
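If you prefer the iproute2 tools over net-tools, the same information is available in a much more compact form; look for the 10.10.1.201/24 and 10.10.2.201/24 entries next to ens34.101 and ens35.102.
ip -br addr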
In the next step we will create the port groups in vCenter and make sure we can communicate with the hosts over the iSCSI networks by pinging each host.
Step 3: Create iSCSI Port Groups
The next thing we need to do is add two new VMkernel adapters to our standard virtual switches so that our hosts can communicate with the storage server we just created over multiple paths.
Log in to vCenter:
- Click on the first ESXi host
- Click Configure
- Click Virtual switches
- Click ADD NETWORKING

Select VMkernel Network Adapter then click NEXT.

Select vSwitch0 then click NEXT.

Give the network label a name of ISCSI-1, assign 101 as the VLAN ID then click NEXT.

Choose Use static IPv4 settings and type in the IP address of esxi01 from the diagram in Part 1, which is 10.10.1.11. Assign 255.255.255.0 for the subnet mask and then click NEXT.

Click FINISH to create the new port group.

You should see the ISCSI-1 port group on the virtual switch. We’ll now add ISCSI-2 by clicking ADD NETWORKING again.

Do the same as last time, except use ISCSI-2 for the network label and 102 for the VLAN ID.

This time, for the IP address, use 10.10.2.11.

Click FINISH to create the second port group.
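If you’d rather script the port group creation than click through the wizard on every host, something along these lines should work from the ESXi shell. Treat it as a sketch: the vmk1 and vmk2 interface names assume vmk0 is already taken by management, so check which numbers are free on your hosts first.
# ISCSI-1 on VLAN 101
esxcli network vswitch standard portgroup add --portgroup-name=ISCSI-1 --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name=ISCSI-1 --vlan-id=101
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=ISCSI-1
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.10.1.11 --netmask=255.255.255.0 --type=static
# ISCSI-2 on VLAN 102
esxcli network vswitch standard portgroup add --portgroup-name=ISCSI-2 --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name=ISCSI-2 --vlan-id=102
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=ISCSI-2
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=10.10.2.11 --netmask=255.255.255.0 --type=static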

If you click on the name of one of the ISCSI port groups, you’ll notice that both uplinks are used. We want to change this so that each iSCSI network uses a different uplink: ISCSI-1 will use vmnic0 and ISCSI-2 will use vmnic1.
Click on … next to ISCSI-1.

Click Edit Settings.

Click Teaming and failover, click on vmnic1 then press the down arrow until it’s in the Unused adapters section. Click OK to confirm.

Do the same thing for ISCSI-2, except move vmnic0 into the Unused adapters section.
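The uplink change can also be made with esxcli by setting the active uplink list explicitly for each port group; the commands below are a sketch, so check the Teaming and failover screen afterwards to make sure the other uplink really ends up under Unused adapters.
esxcli network vswitch standard portgroup policy failover set --portgroup-name=ISCSI-1 --active-uplinks=vmnic0
esxcli network vswitch standard portgroup policy failover set --portgroup-name=ISCSI-2 --active-uplinks=vmnic1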

Once done, you should be able to ping both networks from the Ubuntu VM.
$ ping 10.10.1.11
PING 10.10.1.11 (10.10.1.11) 56(84) bytes of data.
64 bytes from 10.10.1.11: icmp_seq=1 ttl=64 time=1.03 ms
64 bytes from 10.10.1.11: icmp_seq=2 ttl=64 time=0.394 ms
--- 10.10.1.11 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.394/0.713/1.032/0.319 ms
$ ping 10.10.2.11
PING 10.10.2.11 (10.10.2.11) 56(84) bytes of data.
64 bytes from 10.10.2.11: icmp_seq=1 ttl=64 time=1.39 ms
64 bytes from 10.10.2.11: icmp_seq=2 ttl=64 time=0.798 ms
--- 10.10.2.11 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.798/1.095/1.392/0.297 ms
Repeat the steps above for each ESXi host in the cluster, except use 12 and 13 for the host portion of the IP addresses. For example, esxi02 will be 10.10.1.12 and 10.10.2.12.
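Before moving on, it’s worth checking the reverse direction too, because vmkping forces the traffic out of a specific VMkernel interface. The vmk names below assume the numbering used earlier; list the interfaces first if you’re unsure.
esxcli network ip interface ipv4 get
vmkping -I vmk1 10.10.1.201
vmkping -I vmk2 10.10.2.201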
Step 4: Install SCST iSCSI Target
Now that the network is configured and all three ESXi hosts can communicate with the storage server, we need to install the iSCSI target software.
I’ve already written a tutorial on how to do this in the post titled: Creating a Ubuntu iSCSI SAN Storage Server for VMware vSphere ESXi.
Open the link above in a new tab and follow the instructions to install SCST, create a folder and create a 1 TB thin-provisioned disk image.
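The linked post covers the details, but for reference, a thin-provisioned backing file is just a sparse file, which can be created with truncate. The path and file name below are examples only; use whatever the linked tutorial uses so that the device definition in scst.conf matches.
sudo mkdir -p /disks
sudo truncate -s 1T /disks/disk1.img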
When you get to the section on how to export the disk image as an iSCSI LUN, we need to restrict access so that only the two iSCSI networks are allowed to reach the target. This can be done by modifying the following section:
TARGET_DRIVER iscsi {
    enabled 1
    TARGET iqn.2019-10.graspingtech.com:disk1 {
        enabled 1
        rel_tgt_id 1
        LUN 0 disk1
    }
}
So that it uses the allowed_portal option for both networks:
TARGET_DRIVER iscsi {
    enabled 1
    TARGET iqn.2019-10.graspingtech.com:disk1 {
        enabled 1
        allowed_portal 10.10.1.201
        allowed_portal 10.10.2.201
        rel_tgt_id 1
        LUN 0 disk1
    }
}
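After saving the change, the configuration needs to be reloaded before it takes effect. Assuming the file lives at /etc/scst.conf as in the linked tutorial, scstadmin can apply it in place.
sudo scstadmin -config /etc/scst.conf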
Follow the rest of the tutorial until you get to the section on adding the IP address via Dynamic Discovery. Add both IP addresses, 10.10.1.201 and 10.10.2.201.
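The same discovery can be done with esxcli if you prefer the command line. The adapter name is an assumption here; the software iSCSI adapter is usually something like vmhba64 or vmhba65, so list the adapters first and substitute yours.
esxcli iscsi adapter list
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba65 --address=10.10.1.201
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba65 --address=10.10.2.201
esxcli storage core adapter rescan --adapter=vmhba65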

When you click on the Paths tab you should see two paths and one of them with active I/O.

Follow the rest of the tutorial in the link above until you have created the iSCSI disk named iscsi-disk01. Once the VMFS volume has been created, you can repeat the dynamic discovery and rescan adapters steps on each of the other hosts, and the volume will show up automatically without having to go through the add VMFS volume step.
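To confirm the datastore is visible on a host without clicking through the UI, you can list the VMFS extents after the rescan; iscsi-disk01 should appear in the output on every host.
esxcli storage vmfs extent list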
Step 5: Enable Round Robin Multipathing
If you want both network cards to be active at the same time, to increase the bandwidth to the storage server, you can enable round robin multipathing by following the steps below.
Click on the Storage tab, click Configure then Connectivity and Multipathing. Click on the host then click Edit Multipathing…

The multipathing policies dialog will open. Click the path selection policy dropdown and change it to Round Robin (VMware).

Once selected, click OK to apply the policy.

Now you should see both paths as being active.

Repeat the same steps for the remaining hosts.
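The policy change can also be scripted per host. As a sketch, list the NMP devices first to find the identifier of the 1 TB iSCSI LUN (it will start with naa. or eui.), then set the path selection policy on it; the device ID below is a placeholder.
esxcli storage nmp device list
esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR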
Step 6: Live Migrate the VMs to the Storage Server
That’s all of the configuration finished. It’s now time to test it by performing a live migration (Storage vMotion) of our VMs to the iSCSI disk.
First we’ll migrate the pfSense firewall. Right click on fw01 then click Migrate…

Select Change storage only for the migration type then click NEXT.

Choose the 1 TB iSCSI disk then click NEXT.

Click FINISH to start the migration.

Repeat the steps above for the vCenter machine (vc01).

If all goes to plan, you should see that both VMs are now on the iSCSI disk.

Conclusion
In this tutorial, we created a storage server, configured iSCSI multipathing and migrated our VMs from local storage onto the centralized 1 TB iSCSI LUN. We’re now one step closer to being able to use vMotion, DRS and HA.
Coming next
In the next tutorial, we’ll enable vMotion so that we can test moving VMs from one host to another while they’re still powered on.
Part 6: Create VMkernel port group for vMotion and enable DRS