With the improvements in iSCSI devices over the years, it’s not unusual to find environments choosing iSCSI over Fibre Channel implementations.
I believe it’s clear to everyone that Fibre Channel provides the fastest and most reliable solution these days, and of course it’s also the most expensive one, right?!
But iSCSI solutions have their merits. So, how do you get the best performance out of them?
Well, the most obvious approach would be to use faster connections.
If you are already using 10Gb connections, you probably won’t see much difference from adding more than one connection.
But if you have 1Gb connections and cannot migrate to 10Gb, add multiple NICs and paths to your configuration.
Configuring iSCSI Multipathing.
I won’t try to cover all the aspects of how to accomplish that here, because different iSCSI storage vendors present storage to servers in different ways: some present multiple LUNs on a single target, while others present multiple targets with one LUN each.
My best advice is to check with your storage vendor how to configure it for your specific environment; they all have documentation about it.
So, what’s this post about?
My first thought was to warn about a common misconception: some people tend to believe that if you just add more NICs to the virtual switch where the VMkernel port is configured, they will automatically get load balancing and greater throughput, just like virtual machine connections do.
That’s not true!
Since the vSphere iSCSI stack relies on port binding, you will end up with just one active connection per iSCSI initiator/target pair, regardless of how many NICs you have attached to your vSwitch.
To accomplish multipathing you will need to configure additional VMkernel port groups and bind each NIC to its own port group.
Let’s see how it works.
1 – Configure additional VMkernel port groups
Configure as many port groups as you have NICs dedicated to iSCSI traffic.
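On ESX/ESXi 4.x this can also be done from the command line. A minimal sketch, assuming a vSwitch named vSwitch1 with two uplinks; the port group names and IP addresses below are examples only, so adjust them to your environment:

```shell
# Create one port group per iSCSI NIC on vSwitch1 (names are examples)
esxcfg-vswitch -A iSCSI-1 vSwitch1
esxcfg-vswitch -A iSCSI-2 vSwitch1

# Create one VMkernel port per port group (IPs/netmask are examples)
esxcfg-vmknic -a -i 10.0.0.11 -n 255.255.255.0 iSCSI-1
esxcfg-vmknic -a -i 10.0.0.12 -n 255.255.255.0 iSCSI-2
```

The VMkernel ports created this way will show up as vmk1, vmk2, and so on, which is what you will bind in step 3.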
2 – Map each iSCSI port to just one active NIC.
By default, all NICs are active; you will need to override the vSwitch failover order policy so that each port group maps to only one corresponding active NIC, with the remaining NICs set to unused.
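On ESX/ESXi 4.x this override is done per port group in the vSphere Client (NIC Teaming tab, "Override switch failover order"). For reference, on ESXi 5.0 and later the equivalent can be scripted; a sketch, assuming the example port group and uplink names used above:

```shell
# Pin each iSCSI port group to a single active uplink (ESXi 5.0+ syntax);
# uplinks not listed as active become unused for that port group.
esxcli network vswitch standard portgroup policy failover set -p iSCSI-1 -a vmnic1
esxcli network vswitch standard portgroup policy failover set -p iSCSI-2 -a vmnic2
```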
3 – Binding ports
Now the final piece: you will bind vmknics to iSCSI initiators.
First, identify the names of the iSCSI VMkernel ports (get them from the vSphere Client, under Networking).
Second, identify the vmhba names (get them from the vSphere Client, under Storage Adapters).
Finally, just run the command that binds them:
esxcli swiscsi nic add -n port_name -d vmhba
In our example it will be:
esxcli swiscsi nic add -n vmk1 -d vmhba32
esxcli swiscsi nic add -n vmk2 -d vmhba32
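To confirm the binding took effect before checking the Paths view, you can list the vmknics bound to the adapter; a quick sketch, still assuming the vmhba32 adapter from the example:

```shell
# List the VMkernel NICs currently bound to the software iSCSI adapter
esxcli swiscsi nic list -d vmhba32
```

Both vmk1 and vmk2 should appear in the output.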
If you display the Paths view for the vmhba32 adapter through the vSphere Client, you see that the adapter uses two paths to access the same target. The runtime names of the paths are vmhba32:C1:T1:L0 and vmhba32:C2:T1:L0. C1 and C2 in this example indicate the two network adapters that are used for multipathing.
You can now configure your discovery targets and rescan your storage adapter.
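The rescan can be done from the vSphere Client or from the command line; a sketch, assuming the same vmhba32 adapter:

```shell
# Rescan the software iSCSI adapter so the new paths and LUNs are discovered
esxcfg-rescan vmhba32
```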
AGAIN: this is more of a heads-up than a procedure to follow. Remember, several factors influence how you set it up, such as whether you use software iSCSI or hardware-dependent adapters for your card connections, so check with your vendor.
If you want to read more:
VMware has a good guide: the iSCSI SAN Configuration Guide.
The Virtual Geek blog also has very good information about it.
Now it’s up to you ;)
Tuesday, December 6, 2011
Who am I

- Eduardo Meirelles da Rocha
- I’m an IT specialist with over 15 years of experience, ranging from IT infrastructure to management products, with troubleshooting and project management skills in medium to large environments. Nowadays I’m working for VMware as a Consulting Architect, helping customers embrace the Cloud Era and make their journey successful. Despite the fact that I’m a VMware employee, these postings reflect my own opinion and do not represent VMware’s positions, strategies, or opinions. Reach me at @dumeirell