PowerPath Virtual Edition for vSphere

This past week, the vSpecialist new-hire Team003 was down in EMC's Atlanta offices putting together a Vblock. As a VMware veteran, I took it upon myself to work on something I was less familiar with. Among other things, I took on the deployment of PowerPath/VE for VMware on the UCS cluster. As it turns out (and as many of you already know), this was not a difficult technical challenge, but it was good experience for me.

PowerPath is a critical component of any vSphere implementation, and probably one of the most underutilized and overlooked. While VMware vSphere has native multipathing (NMP) built in, for any given volume it drives only a single active path at a time (active/passive) rather than all paths at once (active/active). Furthermore, while VMware has introduced round-robin for rough load distribution across available storage paths, it is not true load balancing. Finally, the built-in NMP uses a far less sophisticated path failover/failback algorithm than PowerPath/VE.
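If you want to see where you stand before (or after) deploying PowerPath/VE, it helps to audit which path selection policy each device is actually using. Here is a minimal sketch, not from the original post, using the open-source pyVmomi SDK; the vCenter name and credentials are placeholders, and I'm assuming devices claimed by PowerPath/VE show up under its own multipathing plugin rather than NMP.

```python
# Sketch: list each LUN's path selection policy per host via pyVmomi.
# Hostnames/credentials are placeholders (assumptions, not from the post).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def report_path_policies(vc_host, user, password):
    # Lab-style connection that skips certificate validation; tighten for production.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host=vc_host, user=user, pwd=password, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            storage = host.configManager.storageSystem
            mp_info = storage.storageDeviceInfo.multipathInfo
            print(host.name)
            for lun in mp_info.lun:
                # lun.policy.policy is the PSP name, e.g. VMW_PSP_FIXED,
                # VMW_PSP_MRU, or VMW_PSP_RR for NMP-owned devices.
                print("  %s -> %s (%d paths)" % (lun.id, lun.policy.policy, len(lun.path)))
    finally:
        Disconnect(si)

if __name__ == "__main__":
    report_path_policies("vcenter.example.local", "administrator", "password")
```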

PowerPath, on the other hand, employs true load balancing across all available paths, providing far better use of the fabric, as well as active path management. This means PowerPath not only manages active paths, continuously balancing I/O across the fabric, but also failed paths, restoring paths when possible. Finally, storage teams can monitor and manage performance of any PowerPath-connected device from a centralized console. Ultimately, this means a higher density of virtual machines per host, decreasing the number of required hosts, increasing operational efficiency, and lowering the cost of the solution.

PowerPath/VE is installed through VUM (VMware Update Manager). This wasn’t really a problem, once I decided to follow the instructions in the manual and create a Host Extension baseline rather than a Host Upgrade or Host Patch baseline. Once I created the baseline, I simply took the cluster one ESX server at a time, put each host in maintenance mode, and applied the VUM baseline. It does require a reboot of the host. Once installed, PowerPath will claim any available FC paths, though it won’t change the path policy until/unless it is licensed (licensing the software was far more difficult than the implementation).

I never did get the license server to function correctly, though I did get it running, and we had licenses. Ultimately, I decided to use per-host (‘unserved’) licensing. This requires knowing the UID of the ESX host to generate production licenses, and to do that, you have to install and run the PowerPath remote tools (rpowermt) from either a Windows or Linux host (rpowermt host= check_registration). You have to include the host UID in the request for the license. Once you have obtained and activated your license file through the license management portal on Powerlink, you can register your host(s) with ‘rpowermt host= register’. Once that is done, PowerPath should change the path policy to the appropriate policy for the array in use, though the rpowermt command can be used to exercise control over which paths will or won’t be managed by NMP or PowerPath/VE.
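If you have more than a handful of hosts, it's worth scripting the two rpowermt steps above. Here is a minimal automation sketch under some assumptions: the rpowermt binary is on the PATH of the Windows or Linux management host, the host names are placeholders, and only the check_registration and register subcommands mentioned in this post are used; the exact output format is left unparsed since it isn't shown here.

```python
# Sketch: run rpowermt against a list of ESX hosts to collect registration info
# and, later, register the per-host ("unserved") licenses.
import subprocess

ESX_HOSTS = ["esx01.example.local", "esx02.example.local"]  # hypothetical hosts

def rpowermt(host, *args):
    """Run one rpowermt command against a single ESX host and return its output."""
    cmd = ["rpowermt", "host=%s" % host] + list(args)
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

def collect_host_uids():
    """Gather the output needed to request unserved licenses (includes the host UID)."""
    for host in ESX_HOSTS:
        print("==== %s ====" % host)
        print(rpowermt(host, "check_registration"))

def register_hosts():
    """After the license files are activated on Powerlink, register each host."""
    for host in ESX_HOSTS:
        print(rpowermt(host, "register"))

if __name__ == "__main__":
    collect_host_uids()
    # register_hosts()  # run once the licenses have been generated and activated
```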

You can find everything you need about PowerPath/VE on Powerlink.
