Distributed virtual and physical routing in VMware NSX for vSphere

This post is intended to be a primer on distributed routing in VMware NSX for vSphere, using a basic scenario of L3 forwarding between both virtual and physical subnets.  I'm not going to bore you with all of the laborious details, just the stuff that matters for the purpose of this discussion.

In VMware NSX for vSphere there are two different types of NSX routers you can deploy in your virtual network.

  1. The NSX Edge Services Router (ESR)
  2. The NSX Distributed Logical Router (DLR)

Both the ESR and DLR can run dynamic routing protocols, or not.  They can just have static/default routes if you like.

The ESR is a router in a VM (it also does other L4-L7 services like FW, LB, NAT, and VPN, if you want).  Both the control and data planes of the ESR are in the VM.  This VM establishes routing protocol sessions with other routers, and all of the traffic flows through this VM.  It's like a router, but in a VM.  This should be straightforward, not requiring much explanation.

The ESR is unique because it's more than just a router.  It's also a feature-rich firewall, load balancer, and VPN device.  Because of that, it works well as the device handling the North-South traffic at the perimeter of your virtual network.  You know, the traffic coming from and going to the clients, other applications, other tenants.  And don't be fooled.  Just because it's a VM doesn't mean the performance is lacking.  Layer 4 firewall and load balancer operations can reach and exceed 10 Gbps throughput, with high connections per second (cps).  Layer 7 operations also perform well compared to hardware counterparts.  And because it's a VM, well, you can have virtually unlimited ESRs running in parallel, each establishing the secure perimeter for its own "tenant" enclave.

The DLR is a different beast.  With the DLR the data plane is distributed in kernel modules at each vSphere host, while only the control plane exists in a VM.  And that control plane VM also relies on the NSX controller cluster to push routing updates to the kernel modules.

The DLR is unique because it enables each vSphere hypervisor host to perform L3 routing between virtual and physical subnets in the kernel at line rate.  The DLR is configured and managed like one logical router chassis, where each hypervisor host is like a logical line card.  Because of that the DLR works well as the “device” handling the East-West traffic in your virtual network.  You know, the traffic between virtual machines, the traffic between virtual and physical machines, all of that backend traffic that makes your application work.  We want this traffic to have low latency and high throughput, so it just makes sense to do this as close to the workload as possible, hence the DLR.

The ESR and DLR are independent.  You can deploy both in the same virtual network, just one, or none.

Now that we’ve established the basic difference and autonomy between the ESR and DLR, in this blog we’ll focus on the DLR.  Let’s look at a simple scenario where we have just the DLR and no ESR.

Let’s assume a simple situation where our DLR is running on two vSphere hosts (H1 and H2) and has three logical interfaces:

  • Logical Interface 1: VXLAN logical network #1 with VMs (LIF1)
  • Logical Interface 2: VXLAN logical network #2 with VMs (LIF2)
  • Logical Interface 3: VLAN physical network with physical hosts or routers/gateways (LIF3)

Routers have interfaces with IP addresses, and the DLR is no different.  Each vSphere host running the DLR has an identical instance of these three logical interfaces, with identical IP and MAC addresses (with the exception of the MAC address on LIF3).

  • The IP address and MAC address on LIF1 is the same on all vSphere hosts (vMAC)
  • The IP address and MAC address on LIF2 is the same on all vSphere hosts (vMAC)
  • The IP address on LIF3 is the same on all vSphere hosts, however the MAC address on LIF3 is unique per vSphere host (pMAC)

LIFs attached to physical VLAN subnets will have unique MAC addresses per vSphere host.

Side note: the pMAC cited here is not the physical NIC MAC.  It’s different.
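To make the addressing concrete, here is a minimal sketch of how you can picture the LIF table each host's DLR kernel module carries.  This is illustrative Python only, not NSX code, and every IP, segment name, and MAC value in it is made up:

```python
# Illustrative model only; the addresses, segment names, and vMAC/pMAC values are made up.
VMAC = "02:50:56:56:44:52"          # the virtual MAC shared by every host for VXLAN LIFs

def dlr_lifs(host_pmac):
    """Return the three DLR logical interfaces as seen by one vSphere host."""
    return {
        "LIF1": {"network": "VXLAN segment 1", "ip": "10.10.1.1/24",     "mac": VMAC},
        "LIF2": {"network": "VXLAN segment 2", "ip": "10.10.2.1/24",     "mac": VMAC},
        # VLAN-backed LIF: same IP everywhere, but a per-host pMAC
        "LIF3": {"network": "VLAN 100",        "ip": "192.168.100.1/24", "mac": host_pmac},
    }

h1 = dlr_lifs(host_pmac="02:50:56:00:00:01")   # hypothetical pMAC of host H1
h2 = dlr_lifs(host_pmac="02:50:56:00:00:02")   # hypothetical pMAC of host H2

assert h1["LIF1"] == h2["LIF1"]                # identical IP and vMAC on the VXLAN LIFs
assert h1["LIF3"]["ip"] == h2["LIF3"]["ip"]    # same IP on the VLAN LIF...
assert h1["LIF3"]["mac"] != h2["LIF3"]["mac"]  # ...but a unique pMAC on each host
```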

The DLR kernel modules will route between VXLAN subnets.  If, for example, VM1 on Logical Network #1 wants to communicate with VM2 on Logical Network #2, VM1 will use the IP address on LIF1 as its default gateway, and the DLR kernel module will route the traffic between LIF1 and LIF2 directly on the vSphere host where VM1 resides.  The traffic will then be delivered to VM2, which might be on the same vSphere host, or perhaps another vSphere host, in which case VXLAN encapsulation on Logical Network #2 will be used to deliver the traffic to the hypervisor host where VM2 resides.  Pretty straightforward.
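If it helps to see that forwarding decision spelled out, here is a small sketch of the logic in plain Python.  It is not NSX code; the subnets, VM addresses, and host placement are all invented:

```python
import ipaddress

# Hypothetical LIF subnets and VM placement; none of this is real NSX data.
LIF_SUBNETS = {
    "LIF1": ipaddress.ip_network("10.10.1.0/24"),   # VXLAN logical network #1
    "LIF2": ipaddress.ip_network("10.10.2.0/24"),   # VXLAN logical network #2
}
VM_LOCATION = {"10.10.2.20": "H2"}                  # VM2 lives on host H2

def route_east_west(src_host, dst_ip):
    """Mimic the DLR kernel module on src_host routing a packet toward dst_ip."""
    dst = ipaddress.ip_address(dst_ip)
    egress_lif = next(lif for lif, net in LIF_SUBNETS.items() if dst in net)
    dst_host = VM_LOCATION.get(dst_ip, src_host)
    if dst_host == src_host:
        return f"routed out {egress_lif}, delivered locally on {src_host}"
    return f"routed out {egress_lif}, VXLAN-encapsulated {src_host} -> {dst_host}"

# VM1 (10.10.1.10 on H1) talks to VM2 (10.10.2.20): routed on H1, then encapsulated to H2.
print(route_east_west("H1", "10.10.2.20"))
```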

VMware NSX Distributed Logical Router for vSphere

The DLR kernel modules can also route between physical and virtual subnets.  Let’s see what happens when a physical host PH1 (or router) on the physical VLAN wants to deliver traffic to a VM on a VXLAN logical network.

PH1 either has a route or default gateway pointing at the IP address of LIF3.
PH1 issues an ARP request for the IP address present on LIF3.
Before any of this happened, the NSX controller cluster picked one vSphere host to be the Designated Instance (DI) for LIF3.

  • The DI is only needed for LIFs attached to physical VLANs.
  • There is only one DI per LIF.
  • The DI host for one LIF might not be the same DI host for another LIF.
  • The DI is responsible for ARP resolution.

Let's presume H1 is the vSphere host selected as the DI for LIF3, so H1 responds to PH1's ARP request, replying with its own unique pMAC on its LIF3.
PH1 then delivers the traffic to the DI host, H1.
H1 then performs a routing lookup in its DLR kernel module.
The destination VM may or may not be on H1.
If so, the packet is delivered directly. (i)
If not, the packet is encapsulated in a VXLAN header and sent directly to the destination vSphere host, H2. (ii)
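To summarize the ARP side of the steps above: only the Designated Instance answers ARP on a VLAN LIF.  A minimal sketch, in illustrative Python with made-up host names and pMACs:

```python
# Hypothetical DI assignment pushed out by the NSX controller cluster.
DESIGNATED_INSTANCE = {"LIF3": "H1"}
PMAC = {"H1": "02:50:56:00:00:01", "H2": "02:50:56:00:00:02"}   # made-up per-host pMACs

def handle_arp_request(host, lif):
    """Only the DI for a VLAN-backed LIF replies to ARP; every other host stays silent."""
    if DESIGNATED_INSTANCE.get(lif) == host:
        return f"{host} replies with its own pMAC {PMAC[host]}"
    return f"{host} ignores the ARP request"

for h in ("H1", "H2"):
    print(handle_arp_request(h, "LIF3"))
```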

For (ii) return traffic, the vSphere host with the VM (H2 in this case) will perform a routing lookup in its DLR kernel module and see that the output interface to reach PH1 is its own LIF3.  Yes, if a DLR has a LIF attached to a physical VLAN, each vSphere host running the DLR had better be attached to that VLAN.

Each LIF on the DLR has its own ARP table.  By consequence, each vSphere host in the DLR carries an ARP table for each LIF.
H2's DLR ARP table for LIF3 may not yet contain an entry for PH1, and because H2 is not the DI for LIF3, it is not allowed to ARP for it.  So instead H2 sends a UDP message to the DI host (H1) asking it to perform the ARP resolution on its behalf.

Note: The NSX controller cluster, upon picking H1 as the DI, informed all hosts in the DLR that H1 was the DI for LIF3.

The DI host for LIF3 (H1) issues an ARP request for PH1 and subsequently sends a UDP response back to H2 containing the resolved information. H2 now has an entry for PH1 on its LIF3 ARP table and delivers the return traffic directly from the VM to PH1.  The DI host (H1) is not in the return data path.
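That exchange can be sketched in a few lines.  This is illustrative Python only (the hosts, IPs, and MACs are invented), showing a non-DI host asking the DI to resolve PH1 and then forwarding directly itself:

```python
# Illustrative only: how H2 (not the DI) resolves PH1 yet still forwards directly.
DI_FOR_LIF3 = "H1"
ARP_TABLE = {"H1": {}, "H2": {}}            # per-host ARP table for LIF3
PH1_IP, PH1_MAC = "192.168.100.50", "00:11:22:33:44:55"   # made-up physical host PH1

def resolve_via_di(host, dst_ip):
    """A non-DI host asks the DI (over UDP) to ARP on its behalf; the DI ARPs itself."""
    if host != DI_FOR_LIF3:
        print(f"{host}: UDP request to DI {DI_FOR_LIF3}, please ARP for {dst_ip}")
    print(f"{DI_FOR_LIF3}: ARPs on the VLAN and learns {dst_ip} is at {PH1_MAC}")
    ARP_TABLE[host][dst_ip] = PH1_MAC        # answer cached in the asking host's LIF3 table

def send_return_traffic(host, dst_ip):
    if dst_ip not in ARP_TABLE[host]:
        resolve_via_di(host, dst_ip)
    print(f"{host}: forwards out its LIF3 straight to {ARP_TABLE[host][dst_ip]} (DI not in the data path)")

send_return_traffic("H2", PH1_IP)
```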

All of that happened with just a DLR and static/default routes (no routing protocols).

The DLR can also run IP routing protocols — both OSPF and BGP.

In the case where the DLR is running routing protocols with an upstream router, the DLR will consume two IP addresses on that subnet: one for the LIF in the DLR kernel module on each vSphere host, and one for the DLR control VM.  The IP address on the DLR control VM is not a LIF; it is not present in the DLR kernel modules of the vSphere hosts.  It exists only on the control VM and is used to establish routing protocol sessions with other routers — this IP address is referred to as the "Protocol Address".

The IP address on the LIF will be used for the actual traffic forwarding between the DLR kernel modules and the other routers — this IP address is referred to as the “Forwarding Address” – and is used as the next-hop address in routing advertisements.
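For illustration only, here is how the two addresses might be laid out on an invented transit subnet; none of these values come from the post, they just make the two roles concrete:

```python
# Invented transit-subnet addressing, purely to illustrate the two roles.
transit_subnet = "192.168.10.0/24"
upstream_router = "192.168.10.1"

dlr_addresses = {
    # Lives only on the DLR control VM; sources the OSPF/BGP sessions to the upstream router.
    "protocol_address":   "192.168.10.3",
    # Lives on the LIF in every host's kernel module; used as the next hop in advertisements.
    "forwarding_address": "192.168.10.2",
}

# A route the upstream router would learn from the DLR control VM:
advertised = {"prefix": "10.10.1.0/24", "next_hop": dlr_addresses["forwarding_address"]}
print(f"{upstream_router} learns {advertised['prefix']} via {advertised['next_hop']}")
```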

When the DLR has a routing adjacency with another router on a physical VLAN, the same process described earlier concerning Designated Instances happens when the other router ARPs for the DLR's next-hop forwarding address.  Pretty straightforward.

If, however, the DLR has a routing adjacency with the "other" router on a logical VXLAN network — such as with a router VM (e.g. the ESR) running on a vSphere host that is also running the DLR — then no Designated Instance process is needed, because the DLR LIF with the Forwarding Address will always be present on the same host as the "other" router VM.  How's that for a Brain Twister? ;)

The basic point here is that the DLR provides optimal routing between virtual and physical subnets, and can establish IP routing sessions with both virtual and physical routers.

One example where this would work might be a three-tier application where each tier is its own subnet.  The Web and App tiers might be virtual machines on VXLAN logical networks, whereas the Database machines might be non-virtualized physical hosts on a VLAN.  The DLR can perform optimal routing between these three subnets, virtual and physical, as well as dynamically advertise new subnets to the data center WAN or Internet routers using OSPF or BGP.
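As a rough sketch of that layout (all prefixes and names invented, nothing here comes from a real deployment), the DLR's connected routes might look like this:

```python
# Invented subnets for the three-tier example; the DB tier sits on a VLAN-backed LIF.
dlr_connected_routes = {
    "web":      {"prefix": "10.10.1.0/24",     "lif": "LIF1", "backing": "VXLAN"},
    "app":      {"prefix": "10.10.2.0/24",     "lif": "LIF2", "backing": "VXLAN"},
    "database": {"prefix": "192.168.100.0/24", "lif": "LIF3", "backing": "VLAN"},
}

# Every prefix is routed in-kernel on each host, and each can be advertised upstream via OSPF or BGP.
for tier, route in dlr_connected_routes.items():
    print(f"{tier:9s} {route['prefix']:18s} via {route['lif']} ({route['backing']})")
```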

Pretty cool, right?

Stay tuned.  More to come…

Cheers,
Brad



Opening the Virtual Machine Remote Console through PowerCLI

Posted on October 22, 2013 by Alan Renouf

With the 5.5 R1 release PowerCLI got even better. With the introduction of the new Open-VMConsoleWindow cmdlet you can access the virtual machine console of both vCenter Server and vCloud Director virtual machines. To open a virtual machine console window, simply pass a powered-on virtual machine to the Open-VMConsoleWindow cmdlet:

Get-VM "Win2k3" | Open-VMConsoleWindow

As a result, the cmdlet opens a Web page containing the virtual machine remote console:

You can even open the console in full screen mode – either by specifying the corresponding cmdlet parameter or by clicking the Full Screen button on the Web page:

Open-VMConsoleWindow -VM "Win2k3" -FullScreen

Unless configured otherwise, the cmdlet opens the virtual machine console in the default Web browser on your machine. If you want to use a different browser, you can do so by specifying it in the PowerCLI configuration. You will need to specify the full path to the browser’s executable file:

Set-PowerCLIConfiguration -VMConsoleWindowBrowser "C:\Program Files (x86)\Mozilla Firefox\firefox.exe"

To switch back to using the default browser, simply specify “$null” for the “VMConsoleWindowBrowser” setting.

How does this work under the covers?

In order to display the virtual machine console, PowerCLI uses the VMRC browser plug-in embedded in a Web page. This plug-in is installed during the installation of PowerCLI. It supports the 32-bit Internet Explorer, Mozilla Firefox, and Google Chrome browsers. The Web page is located at "/VMConsoleWindow/" – have a look at it if you want to get the full details or make modifications.

Opening the virtual machine console requires authentication in the form of a token. For vSphere virtual machines this token is acquired through the "AcquireCloneTicket()" method of the SessionManager API object, and for vCloud Director – through the "AcquireTicket()" method of the VirtualMachine API object. In both cases the token is valid for 30 seconds and for a single use. The token, virtual machine ID, and the host it's running on (along with other parameters) are passed as URL parameters to the above-mentioned Web page. If you want to get hold of just this URL (for example, if you want to run Firefox with a specific profile), you can do so by specifying the "UrlOnly" parameter:

$url = Open-VMConsoleWindow -VM "Win2k3" -UrlOnly

. "C:\Program Files (x86)\Mozilla Firefox\firefox.exe" -P Work $url
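As an aside, the console ticket the cmdlet relies on can also be fetched directly through the vSphere API with any language binding.  A rough sketch using pyVmomi is below; the hostname and credentials are placeholders, and this is not part of the original post's workflow:

```python
# Sketch only: fetch the one-time console ticket that Open-VMConsoleWindow uses for vSphere VMs.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()      # lab shortcut; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    # Same SessionManager call the post describes; the ticket is short-lived and single use.
    ticket = si.content.sessionManager.AcquireCloneTicket()
    print("clone ticket:", ticket)
finally:
    Disconnect(si)
```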

With this new cmdlet you now have one less reason to use the vSphere Web Client and can switch entirely to PowerCLI as a single tool for managing your entire virtual infrastructure! How cool is that!

Getting more from the Open-VMConsoleWindow

Another way in which this cmdlet can be used is to solve a common use case in virtual environments.  Often VM owners will want access to their console to troubleshoot an OS or to enter the BIOS; this often leads to vSphere Administrators giving people access to the vSphere Web Client or vSphere Client.

As we know, the full clients expose far more actions than just opening a console, so these VM owners often get new ideas about features they would like to use but do not have permissions for.  With this cmdlet and other free tools we can easily give users access to a console window for their VM without them knowing about the full vSphere clients.

An example of this is in the below video where we create a scripted application which we can send to our users in just 5 lines of code! Check it out…

This post was created by Dimitar Barfonchovski. Dimitar joined VMware and the PowerCLI team in 2007. He is a member of the development part of the team and his main responsibilities are the functional design and implementation of features for the vSphere and vCloud PowerCLI components.

As with all members of the team, he is working to deliver a good and valuable product. He is also working to improve all processes and tools involved in the product development and validation.


via Opening the Virtual Machine Remote Console through PowerCLI | VMware PowerCLI Blog – VMware Blogs.


Understanding vSphere Active Memory

Posted on October 4, 2013 by Mark Achtemichuk

We've all seen "Active" memory reported within various vSphere interfaces, but how many of us really know what it describes?  I think you might be surprised.

Let’s look at its definition to get us started:

Active Memory – "Amount of memory that is actively used, as estimated by VMkernel based on recently touched memory pages."

There are some very important details we need to break down here.  For example: What is it used for?  What does estimated mean?  How do we define recently?

Background:

VMware virtual memory management was designed from the beginning with several architectural tenets in mind, two of those being share-based allocation and the ability to reclaim idle memory (incidentally, a great technical read here).  So when memory was in short supply, a share-based mechanism could be used to determine how much a virtual machine should get in relation to its peers.  To make truly intelligent allocation decisions, though, an important input into that algorithm would need to be some measure of how much memory the virtual machine was actually using.  Existing proportional share-based algorithms were quite static and ratio based.  Wanting more, Active memory was born out of the idea that if you could measure what a virtual machine was actually using, that measurement could in turn be fed into the share-based algorithm, making it truly proportional and therefore more realistic.  Active memory's primary purpose is to assist the memory scheduler in making allocation decisions.

So now we know why the Active memory counter was created and its purpose.  So how does it work?

In order to track Active memory, every memory page touched by the virtual machine (defined as read from or written to) would need to be monitored.  The cost of having the hypervisor monitor every memory page, and how frequently it was touched, wasn't viable; the overhead generated by that process couldn't be justified.  So instead, a mathematical model was created in which the hypervisor statistically estimates a virtual machine's Active memory using random sampling and some very smart math (beyond me, I am in Marketing after all).  Due diligence was done at the time this was designed to prove that the estimation model represents real life.  So while it is an estimate, we should not be concerned about its accuracy.
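As a rough mental model of that sampling idea (a toy illustration only, not the actual VMkernel algorithm, with all numbers made up):

```python
import random

# Toy model: a VM with 1,048,576 guest pages (4 GB at 4 KB per page); the guest really
# touched about 1 GB of them during the sample interval.  All numbers are invented.
TOTAL_PAGES = 1_048_576
touched_pages = set(random.sample(range(TOTAL_PAGES), 262_144))

def estimate_active(sample_size=100):
    """Estimate Active memory by inspecting only a small random sample of pages."""
    sample = random.sample(range(TOTAL_PAGES), sample_size)
    touched_fraction = sum(p in touched_pages for p in sample) / sample_size
    return touched_fraction * TOTAL_PAGES * 4 / (1024 * 1024)   # scale back up, in GB

print(f"estimated Active ~ {estimate_active():.2f} GB (true value here is 1.00 GB)")
```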

So we've now shown that it uses an estimation model to stay efficient, yet maintains its accuracy.  Lastly, let's define recently.

This counter provides an estimated total amount of memory that has been touched over the sampling period.  This means that depending on where you are looking at this counter, its sampling period may be different.  In vCenter, for example, real-time charts sample Active memory every 20 seconds, so they show the estimated amount of memory touched by the virtual machine in the last 20 seconds.  In esxtop, however, the display refresh is every 5 seconds, so it shows the estimated amount of memory touched by the virtual machine in the last 5 seconds.

BUT – here's the important part – there is no way to know whether the memory pages touched across successive sampling periods are unique or not.

Many people wrongly assume that because Active memory is not really changing, the virtual machine is only using that amount of memory.  That is the assumption that hurts people: they assume the counter represents something it doesn't, and then use it for a purpose for which it wasn't designed.  Remember that its purpose is to assist allocation in times when memory is scarce, not to act as a capacity planning counter.  Let's solidify this with an example.

Example:

This counter only describes the volume of memory pages that have been touched over the last sampling period.  It does not describe if they are the same memory pages or different memory pages between sampling periods.

Suppose Active is reported as 2GB in every 20-second sample over a 60-second window.  It would appear that the virtual machine is only using 2GB of memory.  Actually, though, it means that the virtual machine has touched somewhere between 2GB and 6GB of unique memory pages over those 60 seconds.  This makes it a great counter to be used in a real-time allocation model, but a very poor counter to be used for capacity planning.
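Put as simple arithmetic: with vCenter's 20-second real-time samples, three consecutive readings cover 60 seconds, so a constant 2GB value only brackets the true unique footprint:

```python
# Three consecutive 20-second samples each report 2 GB of touched memory.
samples_gb = [2, 2, 2]

lower_bound = max(samples_gb)   # every sample may have touched the same 2 GB of pages
upper_bound = sum(samples_gb)   # or each sample may have touched entirely different pages

print(f"unique memory touched over 60s: between {lower_bound} GB and {upper_bound} GB")
```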

The Takeaway

Active memory can be used as a counter to understand how aggressively a virtual machine is touching memory.  However, it cannot be used as a "rightsizing" counter on its own.  I personally believe the best memory counters are those from the guest operating systems, as they truly represent what is allocated and idle.  We need to manage memory from the guest perspective.  That's why investments in tools like vCOps and Hyperic are important for both rightsizing and ongoing troubleshooting.

Other Great References:

Understanding Memory Resource Management in VMware vSphere 5.0


via Understanding vSphere Active Memory | VMware vSphere Blog – VMware Blogs.

vSphere 5.5 will be out and the GA will be due in September – VMware – IT Certification Forum

Here are some highlights of what is going to be unveiled this August:

Administrative UI (Web Client)

The 2013 Administrative UI is built around improved performance and a more native web-application feel.

• Improved usability and more search filters, along with the introduction of the 'recent object' tabs, help admins find and manage their key objects with fewer clicks.

• The new UI is built to manage larger inventories with a faster response time.

• Faster response time across the entire UI

vSphere Replication

The vSphere Replication release for 2013 adds the following new capabilities:

• Ability to deploy new appliances to allow for replication between clusters and non-shared storage deployments

• Multiple points in time support allows administrators to recover to a previous snapshot thus providing protection from logical corruptions in the application that may have been replicated.

• Storage DRS Interoperability – allows for replicated VMs to be storage vMotioned across datastores with no interruption to ongoing replication

• Simplified Management – Deeper integration into the vSphere Web Client to configure and monitor replication within the VM and vCenter management panes simplifies the management experience for replication

• VSAN interoperability to protect and recover virtual machines running on VSAN datastores

vCenter Orchestrator

With this release, vCenter Orchestrator is greatly optimized for growing clouds because of significant improvements in scalability and high availability. Workflow developers can benefit from a more simplified and efficient development experience provided by the new debugging and failure diagnostic capabilities in the vCenter Orchestrator client.

Virtual SAN

Virtual SAN is a software-based storage solution built into the hypervisor that aggregates the hosts’ local storage devices (SSD and HDD) and makes them appear as a single pool of storage shared across all hosts.

VMware Virtual Flash (vFlash)

Virtual Machine File System (VMFS) Highlights:

This vSphere version supports >2TB vdisks (vmdk size). Customers will be able to create vmdks up to 64TB, so large files can now be contained within a single vdisk. vSphere 5.5 and VMFS 5 are needed to create a >2TB vdisk.

vCloud Director

Enhancements in this vCloud Director 5.5 release focus on the Content Catalog, vApp provisioning and lifecycle management, improved OVF import/export functionality, and added browser support, including support for Mac OS.

vCloud Director Virtual Appliance

The vCloud Director beta includes support for the vCloud Director Virtual Appliance to help facilitate PoCs and Evals.

The vCloud Director cell is available in a virtual appliance form factor for quick-and-easy deployment and setup. With the appliance you can choose to use an internal/embedded database or an external database of your choice (Microsoft SQL Server or Oracle).

As with prior releases, the vCD virtual appliance is available for PoC/Eval use only. For help with deploying and configuring the vCloud Director virtual appliance, please see the vCloud Director 5.5 Virtual Appliance Deployment Guide available in the Beta Community.

Content Catalog

This release includes multiple enhancements to the Content Catalog.

vCloud Networking & Security

Networking Enhancements

This release contains two major networking enhancements:

Link Aggregation Control Protocol (LACP): Provides increased bandwidth, better load balancing, improved link-level redundancy, and easier operations for hypervisor uplinks connected to the physical network.

• Today, vSphere 5.1 supports a simplified version of LACP, with support for a single Link Aggregation per host and a limited choice of load balancing algorithms.

• LACP in vSphere 5.5 allows a rich choice of over 22 load balancing algorithms and 32 LAGs per host, and ensures the largest density of physical NICs can be aggregated.

Security Features

Distributed Firewall is a key service in the Software Defined Datacenter. It secures and isolates workloads inside the virtual environment. Key new features:

Performance: High-performance stateful firewall at the hypervisor of each host

vCenter Site Recovery Manager 2013

Feature Highlights: Here are some key features supported in the beta refresh:

• Support for vSAN with vSphere Replication

• SDRS / Storage vMotion interoperability

• New configuration option to support vSphere Replication Multi-Point-In-Time snapshots during failover

VMware vCenter Multi-Hypervisor Manager (MHM) 1.1

The MHM 1.1 release adds the following new capabilities:

• Support for Microsoft Hyper-V3 hypervisor (as well as Windows 2008 R2 and 2008).

• The ability to cold-migrate VMs from Hyper-V to ESX hosts.

via vSphere 5.5 will be out and the GA will be due in September – VMware – IT Certification Forum.

What’s Required For vSphere Stretch Deploy To Work With vCloud Hybrid Service | VMware vCloud Blog – VMware Blogs

Posted on August 7, 2013

By: Chris Colotti

This is a repost from Chris Colotti’s blog, chriscolotti.us

I wanted to run through a quick guide to what is needed to work with vCloud Hybrid Service and how things are set up to get it working.  There is a misconception that getting this working is really hard to do, but I hope you will see that is not in fact the case.

Stretch Deploy Appliance Requirements

The following virtual appliances are needed in order to begin the setup of Data Center Extension and Stretch Deploy (which, by the way, are the same thing):

  • vCloud Hybrid Services Public Cloud Account
  • vSphere vCenter on premise
  • vCloud Networking and Security Manager On Premise
  • vCloud Connector Appliances On Premise
    • vCC Node on Premise
    • vCC Server on Premise
  • vCC Multi-tenant Node at vCloud Hybrid Service (This is already deployed by VMware and you just need the appropriate URLs)
    • Yours will be different but will look something like mine below
    • My Dedicated Cloud vCC Node:  p1v17-vccmt.vchs.vmware.com:443
    • My Virtual Private Cloud vCC Node: p1v14-vccmt.vchs.vmware.com:443

Step 1 – Configure vCloud Networking & Security Manager

This is as easy as getting the OVA file from the download site and importing it into vSphere on premise.  Once it is imported, you simply need to register it with vCenter Server so it can issue the commands needed.  Once this is deployed you can move on to configuring the vCloud Connector components, as the login information and IP of the vCNS Manager are needed.  There are a couple of things you will also want to consider.

  1. Update the admin/default login information
  2. Update the time and NTP settings
  3. DNS information

Step 2 – Configure vCloud Connector

This is probably the step that most people struggle with initially, but once it's set up you are golden.  There are a few things you need to do specifically for Stretch Deploy to work on the vCloud Connector Nodes.  Below are the basic steps you need to perform to get vCloud Connector set up.

  • Deploy and configure the vCloud Connector Node on Premise
    • Configure IP Addressing, Time
    • Change admin passwords
    • Configure local vSphere Connection
  • Deploy and configure the vCloud Connector Server on Premise
    • Configure IP Addressing, Time
    • Change admin passwords
    • Add License Key
    • Register with vSphere vCenter Server

  • Add vCloud Node Connections
    • Click Register Node
    • Fill in information
    • Select vSphere or vCloud Target
    • Supply credentials

  • Finally enable the nodes for Stretch Deploy
    • Notice the vSphere node is asking for the vShield Manager URL and login
    • vCloud is asking for the Org Login information

At this point vCloud Connector is configured and you still need to perform one last step which is to deploy a vShield Edge Gateway on premise.

Step 3 – Deploy On Premise vShield Edge Gateway

I touched on some of this in the previous posts, but I will repeat some of the things to consider here.  It's not a requirement in a vSphere setup that existing virtual machines be changed to use this as their gateway.  We only need to deploy this Edge Gateway to serve as a VPN endpoint for the vSphere port group we want to stretch.  You only need to deploy a new Edge with the following basic settings:

  • vnic0 – External Network (Mapped to a Port Group with Internet Access)
  • vnic1 – Internal Network (Mapped to the Port Group you want to extend)
  • Firewall Rules = Default
  • NAT Configuration = Default

In order to add a new vShield Edge Gateway you need to select the Data Center level object in vSphere, click the Network Virtualization tab and add a new Edge with the correct mappings and settings as above.  You will need to give it IP addresses on both Portgroups as well.

When you are done you will have a network that looks something like the logical diagram below where the new Edge Gateway is simply bridging the VLAN portgroup you want to stretch and a port group with internet access.  All the virtual machines will remain untouched.

Step 4 – Configure vCloud Connector Plugin

Once you have all these parts in place you simply need to finish the vCloud Connector Plugin setup in vSphere to add your two clouds so you can begin the stretch deploy process.

Summary

You can see there are really only four major steps to set up the components needed to make vSphere and Data Center Extension (Stretch Deploy) work.  Yes, there are a few things you need to deploy and configure, but that's the same with any feature-rich technology.  I hope that between this and my other recent posts, with another one still to come, you can see that this is pretty simple to get set up.  Give it a try and start moving workloads to vCloud Hybrid Service!

Chris is a Consulting Architect with the VMware vCloud Delivery Services team with over 10 years of experience working with IT hardware and software solutions. He holds a Bachelor of Science Degree in Information Systems from the Daniel Webster College. Prior to VMware he served a Fortune 1000 company in southern NH as a Systems Architect/Administrator, architecting VMware solutions to support new application deployments. At VMware, in the roles of a Consultant and now Consulting Architect, Chris has guided partners as well as customers in establishing a VMware practice and consulted on multiple customer projects ranging from datacenter migrations to long-term residency architecture support. Currently, Chris is working on the newest VMware vCloud solutions and architectures for enterprise-wide private cloud deployments.

Best practices for upgrading to VMware vCloud Networking and Security 5.5 (2055673)

Purpose

This article provides best practices for upgrading a vShield environment to vCloud Networking and Security 5.5.

Notes:

  • This article assumes that you have read the vShield Installation and Upgrade Guide. The vShield Upgrade and Installation Guide contains definitive information. If there is a discrepancy between the guide and this KB article, assume that the guide is correct.
  • For information on a new installation of vCloud Networking and Security 5.5, see the vShield Installation and Upgrade Guide.

Resolution

To upgrade vShield, you must first upgrade vShield Manager, then update the other components for which you have a license.
Complete upgrades in this order:
  1. vShield Manager
  2. vCenter Server
  3. Other vShield components managed by vShield Manager
  4. ESXi hosts

Software Requirements

For information on the latest interoperability, see the Product Interoperability Matrix.

These are the minimum required versions of VMware products to be installed with vShield 5.5:

  • VMware vCenter Server 5.1 or later
    • For VXLAN virtual wires, you need vCenter Server 5.1 or later
  •  VMware ESXi/ESX 5.0 or later for each server
    • For VXLAN virtual wires, you need VMware ESXi 5.1 or later
    • For vShield Endpoint, you need VMware ESX 5.0 or later
  • VMware Tools
    • For vShield Endpoint and vShield Data Security, you must upgrade your virtual machines to hardware version 7 or 8, and install VMware Tools 8.6.0 (released with ESXi 5.0 Patch 3)
    • You must install VMware Tools on virtual machines that are to be protected by vShield App
  • VMware vCloud Director 5.1 or later
  • VMware View 4.5 or later

Client and User Access Requirements

VMware vShield 5.5 has these client and user access requirements:

  • PC with the vSphere Client installed
  • If you add ESXi hosts by name to the vSphere inventory, ensure that DNS servers have been configured on the vShield Manager and name resolution is working. If you do not do this, vShield Manager cannot resolve the IP addresses.
  • Permissions to add and power on virtual machines
  • Access to the datastore where you store virtual machine files, and the account permissions to copy files to that datastore
  • Ensure that you have enabled cookies on your web browser to access the vShield Manager user interface
  • Port 443 must be accessible from the ESXi host, the vCenter Server, and the vShield appliances to be deployed. This port is required to download the OVF file on the ESXi host for deployment.
  • Connection to the vShield Manager user interface using one of these supported browsers:
    • Internet Explorer 6.x and later
    • Mozilla Firefox 1.x and later
    • Safari 1.x or 2.x

System Requirements

This table outlines minimum system requirements:

Component Minimum Requirements
Memory
  • vShield Manager (64-bit): 8 GB, 3GB reserved
  • vShield Edge compact: 512 MB, large: 1GB, x-large: 8GB
  • vShield Endpoint Service: 1GB
  • vShield Data Security: 512 MB
Disk Space
  • vShield Manager: 60 GB
  • vShield Edge compact and large: 512 MB, x-Large: 4.5 GB (with 4 GB swap file)
  • vShield Endpoint Service: 4 GB
  • vShield Data Security: 6GB per ESX host
vCPU
  • vShield Manager: 2
  • vShield Edge compact: 1, large and x-Large: 2
  • vShield Endpoint Service: 2
  • vShield Data Security:

Pre-upgrade Preparation

Prior to starting the upgrade process, consider these points to ensure a successful upgrade:

  • From the vSphere Client, take a snapshot of the vShield Manager.
  • If you are running a version earlier than 5.1.0, follow the upgrade process documented in Upgrading to vCloud Networking and Security 5.1.2a best practices (2044458) to ensure you are running the correct virtual hardware required as of version 5.1.
  • For vShield Managers running 5.1.0 (build 807847) that were upgraded from versions 5.0.0 (build 473791), 5.0.1 (build 638924), or 5.0.2 (build 791471), ensure you have upgraded the virtual hardware as documented in Upgrading to vCloud Networking and Security 5.1.2a best practices (2044458). Note: This virtual hardware upgrade only applies to vShield Managers that are upgraded from versions 5.0.x or earlier. New installations of vShield Manager 5.1.0 or higher already ship with this upgraded virtual hardware.
  • Never uninstall a deployed instance of the vShield Manager appliance.

RC Milestone Upgrade Requirements

For RC, we will be supporting the following upgrades. Ensure that your system is at one of these versions.

  • vCNS 5.1.2 to vCNS 5.5
  • vCNS 5.1.2b to vCNS 5.5

Upgrade Procedure

For vShield Managers 5.1.0 or later:

  1. From the VMware Download Center, download the vShield upgrade bundle to a location that vShield Manager can browse. The name of the upgrade bundle file is: VMware-vShield-Manager-upgrade-bundle-1258810.tar.gz
  2. From the vShield Manager Inventory panel, click Settings & Reports.
  3. Click the Updates tab.
  4. Click Upload Upgrade Bundle.
  5. Click Browse and select the VMware-vShield-Manager-upgrade-bundle-1258810.tar.gz file.
  6. Click Open.
  7. Click Upload File.
  8. Click Install to begin the upgrade process.
  9. Click Confirm Install. The upgrade process reboots vShield Manager, so you might lose connectivity to the vShield Manager user interface. None of the other vShield components are rebooted.
  10. After the reboot, log back in to the vShield Manager and click the Updates tab. The Installed Release panel displays version 5.5, which is the version you just installed.

Upgrading vShield components

You must upgrade the other vShield components managed by vShield Manager.

Upgrade the vShield Appliance

To upgrade the vShield Appliance:

  1. Log in to the vSphere Client.
  2. Click Inventory > Hosts and Clusters.
  3. Click the host on which you want to upgrade vShield App.
  4. Click the vShield tab. The General tab displays each vShield component that is installed on the selected host and the available release.
  5. Click Update (next to vShield App).
  6. Select the vShield App checkbox.
  7. Click Install. Note: During the vShield App upgrade, the ESXi host is placed into Maintenance Mode by the system and rebooted. Ensure the virtual machines on the ESXi host are migrated (using DRS or vMotion), or that they are powered off, to allow the host to be placed into Maintenance Mode.

Upgrading vShield Edge

You must upgrade each vShield Edge instance in your datacenter. vShield Edge 5.1.2 is not backward compatible and you cannot use 2.0 REST API calls after the upgrade.

Note: During the vShield Edge upgrade, there will be a brief network disruption for the networks that are being served by the given vShield Edge instance.

If you have vShield Edge 5.0.x, each 5.0.x vShield Edge instance on each portgroup in your datacenter must be upgraded to 5.5.

To upgrade vShield Edge:

  1. Log in to the vSphere Client.
  2. Click the portgroup on which the vShield Edge is deployed.
  3. In the vShield Edge tab, click Upgrade.
  4. View the upgraded vShield Edge:
    1. Click the datacenter corresponding to the port group on which you upgraded the vShield Edge.
    2. In the Network Visualization tab, click Edges. vShield Edge is upgraded to the compact size. A system event is generated to indicate the ID for each upgraded vShield Edge instance.
    3. Repeat for all other vShield Edges that require upgrading.

If you have 5.1.0 or higher vShield Edge instances, upgrade each Edge:

  1. Log in to the vSphere Client.
  2. Click the datacenter for which vShield Edge instances are to be upgraded.
  3. Click the Network Visualization tab. All existing vShield Edge instances are shown in the listings page. An arrow icon is shown for each vShield Edge that must be updated.
  4. Click an Edge and click Upgrade from Actions to start the upgrade. When the Edge is upgraded, the arrow icon no longer appears.
  5. Repeat for each vShield Edge that must be upgraded.

What to do next

Firewall rules from the previous release are upgraded with some modifications. Inspect each upgraded rule to ensure it works as intended. For information on adding new firewalls, see the vShield Administration Guide.
If your scope in a previous release was limited to a port group that had a vShield Edge installation, you are automatically granted access to that vShield Edge after the upgrade.

Upgrade vShield Endpoint

To upgrade vShield Endpoint from 5.1.x to 5.5, you must first upgrade vShield Manager, then update vShield Endpoint on each host in your datacenter.

  1. Log in to the vSphere Client.
  2. Click Inventory > Hosts and Clusters.
  3. Click the host on which you want to upgrade vShield Endpoint.
  4. Click the vShield tab. The General tab displays each vShield component that is installed on the selected host and the available version.
  5. Click Update (next to vShield Endpoint).
  6. Click vShield Endpoint.
  7. Click Install.

Upgrading vShield Data Security

To upgrade vShield Data Security from 5.1.x to 5.5, you must first upgrade vShield Manager, then update vShield Data Security on each host in your datacenter.

  1. Log in to the vSphere Client.
  2. Click Inventory > Hosts and Clusters.
  3. Click the host on which you want to upgrade vShield Data Security.
  4. Click the vShield tab. The General tab displays each vShield component that is installed on the selected host and the available version.
  5. Click Update (next to vShield Data Security).
  6. Click vShield Data Security.
  7. Click Install.
Upgrading VXLAN

When upgrading VXLAN, consider these points:
  • VXLAN virtual wires require vCenter Server 5.1 or later.
  • You must upgrade the vCNS server prior to upgrading the ESXi hosts.
  • Upgrading an ESXi host from 5.1 to 5.5 results in a new kernel module automatically being pushed to the upgraded host.
  • A reboot of the host is required to complete the host upgrade for VXLAN.