Most companies with existing applications looking to adopt public cloud need to migrate some or all of their workloads into the cloud. One of the challenges such customers face is application change, which adds significant risk and effort. VMware Cloud solutions such as Azure VMware Solution and VMware Cloud on AWS (and more) provide a neat way to adopt public clouds with minimal risk and effort. The VMware Hybrid Cloud Extension (HCX) tool is central to those migrations.

Plenty of great material is already available about HCX capabilities, design considerations and the step-by-step deployment procedure (in AVS, VMC, Service Mesh video), so those topics will not be covered here. In this blog post we will focus specifically on the HCX Layer 2 Network Extension unextend, aka cutover, process. The cutover behavior described in the post is generic, so it can be used as a reference for on-prem, private or public cloud migrations – but there is an obvious skew towards the VMware public cloud solutions which I currently support.

Migration & Network extension process

VMware Cloud migrations roughly go through the below steps from a networking perspective. Some of these tasks can get quite detailed depending on the on-prem application estate. For example, in some cases the first step of workload discovery is done extensively by working with application owners and by using tools such as vRealize Network Insight to automate the discovery.

Cloud migration steps

Network extension is not mandatory for migrating workloads using HCX, but the extension feature is proving instrumental for many customers, as it gives the flexibility to move workloads in smaller waves and retain their IPs. Workloads can also be migrated into new target subnets if feasible, but most application owners prefer not to modify their workloads, to minimize risk and effort.

Network integration with cloud

Before discussing the step-by-step process, let us quickly look at the topology of the lab environment in which the above steps will be performed. Extending or not, the network connectivity strategy between on-prem and the cloud environment is something that is planned even before cloud consumption starts, and that planning and architecture documentation is helpful during the network cutover. Some of the aspects relevant to our discussion –

  1. Dedicated connection or VPN over Internet
  2. Route exchange mechanism – dynamic or static
  3. Customer managed routers to update with BGP configuration
  4. Security configuration to update

In our lab setup, the on-prem environment is connected to the cloud via a dedicated connection, using BGP as the dynamic route exchange mechanism. The components marked with numbers are of interest to us in the cutover process.

VMware Cloud SDDC Connection Layout
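For context, the route exchange between B2 and B1 in this lab is plain eBGP over the dedicated connection. A minimal, mocked-up sketch of what the relevant B2 configuration might look like is below – the AS numbers and the B1 peering address are hypothetical and will differ in every environment.

B2# show running-config | section router bgp
router bgp 65001                              #hypothetical on-prem AS
 neighbor 172.10.254.254 remote-as 65002      #hypothetical B1 peering address / AS
 network 192.168.19.0 mask 255.255.255.0      #on-prem subnets advertised towards B1
 network 192.168.48.0 mask 255.255.255.0
 network 192.168.49.0 mask 255.255.255.0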

Let us first check the route tables and BGP configuration in the on-prem router (B2) and the dedicated connection router (B1 in the above diagram) before starting any extensions or migrations. Your environment may have additional routing controls to manage in this scenario (or fewer).

Route table in the On-Prem (B2) Router in steady state / before cutover

The highlighted networks are connected locally (‘C’), with their default gateways on the on-prem router. These are the migration candidates.
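For readers following along without the screenshots, the relevant connected entries would look roughly like the mock-up below (the interface names are assumptions based on the VLAN names used later in this post).

C    192.168.19.0/24 is directly connected, Vlan109    #pg-web – migration candidate
C    192.168.49.0/24 is directly connected, Vlan49     #pg-app – migration candidate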

Route table in the Dedicated Connection (B1) Router

The B1 router could be customer or partner managed, or may not even be a consideration. In the above example, the route table in the B1 router appears identical to B2's, but as indicated by the next hop, the highlighted on-prem routes are learnt from B2 (172.10.254.253) and the remaining routes from the cloud (172.16.32.1). The B1 router (or B2, if the customer manages only that one) is typically configured with BGP filters that help prevent unwanted subnets from being advertised into the on-prem environment inadvertently. Remember that these configs are mock-ups only, meant to highlight potential considerations; every environment is different.
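Purely as an illustration of that point, the equivalent mocked-up entries on B1 would look something like this – the on-prem prefixes with B2 as the next hop, and everything else learnt from the cloud router.

B    192.168.19.0/24 [20/0] via 172.10.254.253    #learnt from B2 (on-prem)
B    192.168.48.0/24 [20/0] via 172.10.254.253    #learnt from B2 (on-prem)
B    192.168.49.0/24 [20/0] via 172.10.254.253    #learnt from B2 (on-prem)
#remaining prefixes learnt from the cloud via 172.16.32.1 (not shown)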

Let us check the BGP filters in B1-

R2# show ip prefix-list
ip prefix-list R1-DENY: 5 entries
   seq 5 permit 192.168.128.0/19 le 24
   seq 10 permit 192.168.160.0/19 le 24
   seq 15 permit 192.168.19.0/24 #on-prem subnet
   seq 20 permit 192.168.48.0/24 #on-prem subnet
   seq 25 permit 192.168.49.0/24 #on-prem subnet
ip prefix-list R1-PERMIT: 1 entries
   seq 5 permit 192.168.0.0/16 le 32

R2# show route-map
route-map R1-FILTER, deny, sequence 10 #prefixes in R1-DENY are denied to avoid conflicts
  Match clauses:
    ip address prefix-lists: R1-DENY 
  Set clauses:
  Policy routing matches: 0 packets, 0 bytes
route-map R1-FILTER, permit, sequence 20 #prefixes in R1-PERMIT are allowed
  Match clauses:
    ip address prefix-lists: R1-PERMIT
  Set clauses:
  Policy routing matches: 0 packets, 0 bytes

R2# show running-config | include route-map
 neighbor 172.16.32.1 route-map R1-FILTER in #route map applied for the inbound from cloud
route-map R1-FILTER deny 10 
route-map R1-FILTER permit 20

As can be seen, the on-prem subnets are denied from being learnt from the cloud side to avoid conflicts. The prefix list needs to be updated just before the cutover, so we will revisit these route tables in Part 2 of this series.
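As a quick preview of Part 2, the change itself is small – the migrating prefixes are removed from the deny list so that, once the cloud starts advertising them, they are accepted on-prem. A rough, hypothetical sketch using the sequence numbers from the output above:

R2(config)# no ip prefix-list R1-DENY seq 15 permit 192.168.19.0/24    #allow the web subnet to be learnt from the cloud
R2(config)# no ip prefix-list R1-DENY seq 25 permit 192.168.49.0/24    #allow the app subnet to be learnt from the cloud
R2(config)# end
R2# clear ip bgp 172.16.32.1 soft in                                   #refresh routes from the cloud neighbor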

Step 1: Identifying workloads and networks to be migrated to the cloud

Let us quickly go through the migration workflow in the above lab environment. Of course, in a typical customer environment the setup would be far more complex, with hundreds or thousands of workloads on numerous networks. How convenient!

Workloads to migrate: pg-web1 (192.168.19.10), pg-app-1 (192.168.49.50)
Networks to migrate: pg-web-vlan109 (192.168.19.1/24), pg-app-vlan49 (192.168.49.1/24)
Cloud migration candidates

Step 2: Extend the networks

The HCX deployment procedure is covered in the links provided in the introduction section of this blog post. The network extension task looks like this –

Select the destination NSX Tier-1 router, specify the default gateway/prefix length in the specified format (for example, 192.168.19.1/24 for pg-web-vlan109) and we are good to go!

HCX Network Extension procedure

The extension is now complete.

HCX Network Extension completed

HCX automatically creates the backing network on the cloud side. It looks like any other NSX-T segment, but with a difference – the segment is not connected to the gateway, as the traffic over the extended segments is handled by the HCX Network Extension appliance.

Stretched networks on the Cloud NSX-T

This network should not be connected or modified manually; it should be cut over using HCX once it is ready to move to the cloud.

Step 3: Migration

The next step is the migration. Select the VMs to be migrated, select the destination containers, choose the migration type, and off they go. The extended destination networks are intelligently auto-selected by HCX.

With no changes and minimal effort, the VMs are now successfully migrated from on-prem and available in the cloud. The extension also helps with migrating the workloads in batches, which is especially useful for dealing with one major complexity – migration scheduling!

All the workloads in the web and app VLANs are now migrated and application health checks are green, so the networks are ready to be moved to the cloud. This means the network will be attached to the cloud gateway, and the default gateway will become active on the NSX side in the cloud. Once connected to the cloud gateway, the subnets are automatically advertised out, as the NSX gateways advertise all connected subnets by default. The gateways have a BGP relationship with the cloud routers, which in turn are typically connected back to on-prem via ExpressRoute (Azure), Direct Connect (AWS), etc., or VPN.
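To make the implication concrete, here is an illustrative (mocked-up) view of what we would expect on the B1 router once a migrated subnet is attached to the cloud gateway and the BGP filter is relaxed – the exact behavior and real outputs are covered in Part 2.

B    192.168.19.0/24 [20/0] via 172.16.32.1    #now learnt from the cloud side instead of B2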

The blog post is broken into two parts due to its length:

Part 1 (this post) of the series covers the topology and the first three steps in the flowchart.

Part 2 of the series covers the actual network cutover process.
