Learn how to move virtual machines (VMs) from your original datacenter (vDC) (PREMIER or SDDC) to a new destination vDC (VMware on OVHcloud).
In 2023, OVHcloud launched four new ranges:
- vSphere: OVHcloud Managed VMware vSphere is our most accessible solution for needs such as infrastructure migration, application hosting, datacenter extension, or disaster recovery plans (with Veeam or Zerto solutions available as an additional option).
- Hyperconverged Storage (vSAN): The Hyperconverged Storage solution meets your needs for ultra-powerful storage. Equipped with NVMe SSDs, our servers have been specially designed to accommodate even the most demanding applications. With VMware vSAN, you can manage your storage in a scalable way, just as you would in your own datacenter.
- Network Security Virtualization (NSX): The Network Security solution is based on VMware NSX (NSX-T) network and security virtualization software. You can manage your security rules, operations, and automation continuously across your different cloud environments. NSX secures your software, whether it is hosted on virtual machines or in containers, and reduces the threat of ransomware thanks to micro-segmentation.
- Software-Defined Datacenter (NSX & vSAN): The Software-Defined Datacenter solution includes hyperconverged storage (vSAN) and network and security virtualization (NSX-T) features. You get an optimal cloud environment for migrating and modernizing your most critical applications.
You can now upgrade from pre-2020 commercial ranges to the new ranges while keeping the same VMware infrastructure (pcc-123-123-123-123) using Storage Motion and vMotion.
There are two aspects involved in this process:
- The OVHcloud infrastructure itself, which includes the customer's side of administrating an infrastructure.
- The VMware infrastructure, which includes the entire VMware ecosystem.
Requirements
- A PCC infrastructure
- Access to the OVHcloud Control Panel (VMware in the Hosted Private Cloud section)
- Access to the NSX Manager
- Access to the vSphere Control Panel
Instructions
This guide will use the terms source (or original) vDC and destination (or new) vDC. Below is an index of the tasks you will be performing:
Step 1 Design your infrastructure
Step 1.1 Choose between the different VMware on OVHcloud ranges
Step 1.2 Select your hosts (compute)
Step 1.3 Select your datastores (storage)
Step 2 Build your new infrastructure
Step 2.1 Add a new destination vDC
Step 2.2 Add new hosts and datastores
Step 2.3 Convert a datastore to a global datastore
Step 3 Prepare your destination vDC in the OVHcloud context
Step 3.1 Check inherited characteristics (Certifications, KMS, access restrictions)
Step 3.1.1 Certifications
Step 3.1.2 Key Management Server (KMS)
Step 3.1.3 Access restrictions
Step 3.2 Manage user rights
Step 3.3 Activate Veeam Managed Backup & Zerto Disaster Recovery Options
Step 3.4 Check your network (vRack, Public IP)
Step 4 Prepare your destination vDC in the VMware context
Step 4.1 Reconfigure VMware High Availability (HA)
Step 4.2 Reconfigure VMware Distributed Resource Scheduler (DRS)
Step 4.3 Rebuild resource pools
Step 4.4 Recreate Datastore Clusters (if relevant)
Step 4.5 Enable vSAN (if relevant)
Step 4.6 Recreate vSphere networking
Step 4.7 Check inventory organization (if relevant)
Step 4.8 Migrate NSX-V to NSX (if relevant)
Step 4.8.1 NSX Distributed Firewall
Step 4.8.2 NSX Distributed Logical Router
Step 4.8.3 NSX Edges
Step 4.9 Extend Zerto Disaster Recovery Protection (if relevant)
Step 5 Migrate your workload
Step 5.1 Storage Motion
Step 5.2 vMotion
Step 6 Finalize your migration
Step 6.1 Reconfigure Veeam Managed Backup (if relevant)
Step 6.2 Reconfigure Zerto Disaster Recovery (if relevant)
Step 6.3 Recreate Affinity rules
Step 6.4 Reconfigure the Private Gateway (if relevant)
Step 6.5 Put hosts in maintenance mode
Step 6.6 Remove old environment
Step 1 Design your infrastructure
At the end of Step 1, you should have a clear view of which commercial range you want to upgrade to, as well as which hosts and storage you want to use.
Step 1.1 Choose between the different VMware on OVHcloud ranges
As a Hosted Private Cloud VMware customer with a pre-2020 host, you want to upgrade to VMware on OVHcloud.
Here are a few guidelines:
- If you are using or you plan to use NSX, you must upgrade to Network Security Virtualization or Software-Defined Datacenter.
- If you are not using NSX on your current infrastructure and you don't need certifications, you can choose vSphere.
- Veeam Managed Backup and Zerto Disaster Recovery options are available.
- The OVHcloud VMware infrastructure can also assist in meeting certification needs.
Step 1.2 Select your hosts (compute)
You have now chosen your commercial range.
Please note that this choice is not definitive; for example, you can start with 2 hosts of 96 GB RAM and later switch to 3 hosts of 192 GB RAM.
Step 1.3 Select your datastores (storage)
You have now chosen your commercial range and hosts. Please note that some of your current datastores may be compatible with the newer ranges, meaning that those datastores can be made global. A global datastore is a datastore mounted on all clusters/vDC within a VMware infrastructure, i.e. shared between the source vDC and the destination vDC. Run the OVHcloud API to check datastore compatibility:
GET /dedicatedCloud/{serviceName}/datacenter/{datacenterId}/filer/{filerId}/checkGlobalCompatible
Expected return: boolean
If the API returns TRUE, the datastore is compatible with the newer ranges and you can keep it. If the API returns FALSE, the datastore is not compatible and you will need to order new VMware on OVHcloud datastores to replace it.
Based on your needs in terms of storage capacity, you can choose the type and number of datastores to order.
You only need to replace the datastores that are not compatible; you will be able to release them at the end of the process.
Please note that this choice is not final; for example, you can start with 4x 3 TB and move to 2x 6 TB later.
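As a sketch, the compatibility check above can be scripted. The helper below assumes a `client` object exposing `get()` in the style of the official python-ovh library; the service name and datacenter ID in the usage note are placeholders, and this is an illustration rather than an official tool.

```python
# Sketch: list the datastores (filers) of a vDC and keep only those that
# checkGlobalCompatible reports as compatible. `client` is any object
# exposing get() in the python-ovh style; IDs are placeholders.
def compatible_filers(client, service_name, datacenter_id):
    """Return the IDs of filers that can be converted to global datastores."""
    base = f"/dedicatedCloud/{service_name}/datacenter/{datacenter_id}/filer"
    return [
        filer_id
        for filer_id in client.get(base)
        if client.get(f"{base}/{filer_id}/checkGlobalCompatible")
    ]

# Usage with the real client (credentials configured in ovh.conf):
#   import ovh
#   client = ovh.Client()
#   print(compatible_filers(client, "pcc-123-123-123-123", 1))
```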
Step 2 Build your new infrastructure
At the end of Step 2, you should have, within your existing VMware infrastructure (pcc-123-123-123-123), a new destination vDC with hosts from the new ranges and global datastores.
Step 2.1 Add a new destination vDC
You can add a destination vDC following these steps:
- From the OVHcloud Control Panel, select the Hosted Private Cloud tab at the top of the screen.
- From the left-hand navigation bar, under the VMware heading, select your environment.
- Click the Datacenters tab.
- Select NSX, then click Add a datacenter.
Step 2.2 Add new hosts and datastores
In the OVHcloud Control Panel, you will see your new vDC attached to your existing service. You can order new resources (selected in Step 1) in the new destination vDC. You can see billing information for OVHcloud products and services in this guide.
Step 2.3 Convert a datastore to a global datastore
You now have new datastores in the new destination vDC, as well as compatible datastores in your previous vDC. You can convert those datastores to global.
Run the OVHcloud API to convert the datastore to global:
POST /dedicatedCloud/{serviceName}/datacenter/{datacenterId}/filer/{filerId}/convertToGlobal
Expected return: Task information
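The conversion call above can be sketched as a small helper. The `client` follows the python-ovh `post()` style and the IDs are placeholders; the POST returns task information that you can poll until the task completes.

```python
# Sketch: trigger the convertToGlobal task for one filer and return the
# task information. `client` is any object exposing post() in the
# python-ovh style; service name and IDs are placeholders.
def convert_filer_to_global(client, service_name, datacenter_id, filer_id):
    """Start the conversion task and return its task description (a dict)."""
    return client.post(
        f"/dedicatedCloud/{service_name}/datacenter/{datacenter_id}"
        f"/filer/{filer_id}/convertToGlobal"
    )
```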
Step 3 Prepare your destination vDC in the OVHcloud context
Step 3.1 Check inherited characteristics (Certifications, KMS, access restrictions)
Step 3.1.1 Certifications
These options are enabled per VMware infrastructure and apply to every vDC. If an option has been enabled, it remains available on the destination vDC.
Step 3.1.2 Key Management Server (KMS)
This option is enabled and configured per vCenter and applies to every vDC. If virtual machines are protected by encryption, they stay protected on the destination vDC.
Step 3.1.3 Access restrictions
For connections to the VMware platform, you can choose to block access to vSphere by default. Please refer to our guide on the vCenter access policy for details.
If the access policy has been changed to "Restricted," the new vDC will inherit the access policy used by the source vDC.
Step 3.2 Manage user rights
In the lifecycle of the source vDC, a list of users may have been created for business or organizational needs. These users will also be present on the new vDC but will have no permissions on this new vDC. You must therefore assign the users the appropriate rights, depending on the configuration of the destination vDC.
To do this, please refer to our guides on changing user rights, changing the user password, and associating an email with a vSphere user.
Step 3.3 Activate Veeam Managed Backup & Zerto Disaster Recovery Options
These options are enabled and configured per vDC. You need to enable the relevant options on the new vDC; see Step 6.1 (Veeam) or Step 6.2 (Zerto) for instructions.
Step 3.4 Check your network (vRack, Public IP)
Step 3.4.1 vRack
As part of a migration process, the new vDC will (by default) be linked to the same vRack as the source vDC. Please consult our guide to Using Private Cloud within a vRack.
Step 3.4.2 Public network
The public IP addresses attached to the source vDC do not automatically route to the new destination vDC. See Step 4.8.3.8 for more details.
Step 4 Prepare your destination vDC in the VMware context
Step 4.1 Reconfigure VMware High Availability (HA)
Setting up a new vDC involves reconfiguring VMware High Availability (HA), including boot order and priority. Please consult our guide about VMware HA configuration.
Here is a checklist of aspects to take into account:
- Host monitoring settings
- VM monitoring settings
- Admission control
- Advanced HA options
- VM Overrides
Step 4.2 Reconfigure VMware Distributed Resource Scheduler (DRS)
Setting up a new vDC involves reconfiguring the VMware Distributed Resource Scheduler (DRS) feature, in particular the affinity or anti-affinity rules for groups of hosts and VMs. Please consult our guide about configuring VMware DRS.
Here is a checklist of aspects to take into account:
- Automation level
- VM/Hosts Groups
- VM/Host affinity/anti-affinity rules
- VM Overrides
Step 4.3 Rebuild resource pools
Setting up a new vDC requires rebuilding resource pools, including reservations and shares. This also applies to vApps and any start-up order configuration set in vApps.
For more information, consult VMware's documentation for managing resource pools.
Here is a checklist of aspects to take into account:
- CPU/Memory shares settings
- CPU/Memory reservations
- Scalable CPU/Memory option
- CPU/Memory Limits
Step 4.4 Recreate Datastore Clusters (if relevant)
If datastore clusters are present in the source vDC, setting up a new vDC may require recreating these Datastore Clusters on the destination vDC if the same level of structure and SDRS is needed.
Here is a checklist of aspects to take into account:
- SDRS automation level
- SDRS space, I/O, rule, policy, VM evacuation settings
- SDRS affinity/anti-affinity rules
- SDRS VM Overrides
Step 4.5 Enable vSAN (if relevant)
If vSAN was enabled on your source VDC, you will need to enable it again on the destination vDC. Please refer to our guide on using VMware Hyperconvergence with vSAN.
Step 4.6 Recreate your vSphere networking
Setting up a new vDC involves recreating the vRack VLAN-backed port groups on the destination vDC to ensure VM network consistency. If vRack VLANs are in use on the source vDC, vRack can be used to stretch the L2 domain to the destination vDC to allow for a more phased migration plan. For more information consult our guide about Using Hosted Private Cloud within a vRack.
Here is a checklist of aspects to take into account:
- Portgroup VLAN type
- Security settings (important in case promiscuous mode is needed)
- Teaming and Failover settings
- Customer network resource allocation
For more information, consult OVHcloud's guide on How to create a V(x)LAN within a vRack and VMware's documentation on how to edit general distributed port group settings.
- Some virtual routing appliances such as pfSense use CARP to provide high availability.
- VMs that use CARP will need “Promiscuous Mode” enabled in the security settings of a portgroup.
- Customers can enable this setting themselves on the vRack vDS on the destination vDC.
- However, if promiscuous mode needs to be enabled on the “VM Network” portgroup in the new vDC, please open a ticket with OVHcloud support before migration to ensure connectivity remains during migration.
Step 4.7 Check inventory organization (if relevant)
For organizational reasons, the VMs, hosts, or datastores may have been placed in directories.
If you still need this organization, you will need to create it again in the destination vDC.
Step 4.8 Migrate NSX-V to NSX (if relevant)
As part of an NSX-V to NSX-T migration, several NSX-V services need to be migrated to NSX-T. If you are using any of the services listed below, here is a step-by-step guide on how to migrate them.
As a first step, please read our documentation on Getting Started with NSX.
Step 4.8.1 NSX Distributed Firewall
The NSX distributed firewall automatically protects the entire vDC. It is crucial to understand that objects referenced in the Distributed Firewall correspond to local object IDs. For example, if a vRack VLAN portgroup is used in a rule in the Distributed Firewall, the rule references the portgroup from the original vDC only, not a recreated vRack portgroup in the destination vDC.
You will therefore need to verify whether the Distributed Firewall contains locally significant objects and modify it so that it also covers the objects in the new vDC. For example, a rule that uses a vRack VLAN portgroup from the original vDC can be modified to use both the original vRack VLAN portgroup and the new vRack VLAN portgroup in the destination vDC.
The objects to be considered are:
- Clusters
- Datacenters
- Distributed Port Groups
- Legacy Port Groups
- Resource Pool
- vApp
For further information on the Distributed Firewall, refer to our guide Distributed Firewall Management in NSX.
Step 4.8.2 NSX Distributed Logical Router
The NSX-V Distributed Logical Router does not have a direct equivalent in NSX. To migrate away from the Distributed Logical Router, routing should be done directly on the T1 Gateways.
Step 4.8.3 NSX Edges
It would be beneficial to step back and review the implemented network architecture to better align with the requirements of the new NSX product. For each edge in NSX-V, you should create a T1 Gateway in NSX-T.
Also, if your production requires zero service interruptions, solutions can be implemented to avoid these disruptions.
Step 4.8.3.1 Create the T1 and Segments
To create T1 Gateways, follow this documentation: Adding a New Tier-1 Gateway. This documentation also guides you on how to create segments. Before creating segments, it is necessary to inventory the VXLANs used in the source vDC and create a segment for each VXLAN used in your infrastructure.
Afterward, you can connect them to the provided T0 in NSX.
For further information on segments, this documentation can be helpful: Segment Management in NSX.
It is also possible to create VLAN-type segments and connect them to vRack via the ovh-tz-vrack Transport Zone. Then, either at the T1 level or at the T0 level with Service-type interfaces, VLAN segments should be positioned to establish connectivity with vRack via NSX.
Step 4.8.3.2 DHCP
To recreate DHCP and associate them with your segments and T1 Gateways, please see the DHCP Configuration in NSX guide.
Step 4.8.3.3 DNS
To recreate DNS and associate them with your T1 Gateways, see the Configuring DNS Forwarder in NSX guide.
Step 4.8.3.4 NAT Rules
To recreate your NAT rules and associate them with your T1 Gateways, see the Configuring NAT Redirection guide.
Step 4.8.3.5 NSX Load Balancing
To recreate your Load Balancers, follow the instructions in our Load Balancing Configuration guide.
Step 4.8.3.6 Firewall for T0 T1 Gateways
To recreate the firewall rules associated with your previous edges, see our Gateway Firewall Management in NSX guide.
Step 4.8.3.7.1 IPsec
To recreate your IPsec sessions, here is the documentation: How to Set Up an IPsec Tunnel with NSX.
Step 4.8.3.7.2 SSL VPN
If you are using the SSL VPN functionality, unfortunately this feature no longer exists in NSX. However, open-source alternatives are available, such as OpenVPN, OpenConnect VPN, and WireGuard. These network appliances need to be deployed on dedicated VMs hosted in your Hosted Private Cloud, and the corresponding client must be installed on employees' workstations to restore their VPN access.
Step 4.8.3.8 Reconfiguration of the Initial IP Block
For this step, you will need two elements:
- The IP block initially associated with the NSX-V vDC.
- The public IP of the VIP associated with the NSX-T0 (visible in [Networking] => [Tier-0 Gateways] => [ovh-T0-XXXX] => expand => [HA VIP Configuration] => click on [1] => [IP Address / Mask] section)
Next, in the OVHcloud Control Panel, follow the instructions in our How to Move an Additional IP guide to move the initial NSX-V block to the PCC service you are migrating, specifying the VIP IP of the T0 as the "next hop".
Step 4.9 Extend Zerto Disaster Recovery Protection (if relevant)
Zerto Replication is configured at the vDC level. To protect the workload on the new vDC, a few actions are required.
Prerequisites:
- Having a new vDC
- In the new vDC, have a host cluster with the required number of hosts (same as the source cluster with a minimum of 2 hosts)
- In the new vDC, have a datastore accessible from all the hosts in the cluster
- Having Zerto Replication enabled on the new vDC
Run the OVHcloud API to prepare the migration:
POST /dedicatedCloud/{serviceName}/datacenter/{datacenterId}/disasterRecovery/zerto/startMigration
{datacenterId} is the new vDC id; you can get it with the following API call:
GET /dedicatedCloud/{serviceName}/datacenter
A task is launched on the infrastructure to deploy vRA on each of the hosts of the new vDC.
After this, the Zerto Replication will work on both data centers:
- The old one is still running and protects your workload
- The new one is ready to host your workload
The next step depends on the current configuration per Virtual Protection Group:
- Source of replication
- Destination of replication
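The API sequence above can be sketched as follows. The `client` follows the python-ovh `get()`/`post()` style; the service name is a placeholder, and picking the highest datacenter id as "the new vDC" is an assumption made only for illustration (verify the id yourself against the GET result).

```python
# Sketch: find the new vDC id from the datacenter list, then start the
# Zerto migration preparation task. `client` is any object exposing
# get()/post() in the python-ovh style; names/IDs are placeholders.
# Assumption: the most recently added vDC has the highest id.
def start_zerto_migration(client, service_name):
    """Deploy vRAs on the new vDC's hosts by starting the migration task."""
    new_dc = max(client.get(f"/dedicatedCloud/{service_name}/datacenter"))
    return client.post(
        f"/dedicatedCloud/{service_name}/datacenter/{new_dc}"
        f"/disasterRecovery/zerto/startMigration"
    )
```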
Step 4.9.1 VPG as Source
With the migration to the new vDC, Zerto will continue to protect the workload with vRA deployed on the target cluster and hosts.
Step 4.9.2 VPG as Destination
Unfortunately, there is no way to update the VPG configuration; the only option is to delete the VPG and create a new one.
Step 5 Migrate your workload
Step 5.1 Storage Motion
You now have old datastores in the previous vDC (not compatible with the new ranges) and global datastores (either previously compatible ones or new ones). You can use Storage Motion to move a virtual machine (VM) and its disk files from one datastore to another while the virtual machine is running.
Step 5.2 vMotion
Since both source and destination vDC are within the same vCenter, hot or cold VMware vMotion can be used to migrate VMs.
Hot vMotion can be used when the CPU chipset is the same between source and destination (e.g. Intel to Intel).
Cold vMotion must be used when the CPU chipset differs between source and destination (e.g. AMD to Intel).
Here is a checklist of aspects to take into account:
- ESXi host CPU chipsets on source and destination vDCs
- EVC modes on source and destination Clusters
- vDS versions are the same between source and destination vDC. You can upgrade the vDS vRack in the source vDC. For the vDS with VM Network (VXLAN vDS), please contact support so that the vDS can be upgraded.
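The chipset rule above can be expressed as a tiny helper (a sketch only; the vendor strings are illustrative, and EVC mode compatibility must still be checked separately):

```python
# Sketch: decide the vMotion type from the CPU vendor on each side.
# Hot vMotion needs the same CPU vendor on both sides; otherwise the VM
# must be powered off and moved with cold vMotion.
def vmotion_mode(source_vendor, destination_vendor):
    """Return "hot" when vendors match, "cold" otherwise."""
    return "hot" if source_vendor == destination_vendor else "cold"
```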
Step 6 Finalize your migration
Step 6.1 Reconfigure Veeam Managed Backup (if relevant)
If OVHcloud-provided Veeam is currently used to back up VMs on the source vDC, you will need to use the OVHcloud API to re-check backup jobs after migrating the VMs to the destination vDC.
Here is how to proceed:
1. Enable the Veeam Managed Backup option on the new vDC from the OVHcloud Control Panel.
2. Migrate the virtual machines from the source vDC to the destination vDC.
3. Run the OVHcloud API to re-check the backup jobs:
POST /dedicatedCloud/{serviceName}/datacenter/{datacenterId}/checkBackupJobs
{datacenterId} is the old vDC id; you can get it with the following API call:
GET /dedicatedCloud/{serviceName}/datacenter
4. If you have migrated only part of the virtual machines whose backups are enabled, you can repeat steps 2 and 3 to transfer their backup jobs to the new vDC.
Before you continue, you can check visually, in the graphic Backup Management plug-in on the new vDC, that the backup jobs are present and active. You can then disable Veeam Backup on the old vDC. You can do this via the following API call:
POST /dedicatedCloud/{serviceName}/datacenter/{datacenterId}/backup/disable
Use the API call “checkBackupJobs” (mentioned in Step 3 above) several times to ensure you have backups on the new vDC.
If you have any doubts, contact OVHcloud support to monitor backup jobs.
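The two calls above can be sketched together. The `client` follows the python-ovh `post()` style and the IDs are placeholders; only disable Veeam on the old vDC after confirming in the Backup Management plug-in that the jobs are active on the new one.

```python
# Sketch: re-check backup jobs after migrating VMs, then disable Veeam
# on the old vDC once the jobs appear on the new one. `client` is any
# object exposing post() in the python-ovh style; IDs are placeholders.
def recheck_backup_jobs(client, service_name, old_datacenter_id):
    """Ask OVHcloud to re-check backup jobs; returns task information."""
    return client.post(
        f"/dedicatedCloud/{service_name}/datacenter/{old_datacenter_id}"
        f"/checkBackupJobs"
    )

def disable_old_backup(client, service_name, old_datacenter_id):
    """Disable Veeam Managed Backup on the old vDC; returns task information."""
    return client.post(
        f"/dedicatedCloud/{service_name}/datacenter/{old_datacenter_id}"
        f"/backup/disable"
    )
```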
Step 6.2 Reconfigure Zerto Disaster Recovery (if relevant)
Run the OVHcloud API to finalize the migration:
POST /dedicatedCloud/{serviceName}/datacenter/{datacenterId}/disasterRecovery/zerto/endMigration
{datacenterId} is the new vDC id; you can get it with the following API call:
GET /dedicatedCloud/{serviceName}/datacenter
A task is launched to:
- Check whether any destination VPGs still exist on the datacenter: they MUST be removed.
- Switch the Zerto Replication option (subscription) from the old to the new vDC.
- Remove all vRA from hosts on the old vDC.
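The finalization call above can be sketched as a helper. The `client` follows the python-ovh `post()` style and the IDs are placeholders; remember that any destination VPGs on the old vDC must be removed before running it.

```python
# Sketch: finalize the Zerto migration by switching the subscription to
# the new vDC and removing the vRAs from the old one. `client` is any
# object exposing post() in the python-ovh style; IDs are placeholders.
def end_zerto_migration(client, service_name, new_datacenter_id):
    """Run the endMigration task on the new vDC; returns task information."""
    return client.post(
        f"/dedicatedCloud/{service_name}/datacenter/{new_datacenter_id}"
        f"/disasterRecovery/zerto/endMigration"
    )
```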
Step 6.3 Recreate Affinity rules
Affinity rules are based on VM objects, so rules can only be created after VMs have been migrated to the destination vDC. Once the migration is complete, the affinity rules can be reapplied on the destination vDC.
Step 6.4 Reconfigure the Private Gateway (if relevant)
To "move" the Private Gateway to the destination vDC, you must first disable it by following the steps to disable the private gateway.
Then enable it again by following the steps to enable the private gateway, choosing the datacenterId of the new vDC.
Step 6.5 Put hosts in maintenance mode
You must put hosts in maintenance mode by following these steps:
- In the vSphere Client, navigate to Hosts and Clusters.
- Navigate to a host.
- Right-click the host.
- Navigate to Maintenance Mode.
- Click Enter Maintenance Mode.
Repeat the action for each host.
Step 6.6 Remove old environment
At this point, you will need to reach out to your Account Team to open a support ticket for your old environment to be removed.
Go further
For more information and tutorials, please see our other Hosted Private Cloud support guides, our NSX-T support guides, or explore the guides for other OVHcloud products and services.