Learn how to move virtual machines (VMs) from your original datacenter (vDC) (PREMIER or SDDC) to a new destination vDC (VMware on OVHcloud).
In 2023, OVHcloud launched four new ranges:
- vSphere: OVHcloud Managed VMware vSphere is our most accessible solution for infrastructure migration, application migration, datacenter extension, or disaster recovery plan needs (with Veeam or Zerto solutions available as an additional option).
- Hyperconverged Storage (vSAN): The Hyperconverged Storage solution meets your needs for ultra-powerful storage. Equipped with NVMe SSDs, our servers have been specially designed to accommodate even the most demanding applications. With VMware vSAN, you can manage your storage in a scalable way, just as you would in your own datacenter.
- Network Security Virtualization (NSX): The Network Security solution is based on VMware NSX (NSX-T) network and security virtualization software. You can manage your security rules, operations, and automation continuously across your different cloud environments. NSX secures your software, whether it is hosted on virtual machines or in containers, and reduces the threat of ransomware thanks to micro-segmentation.
- Software-Defined Datacenter (NSX & vSAN): The Software-Defined Datacenter solution includes hyperconverged storage (vSAN) and network and security virtualization (NSX-T) features. You get an optimal cloud environment for migrating and modernizing your most critical applications.
You can now upgrade from pre-2020 commercial ranges to the new ranges while keeping the same VMware infrastructure (pcc-123-123-123-123) using Storage Motion and vMotion.
There are two aspects involved in this process:
- The OVHcloud infrastructure itself, which includes the customer's side of administering an infrastructure.
- The VMware infrastructure, which includes the entire VMware ecosystem.
Requirements
- A PCC infrastructure
- Access to the OVHcloud Control Panel (VMware in the Hosted Private Cloud section)
- Access to the NSX Manager
- Access to the vSphere Control Panel
Instructions
This guide will use the terms source (or original) vDC and destination (or new) vDC. Below is an index of the tasks you will be performing:
Step 1 Design your infrastructure
Step 1.1 Choose between the different VMware on OVHcloud ranges
Step 1.2 Select your hosts (compute)
Step 1.3 Select your datastores (storage)
Step 2 Build your new infrastructure
Step 2.1 Add a new destination vDC
Step 2.2 Add new hosts and datastores
Step 2.3 Convert a datastore to a global datastore
Step 3 Prepare your destination vDC in the OVHcloud context
Step 3.1 Check inherited characteristics (Certifications, KMS, access restrictions)
Step 3.1.1 Certifications
Step 3.1.2 Key Management Server (KMS)
Step 3.1.3 Access restrictions
Step 3.2 Manage user rights
Step 3.3 Activate Veeam Managed Backup & Zerto Disaster Recovery Options
Step 3.4 Check your network (vRack, Public IP)
Step 4 Prepare your destination vDC in the VMware context
Step 4.1 Reconfigure VMware High Availability (HA)
Step 4.2 Reconfigure VMware Distributed Resource Scheduler (DRS)
Step 4.3 Rebuild resource pools
Step 4.4 Recreate Datastore Clusters (if relevant)
Step 4.5 Enable vSAN (if relevant)
Step 4.6 Recreate vSphere networking
Step 4.7 Check inventory organization (if relevant)
Step 4.8 Migrate NSX-V to NSX (if relevant)
Step 4.8.1 NSX Distributed Firewall
Step 4.8.2 NSX Distributed Logical Router
Step 4.8.3 NSX Edges
Step 4.8.3.1 Create the T1 and Segments
Step 4.8.3.2 DHCP
Step 4.8.3.3 DNS
Step 4.8.3.4 NAT Rules
Step 4.8.3.5 NSX Load Balancing
Step 4.8.3.6 Firewall for T0 T1 Gateways
Step 4.8.3.7.1 IPsec
Step 4.8.3.7.2 SSL VPN
Step 4.8.3.8 Reconfiguration of the Initial IP Block
Step 4.9 Extend Zerto Disaster Recovery Protection (if relevant)
Step 5 Migrate your workload
Step 5.1 Storage Motion
Step 5.2 vMotion
Step 6 Finalize your migration
Step 6.1 Reconfigure Veeam Managed Backup (if relevant)
Step 6.2 Reconfigure Zerto Disaster Recovery (if relevant)
Step 6.3 Recreate Affinity rules
Step 6.4 Reconfigure the Private Gateway (if relevant)
Step 6.5 Remove the old environment
Step 7 Recreate an advanced NSX-V architecture on NSX
Step 1 Design your infrastructure
At the end of Step 1, you should have a clear view of which commercial range you want to upgrade to, as well as which hosts and storage you want to use.
Step 1.1 Choose between different ranges
As a Hosted Private Cloud VMware customer with a pre-2020 host, you want to upgrade to VMware on OVHcloud.
Here are a few guidelines:
- If you are using or you plan to use NSX, you must upgrade to Network Security Virtualization or Software-Defined Datacenter.
- If you are not using NSX on your current infrastructure and you don't need certifications, you can choose vSphere.
- Veeam Managed Backup and Zerto Disaster Recovery options are available.
- The OVHcloud VMware infrastructure can also assist in meeting certification needs.
Step 1.2 Select your hosts (compute)
You have now chosen your commercial range.
Please note that this choice is not definitive; for example, you can start with 2 hosts of 96 GB RAM and later switch to 3 hosts of 192 GB RAM.
Step 1.3 Select your datastores (storage)
You have now chosen your commercial range and hosts. Please note that some of your current datastores may be compatible with the newer ranges, meaning that those datastores can be made global. A global datastore is a datastore mounted on all clusters/vDCs within a VMware infrastructure, i.e. shared between the source vDC and the destination vDC. Run the OVHcloud API to check datastore compatibility:
GET /dedicatedCloud/{serviceName}/datacenter/{datacenterId}/filer/{filerId}/checkGlobalCompatible
Expected return: boolean
If the API returns TRUE, the datastore is compatible with the newer ranges and you can keep it. If the API returns FALSE, the datastore is not compatible and you will need to order new VMware on OVHcloud datastores to replace it.
Based on your needs in terms of storage capacity, you can choose the type and number of datastores to order.
You only need to replace the datastores that are not compatible. You will be able to release the datastores that are not compatible at the end of the process.
Please note that this choice is not final; for example, you can start with 4 x 3 TB datastores and move to 2 x 6 TB later.
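If you want to script this compatibility check, here is a minimal sketch using the official OVHcloud Python SDK (the ovh package); the service name and vDC id are placeholders for your own infrastructure:

```python
import ovh

# Credentials are read from ovh.conf or environment variables;
# see the SDK documentation (github.com/ovh/python-ovh).
client = ovh.Client(endpoint="ovh-eu")

service_name = "pcc-123-123-123-123"  # placeholder: your VMware infrastructure
datacenter_id = 1234                  # placeholder: the source vDC id

# List every filer (datastore) of the source vDC, then check each one.
for filer_id in client.get(
    f"/dedicatedCloud/{service_name}/datacenter/{datacenter_id}/filer"
):
    compatible = client.get(
        f"/dedicatedCloud/{service_name}/datacenter/{datacenter_id}"
        f"/filer/{filer_id}/checkGlobalCompatible"
    )
    status = "can be made global" if compatible else "must be replaced"
    print(f"Filer {filer_id}: {status}")
```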
Step 2 Build your new infrastructure
At the end of Step 2, you should have, within your existing VMware infrastructure (pcc-123-123-123-123), a new destination vDC with new-generation hosts and global datastores.
Step 2.1 Add a new destination vDC
You can add a destination vDC by following these steps:
- From the OVHcloud Control Panel, select the Hosted Private Cloud tab at the top of the screen.
- From the left-hand navigation bar, under the VMware heading, select your environment.
- Click the Datacenters tab.
- Select NSX, then click Add a datacenter.
Step 2.2 Add new hosts and datastores
In the OVHcloud Control Panel, you will see your new vDC attached to your existing service. You can order new resources (selected in Step 1) in the new destination vDC. You can see billing information for OVHcloud products and services in this guide.
Step 2.3 Convert a datastore to a global datastore
You now have new datastores in the new destination vDC, as well as compatible datastores in your previous vDC. You can convert those datastores to global.
Run the OVHcloud API to convert the datastore to global:
POST /dedicatedCloud/{serviceName}/datacenter/{datacenterId}/filer/{filerId}/convertToGlobal
Expected return: Task information
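As an illustration, here is the same call in a minimal Python sketch (same SDK and placeholder values as in Step 1.3):

```python
import ovh

client = ovh.Client(endpoint="ovh-eu")

service_name = "pcc-123-123-123-123"  # placeholder: your VMware infrastructure
datacenter_id = 1234                  # placeholder: the vDC hosting the filer
filer_id = 42                         # placeholder: a filer reported compatible in Step 1.3

# Start the conversion; the API returns the task that tracks it.
task = client.post(
    f"/dedicatedCloud/{service_name}/datacenter/{datacenter_id}"
    f"/filer/{filer_id}/convertToGlobal"
)
print(task)  # task information (id, state, progress)
```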
Step 3 Prepare your destination vDC in the OVHcloud context
Step 3.1 Check inherited characteristics (Certifications, KMS, access restrictions)
Step 3.1.1 Certifications
These options are enabled at the VMware infrastructure level and apply to any vDC. If an option has been enabled, it remains available on the destination vDC.
Step 3.1.2 Key Management Server (KMS)
This option is enabled and configured per vCenter and applies to any vDC. If virtual machines are protected by encryption, they remain protected on the destination vDC.
Step 3.1.3 Access restrictions
For connections to the VMware platform, you can choose to block access to vSphere by default. Please refer to our guide on the vCenter access policy for details.
If the access policy has been changed to "Restricted," the new vDC will inherit the access policy used by the source vDC.
Step 3.2 Manage user rights
In the lifecycle of the source vDC, a list of users may have been created for business or organizational needs. These users will also be present on the new vDC but will have no permissions on this new vDC. You must therefore assign the users the appropriate rights, depending on the configuration of the destination vDC.
To do this, please refer to our guides on changing user rights, changing the user password, and associating an email with a vSphere user.
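If you prefer to review the existing accounts by script, the API also exposes user management under /dedicatedCloud/{serviceName}/user. The routes used in the sketch below are an assumption based on the public API schema; verify them in the API console before relying on them:

```python
import ovh

client = ovh.Client(endpoint="ovh-eu")
service_name = "pcc-123-123-123-123"  # placeholder

# Assumed routes: /dedicatedCloud/{serviceName}/user and .../user/{userId}/right.
for user_id in client.get(f"/dedicatedCloud/{service_name}/user"):
    user = client.get(f"/dedicatedCloud/{service_name}/user/{user_id}")
    right_ids = client.get(f"/dedicatedCloud/{service_name}/user/{user_id}/right")
    print(user, right_ids)
```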
Step 3.3 Activate Veeam Managed Backup & Zerto Disaster Recovery Options
These options are enabled and configured per vDC. You need to enable the relevant options on the new vDC; see Step 6.1 (Veeam) or Step 6.2 (Zerto) for instructions.
Step 3.4 Check your network (vRack, Public IP)
Step 3.4.1 vRack
As part of a migration process, the new vDC will (by default) be linked to the same vRack as the source vDC. Please consult our guide to Using Private Cloud within a vRack.
Step 3.4.2 Public network
The public IP addresses attached to the source vDC do not automatically route to the new destination vDC. See Step 4.8.3.8 for more details.
Step 4 Prepare your destination vDC in the VMware context
Step 4.1 Reconfigure VMware High Availability (HA)
Setting up a new vDC involves reconfiguring VMware High Availability (HA), including boot order and priority. Please consult our guide about VMware HA configuration.
Here is a checklist of aspects to take into account:
- Host monitoring settings
- VM monitoring settings
- Admission control
- Advanced HA options
- VM Overrides
Step 4.2 Reconfigure VMware Distributed Resource Scheduler (DRS)
Setting up a new vDC involves reconfiguring the VMware Distributed Resource Scheduler (DRS) feature, in particular the affinity or anti-affinity rules for groups of hosts and VMs. Please consult our guide about configuring VMware DRS.
Here is a checklist of aspects to take into account:
- Automation level
- VM/Hosts Groups
- VM/Host affinity/anti-affinity rules
- VM Overrides
Step 4.3 Rebuild resource pools
Setting up a new vDC requires rebuilding resource pools, including reservations and shares. This also applies to vApps and any start-up order configured within them.
For more information, consult VMware's documentation for managing resource pools.
Here is a checklist of aspects to take into account:
- CPU/Memory shares settings
- CPU/Memory reservations
- Scalable CPU/Memory option
- CPU/Memory Limits
Step 4.4 Recreate Datastore Clusters (if relevant)
If Datastore Clusters are present in the source vDC, setting up the new vDC may require recreating them on the destination vDC if the same level of structure and Storage DRS (SDRS) is needed.
Here is a checklist of aspects to take into account:
- SDRS automation level
- SDRS space, I/O, rule, policy, VM evacuation settings
- SDRS affinity/anti-affinity rules
- SDRS VM Overrides
Step 4.5 Enable vSAN (if relevant)
If vSAN was enabled on your source vDC, you will need to enable it again on the destination vDC. Please refer to our guide on using VMware Hyperconvergence with vSAN.
Step 4.6 Recreate your vSphere networking
Setting up a new vDC involves recreating the vRack VLAN-backed port groups on the destination vDC to ensure VM network consistency. If vRack VLANs are in use on the source vDC, vRack can be used to stretch the L2 domain to the destination vDC to allow for a more phased migration plan. For more information consult our guide about Using Hosted Private Cloud within a vRack.
Here is a checklist of aspects to take into account:
- Portgroup VLAN type
- Security settings (important in case promiscuous mode is needed)
- Teaming and Failover settings
- Customer network resource allocation
For more information, consult OVHcloud's guide on How to create a V(x)LAN within a vRack and VMware's documentation on how to edit general distributed port group settings.
- Some virtual routing appliances such as pfSense use CARP to provide high availability.
- VMs that use CARP will need “Promiscuous Mode” enabled in the security settings of a portgroup.
- Customers can enable this setting themselves on the vRack vDS on the destination vDC.
- However, if promiscuous mode needs to be enabled on the “VM Network” portgroup in the new vDC, please open a ticket with OVHcloud support before migration to ensure connectivity remains during migration.
Step 4.7 Check inventory organization (if relevant)
For organizational reasons, the VMs, hosts, or datastores may have been placed in directories.
If you still need this organization, you will need to create it again in the destination vDC.
Step 4.8 Migrate NSX-V to NSX (if relevant)
As part of an NSX-V to NSX-T migration, several NSX-V services need to be migrated to NSX-T. If you are using any of the services listed below, here is a step-by-step guide on how to migrate them.
As a first step, please read our documentation on Getting Started with NSX.
Step 4.8.1 NSX Distributed Firewall
The NSX Distributed Firewall automatically protects the entire vDC. It is crucial to understand that objects referenced in Distributed Firewall rules are identified by IDs that are only significant locally. For example, if a vRack VLAN portgroup is used in a Distributed Firewall rule, the rule references the portgroup from the original vDC only, not a recreated vRack portgroup in the destination vDC.
You will therefore need to check whether the Distributed Firewall references such locally significant objects and modify the rules so that they also cover the objects in the new vDC. For example, a rule that uses a vRack VLAN portgroup from the original vDC can be modified to use both the original portgroup and the new vRack VLAN portgroup in the destination vDC.
The objects to be considered are:
- Clusters
- Datacenters
- Distributed Port Groups
- Legacy Port Groups
- Resource Pool
- vApp
For further information on the Distributed Firewall, refer to our guide Distributed Firewall Management in NSX.
Step 4.8.2 NSX Distributed Logical Router
The NSX-V Distributed Logical Router does not have a direct equivalent in NSX. To migrate away from the Distributed Logical Router, routing should be done directly on the T1 Gateways.
Step 4.8.3 NSX Edges
Take the opportunity to step back and review the implemented network architecture so that it better aligns with the requirements of the new NSX product. For each Edge in NSX-V, you should create a T1 Gateway in NSX-T.
Also, if your production requires zero service interruption, solutions can be implemented to avoid disruptions.
Step 4.8.3.1 Create the T1 and Segments
To create T1 Gateways, follow this documentation: Adding a New Tier-1 Gateway. This documentation also guides you on how to create segments. Before creating segments, it is necessary to inventory the VXLANs used in the source vDC and create a segment for each VXLAN used in your infrastructure.
Afterward, you can connect them to the provided T0 in NSX.
For further information on segments, this documentation can be helpful: Segment Management in NSX.
It is also possible to create VLAN-type segments and connect them to the vRack via the ovh-tz-vrack Transport Zone. The VLAN segments can then be attached either at the T1 level or at the T0 level (with Service-type interfaces) to establish connectivity with the vRack through NSX.
Step 4.8.3.2 DHCP
To recreate DHCP and associate them with your segments and T1 Gateways, please see the DHCP Configuration in NSX guide.
Step 4.8.3.3 DNS
To recreate DNS and associate them with your T1 Gateways, see the Configuring DNS Forwarder in NSX guide.
Step 4.8.3.4 NAT Rules
To recreate your NAT rules and associate them with your T1 Gateways, see the Configuring NAT Redirection guide.
Step 4.8.3.5 NSX Load Balancing
To recreate your Load Balancers, follow the instructions in our Load Balancing Configuration guide.
Step 4.8.3.6 Firewall for T0 T1 Gateways
To recreate the firewall rules associated with your previous edges, see our Gateway Firewall Management in NSX guide.
Step 4.8.3.7.1 IPsec
To recreate your IPsec sessions, here is the documentation: How to Set Up an IPsec Tunnel with NSX.
Step 4.8.3.7.2 SSL VPN
If you are using the SSL VPN functionality, unfortunately this feature no longer exists in NSX. However, open-source alternatives are available, such as OpenVPN, OpenConnect VPN, and WireGuard. These network appliances need to be deployed on dedicated VMs hosted in your Hosted Private Cloud, and the corresponding VPN client must be installed on each employee's workstation to restore VPN access.
Step 4.8.3.8 Reconfiguration of the Initial IP Block
For this step, you will need two elements:
- The IP block initially associated with the NSX-V vDC.
- The public IP of the VIP associated with the NSX-T0 (visible in [Networking] => [Tier-0 Gateways] => [ovh-T0-XXXX] => expand => [HA VIP Configuration] => click on [1] => [IP Address / Mask] section)
Next, in the OVHcloud Control Panel, follow the instructions in our How to Move an Additional IP guide to move the initial NSX-V block to the PCC service you are migrating, specifying the VIP IP of the T0 as the "next hop".
Step 4.9 Extend Zerto Disaster Recovery Protection (if relevant)
Zerto Replication is configured at the vDC level. To protect the workload on the new vDC, a few actions are required.
Prerequisites:
- A new vDC
- In the new vDC, a host cluster with the required number of hosts (same as the source cluster, with a minimum of 2 hosts)
- In the new vDC, a datastore accessible from these hosts
- Zerto Replication enabled on the new vDC
Run the OVHcloud API to prepare the migration:
POST /dedicatedCloud/{serviceName}/datacenter/{datacenterId}/disasterRecovery/zerto/startMigration
{datacenterId} is the id of the new vDC; you can get it with the following API call:
GET /dedicatedCloud/{serviceName}/datacenter
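For illustration, here are both calls chained in a minimal Python sketch (official OVHcloud SDK, placeholder values):

```python
import ovh

client = ovh.Client(endpoint="ovh-eu")
service_name = "pcc-123-123-123-123"  # placeholder

# List the datacenter ids of the infrastructure to find the new vDC id.
print(client.get(f"/dedicatedCloud/{service_name}/datacenter"))

new_vdc_id = 5678  # placeholder: the destination vDC id picked from the list above

# Prepare the Zerto migration on the destination vDC.
task = client.post(
    f"/dedicatedCloud/{service_name}/datacenter/{new_vdc_id}"
    "/disasterRecovery/zerto/startMigration"
)
print(task)
```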
A task is launched on the infrastructure to deploy a vRA (Zerto Virtual Replication Appliance) on each host of the new vDC.
After this, Zerto Replication will work on both datacenters:
- The old one is still running and protects your workload
- The new one is ready to host your workload
The next step depends on the current configuration per Virtual Protection Group:
- Source of replication
- Destination of replication
Step 4.9.1 VPG as Source
With the migration to the new vDC, Zerto will continue to protect the workload with vRA deployed on the target cluster and hosts.
Step 4.9.2 VPG as Destination
Unfortunately, there is no way to update the VPG configuration; the only option is to delete the VPG and create a new one.
Step 5 Migrate your workload
Step 5.1 Storage Motion
You now have old datastores in the previous vDC (not compatible with the new ranges) and global datastores (either previously compatible ones or new ones). You can use Storage Motion to move a virtual machine (VM) and its disk files from one datastore to another while the virtual machine is running.
Step 5.2 vMotion
Since both source and destination vDC are within the same vCenter, hot or cold VMware vMotion can be used to migrate VMs.
Hot vMotion can be used when the CPU chipset is the same between source and destination (e.g. Intel to Intel).
Cold vMotion must be used when the CPU chipset is different between source and destination (e.g. AMD to Intel).
Here is a checklist of aspects to take into account:
- ESXi host CPU chipsets on source and destination vDCs
- EVC modes on source and destination Clusters
- vDS versions must be the same between source and destination vDC. You can upgrade the vRack vDS in the source vDC yourself. For the vDS carrying VM Network (the VXLAN vDS), please contact support so that the vDS can be upgraded.
Step 6 Finalize your migration
Step 6.1 Reconfigure Veeam Managed Backup (if relevant)
If OVHcloud-provided Veeam is currently used to back up VMs on the source vDC, it will be necessary to use the OVHcloud API to re-check the backup jobs after migrating the VMs to the destination vDC.
Here is how to proceed:
1. Enable the Veeam Managed Backup option on the new vDC from the OVHcloud Control Panel.
2. Migrate the virtual machines from the source vDC to the destination vDC.
3. Run the OVHcloud API to re-check the backup jobs:
POST /dedicatedCloud/{serviceName}/datacenter/{datacenterId}/checkBackupJobs
{datacenterId} is the id of the old vDC; you can get it with the following API call:
GET /dedicatedCloud/{serviceName}/datacenter
4. If you have migrated only part of the virtual machines whose backups are enabled, you can repeat Steps 2 and 3 to transfer their backup jobs to the new vDC.
Before you continue, you can visually check in the Backup Management plugin on the new vDC that the backup jobs are present and active. You can then disable Veeam Backup on the old vDC via the following API call:
POST /dedicatedCloud/{serviceName}/datacenter/{datacenterId}/backup/disable
Use the "checkBackupJobs" API call (mentioned in Step 3 above) several times to ensure you have backups on the new vDC.
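A minimal sketch combining the two calls above (placeholder values, same SDK as the earlier examples):

```python
import ovh

client = ovh.Client(endpoint="ovh-eu")
service_name = "pcc-123-123-123-123"  # placeholder
old_vdc_id = 1234                     # placeholder: the source vDC id

# Re-run the backup job check; repeat after each batch of migrated VMs.
client.post(f"/dedicatedCloud/{service_name}/datacenter/{old_vdc_id}/checkBackupJobs")

# Once all jobs are confirmed on the new vDC, disable Veeam on the old one.
client.post(f"/dedicatedCloud/{service_name}/datacenter/{old_vdc_id}/backup/disable")
```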
If you have any doubts, contact OVHcloud support to monitor backup jobs.
Step 6.2 Reconfigure Zerto Disaster Recovery (if relevant)
Run the OVHcloud API to finalize the migration:
POST /dedicatedCloud/{serviceName}/datacenter/{datacenterId}/disasterRecovery/zerto/endMigration
{datacenterId} is the id of the new vDC; you can get it with the following API call:
GET /dedicatedCloud/{serviceName}/datacenter
A task is launched to:
- Check whether any destination VPGs still exist on the datacenter: they MUST be removed.
- Switch the Zerto Replication option (subscription) from the old to the new vDC.
- Remove all vRA from hosts on the old vDC.
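To script the finalization, here is a minimal sketch (same assumptions and placeholders as the startMigration example in Step 4.9):

```python
import ovh

client = ovh.Client(endpoint="ovh-eu")
service_name = "pcc-123-123-123-123"  # placeholder
new_vdc_id = 5678                     # placeholder: the destination vDC id

# Finalize: moves the Zerto option to the new vDC and cleans up the old vRAs.
task = client.post(
    f"/dedicatedCloud/{service_name}/datacenter/{new_vdc_id}"
    "/disasterRecovery/zerto/endMigration"
)
print(task)
```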
Step 6.3 Recreate Affinity rules
Affinity rules are based on VM objects, so rules can only be created after VMs have been migrated to the destination vDC. Once the migration is complete, the affinity rules can be reapplied on the destination vDC.
Step 6.4 Reconfigure the Private Gateway (if relevant)
To "move" Private Gateway to destination vDC, you must first disable it by following the steps to disable the private gateway.
Then enable it again by following the steps in enable the private gateway and choose the datacentreId of the new vDC.
Step 6.5 Remove the old environment
Preparation
When you are ready to have resources removed, please follow the procedure below.
- Log into vCenter.
- Turn DRS off on the old Virtual Datacenter.
- Turn off NSX-V edges through the NSX-V Networking and Security plugin.
Verify the NSX-T environment is still working as intended.
If it is, delete the NSX-V edges from the NSX-V Networking and Security plugin.
- If you are using OVHcloud managed Veeam, disable it in the OVHcloud Control Panel on the virtual datacenter that is being removed.
This will remove the backups that are being managed by OVHcloud managed Veeam. Be sure that the backups are either saved elsewhere or that they are not needed and can be deleted.
- Make note of the hosts and datastores that you want to return to OVHcloud.
- Put the host into maintenance mode.
To put a host in maintenance mode:
- In the vSphere Client, navigate to Hosts and Clusters.
- Navigate to a host.
- Right-click the host.
- Navigate to Maintenance Mode.
- Click Enter Maintenance Mode.
Remove datastores
- Ensure the datastores are empty of all files.
- Right-click on the datastore and scroll down to OVHcloud.
- Select Remove this storage... and allow the process to complete before continuing.
Remove hosts
- Right-click on a host that is in maintenance mode.
- Scroll down to OVHcloud.
- Select Remove this host... and allow the process to complete before continuing.
Remove vDC
From the OVHcloud Control Panel, remove the old vDC.
Park IP blocks
Park any IP blocks that are no longer being used in the OVHcloud API Console.
- Click Login in the upper-right corner.
- Navigate to the POST /ip/{ip}/park API call.
- Enter the IP block to be parked and click Execute.
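The same operation can be scripted; here is a minimal sketch (the IP block is a placeholder, and the block must be URL-encoded because it contains a slash):

```python
from urllib.parse import quote

import ovh

client = ovh.Client(endpoint="ovh-eu")

ip_block = "203.0.113.0/28"  # placeholder: an unused Additional IP block

# URL-encode the block so the "/" of the CIDR does not split the route.
task = client.post(f"/ip/{quote(ip_block, safe='')}/park")
print(task)
```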
Finally, inform OVHcloud support of the following:
- That the migration from NSX-V to NSX-T is complete.
- That you would like to have your billing updated.
- The number of datastores, hosts, vDCs, and IPs that you have removed.
Step 7 Recreate an advanced NSX-V architecture on NSX
You can find all the information on setting up an advanced NSX-V architecture on NSX by watching this video.
FAQ
Below is a list of frequently asked questions about vDC migration.
What are the impacts when sharing my datastores between my vDCs?
There is no impact on your production, billing, or ZFS snapshots. However, it is not currently possible to unshare a datastore; this will change in a future release.
Will the VMs (with public IPs) be accessible from the outside if they are in the new vDC while the pfSense appliances are in the old vDC?
Yes, the VM network is defined at the level of the VMware infrastructure and is therefore available on both vDCs.
Is it possible to set up a pfSense in the old vDC and another in the new vDC?
Yes, it is even necessary to have two different pfSense appliances to avoid IP conflicts.
Are the VXLANs available on both vDCs?
VXLANs are only available on Premier, not Essentials.
We do not use NSX. The migration procedure specifies that the source and destination vDS must have the same version. On the source, our only vDS is version 6.0.0, so I guess we have to update it. The documentation/video/interface indicates that we can do it ourselves without any downtime if it is the vRack vDS. I thought it was, but we cannot update it (the menu is grayed out). Does that mean it is VXLAN? How do I tell the difference between vRack and VXLAN?
If it is grayed out, it is probably the public DVS (vmnetwork)/VXLAN. The vRack DVS is a second DVS with the word "vrack" at the end of its name. Please open a support ticket so that we can confirm this with you and perform the DVS upgrade if required.
How do I know whether my network adapters are VLAN or VXLAN and compatible with Essentials? In vSphere, I see, for example, and without further details: vxw-dvs-74-virtualwire-20-sid-...
Anything whose name matches %-virtualwire-% is VXLAN.
If I have several VMs that pass through the same NSX Edge, will I need to migrate all of the VMs and the Edge at the same time? Otherwise, will some VMs lose their internet connection?
Yes, you will need to move the Edge with a redeployment before moving the VMs. Depending on the case (with or without wide area networks), the two actions can be separated.
Can we create a DRS pool for global datastores? I think I have already tried this, unsuccessfully, between a 2014 vDC and a 2016 vDC.
There are limitations for global datastores. We recommend only using them to migrate between the two vDCs, then having "standard" datastores on the new vDC and making the datastores global at the end of the migration.
We have an SDDC 2016 with 6 x 6 TB SSD Accelerated datastores (ordered in 2021) with "convert to global" available in the OVHcloud Control Panel. Can we convert them to global and keep them as they are in the new vDC (to avoid the Storage Motion phase)? Note: the 6 datastores are in a storage cluster.
Yes, if the VMs point to these datastores, there will be no Storage Motion step.
What are the limitations/differences in migration depending on the range you have chosen (Essentials or Premier)?
There are no differences between upgrading to Essentials or Premier. The only difference is in the steps linked to the NSX component. These steps are required for an upgrade to Premier and are not relevant for an upgrade to Essentials.
How long will it take to migrate (depending on the number of VMs)?
The speeds recorded for the Storage Motion step are between 0.5 and 1 TB per hour. For vMotion, the duration depends heavily on the size of the VM: on average less than a minute, and up to 3 minutes for VMs of several TB.
Which Microsoft licenses are available in SPLA mode?
Windows licenses (Standard and Datacenter editions) and SQL Server licenses (Standard and Web editions) are available on the 2020 solutions in SPLA mode.
I need to upgrade two VMware infrastructures, which are currently used as part of a Zerto DRP with data replication. Do I need to upgrade my secondary or primary infrastructure first?
There is no obligation, but we recommend upgrading the secondary infrastructure first so that you master the process before upgrading the primary infrastructure.
Will the historical cap on hourly resources still be deployed?
No, the hourly billing limit is disabled on the 2020 offers (Premier & Essentials). All older ranges will continue to work with the hourly billing limit in place.
Will the price of previous offers change?
No, there are no price changes planned for the old solutions.
In which language are OVHcloud Professional Services available?
OVHcloud Professional Services are available in English and French.
Can OVHcloud Professional Services recreate my NSX user accounts & configurations for me?
Our Professional Services do not carry out any operations on the customer's infrastructure. We are here to help, guide, and advise you. In this scenario, we will direct our customer to a partner who will be able to execute the operations in the customer infrastructure.
How do I know how many hours of Credits have been used and are still outstanding?
Your OVHcloud sales representative or technical referrer can provide you with this information.
What happens if the consulting session takes less time than expected?
A session is scheduled and counted in 1-hour blocks. For example, a session scheduled for 2 hours that takes 1.5 hours would be billed for 2 hours; a session scheduled for 3 hours that takes only 1.5 hours would also be charged at 2 hours.
Go further
For more information and tutorials, please see our other Hosted Private Cloud support guides, our NSX support guides, or explore the guides for other OVHcloud products and services.