Below are the frequently asked questions about NSX.
Topics:
Migration
What is the deadline for the NSX-T migration? What is the NSX-v End of Life date?
NSX-v End of Life is planned for January 15, 2024. The migration has to be performed before this date.
What is the last date on which you can request migration support?
NSX-v End of Life is planned for January 15, 2024, so the sooner you take action, the more time you will have to perform your migration.
What happens if we have not migrated before January 15, 2024?
OVHcloud will not cut the service but won't be able to guarantee any SLA. Customers will have to sign a document committing to leave NSX-v by a date set by OVHcloud.
VMware has decided on NSX-v End of Life in January 2024. Discussions with VMware about an extension of NSX-v support are still ongoing; OVHcloud will communicate an official statement as soon as possible.
How do we manage the IPs during the migration?
The migration guide explains how to move the existing IPs from your NSX-v platform and route them to the IP attached to the T0 Gateway.
When an NSX-T vDC is delivered, we deploy and configure a new IP block for the NSX T0 Gateway (VIP + NSX Edges). You will be able to reuse the IPs attached to the NSX-v vDC and point them to the new vDC.
For the vDC migration, the datastore has to be made global. Is a rollback possible on this configuration?
The global datastore is managed at the manager or API level.
Globalizing your datastore makes it visible from your new vDC, so you can perform a compute vMotion of the VMs instead of a storage vMotion.
In this case, a rollback is not possible. You would have to order a new datastore and use vMotion to free the global one.
Can the IP migration be done IP by IP or by block?
The IP migration is performed by block. You will change the next hop of the IP block, and the whole block is then routed to the NSX-T side.
Will this migration interrupt the service and, if so, for how long?
This will depend on the services you use. For example, if you are using IPsec tunnels and public IPs, you will have to move your workloads and reconfigure the IP block you had on your NSX-v infrastructure onto your NSX-T one.
During this IP move, a short service interruption can happen. Depending on your network topology, you can keep traffic flowing between workloads through the vRack while the NSX-v edges still handle external exposure: you move the machines to the second vDC, and their traffic keeps going through the vRack up to your previous NSX-v edges.
The downtime will thus depend on your environment's complexity.
Is it possible to have NSX-v and NSX-T on our PCC at the same time to perform tests?
It is possible if you order a new vDC to get NSX-T.
Please note that ordering the new vDC will automatically initiate the refund mechanism for the coming month.
How long will the migration take?
This will mainly depend on the discovery/assessment phase of the NSX-v infrastructure and the design phase on the NSX-T side.
The migration itself is short: it is essentially a reconfiguration of the VMware stack.
It is possible to create an extended network, a VPN between the two environments (VXLAN on NSX-v, segments on NSX-T), allowing you to move VMs from one vDC to the other and then perform the migration steps. This minimizes the service interruption during the transition.
You can also include Terraform in your NSX-T design, in order to push your Terraform configuration directly into the environment you just ordered, as in the sketch below.
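As an illustration, here is a minimal Terraform sketch using the community vmware/nsxt provider. Every value in it (endpoint, credentials, gateway and transport zone names, addressing) is a placeholder, not something taken from your environment:

```hcl
terraform {
  required_providers {
    nsxt = {
      source = "vmware/nsxt"
    }
  }
}

variable "nsx_user" {}
variable "nsx_password" {
  sensitive = true
}

# Placeholder endpoint: use the dedicated NSX endpoint of your PCC
# and the NSX credentials provided for your environment.
provider "nsxt" {
  host     = "nsxt.example-pcc.ovh.net"
  username = var.nsx_user
  password = var.nsx_password
}

# Look up an existing Tier-1 gateway and overlay transport zone
# by their display names (both placeholders).
data "nsxt_policy_tier1_gateway" "t1" {
  display_name = "my-tier1-gw"
}

data "nsxt_policy_transport_zone" "overlay" {
  display_name = "overlay-tz"
}

# An overlay segment for workloads, attached to the Tier-1 gateway.
resource "nsxt_policy_segment" "app" {
  display_name        = "app-segment"
  connected_path      = data.nsxt_policy_tier1_gateway.t1.path
  transport_zone_path = data.nsxt_policy_transport_zone.overlay.path

  subnet {
    cidr = "10.10.0.1/24" # gateway address and prefix of the segment
  }
}
```

Running `terraform apply` against a freshly delivered environment then recreates your segments (and, by extension, groups and firewall rules) without any manual steps in the NSX interface.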
Can we use "migration coordinator" for the migration between NSX-v and NSX-T?
This tool requires very strong administration rights on the environment; our Professional Services team can run it and replicate the configuration. Note that many elements are not supported by this tool (options, existing rules in NSX-v).
The reproduction phase would require a lot of adaptation on your part, so this tool is not recommended for the migration.
Will the commitment dates and initial prices be maintained after the migration?
Please get in contact with your preferred OVHcloud contact to discuss this.
During the migration phase, will we have to pay twice for our platform for one month, and then get reimbursed the next month?
OVHcloud will refund 1 month of hosts and NSX management fees on the next invoice following the order of your new vDC (1 month is considered 30 days). Enterprise customers will not be affected by this scenario.
If we have anticipated the NSX-T migration and chose a third-party pfSense solution, do we still have to request a new vDC creation without NSX-T? Can we do it on the existing vDC?
In this case, you don't have to order a new vDC, but make sure you deactivate all your NSX-v features so that OVHcloud can disable the component.
Is it possible to use Zerto in the migration phase?
There is no particular complexity here; you can follow the step-by-step documentation OVHcloud provides.
What about my Veeam and Zerto options? Are they still compatible with NSX?
Yes, but you will have to reconfigure them after vDC migration.
Configuration
How can I protect my virtual machines exposed on the internet directly, with a Public IP?
You can create virtual machines in the ovh-t0-public segment, and then secure your flows with the NSX Distributed Firewall.
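For illustration, here is a hedged Terraform sketch of such a Distributed Firewall setup, assuming the vmware/nsxt provider is configured as in the migration example above; the group name, tag, and rules are placeholders:

```hcl
# A group matching the VMs to protect, selected by a placeholder tag.
resource "nsxt_policy_group" "web_vms" {
  display_name = "web-vms"

  criteria {
    condition {
      key         = "Tag"
      member_type = "VirtualMachine"
      operator    = "EQUALS"
      value       = "web"
    }
  }
}

# NSX ships a predefined HTTPS service that can be referenced directly.
data "nsxt_policy_service" "https" {
  display_name = "HTTPS"
}

# Distributed Firewall policy: allow HTTPS to the group, drop everything else.
resource "nsxt_policy_security_policy" "web_inbound" {
  display_name = "web-inbound"
  category     = "Application"

  rule {
    display_name       = "allow-https"
    destination_groups = [nsxt_policy_group.web_vms.path]
    services           = [data.nsxt_policy_service.https.path]
    action             = "ALLOW"
  }

  rule {
    display_name       = "drop-other-inbound"
    destination_groups = [nsxt_policy_group.web_vms.path]
    action             = "DROP"
  }
}
```

The same policy can also be created manually under Security > Distributed Firewall in the NSX interface.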
The "Edit" button in NSX for Tier-0 is disabled, how do I configure the public gateway?
It is not possible by default. The Tier-0 gateways are each hosted on a different host, HA (High Availability) is enabled, and a VIP is configured between the two Edges to maintain service continuity. The HA part is already preconfigured by OVHcloud.
Can I configure an active-active Tier-0 Gateway to have double bandwidth (10+10 = 20 Gbit/s guaranteed and 25+25 Gbit/s "theoretical")?
No, it is not possible by default; the configuration is managed by OVHcloud and is done in active/passive mode with a VIP (10 Gbit/s guaranteed bandwidth).
How can I add more Public IPs?
As indicated in this guide, at the moment it is not possible to create new virtual IP addresses, but this feature should be available soon.
Can IP address blocks be used/distributed between two VMware DCs in the same PCC?
IP address blocks are PCC-dependent, not vDC-dependent. Therefore it is possible to use the same IP address block between multiple virtual data centers (without any changes).
Can I configure High Availability (HA)?
This is not necessary as the NSX Edges are already configured by OVHcloud following VMware best practices regarding HA.
We currently have 300 edges and about 5,000 simultaneous RDP sessions; will the average configuration "4 vCPU / 8 GB RAM / 200 GB" handle the flows?
The sizing will depend on the services you activate or consume on your edges (firewalling or load balancing).
Today, the size M edge node might not fit your needs. The 4.1.1 release will bring new features such as edge node scale-up, allowing you to move to L or XL profiles.
All this will depend on your use cases and the metrics you have on your platform.
Can I use the OVHcloud API to configure and use NSX?
Yes, it is possible to do so.
Is the internet egress configurable? In other words, can I deploy the interface?
It is not possible to manage the internet egress in NSX, as the Edge is managed by OVHcloud, but you can configure the network on your VMs (in vSphere).
Do we need to update the clusters to use NSX-T in a vSphere 7.0.3 environment?
In this case, you don't have to update the clusters.
Do you take NSX configuration backups, including for the customer manual configuration?
Yes, OVHcloud performs backups. You can see them in your NSX-T control plane.
These backups are not meant to let you roll back a wrong configuration on your end; they exist in case of corruption of the different NSX-T controllers.
Management/Miscellaneous
Why is NSX management via Terraform done via a separate https://nsxt endpoint?
The NSX API is dedicated and not linked to the vSphere API. That's why we created a dedicated endpoint to reach it.
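In practice, this simply means pointing your tooling at that endpoint. A hedged sketch of the corresponding Terraform provider block (the hostname is a placeholder; the real https://nsxt... endpoint is specific to your PCC):

```hcl
variable "nsx_user" {}
variable "nsx_password" {
  sensitive = true
}

# The NSX endpoint is separate from the vSphere API endpoint.
provider "nsxt" {
  host     = "nsxt.example-pcc.ovh.net" # placeholder for your dedicated endpoint
  username = var.nsx_user
  password = var.nsx_password
}
```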
Is it possible to do BGP?
It is not possible to do public BGP.
However, it is possible to do BGP in the vRack; documentation detailing this workaround will be available soon.
Is NSX-T compatible with BGP over IPsec?
Currently, the BGP over IPsec feature is only available from a T0 Gateway.
This operation requires specific rights at the T0 Gateway to create the tunnel.
If you have a specific use case, you can open a ticket so we can support you in this configuration.
Can we have multiple edge clusters?
No. Today, only a single NSX-T Edge cluster is available.
Is it possible for an NSX edge to communicate between two PCCs?
Yes, it's possible.
Is there an additional cost to use Advanced LB (with WAF) and a distributed IPS/IDS?
The basic version of ALB is already included in the NSX-T license version, without additional cost.
IPS/IDS is planned for a future release, without a precise ETA for now, and will come at an additional cost.
Does vRack work with NSX-T?
Yes, vRack works with NSX-T.
You can access it from port groups in vSphere or VLAN segments inside NSX, as sketched below.
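As an illustration, a VLAN-backed segment can be declared with Terraform roughly as follows (a hedged sketch: the transport zone name and VLAN ID are placeholders and depend on your vRack setup):

```hcl
# VLAN transport zone; the display name is a placeholder,
# check the transport zones of your environment in the NSX inventory.
data "nsxt_policy_transport_zone" "vlan" {
  display_name = "vlan-tz"
}

# A VLAN-backed segment that can carry vRack traffic.
resource "nsxt_policy_vlan_segment" "vrack" {
  display_name        = "vrack-segment"
  transport_zone_path = data.nsxt_policy_transport_zone.vlan.path
  vlan_ids            = ["10"] # placeholder vRack VLAN ID
}
```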
Will the compute cluster have access to vRack? Or will the vRack be connected only to the Edge Node?
The NSX cluster is fully compatible with vRack. You can add the NSX service in your PCC vRack. Find more information about vRack on this page.
NSX version 4.1.1
What are the changes in the Autonomous System (AS)?
Before the 4.1.1 version, there was one AS number per environment, coming from the T0 Gateway.
By opening a ticket with the support teams, you can request a modification of the AS number.
With the 4.1.1 version, you will be able to set up different AS numbers on the VRFs, rather than necessarily using the AS number of the T0 Gateway.
What is the customer impact of the NSX 4.1.1 migration?
There is normally no downtime. A maintenance task will be initiated, including a move of the edges via vMotion.
On your side, there is no specific task to plan.
What is the ETA for the NSX 4.1.1 release?
ETA is planned for later in 2023.
Why is there a pricing modification for NSX-T and its 4.1.1 version?
The price increase on the NSX offers is based on:
- The rise in our costs based on inflation on all our services in 2022 and 2023.
- The NSX-T licensing costs.
- The costs linked to the NSX management infrastructure.
While waiting for the availability of the NSX 4.1.1 version, the physical resources dedicated to hosting the NSX Edge VMs have been borne by OVHcloud and have not been charged to you.
As a consequence, the transition to the 4.1.1 version won't have any pricing impact.
Gateway
Can we put a virtual firewall in front of the T0 Gateway in the same PCC?
Today it is not possible. The T0 Gateway already has a firewall feature so we recommend configuring the firewall with the T0 Gateway.
Can you explain the difference between T0 and T1 Gateways?
In the VMware conception, a T1 Gateway is always attached to a T0 Gateway.
Flows go through the T0 Gateway to go to the external network.
All the elements that have to stay inside the vSphere platform are routed by the T1 Gateway.
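To make this concrete, here is a hedged Terraform sketch (gateway names are placeholders) of a customer-managed Tier-1 Gateway attached to the existing Tier-0:

```hcl
# The existing, OVHcloud-managed Tier-0 gateway (display name is a placeholder).
data "nsxt_policy_tier0_gateway" "t0" {
  display_name = "ovh-t0"
}

# A Tier-1 gateway attached to the Tier-0: east-west traffic stays on
# the Tier-1, northbound traffic is routed up through the Tier-0.
resource "nsxt_policy_tier1_gateway" "t1" {
  display_name              = "my-tier1-gw"
  tier0_path                = data.nsxt_policy_tier0_gateway.t0.path
  route_advertisement_types = ["TIER1_CONNECTED"]
}
```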
How can I add another Tier-0 Gateway?
It is currently not possible to add a new working Tier-0 Gateway.
Is it possible to connect to the Tier-0 Gateway from the command line to perform debugging or packet capture?
No, this is not possible for Tier-0.
Is it possible to connect to the Tier-1 Gateway from the command line to perform debugging or packet capture?
No, this is not possible for Tier-1. Different tools are available in NSX to address these needs.
What is the maximum number of interfaces (connected segments) on a Tier-1 Gateway?
This information can be found in NSX > Inventory > Capacity.
Regarding the Edges, we refer to the Tier-0 and Tier-1 Gateways. The Tier-0 is already deployed and uses three public IPs: a VIP sitting in front of two internal public IPs, used to route between the active/backup Gateways. This setup provides failover and redundancy.
NSX and NSX-v are different, and at the moment you cannot break the current Tier-0 Gateway configuration to deploy other ones.
What will be the bandwidth of the edge node's cards, knowing the T0 Gateway will be mutualized?
This will depend on the activated services (LB/NAT/Firewall, etc.).
If we don't want to use Virtual Routing & Forwarding (VRF) to split the T0 Gateway, what would be the solution besides ordering a new PCC?
It is possible not to use the VRF and use the T1 Gateway.
You can use a T1 Gateway, hosting the workloads behind it. In this case, the T1 Gateway is used as a "mini" VRF but the flows will be mixed inside the T0 Gateway.
The advantage of a VRF on the T0 Gateway is that it keeps the routing tables partitioned for the elements going to the external network of the vSphere platform.
Support/Assistance
For those using Professional Services: What are the different assistance packs? What are the differences between the packs?
All packs are based on days (1 day = 8 hours); 1 day, 2 days, or more.
The first approach is the same for all packs, with a discovery phase, but the duration of the pack will depend on the complexity of the environment and the customer's maturity.
This would be discussed with the PS team during a first assessment call.
Will you provide training and documentation to improve NSX-T skills?
Documentation has been provided. Please see our Getting Started with NSX guide and our other documentation for VMware on OVHcloud.
Go further
For more information and tutorials, please see our other NSX support guides or explore the guides for other OVHcloud products and services.