Learn how to migrate from an existing Load Balancer for Managed Kubernetes to a Public Cloud Load Balancer.
Public Cloud Load Balancer is the default Load Balancer for MKS clusters using Kubernetes version 1.31 or later.
Load Balancer for Managed Kubernetes is deprecated for MKS clusters using Kubernetes version 1.32 or later.
This guide explains the steps required to make this transition safely, minimizing service interruptions. Finally, it provides recommendations on DNS management, particularly reducing the TTL to optimize the propagation of changes and ensure a smooth migration.
NOTE: Since the Load Balancer for Kubernetes and Public Cloud Load Balancer do not use the same solution for Public IP allocation, it is not possible to keep the existing public IP of your Load Balancer for Kubernetes. Changing the Load Balancer class of your Service will lead to the creation of a new Load Balancer and the allocation of a new Public IP (Floating IP).
Pricing for the new Load Balancer differs from the Load Balancer for Managed Kubernetes, and the Public IP will be charged separately according to Floating IP pricing. See pricing details here.
Comparison
Below is a comparison between Load Balancer for Kubernetes and Public Cloud Load Balancer, highlighting their key differences and capabilities. Public Cloud Load Balancer introduces several sizes/flavors, you can find the detailed specifications on the Public Cloud Load Balancer page.
| | Load Balancer for Managed Kubernetes | Public Cloud Load Balancer |
|---|---|---|
| Maximum number of connections | 10 000 | Up to 20 000 |
| Maximum number of HTTP requests | 2 000 | Up to 80 000 |
| Bandwidth | 200 Mbit/s | Up to 4 Gbit/s (up/down) |
| Supported protocols | TCP | TCP/UDP |
| Supported load balancing layers | L4 | L4/L7 |
| Capacity to export metrics and logs | No | Yes |
| Private-to-private scenario | No | Yes |
| Floating IP | No | Yes |
Annotations
Below is a comparison of existing annotations supported on Load Balancer for Managed Kubernetes and their equivalent on Public Cloud Load Balancer.
NOTE: If you are using legacy annotations from Load Balancer for Managed Kubernetes, you must update them to the new format supported by Public Cloud Load Balancer during migration.
Some annotations have been deprecated and need to be replaced to ensure full compatibility.
You can find full details on the official documentation pages:
| Load Balancer for Managed Kubernetes | Public Cloud Load Balancer |
|---|---|
| `service.beta.kubernetes.io/ovh-loadbalancer-proxy-protocol`<br>Supported values: `v1`, `v2`, `v2-ssl`, `v2-ssl-cn` | `loadbalancer.openstack.org/proxy-protocol`<br>Supported values: `v1` or `true` (enable ProxyProtocol version 1), `v2` (enable ProxyProtocol version 2) |
| `service.beta.kubernetes.io/ovh-loadbalancer-allowed-sources` (**DEPRECATED**) | No annotation; IP restriction is defined using `.spec.loadBalancerSourceRanges` |
| `service.beta.kubernetes.io/ovh-loadbalancer-balance`<br>Supported values: `first`, `leastconn`, `roundrobin`, `source` | `loadbalancer.openstack.org/lb-method`<br>Supported values: `ROUND_ROBIN`, `LEAST_CONNECTIONS`, `SOURCE_IP` |
Migration of your Load Balancer
NOTE: Starting with MKS clusters using Kubernetes version 1.31, any cluster upgrade attempt will be blocked if a Service of type LoadBalancer relying on Load Balancer for Managed Kubernetes (IOLB) is still present in the cluster.
Action required: Before upgrading, you MUST migrate your services to a Public Cloud Load Balancer (Octavia) by following the steps described in this guide.
If you attempt to upgrade without first migrating, an error will be returned and the upgrade will be prevented.
There are two methods to move from a Load Balancer for Managed Kubernetes to a Public Cloud Load Balancer: you can either migrate or replace your Load Balancer.
Your existing Load Balancer Service using Load Balancer for Managed Kubernetes should have the following annotation:
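As a sketch, the relevant metadata might look like this, assuming the service was created with the `iolb` class annotation (the service name is a placeholder):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                                # placeholder name
  annotations:
    # Identifies the legacy Load Balancer for Managed Kubernetes class
    loadbalancer.ovhcloud.com/class: "iolb"
spec:
  type: LoadBalancer
```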
WARNING: Each of the two methods described below (migration or replacement) requires a DNS switch, and the lowered DNS TTL needs 24 to 48 hours to propagate before you perform the migration.
Therefore, please read the entire guide (and especially the "How to perform a DNS switch?" section) before you proceed with the steps below.
Migration
Migrating from an existing Load Balancer for Kubernetes to a Public Cloud Load Balancer involves creating a new Load Balancer service using Public Cloud Load Balancer with the same label selector to expose your application. For a short period of time, your application will be accessible using both Load Balancers.
At this time you can perform a DNS switch and then delete the old Load Balancer.
To migrate from an existing Load Balancer for Kubernetes to a Public Cloud Load Balancer, follow these steps:
Step 1 - Create a new Load Balancer service
You need to create a new Load Balancer service using the Public Cloud Load Balancer while keeping the existing one active:
- Ensure that the new service has the same labelSelector as the old one so that it exposes the same application.
- For Kubernetes version 1.30 (or earlier), set the annotation `loadbalancer.ovhcloud.com/class: "octavia"` on the service. For Kubernetes version 1.31 (or later), Public Cloud Load Balancer (Octavia) is the default load balancer class, so the annotation is optional.
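As an illustration, a minimal new Service manifest might look like the following (the service name, labels, and ports are placeholders; adapt them to your application):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-octavia                          # placeholder name
  annotations:
    # Request a Public Cloud Load Balancer (Octavia)
    loadbalancer.ovhcloud.com/class: "octavia"
spec:
  type: LoadBalancer
  selector:
    app: my-app        # must match the label selector of the old service
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
```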
Apply the new service with:
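For example, assuming your new Service manifest is saved as `new-service.yaml` (the filename is a placeholder):

```shell
# Create the new Service alongside the existing one
kubectl apply -f new-service.yaml
```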
This will create a new Public Cloud Load Balancer with a new public IP.
Step 2 - Test Application access
Once the new Load Balancer is created, get the Public IPs of both services:
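For instance, you can list the services in the namespace; the `EXTERNAL-IP` column shows the public IP allocated to each Load Balancer:

```shell
# Both LoadBalancer services should appear with distinct EXTERNAL-IP values
kubectl get services
```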
You should see two Load Balancer services pointing to the same application with different Public IPs.
Test accessibility via both IPs:
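For example, with `curl` (replace the placeholders with the two `EXTERNAL-IP` values and the port your service exposes):

```shell
# Old Load Balancer (IOLB)
curl http://<old-load-balancer-ip>/
# New Public Cloud Load Balancer (Octavia)
curl http://<new-load-balancer-ip>/
```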
Both should return a successful response from your application.
Step 3 - Perform a DNS switch
To perform a DNS switch, refer to the How to perform a DNS switch? part of this guide.
Step 4 - Remove the Load Balancer service for Kubernetes (a.k.a. IOLB)
Once you confirm that traffic is flowing through the new Load Balancer and the old one is no longer needed, remove the old Load Balancer service by deleting it:
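For example (the service name is a placeholder for your old IOLB-backed service):

```shell
# Deleting the Service releases the old Load Balancer and its public IP
kubectl delete service my-app
```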
Replacement
Replacing an existing Load Balancer for Managed Kubernetes with a Public Cloud Load Balancer involves modifying the existing service and changing the load balancer class from `iolb` to `octavia`. Kubernetes will then reconcile the load balancer class by deleting the old Load Balancer and creating a new one.
Once the new Load Balancer is delivered, you can perform a DNS switch using the new public IP.
NOTE: Please note that during the deletion and creation process, your service will not be accessible.
You can reduce this impact by lowering your DNS TTL duration, please refer to the How to perform a DNS switch? part of this guide.
Step 1 - Edit your Service to change the Load Balancer class to 'octavia'
NOTE: The old Load Balancer and its IP address will be deleted permanently, making your service unreachable until DNS points to the new IP. Perform the DNS switch immediately with the newly provided IP.
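As a sketch, only the class annotation on the existing Service changes; all other fields stay as they are:

```yaml
metadata:
  annotations:
    loadbalancer.ovhcloud.com/class: "octavia"   # previously "iolb"
```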
Apply the service update using:
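For example, assuming you updated the manifest of the existing service and saved it as `service.yaml` (the filename is a placeholder):

```shell
kubectl apply -f service.yaml
```

Alternatively, `kubectl edit service <service-name>` lets you change the annotation in place.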
Step 2 - Perform a DNS switch
To perform a DNS switch, follow the steps right below.
How to perform a DNS switch?
Check the current TTL
By default, DNS servers cache the IP address of a domain for a period defined by the TTL (Time-To-Live).
A TTL that is too long can slow down the transition by forcing some users to wait several hours before accessing the new Load Balancer. To avoid this, we recommend temporarily reducing the TTL before updating the IP.
Before changing the TTL, it is important to know its current value. To do this, run the following command in a terminal:
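The domain below is a placeholder; replace it with your own record. A typical query looks like:

```shell
# Query the A record; the second column of the answer is the TTL in seconds
dig +noall +answer www.yourdomain.com A
```

An example answer line could be `www.yourdomain.com. 600 IN A 203.0.113.10`, where `600` is the TTL.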
Here, 600 corresponds to the TTL in seconds (approximately 10 minutes). This means that the DNS servers keep this IP in cache for this length of time before checking for an update.
Lower the TTL before migration
To lower the TTL, follow these steps:
- Access your DNS management console.
- Find the A record (or CNAME) associated with your domain.
- Change the TTL of your A or CNAME record by reducing it to 300 seconds (5 minutes).
- Wait 24 to 48 hours for this new TTL to propagate.
Why wait? Because DNS resolvers must first expire their cached records before they adopt the new TTL.
Update the Load Balancer IP
Once the TTL reduction has been propagated:
- Replace the old IP with the new Load Balancer's IP in your DNS record.
- Thanks to the short TTL value, users will quickly see the update.
- Check that the change is effective using the same `dig` command used previously. You should see the new IP address in the command output.
Restore TTL after migration
Once the transition has been validated and the old Load Balancer disconnected, reconfigure the TTL with a higher value.
Other resources
- Exposing applications using services of Load Balancer type
- Using Octavia Ingress Controller
- OVHcloud Load Balancer concepts
- How to monitor your Public Cloud Load Balancer with Prometheus
Go further
- Visit the GitHub examples repository.
For more information and tutorials, please see our other Managed Kubernetes or Platform as a Service guides. You can also explore the guides for other OVHcloud products and services.
If you need training or technical assistance to implement our solutions, contact your sales representative or click on this link to get a quote and ask our Professional Services experts for a custom analysis of your project.