Learn how to use the OVHcloud Public Cloud Load Balancer to expose your applications in Kubernetes.
NOTE: If you are using Kubernetes version 1.31 (or later), the Octavia-based Load Balancer will be the default.
If you are using Kubernetes version 1.30 or earlier, you will need the annotation loadbalancer.ovhcloud.com/class: octavia
to use the Public Cloud Load Balancer with Managed Kubernetes Service (MKS).
Introduction
If you're not comfortable with the different ways of exposing your applications hosted on Managed Kubernetes Service (MKS), or if you're not familiar with the notion of a 'load balancer' service, we recommend starting by reading the guide explaining how to expose your application deployed on an OVHcloud Managed Kubernetes Service. That guide details the different methods to expose your containerized applications hosted in Managed Kubernetes Service.
Our Public Cloud Load Balancer relies on the OpenStack Octavia project, which provides a Cloud Controller Manager (CCM) allowing Kubernetes clusters to interact with Load Balancers. For Managed Kubernetes Service (MKS), this Cloud Controller is installed and configured by our team, allowing you to easily create, use, and configure our Public Cloud Load Balancers. You can find the CCM open-source project documentation here.
This guide uses some concepts that are specific to our Public Cloud Load Balancer (listener, pool, health monitor, member, etc.) and to the OVHcloud Public Cloud Network (Gateway, Floating IP). You can find more information regarding Public Cloud Network concepts in our official documentation.
Requirements
Kubernetes version
To be able to deploy Public Cloud Load Balancer, your Managed Kubernetes Service must run or have been upgraded to the following patch versions.
Please note that for clusters running these versions, you must use the annotation loadbalancer.ovhcloud.com/class: "octavia" to specify that you want to deploy a Public Cloud Load Balancer (based on the Octavia project) for your MKS cluster.
Kubernetes versions |
---|
>= 1.26.4-3 |
>= 1.27.12-1 |
>= 1.28.8-1 |
>= 1.29.3-3 |
>= 1.30.2-1 |
If you are running version 1.31 or later, the Public Cloud Load Balancer will be used as the default load balancing solution; you do not need to specify any annotation.
Network prerequisite to expose your Load Balancers publicly
The first step is to make sure that you have an existing vRack on your Public Cloud Project. To do so you can follow our Configure a vRack for Public Cloud guide.
If you plan to expose your Load Balancer publicly, it is mandatory to have an OVHcloud Gateway (an OpenStack router) deployed on the subnet hosting your Load Balancer in order to attach a Floating IP to your Load Balancer.
If it does not exist when you create your first Public Cloud Load Balancer, an S size Managed Gateway will be automatically created. That is why we recommend deploying your MKS clusters on a network and subnet where an OVHcloud Gateway can be created (manually or automatically - see Creating a Private Network with Gateway) or already exists.
If you have an existing (already deployed) cluster, check the scenarios below to see what actions need to be taken.
- If the subnet's Gateway IP is already used by an OVHcloud Gateway, nothing needs to be done: the current OVHcloud Gateway (OpenStack Router) will be used.
- If the subnet does not have an IP reserved for a Gateway, you will have to provide or create a compatible subnet. Three options are available:
  - Edit the existing subnet to reserve an IP for a Gateway. Please refer to the Update Subnet Properties documentation.
  - Provide another compatible subnet: a subnet with an existing OVHcloud Gateway or with an IP address reserved for a Gateway (Creating a Private Network with Gateway).
  - Use a subnet dedicated to your load balancers: this option can be set in the OVHcloud Control Panel under Advanced parameters > Loadbalancer Subnet, or through the APIs/Infrastructure as Code using the 'LoadBalancerSubnetId' parameter.
- If the Gateway IP is already assigned to a non-OVHcloud Gateway (OpenStack Router), two options are available:
  - Provide another compatible subnet: a subnet with an existing OVHcloud Gateway or with an IP address reserved for a Gateway (Creating a Private Network with Gateway).
  - Use a subnet dedicated to your load balancers: this option can be set in the OVHcloud Control Panel under Advanced parameters > Loadbalancer Subnet, or through the APIs/Infrastructure as Code using the 'LoadBalancerSubnetId' parameter.
Limitations
- Layer 7 policies & rules and TLS termination (TERMINATED_HTTPS listener) are not available yet. For such use cases, you can rely on the Octavia Ingress Controller.
- UDP proxy protocol is not supported.
Billing
When exposing your load balancer publicly (public-to-public or public-to-private):
- If it does not already exist, a single OVHcloud Gateway will be automatically created and billed for all Load Balancers spawned in the subnet (see pricing).
- A Public Floating IP will be used (see pricing).
- Each Public Cloud Load Balancer is billed according to its flavor (see pricing).
Instructions
Depending on the Kubernetes version your cluster is using, if you want a Kubernetes load balancer Service to be deployed using a Public Cloud Load Balancer rather than the historical Load Balancer for Kubernetes solution, you might need to add the annotation loadbalancer.ovhcloud.com/class: "octavia" to your Kubernetes Service manifest. Please refer to the Requirements section for more details.
Here is a simple example of how to use the Public Cloud Load Balancer:

- Deploy a functional Managed Kubernetes (MKS) cluster using the OVHcloud Control Panel or APIs.
- Retrieve the kubeconfig file needed to use the kubectl tool (via the OVHcloud Control Panel or API). You can use this guide.
- Create a Namespace and a Deployment resource (first command block in the sketch after this list).
- Copy/paste the Service manifest (second block) into a file named test-lb-service.yaml.
- Create the 'Service' (third block).
- Retrieve the Service IP address (fourth block).
- Open a web browser and access the address you retrieved, for example: http://141.94.215.240.
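The blocks below are a minimal end-to-end sketch of steps 3 to 7. The names test-lb-ns, test-lb-deployment, and test-lb-service are illustrative, and the loadbalancer.ovhcloud.com/class annotation can be omitted on clusters running Kubernetes 1.31 or later.

```bash
# Step 3: create a Namespace and a Deployment (illustrative names, nginx test image)
kubectl create namespace test-lb-ns
kubectl create deployment test-lb-deployment --image=nginx --replicas=2 -n test-lb-ns
```

```yaml
# Step 4: test-lb-service.yaml -- a Service of type LoadBalancer
apiVersion: v1
kind: Service
metadata:
  name: test-lb-service
  namespace: test-lb-ns
  annotations:
    loadbalancer.ovhcloud.com/class: "octavia"   # required on 1.30 and earlier only
spec:
  type: LoadBalancer
  selector:
    app: test-lb-deployment   # label set automatically by 'kubectl create deployment'
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
```

```bash
# Step 5: create the Service
kubectl apply -f test-lb-service.yaml
```

```bash
# Step 6: retrieve the Service IP (EXTERNAL-IP column, once provisioning completes)
kubectl get service test-lb-service -n test-lb-ns
```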
Use cases
You can find a set of examples on how to use our Public Cloud Load Balancer with Managed Kubernetes Service (MKS) on our dedicated GitHub repository.
Public-to-Private (your cluster is attached to a private network/subnet)
In a public-to-private scenario you will use your Load Balancer to publicly expose applications that are hosted on your Managed Kubernetes Cluster. The main benefit of this scenario is that your Kubernetes nodes are not exposed on the Internet.
Service example:
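A minimal sketch of such a Service, assuming a Deployment labeled app: my-app is running in the cluster (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-public-service
  annotations:
    loadbalancer.ovhcloud.com/class: "octavia"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
```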
The loadbalancer.ovhcloud.com/class annotation is not required for clusters running Kubernetes version 1.31 or later.
Private-to-Private
In a private-to-private scenario, your Load Balancer is not exposed publicly. This may be useful if you want to expose your containerized services only inside your OVHcloud private network.
Service example:
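A minimal sketch (names are illustrative); the service.beta.kubernetes.io/openstack-internal-load-balancer annotation keeps the Load Balancer on the private network, so no Floating IP is attached:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-internal-service
  annotations:
    loadbalancer.ovhcloud.com/class: "octavia"
    service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
```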
The loadbalancer.ovhcloud.com/class annotation is not required for clusters running Kubernetes version 1.31 or later.
Public-to-Public (you are using a public Managed Kubernetes Cluster)
In a public-to-public scenario, all your Kubernetes nodes have a public network interface, and inter-node/pod communication relies on the public network. This is the easiest way to deploy an MKS cluster as it does not require creating a network and subnet topology. Although all your nodes already carry a public IP address for exposing your applications, you can choose to use a load balancer to expose them behind a single IP address.
Service example:
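A minimal sketch (names are illustrative); the manifest is the same as in the public-to-private case, since the required network, subnet, and gateway are created automatically in this scenario:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-public-service
  annotations:
    loadbalancer.ovhcloud.com/class: "octavia"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
```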
The loadbalancer.ovhcloud.com/class annotation is not required for clusters running Kubernetes version 1.31 or later.
Supported Annotations & Features
Supported service annotations
loadbalancer.ovhcloud.com/class
Authorized values: 'octavia' = Public Cloud Load Balancer, 'iolb' = Load Balancer for Managed Kubernetes Service (will be deprecated in a future version). If not specified, the default class of the MKS Kubernetes version you are using will be applied. Refer to the Requirements section for more information.
loadbalancer.ovhcloud.com/flavor
Not a standard OpenStack Octavia annotation (specific to OVHcloud). The size used for creating the load balancer. Specifications can be found on the Load Balancer specifications page. Authorized values: small, medium, large. Default is 'small'.
service.beta.kubernetes.io/openstack-internal-load-balancer
If 'true', the load balancer will only have an IP on the private network (no Floating IP is associated with the Load Balancer). Default is 'false'.
loadbalancer.openstack.org/subnet-id
The subnet ID where the private IP of the load balancer will be retrieved. By default, the subnet-id of the subnet configured for your OVHcloud Managed Kubernetes Service cluster will be used.
loadbalancer.openstack.org/member-subnet-id
Member subnet ID of the load balancer created. By default, the subnet-id of the subnet configured for your OVHcloud Managed Kubernetes Service cluster will be used.
loadbalancer.openstack.org/network-id
The network ID from which the virtual IP for the load balancer will be allocated. By default, the network-id of the network configured for your OVHcloud Managed Kubernetes Service cluster will be used.
loadbalancer.openstack.org/port-id
The port ID for the load balancer's private IP. Can be used if you want to use a specific private IP.
loadbalancer.openstack.org/connection-limit
The maximum number of connections per second allowed for the listener. Positive integer or -1 for unlimited (default). This annotation supports update operations.
loadbalancer.openstack.org/keep-floatingip
If 'true', the floating IP will NOT be deleted upon load balancer deletion. Default is 'false'. Useful if you want to keep your Floating IP after Load Balancer deletion.
loadbalancer.openstack.org/proxy-protocol
Enable the ProxyProtocol on all listeners. Default is 'false'.
Values:
- v1 or true: enable ProxyProtocol version 1
- v2: enable ProxyProtocol version 2
loadbalancer.openstack.org/timeout-client-data
Frontend client inactivity timeout in milliseconds for the load balancer. Default value (ms) = 50000.
loadbalancer.openstack.org/timeout-member-connect
Backend member connection timeout in milliseconds for the load balancer. Default value (ms) = 5000.
loadbalancer.openstack.org/timeout-member-data
Backend member inactivity timeout in milliseconds for the load balancer. Default value (ms) = 50000.
loadbalancer.openstack.org/timeout-tcp-inspect
Time to wait for additional TCP packets for content inspection in milliseconds for the load balancer. Default value (ms) = 0.
loadbalancer.openstack.org/enable-health-monitor
Defines whether to create a health monitor for the load balancer pool. Default is 'true'. The health monitor can be created or deleted dynamically. A health monitor is required for services with externalTrafficPolicy: Local.
loadbalancer.openstack.org/health-monitor-delay
Defines the health monitor delay in milliseconds for the load balancer pools. Default value (ms) = 5000.
loadbalancer.openstack.org/health-monitor-timeout
Defines the health monitor timeout in milliseconds for the load balancer pools. This value should be less than the delay. Default value (ms) = 3000.
loadbalancer.openstack.org/health-monitor-max-retries
Defines the health monitor retry count for the load balancer pool members to be marked online. Default value = 1
loadbalancer.openstack.org/health-monitor-max-retries-down
Defines the health monitor retry count for the load balancer pool members to be marked down. Default value = 3
loadbalancer.openstack.org/flavor-id
The ID of the flavor that is used for creating the load balancer. Not very useful, as we provide the loadbalancer.ovhcloud.com/flavor annotation.
loadbalancer.openstack.org/load-balancer-id
This annotation is automatically added to the Service if it is not specified at creation time. Once the Service has been created successfully, it should not be changed; otherwise the Service will not behave as expected.
If this annotation is specified with a valid cloud load balancer ID when creating the Service, the Service reuses this load balancer rather than creating another one (see the sharing section below).
If this annotation is specified, the other annotations which define the load balancer features will be ignored.
loadbalancer.openstack.org/hostname
This annotation explicitly sets a hostname in the status of the load balancer service.
loadbalancer.openstack.org/load-balancer-address
This annotation is automatically added and contains the Floating IP address of the load balancer service. When the loadbalancer.openstack.org/hostname annotation is used, it is the only place to see the real address of the load balancer.
Annotations not yet supported
loadbalancer.openstack.org/availability-zone
The name of the load balancer availability zone to use. It is ignored if the Octavia version doesn't support availability zones yet.
loadbalancer.openstack.org/x-forwarded-for
If you want to perform Layer 7 load balancing we recommend using the official Octavia Ingress-controller.
Features
Resize your Load Balancer
There is no proper way to "hot-resize" your load balancer yet (work in progress). The best alternative for changing the flavor of your load balancer is to create a new Kubernetes Service that will reuse the same public IP as the existing one. You can find the complete HowTo and examples on our public GitHub repository.
- First, make sure that the existing service is using the loadbalancer.openstack.org/keep-floatingip annotation. If it is not, the public Floating IP will be released upon service deletion (the annotation can be added after the service creation).
- Get the public IP of your existing service:
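A sketch, assuming your existing Service is named my-lb-service (illustrative):

```bash
kubectl get service my-lb-service
```

Example response (illustrative values):

```
NAME            TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)        AGE
my-lb-service   LoadBalancer   10.3.75.82   141.94.215.240   80:31014/TCP   2d
```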
- Create a new service with the new expected flavor (a sample manifest is shown after this list). The loadbalancer.ovhcloud.com/class annotation is not required for clusters running Kubernetes version 1.31 or later.
- Until the previous service is deleted, this new Service will only deploy the Load Balancer without a Floating IP.
- When the Floating IP becomes available (deleting the initial LB service unbinds the IP), it will be attached to the new Load Balancer.
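A sketch of the replacement Service, assuming you are moving to the 'medium' flavor; the name and IP are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-lb-service-medium
  annotations:
    loadbalancer.ovhcloud.com/class: "octavia"
    loadbalancer.ovhcloud.com/flavor: "medium"
    loadbalancer.openstack.org/keep-floatingip: "true"
spec:
  type: LoadBalancer
  # Reuse the public IP retrieved from the existing service
  loadBalancerIP: 141.94.215.240
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
```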
Use PROXY protocol to preserve client IP
When exposing services like nginx-ingress-controller, it's a common requirement that the client connection information could pass through proxy servers and load balancers, therefore visible to the backend services. Knowing the originating IP address of a client may be useful for setting a particular language for a website, keeping a denylist of IP addresses, or simply for logging and statistics purposes. You can follow the official Cloud Controller Manager documentation on how to Use PROXY protocol to preserve client IP.
Migrate from Load Balancer for Kubernetes to Public Cloud Load Balancer
To migrate from an existing Load Balancer for Kubernetes to a Public Cloud Load Balancer you will have to modify an existing Service and change its Load Balancer class.
Your existing Load Balancer Service using Load Balancer for Kubernetes should have the following annotation:
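A sketch of the relevant metadata (the service name is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-lb-service
  annotations:
    loadbalancer.ovhcloud.com/class: "iolb"
```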
Step 1 - Edit your Service to change the Load Balancer class to 'octavia'
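Update the class annotation in your manifest, for example:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-lb-service
  annotations:
    loadbalancer.ovhcloud.com/class: "octavia"
```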
The loadbalancer.ovhcloud.com/class annotation is not required for clusters running Kubernetes version 1.31 or later.

Step 2 - Apply the change
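Assuming the manifest is saved as my-lb-service.yaml (illustrative file name):

```bash
kubectl apply -f my-lb-service.yaml
```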
Use an existing Floating IP in the tenant
To use an available Floating IP for your K8S Load Balancer, set the field .spec.loadBalancerIP; the Cloud Controller Manager will look up this Floating IP in your tenant.
- If the Floating IP is not found, the Load Balancer will be stuck during provisioning.
- If the Floating IP is already assigned to another component, the Load Balancer will still be provisioned, but the Floating IP will only be assigned to it once it becomes available.
The loadbalancer.ovhcloud.com/class annotation is not required for clusters running Kubernetes version 1.31 or later.
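A minimal sketch, assuming 141.94.215.240 is an available Floating IP in your tenant and a Deployment labeled app: my-app exists:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-lb-service
  annotations:
    loadbalancer.ovhcloud.com/class: "octavia"
spec:
  type: LoadBalancer
  loadBalancerIP: 141.94.215.240   # existing Floating IP in the tenant
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
```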
Use a fixed Virtual-IP (VIP) for the Load Balancer in the subnet
To assign a fixed VIP to the Load Balancer in the OpenStack subnet, you have to create an OpenStack Port, e.g. with the OpenStack CLI:
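For example (the network ID, subnet ID, address, and port name are placeholders):

```bash
openstack port create --network <network-id> \
  --fixed-ip subnet=<subnet-id>,ip-address=<vip-address> \
  k8s-lb-vip
```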
Then use the annotation loadbalancer.openstack.org/port-id with the OpenStack port's UUID:
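A sketch of the Service (the port UUID and names are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-lb-service
  annotations:
    loadbalancer.ovhcloud.com/class: "octavia"
    loadbalancer.openstack.org/port-id: "<port-uuid>"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
```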
The loadbalancer.ovhcloud.com/class annotation is not required for clusters running Kubernetes version 1.31 or later.
Restrict access for a Load Balancer Service
To apply IP restrictions to the K8S Load Balancer Service, you can define the array .spec.loadBalancerSourceRanges with a list of CIDR ranges.
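A sketch, using placeholder CIDR ranges:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-lb-service
  annotations:
    loadbalancer.ovhcloud.com/class: "octavia"
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
    - 203.0.113.0/24   # only these ranges will be allowed to reach the LB
    - 10.0.0.0/16
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
```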
The loadbalancer.ovhcloud.com/class annotation is not required for clusters running Kubernetes version 1.31 or later.

If no value is assigned to this spec, no restriction will be applied.
Sharing an Octavia Load Balancer between multiple Kubernetes Load Balancer Services
You can share an Octavia Load Balancer with up to two Kubernetes Services. These Services can be deployed on different MKS clusters (clusters must be in the same network).
K8S services must expose different protocols/ports (you cannot set the same protocol/port on both K8S Services). See more information here.
To allow another K8S Load Balancer Service to use an existing Octavia Load Balancer (created via MKS or via OpenStack), use the annotation loadbalancer.openstack.org/load-balancer-id:
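A sketch of the second Service (the load balancer UUID and names are placeholders; note that its protocol/port must differ from the first Service's):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-second-lb-service
  annotations:
    loadbalancer.ovhcloud.com/class: "octavia"
    loadbalancer.openstack.org/load-balancer-id: "<existing-octavia-lb-uuid>"
spec:
  type: LoadBalancer
  selector:
    app: my-other-app
  ports:
    - port: 443        # must not clash with the protocol/port of the first Service
      targetPort: 8443
      protocol: TCP
```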
The loadbalancer.ovhcloud.com/class annotation is not required for clusters running Kubernetes version 1.31 or later.
Features not yet supported
Setup and Manage Load Balancer using OVHcloud Control Panel
Creating a load balancer using the methods above will add a load balancer entry to the OVHcloud Control Panel under Public Cloud → Load Balancer. The load balancer name will carry the prefix kube_service_$mks_cluster_shortname_$namespace_ to distinguish it from load balancers created and used for other Public Cloud services. Modifying a load balancer used by Kubernetes from the OVHcloud Control Panel (other than subnets) is not supported, so it is strongly recommended to use the methods above for any changes.
Common issues when deploying a new Load Balancer
You can use the kubectl describe service <svc_name> command to get the events linked to the Service for debugging purposes.

Network is not matching requirements for Public Load Balancer: No Gateway IP
When trying to spawn a Public Load Balancer, you must have a Gateway IP assigned to your subnet so that a Floating IP can be used in it. Once the Gateway IP parameter is set with a valid IP, an OpenStack router will be spawned to attach a Public IP to your Octavia Load Balancer.
See this guide for more information.
If you don't want to deploy an OpenStack router in your subnet (e.g. you manage your own router), you have to configure the LoadBalancerSubnetId of your MKS cluster. More information here.
Network is not matching requirements for Public Load Balancer: Cannot deploy an OpenStack Router
When trying to spawn a Public Load Balancer, you must have a Gateway IP assigned to your subnet (to allow a Floating IP in your subnet), and this Gateway IP must be available or attached to an OpenStack router.
In your case, the Gateway IP is already used by something else, so we cannot deploy an OpenStack Router for your Public Load Balancer. If you cannot release the IP (e.g. it is used by a router you deployed), you have to configure the LoadBalancerSubnetId of your MKS cluster. More information here.
Resources Naming Convention
When deploying a Load Balancer through a Kubernetes Service of type LoadBalancer, the Cloud Controller Manager (CCM) implementation automatically creates Public Cloud resources (Load Balancer, Listener, Pool, Health monitor, Gateway, Network, Subnet, ...). To easily identify those resources, here are the naming templates:
Resource | Naming |
---|---|
Public Cloud Load Balancer | kube_service_$mks_cluster_shortname_$namespace_$k8s_service_name |
Listener | listener_kube_service_$listener_n°_$mks_cluster_shortname_$namespace_$service-name |
Pool | pool_kube_service_$pool_n°_$mks_cluster_shortname_$namespace_$service-name |
Health-monitor | monitor_kube_service_$mks_cluster_shortname_$namespace_$service-name |
Network (only automatically created in Public-to-Public scenario) | k8s-cluster-$mks_cluster_id |
Subnet (only automatically created in Public-to-Public scenario) | k8s-cluster-$mks_cluster_id |
Gateway/Router | k8s-cluster-$mks_cluster_id |
Floating IP | Name = the IP address. Description = the Octavia Load Balancer name |
Other resources
- Exposing applications using services of Load Balancer type
- Using Octavia Ingress Controller
- OVHcloud Load Balancer concepts
- How to monitor your Public Cloud Load Balancer with Prometheus
- Visit the GitHub examples repository.
Go further
For more information and tutorials, please see our other Managed Kubernetes support guides or explore the guides for other OVHcloud products and services.
If you need training or technical assistance to implement our solutions, contact your sales representative or click on this link to get a quote and ask our Professional Services experts for a custom analysis of your project.