In this tutorial, we explain how to deploy services on OVHcloud Managed Kubernetes using our LoadBalancer to get external traffic into your cluster. We will begin by listing the main methods to expose Kubernetes services outside the cluster, with their advantages and disadvantages. Then we will walk through a complete example of a LoadBalancer service deployment.
Before you begin
This tutorial assumes that you already have a working OVHcloud Managed Kubernetes cluster, and some basic knowledge of how to operate it. If you want to know more about those topics, please look at the OVHcloud Managed Kubernetes Service Quickstart.
Warning: When a Load Balancer resource is created inside a Managed Kubernetes cluster, an associated Public Cloud Load Balancer is automatically created, allowing public access to your Kubernetes application. The Public Cloud Load Balancer service is charged hourly and will appear in your Public Cloud project. For more information, please refer to the following documentation: Network Load Balancer price
Some concepts: Cluster IP, Node Port, Ingress and Load Balancer
When you begin to use Kubernetes for real applications, one of the first questions is how to get external traffic into your cluster. The official documentation gives you a good, but rather dry, explanation of the topic; here we try to explain the concepts in a minimal, need-to-know way.
There are several ways to route the external traffic into your cluster:
- Using Kubernetes proxy and ClusterIP: The default Kubernetes service type, ClusterIP, exposes the Service on a cluster-internal IP. To reach the ClusterIP from an external source, you can open a Kubernetes proxy between the external source and the cluster. It is usually only used for development.
- Exposing services as NodePort: Declaring a service of type NodePort exposes the service on each Node's IP at a static port (the NodePort). You can then access the Service from outside the cluster by requesting <NodeIp>:<NodePort>. It can be used for production, with some limitations.
- Exposing services as LoadBalancer: Declaring a service of type LoadBalancer exposes it externally using a cloud provider's load balancer. The cloud provider provisions a load balancer for the Service and maps it to its automatically assigned NodePort. It is the most widely used method in production environments.
Using Kubernetes proxy and Cluster IP
The default Kubernetes service type, ClusterIP, exposes the Service on a cluster-internal IP. To reach the ClusterIP from an external computer, you can open a Kubernetes proxy between the external computer and the cluster.
You can use kubectl to create such a proxy. When the proxy is up, you're directly connected to the cluster, and you can use the Service's internal IP (Cluster IP).
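For example, assuming a Service named my-service listening on port 80 in the default namespace (hypothetical names, for illustration), you could reach it through the API server proxy like this:

```shell
# Start a local proxy to the Kubernetes API server
# (listens on 127.0.0.1:8001 by default)
kubectl proxy

# In another terminal, reach the Service through the API server proxy:
# /api/v1/namespaces/<namespace>/services/<service-name>:<port>/proxy/
curl http://127.0.0.1:8001/api/v1/namespaces/default/services/my-service:80/proxy/
```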
This method isn’t suited for a production environment, but it’s interesting for development, debugging, or other quick-and-dirty operations.
Exposing services as Node Port
Declaring a service of type NodePort exposes the Service on each Node's IP at a static port, the NodePort (a fixed port for that Service, in the default range of 30000-32767). You can then access the Service from outside the cluster by requesting <NodeIp>:<NodePort>. Every service you deploy as NodePort will be exposed on its own port, on every Node.
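As a minimal sketch, a NodePort service manifest could look like this (the my-app name and port 30123 are hypothetical; omit nodePort to let Kubernetes pick one from the default range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80          # port exposed inside the cluster
      targetPort: 80    # port the pods listen on
      nodePort: 30123   # static port opened on every Node (30000-32767)
```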
It’s rather cumbersome to use NodePort Services in production. As you are using non-standard ports, you often need to set up an external load balancer that listens on standard ports and redirects the traffic to the <NodeIp>:<NodePort>.
Warning: In our OVHcloud Managed Kubernetes you have an easy way to access NodePort services. You need to get the nodes URL, a URL resolving via round-robin DNS to one random node of your cluster. Since NodePort services are exposed on the same port on every Node, you can use this nodes URL to access them.
In order to get the nodes URL, take the control plane URL (the one given by kubectl cluster-info) and add the nodes element between the first and the second element of the URL:

$ kubectl cluster-info
Kubernetes control plane is running at https://xxxxxx.c1.gra9.k8s.ovh.net
CoreDNS is running at https://xxxxxx.c1.gra9.k8s.ovh.net/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://xxxxxx.c1.gra9.k8s.ovh.net/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
In this case, the nodes URL will be https://xxxxxx.nodes.c1.gra9.k8s.ovh.net and a service deployed on NodePort 30123 can be accessed on https://xxxxxx.nodes.c1.gra9.k8s.ovh.net:30123.
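For example (with xxxxxx standing in for your cluster's identifier, as above), you could query such a service from any machine:

```shell
# Reach a NodePort service through the round-robin nodes URL
curl https://xxxxxx.nodes.c1.gra9.k8s.ovh.net:30123
```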
Exposing services as Load Balancer
Declaring a service of type LoadBalancer exposes it externally using a cloud provider's load balancer. The cloud provider will provision a load balancer for the Service, and map it to its automatically assigned NodePort. How the traffic from that external load balancer is routed to the Service pods depends on the cluster provider.
LoadBalancer is the best option for a production environment, with two caveats:
- Every LoadBalancer you deploy will get its own IP.
- The LoadBalancer is usually billed by the number of exposed services and can, therefore, get expensive.
Note: There is a limit of 200 active LoadBalancers per OpenStack project (also named OpenStack tenant). This limit can be raised by contacting OVHcloud US support.
There are several annotations available to customize your load balancer:
service.beta.kubernetes.io/ovh-loadbalancer-proxy-protocol: Used on the service to enable the proxy protocol on all backends. Supported values: v1, v2, v2-ssl, v2-cn, the four Proxy Protocol modes handled by OVHcloud Load Balancer services.
service.beta.kubernetes.io/ovh-loadbalancer-allowed-sources: Used on the service to specify the allowed client IP source ranges. Value: a comma-separated list of CIDRs. For example: 10.0.0.0/24,22.214.171.124. Deprecated, please use the loadBalancerSourceRanges spec instead, see Restrict Access For LoadBalancer Service.
service.beta.kubernetes.io/ovh-loadbalancer-balance: Used on the service to set the algorithm to use for load balancing. Supported values: first, leastconn, roundrobin, source.
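As a sketch, such annotations go in the service's metadata; the my-app name, the algorithm, and the source range below are illustrative values:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # Load-balancing algorithm (illustrative value)
    service.beta.kubernetes.io/ovh-loadbalancer-balance: "roundrobin"
spec:
  type: LoadBalancer
  # Preferred over the deprecated allowed-sources annotation
  loadBalancerSourceRanges:
    - 10.0.0.0/24
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80
```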
What about Ingress
According to the official documentation, an Ingress is an API object that manages external access to the services in a cluster, typically HTTP. So what is the difference between the Ingress and the LoadBalancer? An Ingress isn't a type of Service, but an object that acts as a reverse proxy and single entry point to your cluster, routing requests to the different services. The most basic Ingress controller is the NGINX Ingress Controller, where NGINX takes the role of reverse proxy and also handles SSL termination.
An Ingress is exposed to the outside of the cluster either via ClusterIP and Kubernetes proxy, NodePort, or LoadBalancer, and it routes incoming traffic according to the configured rules.
The main advantage of using an Ingress behind a LoadBalancer is the cost: you can have lots of services behind a single LoadBalancer.
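As a minimal sketch (with hypothetical host and backend service names, and assuming an NGINX Ingress Controller is already installed in the cluster), an Ingress routing two paths to two services could look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: example.com
      http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1-service   # hypothetical backend service
                port:
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2-service   # hypothetical backend service
                port:
                  number: 80
```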
Deploying Load Balancer Services on OVHcloud Managed Kubernetes Clusters
In our OVHcloud Managed Kubernetes we propose a load balancing service enabling you to use the LoadBalancer ServiceType.
Deploying a Hello World Load Balancer service
For this example, we are going to use the following hello.yml file for our ovhplatform/hello Docker image, defining the service type as LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
  selector:
    app: hello-world
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-deployment
  labels:
    app: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: ovhplatform/hello:1.1
          ports:
            - containerPort: 80
Apply the file:
kubectl apply -f hello.yml
After applying the YAML file, a new
hello-world service and the corresponding
hello-world-deployment deployment are created:
$ kubectl apply -f hello.yml
service/hello-world created
deployment.apps/hello-world-deployment created
Note: The application you have just deployed is a simple Nginx server with a single static Hello World page. Basically, it deploys the ovhplatform/hello Docker image.
List the services
Now you are going to use
kubectl to see your service:
kubectl get service hello-world -w
You should see your newly created service:
$ kubectl get services
NAME          TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
hello-world   LoadBalancer   10.3.81.234   <pending>     80:31699/TCP   9s
As the LoadBalancer creation is asynchronous, and the provisioning of the load balancer can take several minutes, you will get a <pending> EXTERNAL-IP at first.
If you try again in a few minutes, you should get an EXTERNAL-IP:
$ kubectl get service hello-world
NAME          TYPE           CLUSTER-IP    EXTERNAL-IP       PORT(S)        AGE
hello-world   LoadBalancer   10.3.81.234   xxx.xxx.xxx.xxx   80:31699/TCP   4m
For each service you deploy with the LoadBalancer type, you will get a new IPv4 address in the xxx.xxx.xxx.xxx format to access the service.
Testing your service
If you point your web browser to the EXTERNAL-IP value, the hello-world service will answer you.
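You can also test it from the command line (replace xxx.xxx.xxx.xxx with your service's EXTERNAL-IP value):

```shell
# Fetch the Hello World page from the load balancer's public IP
curl http://xxx.xxx.xxx.xxx/
```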
At the end, you can proceed to clean up by deleting the service and the deployment.
Let’s begin by deleting the service:
kubectl delete service hello-world
If you list the services, you will see that
hello-world doesn’t exist anymore:
$ kubectl delete service hello-world
service "hello-world" deleted
$ kubectl get services -l app=hello-world
No resources found.
Then you can delete the deployment:
kubectl delete deploy hello-world-deployment
Now, if you list your deployment, you will find no resources:
$ kubectl delete deploy hello-world-deployment
deployment.apps "hello-world-deployment" deleted
$ kubectl get deployments -l app=hello-world
No resources found.
Now, list the pods:
kubectl get pod -n default -l app=hello-world
You will see that the pod created for
hello-world has been deleted too:
$ kubectl get pod -l app=hello-world
No resources found