In this tutorial we explain how to deploy services on the OVHcloud Managed Kubernetes service using our `LoadBalancer` to get external traffic into your cluster. We will begin by listing the main methods to expose Kubernetes services outside the cluster, with their advantages and disadvantages. Then we will walk through a complete example of a `LoadBalancer` service deployment.
Before you begin
This tutorial presupposes that you already have a working OVHcloud Managed Kubernetes cluster, and some basic knowledge of how to operate it. If you want to know more about these topics, please look at the OVHcloud Managed Kubernetes Service Quickstart.
Some concepts: ClusterIP, NodePort, Ingress and LoadBalancer
When you begin to use Kubernetes for real applications, one of the first questions is how to get external traffic into your cluster. The official documentation gives you a good but rather dry explanation of the topic; here we try to explain the concepts in a minimal, need-to-know way.
There are several ways to route the external traffic into your cluster:
- Using Kubernetes proxy and `ClusterIP`: The default Kubernetes `ServiceType` is `ClusterIP`, which exposes the `Service` on a cluster-internal IP. To reach the `ClusterIP` from an external source, you can open a Kubernetes proxy between the external source and the cluster. It is usually only used for development.
- Exposing services as `NodePort`: Declaring a `Service` of type `NodePort` exposes the service on each Node's IP at a static port (the `NodePort`). You can then access the `Service` from outside the cluster by requesting `<NodeIP>:<NodePort>`. It can be used for production, with some limitations.
- Exposing services as `LoadBalancer`: Declaring a `Service` of type `LoadBalancer` exposes it externally using a cloud provider's load balancer. The cloud provider provisions a load balancer for the `Service` and maps it to its automatically assigned `NodePort`. It is the most widely used method in production environments.
Using Kubernetes proxy and ClusterIP
The default Kubernetes `ServiceType` is `ClusterIP`, which exposes the `Service` on a cluster-internal IP. To reach the `ClusterIP` from an external computer, you can open a Kubernetes proxy between the external computer and the cluster.
You can use `kubectl` to create such a proxy. When the proxy is up, you're directly connected to the cluster, and you can use the `Service`'s internal IP (`ClusterIP`).
This method isn't suited for a production environment, but it's useful for development, debugging or other quick-and-dirty operations.
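For example, a minimal sketch, assuming the `hello-world` service deployed later in this tutorial, running in the `default` namespace. Start a proxy with `kubectl`:

# kubectl proxy --port=8080
Starting to serve on 127.0.0.1:8080

Then, from another terminal, reach the service through the API server's proxy path:

# curl http://localhost:8080/api/v1/namespaces/default/services/hello-world:http/proxy/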
Exposing services as NodePort
Declaring a service of type `NodePort` exposes the `Service` on each Node's IP at a static port, the `NodePort` (a fixed port for that `Service`, in the default range of 30000-32767). You can then access the `Service` from outside the cluster by requesting `<NodeIP>:<NodePort>`. Every service you deploy as `NodePort` will be exposed on its own port, on every Node.
It's rather cumbersome to use `NodePort` `Services` in production. As you are using non-standard ports, you often need to set up an external load balancer that listens on standard ports and redirects the traffic to `<NodeIP>:<NodePort>`.
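As an illustration, here is a minimal `NodePort` sketch (the service name and the `nodePort` value are arbitrary examples; omitting `nodePort` lets Kubernetes pick a free port in the default range):

apiVersion: v1
kind: Service
metadata:
  name: hello-world-nodeport    # example name
spec:
  type: NodePort
  selector:
    app: hello-world            # must match the labels of the target pods
  ports:
    - port: 80                  # cluster-internal port
      targetPort: 80            # port the pods listen on
      nodePort: 30080           # static port opened on every node (example value)
      protocol: TCP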
Exposing services as LoadBalancer
Declaring a service of type `LoadBalancer` exposes it externally using a cloud provider's load balancer. The cloud provider will provision a load balancer for the `Service`, and map it to its automatically assigned `NodePort`. How the traffic from that external load balancer is routed to the `Service` pods depends on the cluster provider.
The `LoadBalancer` is the best option for a production environment, with two caveats:
- Every `Service` of type `LoadBalancer` you deploy will get its own IP.
- The `LoadBalancer` is usually billed by the number of exposed services, so it can be expensive.
There is a limit of 16 active `LoadBalancers` per OpenStack project (also called an OpenStack tenant). This limit can be exceptionally raised upon request through our support team.
Supported annotations
There are several annotations available to customize your load balancer:
- `service.beta.kubernetes.io/ovh-loadbalancer-proxy-protocol`: Used on the service to enable the PROXY protocol on all backends. Supported values: `v1`, `v2`, `v2_ssl`, `v2_ssl_cn`.
- `service.beta.kubernetes.io/ovh-loadbalancer-allowed-sources`: Used on the service to specify the allowed client IP source ranges. Value: a comma-separated list of CIDRs. For example: `10.0.0.0/24,172.10.0.1`. Deprecated: please use the `loadBalancerSourceRanges` spec instead, see Restrict Access For LoadBalancer Service.
- `service.beta.kubernetes.io/ovh-loadbalancer-balance`: Used on the service to set the algorithm to use for load balancing. Supported values: `first`, `leastconn`, `roundrobin`, `source`. Default: `roundrobin`.
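As a hedged sketch, here is how these annotations could be combined with the `loadBalancerSourceRanges` spec on a `Service` (all values are arbitrary examples):

apiVersion: v1
kind: Service
metadata:
  name: hello-world
  annotations:
    # enable the PROXY protocol, version 2, on all backends (example value)
    service.beta.kubernetes.io/ovh-loadbalancer-proxy-protocol: "v2"
    # use the least-connections algorithm instead of the default roundrobin
    service.beta.kubernetes.io/ovh-loadbalancer-balance: "leastconn"
spec:
  type: LoadBalancer
  # restrict client access to these CIDRs (replaces the deprecated
  # ovh-loadbalancer-allowed-sources annotation)
  loadBalancerSourceRanges:
    - 10.0.0.0/24
  selector:
    app: hello-world
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP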
What about Ingress
According to the official documentation, an `Ingress` is an API object that manages external access to the services in a cluster, typically HTTP. What's the difference with the `LoadBalancer` or `NodePort`?
`Ingress` isn't a type of `Service`, but an object that acts as a reverse proxy and single entry point to your cluster, routing requests to the different services. The most basic `Ingress` is the NGINX Ingress Controller, where NGINX takes the role of reverse proxy and also handles SSL termination.
An `Ingress` is exposed to the outside of the cluster either via `ClusterIP` and Kubernetes proxy, `NodePort`, or `LoadBalancer`, and it routes incoming traffic according to the configured rules.
The main advantage of using an `Ingress` behind a `LoadBalancer` is the cost: you can have lots of services behind a single `LoadBalancer`.
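As a sketch, assuming an NGINX Ingress Controller is already installed and exposed through a single `LoadBalancer`, an `Ingress` routing HTTP traffic by host could look like this (the hostname and backend service name are examples):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress     # example name
spec:
  ingressClassName: nginx       # assumes the NGINX Ingress Controller class
  rules:
    - host: hello.example.com   # example hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-world   # example backend service
                port:
                  number: 80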
Deploying LoadBalancer Services on OVHcloud Managed Kubernetes clusters
On our OVHcloud Managed Kubernetes we provide a load balancing service enabling you to use the `LoadBalancer` `ServiceType`. There is a limit of 16 active `LoadBalancers` per cluster. This limit can be exceptionally raised upon request through our support team.
Deploying a Hello World LoadBalancer service
Create a `hello.yaml` file for our `ovhplatform/hello` Docker image, defining the service type as `LoadBalancer`:
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  type: LoadBalancer          # ask the cloud provider to provision an external load balancer
  ports:
    - port: 80                # port exposed by the load balancer
      targetPort: 80          # port the pods listen on
      protocol: TCP
      name: http
  selector:
    app: hello-world          # route traffic to pods carrying this label
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-deployment
  labels:
    app: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: ovhplatform/hello
          ports:
            - containerPort: 80
And apply the file:
# kubectl apply -f hello.yaml
After applying the YAML file, a new `hello-world` service and the corresponding `hello-world-deployment` deployment are created:
# kubectl apply -f hello.yaml
service/hello-world created
deployment.apps/hello-world-deployment created
The application you have just deployed is a simple NGINX server with a single static Hello World page. Basically, it just deploys the Docker image `ovhplatform/hello`.
List the services
And now you're going to use `kubectl` to see your service:
# kubectl get service hello-world -w
You should see your newly created service:
# kubectl get service hello-world -w
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-world LoadBalancer 10.3.81.234 <pending> 80:31699/TCP 9s
As the `LoadBalancer` creation is asynchronous, and the provisioning of the load balancer can take several minutes, you will surely get a `<pending>` `EXTERNAL-IP`.
If you try again in a few minutes you should get an `EXTERNAL-IP`:
# kubectl get service hello-world
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-world LoadBalancer 10.3.81.234 xxx.xxx.xxx.xxx 80:31699/TCP 4m
For each service you deploy with the `LoadBalancer` type, you will get a new IPv4 address (shown as `xxx.xxx.xxx.xxx` above) to access the service.
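If you only need the external IP, for scripting for instance, here is a small sketch using `kubectl`'s JSONPath output (note that some providers expose a `hostname` instead of an `ip` in this field):

# kubectl get service hello-world -o jsonpath='{.status.loadBalancer.ingress[0].ip}'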
Testing your service
If you point your web browser to the service URL (`http://xxx.xxx.xxx.xxx/`), the `hello-world` service will answer you.
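You can also test it from the command line with `curl` (a sketch; replace `xxx.xxx.xxx.xxx` with your service's `EXTERNAL-IP`):

# curl http://xxx.xxx.xxx.xxx/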
Cleaning up
At the end, you can clean up by deleting the service and the deployment.
Let’s begin by deleting the service:
# kubectl delete service hello-world
If you list the services, you will see that `hello-world` doesn't exist anymore:
# kubectl delete service hello-world
service "hello-world" deleted
# kubectl get services
No resources found.
Then, you can delete the deployment:
# kubectl delete deploy hello-world-deployment
And now, if you list your deployments, you will find no resources:
# kubectl get deployments
No resources found.
If you now list the pods:
# kubectl get pods
you will see that the pod created for `hello-world` has been deleted too:
# kubectl get pods
No resources found