Nodes and pods
We have tested our OVHcloud Managed Kubernetes service with up to 100 nodes and 100 pods per node. While we are fairly sure it can go further, we advise you to stay under those limits. Node pools with anti-affinity are limited to 5 nodes (you can, of course, create multiple node pools with the same instance flavor if needed).
In general, it’s better to have several mid-size Kubernetes clusters than one monster-size one.
To ensure high availability for your services, it is recommended to provision enough compute capacity to handle your workload even when one of your nodes becomes unavailable. Note that any operation requested to our services, such as node deletions or updates, will be performed even if Kubernetes budget restrictions (PodDisruptionBudgets) are present.
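For reference, a PodDisruptionBudget is the kind of restriction referred to here. A minimal sketch looks like the following (the names my-app-pdb and my-app are placeholders for illustration):

apiVersion: policy/v1        # policy/v1 on recent Kubernetes versions; older clusters may use policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb           # hypothetical name
spec:
  minAvailable: 2            # ask Kubernetes to keep at least 2 matching pods running during voluntary disruptions
  selector:
    matchLabels:
      app: my-app            # hypothetical label selecting your application's pods

As noted above, node deletions or updates requested through our services are carried out even if such a budget would otherwise block the eviction.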
Since this is a fully managed service, including OS and other component updates, you will neither need nor be able to SSH into your nodes as root.
LoadBalancer
Creating a Kubernetes service of type LoadBalancer in a Managed Kubernetes cluster triggers the creation of a Public Cloud Load Balancer. The lifespan of the external Load Balancer (and thus the associated IP address) is linked to the lifespan of this Kubernetes resource.
There is a default quota of 16 external Load Balancers per OpenStack project (also known as an OpenStack tenant). This limit can exceptionally be raised upon request through our support team.
There is also a limit of 10 open ports per LoadBalancer, and these ports must be within the range 6 to 65535. (Additionally, node ports use the default range of 30000-32767, allowing you to expose up to 2768 services/ports.)
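As an illustration, a minimal Service of type LoadBalancer that respects these constraints could look like this (a sketch; the service name, label and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-app-lb            # hypothetical name
spec:
  type: LoadBalancer         # triggers the creation of a Public Cloud Load Balancer
  selector:
    app: my-app              # hypothetical label selecting your application's pods
  ports:
    - name: http
      port: 80               # must stay within the 6-65535 range, with at most 10 open ports per LoadBalancer
      targetPort: 8080       # hypothetical container port

Deleting this Service also releases the external Load Balancer and the associated IP address.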
OpenStack
Our Managed Kubernetes service is based on OpenStack, and your nodes and persistent volumes are built on top of it, using the OVH Public Cloud. As such, you can see them in the Compute > Instances section of the OVHcloud Manager. This does not mean that you can handle these nodes and persistent volumes directly as you would other cloud instances.
The managed part of OVHcloud Managed Kubernetes Service means that we have configured those nodes and volumes to be part of our Managed Kubernetes.
Please refrain from manipulating them from the OVH Public Cloud Manager (modifying open ports, renaming, resizing volumes…), as you could break them.
There is also a limit of 20 Managed Kubernetes Services per OpenStack project (also known as an OpenStack tenant).
Node naming
Due to known limitations currently present in the Kubelet service, be careful to set a unique name for all the OpenStack instances running in your tenant, including your Managed Kubernetes Service nodes and the instances that you start directly on OpenStack through the manager or the API.
The use of the period (.) character is forbidden in node names. Please use the dash (-) character instead.
Ports
In any case, there are some ports that you shouldn’t block on your instances if you want to keep your OVHcloud Managed Kubernetes service running:
Ports to open from public network (INPUT)
- TCP Port 22 (SSH): needed for node management by OVH
- TCP Port 10250 (kubelet): needed for communication from the API server to worker nodes
- TCP Ports from 30000 to 32767 (NodePort services port range): needed for NodePort and LoadBalancer services
Ports to open from instances to public network (OUTPUT)
- TCP Port 8090 (internal service): needed for node management by OVH
- UDP Port 123: needed to allow NTP server synchronization (systemd-timesync)
- TCP/UDP Port 53: needed to allow domain name resolution (systemd-resolve)
Ports to open from other worker nodes (INPUT/OUTPUT)
- UDP Port 8472 (flannel): needed for communication between pods
- UDP Port 4789 (kube-dns internal usage): needed for DNS resolution between nodes
Private Networks
Private networks (vRack) aren’t yet supported in OVHcloud Managed Kubernetes.
Please refrain from adding private networks to your worker node instances.
Cluster health
The command kubectl get componentstatus reports the scheduler, the controller manager, and the etcd service as unhealthy. This is a limitation due to our implementation of the Kubernetes control plane: the endpoints needed to report the health of these components are not accessible.
Persistent Volumes
Kubernetes Persistent Volume Claim resizing only allows volumes to be expanded, not shrunk.
If you try to decrease the storage size, you will get a message like:
The PersistentVolumeClaim "mysql-pv-claim" is invalid: spec.resources.requests.storage: Forbidden: field can not be less than previous value
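For example, with a claim such as the following sketch (reusing the mysql-pv-claim name from the message above; the size is arbitrary), raising spec.resources.requests.storage and re-applying the manifest expands the volume, while lowering it triggers the error shown:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim       # name taken from the example error message above
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi          # this value can only be increased compared to the previous one, never decreased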
For more details, please refer to the Resizing Persistent Volumes documentation.
Persistent Volumes use our Cinder-based block storage solution through the Cinder CSI driver. A worker node can have a maximum of 25 persistent volumes attached, and a persistent volume can only be attached to a single worker node.
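To illustrate the single-node attachment, a pod references such a claim as a regular volume; any other pod mounting the same claim will have to run on the same worker node (a sketch with placeholder names and image):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-volume      # hypothetical name
spec:
  containers:
    - name: app
      image: nginx           # placeholder image
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: mysql-pv-claim   # the claim shown in the example above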