Before you begin
This tutorial assumes that you already have a working OVHcloud Managed Kubernetes cluster and some basic knowledge of how to operate it. If you want to know more about these topics, please look at the OVHcloud Managed Kubernetes Service Quickstart.
NOTE: When a Persistent Volume resource is created inside a Managed Kubernetes cluster, an associated Public Cloud Block Storage volume is automatically created with it, with a lifespan depending on the parent cluster’s lifespan. This volume is charged hourly and will appear in your Public Cloud project. For more information, please refer to the following documentation: Volume Block Storage price.
Persistent Volumes
Before going further, let’s review how Kubernetes deals with data storage. There are currently two kinds of storage available with Kubernetes: Volumes and Persistent Volumes.
Kubernetes Volumes exist only as long as the Pod (and its containers) using them exists, and are deleted when the Pod is deleted.
As a result, Kubernetes Volumes are only useful for storing temporary data.
Kubernetes Persistent Volumes allow us to work with non-volatile data in Kubernetes. Persistent Volumes are not tied to the pod lifecycle or a single Pod. Pods can claim Persistent Volumes, thus making the data available to them.
NOTE: You may be wondering how Persistent Volumes are compatible with the rule that containers should be stateless – one of the most important best-practice principles for containers. As the Kubernetes ecosystem has matured and persistent storage solutions have emerged, this rule is no longer universally applicable.
What are the use cases for Persistent Volumes in Kubernetes?
Well, the most common application is databases. Database data, by definition, is meant to be persistent, and not linked to a specific pod, so Persistent Volumes are needed to deploy it in Kubernetes.
When deploying a database in Kubernetes, we follow these steps:
- Create and configure a Pod for the database engine
- Attach a Persistent Volume to the Pod using a Persistent Volume Claim
- Mount the claimed volume in the Pod
To use a Persistent Volume (PV) on a Kubernetes cluster, you must create a Persistent Volume Claim (PVC). Persistent Volume Claims are requests to provision a specific type and configuration of Persistent Volume. The different kinds of persistent storage are defined by cluster admins using Storage Classes.
When you need a Persistent Volume, you create a Persistent Volume Claim and choose a Storage Class from those made available by the cluster administrators. Depending on the Storage Class, an actual infrastructure volume storage device is provisioned on your account and a Persistent Volume is created on this physical device. The Persistent Volume is a sort of virtual storage instance over the infrastructure virtual storage.
Persistent Volumes on OVHcloud Managed Kubernetes
We currently support several Storage Classes on OVHcloud Managed Kubernetes:
- csi-cinder-classic
- csi-cinder-high-speed
You can display them with the `kubectl get storageclass` command:
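The command itself; its output lists the classes above, together with their provisioner, reclaim policy, and whether volume expansion is allowed:

```bash
kubectl get storageclass
```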
All of them are based on Cinder, the OpenStack Block Storage service.
The difference between them is the underlying physical storage device: `csi-cinder-high-speed` uses SSDs, while `csi-cinder-classic` uses traditional spinning disks. Both are transparently replicated across three physical local replicas.
When you create a Persistent Volume Claim on your Kubernetes cluster, we provision the Cinder storage into your account. This storage is charged according to the OVHcloud Flexible Cloud Block Storage Policy.
Since Kubernetes 1.11, support for expanding PersistentVolumeClaims (PVCs) is enabled by default, and it works on Cinder volumes. To learn how to resize them, please refer to the Resizing Persistent Volumes tutorial. Note that PVC resizing only allows volumes to be expanded, not shrunk.
Setting up a Persistent Volume
In this guide, we are going to use a simple example: a small Nginx web server, running in a Pod, created by a Deployment, attached to a Persistent Volume.
Create a namespace:
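This guide uses a namespace called `nginx-example` throughout:

```bash
kubectl create ns nginx-example
```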
Define a Persistent Volume Claim (PVC) in a file named `pvc.yaml` with the following content:
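A minimal manifest matching the example output shown later in this guide (a PVC named `nginx-logs` requesting 1 Gi with the `ReadWriteOnce` access mode from the `csi-cinder-high-speed` storage class):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-logs
  namespace: nginx-example
spec:
  storageClassName: csi-cinder-high-speed
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```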
Apply the YAML manifest:
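Apply the file you just created:

```bash
kubectl apply -f pvc.yaml
```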
This PVC will dynamically create a PV with a size of 1 Gi, according to the `csi-cinder-high-speed` storage class.
Check the new PVC and PV have been correctly created:
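List the PVC in the namespace and the cluster-wide PVs:

```bash
kubectl get pvc -n nginx-example
kubectl get pv
```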
Example output:
```
$ kubectl create ns nginx-example
namespace/nginx-example created

$ kubectl apply -f pvc.yaml
persistentvolumeclaim/nginx-logs created

$ kubectl get pvc -n nginx-example
NAME         STATUS   VOLUME                                                                   CAPACITY   ACCESS MODES   STORAGECLASS            AGE
nginx-logs   Bound    ovh-managed-kubernetes-d6r47l-pvc-a6025a24-c572-4c28-b5e7-c6f8311aa47f   1Gi        RWO            csi-cinder-high-speed   21s

$ kubectl get pv
NAME                                                                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS            REASON   AGE
ovh-managed-kubernetes-d6r47l-pvc-a6025a24-c572-4c28-b5e7-c6f8311aa47f   1Gi        RWO            Delete           Bound    nginx-example/nginx-logs   csi-cinder-high-speed            19s
```
As you can see, the Persistent Volume is created and is bound to the Persistent Volume Claim you created.
Now create a file named `deployment.yaml` with the following content:
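A manifest consistent with the rest of this guide. The `Recreate` strategy, the `nginx-logs` claim, the `/var/log/nginx` mount path, and the `nginx:1.7.9` image (visible in the curl responses later) come from this guide; the `app: nginx` labels and selector are illustrative choices:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: nginx-example
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
          volumeMounts:
            # Mount the claimed volume where Nginx writes its logs
            - name: nginx-logs
              mountPath: /var/log/nginx
      volumes:
        - name: nginx-logs
          persistentVolumeClaim:
            claimName: nginx-logs
```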
Apply it:
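Apply the Deployment manifest:

```bash
kubectl apply -f deployment.yaml
```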
If you look at the deployment part of this manifest, you will see that we have defined `.spec.strategy.type`. It specifies the strategy used to replace old Pods with new ones; we have set it to `Recreate`, so all existing Pods are killed before new ones are created.
We do so because the Storage Class we are using, `csi-cinder-high-speed`, only supports the `ReadWriteOnce` access mode, so only one Pod can write to the Persistent Volume at any given time.
Thanks to the Deployment, Kubernetes will create one Pod with one Nginx container and mount a volume on it at the path `/var/log/nginx`. The Nginx container will have permission to write in this folder.
Create a service for the Nginx container in a file named `svc.yaml`:
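A sketch matching the example output below (a service named `nginx-service` of type `LoadBalancer` exposing port 80); the `app: nginx` selector assumes the matching label on the Deployment's Pods:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: nginx-example
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```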
Apply it:
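Apply the Service manifest:

```bash
kubectl apply -f svc.yaml
```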
Wait until you get an external IP:
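Watch the service until the `EXTERNAL-IP` column is populated (this can take a minute or two while the load balancer is provisioned):

```bash
kubectl -n nginx-example get svc/nginx-service -w
```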
And do some calls to the URL to generate some access logs:
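Extract the external IP into a variable and hit the URL a few times:

```bash
export NGINX_URL=$(kubectl get svc nginx-service -n nginx-example -o jsonpath='{.status.loadBalancer.ingress[].ip}')
echo Nginx URL: http://$NGINX_URL/
curl -I http://$NGINX_URL/
curl -I http://$NGINX_URL/
```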
Example output:
```
$ kubectl apply -f deployment.yaml
deployment.apps/nginx-deployment created

$ kubectl get po -n nginx-example
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-766444c4d9-bqnz7   1/1     Running   0          41s

$ kubectl apply -f svc.yaml
service/nginx-service created

$ kubectl -n nginx-example get svc/nginx-service -w
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP       PORT(S)        AGE
nginx-service   LoadBalancer   10.3.128.254   <pending>         80:31622/TCP   30s
nginx-service   LoadBalancer   10.3.128.254   <pending>         80:31622/TCP   51s
nginx-service   LoadBalancer   10.3.128.254   152.228.168.120   80:31622/TCP   51s

$ export NGINX_URL=$(kubectl get svc nginx-service -n nginx-example -o jsonpath='{.status.loadBalancer.ingress[].ip}')
$ echo Nginx URL: http://$NGINX_URL/
Nginx URL: http://152.228.168.120/

$ curl -I http://$NGINX_URL/
HTTP/1.1 200 OK
Server: nginx/1.7.9
Date: Thu, 24 Mar 2022 12:31:12 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 23 Dec 2014 16:25:09 GMT
Connection: keep-alive
ETag: "54999765-264"
Accept-Ranges: bytes

$ curl -I http://$NGINX_URL/
HTTP/1.1 200 OK
Server: nginx/1.7.9
Date: Thu, 24 Mar 2022 12:31:25 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 23 Dec 2014 16:25:09 GMT
Connection: keep-alive
ETag: "54999765-264"
Accept-Ranges: bytes
```
Now we need to connect to the pod to read the log file and verify that our logs are written.
First, get the name of the Nginx running pod:
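Store the Pod name in a variable:

```bash
export POD_NAME=$(kubectl get po -n nginx-example -o name)
```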
And then connect to it and see your access logs:
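Run `cat` inside the `nginx` container to print the access log stored on the Persistent Volume:

```bash
kubectl -n nginx-example exec $POD_NAME -c nginx -- cat /var/log/nginx/access.log
```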
Example output:
```
$ export POD_NAME=$(kubectl get po -n nginx-example -o name)
$ kubectl -n nginx-example exec $POD_NAME -c nginx -- cat /var/log/nginx/access.log
10.2.1.0 - - [24/Mar/2022:12:31:03 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.64.1" "-"
10.2.2.0 - - [24/Mar/2022:12:31:12 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.64.1" "-"
```
Go further
For more information and tutorials, please see our other Managed Kubernetes or Platform as a Service guides. You can also explore the guides for other OVHcloud products and services.