Objective
The OVHcloud Managed Kubernetes service provides you with Kubernetes clusters without the hassle of installing or operating them. At OVHcloud, we aim to provide you with the best products and services, and security is important to us. That’s why, by default, we apply security updates to your Kubernetes clusters.
Still, you can change the configuration of the security update policy for your cluster. Learn how to do it in this guide.
Requirements
- An OVHcloud Managed Kubernetes cluster
Instructions
Configure security update policy through the OVHcloud Control Panel
Log into the OVHcloud Control Panel. Go to the Public Cloud section and select the Public Cloud project concerned.
Access the administration UI for your OVHcloud Managed Kubernetes clusters by clicking on Managed Kubernetes Service in the left-hand menu.
Click the ... button to the right of your Kubernetes cluster and choose Manage cluster.
From the Management section, click on Change security policy.
A pop-up displays all the available options:
- Do not update. We do not recommend this choice. OVHcloud reserves the right to update Kubernetes components or your nodes on an exceptional basis, in critical cases that limit the security of our infrastructure.
- Minimum unavailability. Apply ('patch version') security updates to my Kubernetes service to guarantee service security and stability. If we cannot avoid downtime while performing a rolling update on your nodes, we will report this to you. We advise sizing your cluster to ensure that it can be updated at any time.
- Maximum security. Apply ('patch version') security updates to my Kubernetes service to guarantee service security and stability. The update may result in your nodes being unavailable for a few minutes while we perform the rolling update.
The Maximum security policy is configured by default. Even though we recommend keeping this setting, you can choose the security policy that best suits your needs.
Choose an option and click Confirm.
Configure security update policy through Terraform
Since version 0.20 of the OVHcloud Terraform provider, you can also configure the security update policy through Terraform, both at cluster creation and when updating an existing cluster.
Getting your API tokens and project information
The OVH Terraform provider needs to be configured with a set of credentials:
- an application_key
- an application_secret
- a consumer_key
Why?
Behind the scenes, the OVHcloud Terraform provider makes requests to the OVHcloud APIs.
In order to retrieve this information, please follow our First Steps with the OVHcloud APIs tutorial.
Specifically, you have to generate these credentials via the OVHcloud token generation page with the following rights:
When you have successfully generated your OVHcloud tokens, please save them, as you will need them shortly.
The last information we need is the service_name: it is the ID of your Public Cloud project.
How to get it?
In the Public Cloud section, you can retrieve your service name ID using the Copy to clipboard button.
You will also use this information in the Terraform resources definition files.
Terraform instructions
First, create a provider.tf file declaring the OVH provider, with the US endpoint ("ovh-us") and the keys retrieved earlier in this guide.
Terraform 0.13 and later:
terraform {
  required_providers {
    ovh = {
      source = "ovh/ovh"
    }
  }
}

provider "ovh" {
  endpoint           = "ovh-us"
  application_key    = "<your_access_key>"
  application_secret = "<your_application_secret>"
  consumer_key       = "<your_consumer_key>"
}
Terraform 0.12 and earlier:
# Configure the OVHcloud Provider
provider "ovh" {
  endpoint           = "ovh-us"
  application_key    = "<your_access_key>"
  application_secret = "<your_application_secret>"
  consumer_key       = "<your_consumer_key>"
}
Alternatively, the secret keys can be retrieved from your environment through the following variables:
- OVH_ENDPOINT
- OVH_APPLICATION_KEY
- OVH_APPLICATION_SECRET
- OVH_CONSUMER_KEY
This second method (or a similar alternative) is recommended to avoid storing secret data in a source repository.
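For example, assuming a Unix-like shell, you could export these variables before running Terraform (the values below are placeholders):

$ export OVH_ENDPOINT="ovh-us"
$ export OVH_APPLICATION_KEY="<your_application_key>"
$ export OVH_APPLICATION_SECRET="<your_application_secret>"
$ export OVH_CONSUMER_KEY="<your_consumer_key>"

With these variables set, the credentials can be left out of the provider "ovh" block.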
Here, we defined the ovh-us endpoint because we want to call the OVHcloud US API, but other endpoints exist depending on your needs:
- ovh-us for the OVHcloud US API
- ovh-ca for the OVHcloud North America API
- ovh-eu for the OVHcloud Europe API
Create a variables.tf file with service_name:
variable "service_name" {
  type    = string
  default = "<your_service_name>"
}
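If you prefer not to hardcode the project ID as a default value in variables.tf, you can supply it at run time instead, for example through a terraform.tfvars file kept out of version control. A minimal sketch, using the same placeholder value:

service_name = "<your_service_name>"

Terraform automatically loads terraform.tfvars when you run plan or apply.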
Define the resources you want to create in a new file called ovh_kube_cluster.tf:
resource "ovh_cloud_project_kube""cluster"{
service_name = var.service_name
name = "my-super-cluster"
region = "US-EAST-VA-1"
version = "1.24"
update_policy = "NEVER_UPDATE" # "ALWAYS_UPDATE" by default but you can also choose "MINIMAL_DOWNTIME" or "NEVER_UPDATE"
}
In this resource configuration, we ask Terraform to create a Kubernetes cluster in the US-EAST-VA-1 region, using Kubernetes version 1.24 (the latest and recommended version at the time this tutorial was written), with a security update policy of "Do not update". The update_policy values map to the Control Panel options described above: ALWAYS_UPDATE corresponds to Maximum security, MINIMAL_DOWNTIME to Minimum unavailability, and NEVER_UPDATE to Do not update.
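Optionally, if you also want to retrieve the generated kubeconfig through Terraform, you can expose the cluster's kubeconfig attribute (the same attribute shown as a sensitive value in the plan output below) as an output. A minimal sketch; the output name is arbitrary:

# Expose the cluster kubeconfig as a sensitive Terraform output
output "kubeconfig" {
  value     = ovh_cloud_project_kube.cluster.kubeconfig
  sensitive = true
}

After the apply, a recent Terraform version can print it with terraform output -raw kubeconfig.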
Now we need to initialize Terraform, generate a plan, and apply it.
$ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding latest version of ovh/ovh...
- Installing ovh/ovh v0.20.0...
- Installed ovh/ovh v0.20.0 (signed by a HashiCorp partner, key ID F56D1A6CBDAAADA5)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
The init command initializes your working directory, which contains the .tf configuration files.
It is the first command to execute for a new configuration, or after checking out an existing configuration from a Git repository.
The init command will:
- Download and install Terraform providers/plugins
- Initialize the backend (if defined)
- Download and install modules (if defined)
Now, we can generate our plan:
$ terraform plan
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# ovh_cloud_project_kube.cluster will be created
  + resource "ovh_cloud_project_kube" "cluster" {
      + control_plane_is_up_to_date = (known after apply)
      + id                          = (known after apply)
      + is_up_to_date               = (known after apply)
      + kubeconfig                  = (sensitive value)
      + name                        = "my-super-cluster"
      + next_upgrade_versions       = (known after apply)
      + nodes_url                   = (known after apply)
      + region                      = "US-EAST-VA-1"
      + service_name                = "xxxxxxxxxxxxxxxxxxxx"
      + status                      = (known after apply)
      + update_policy               = "NEVER_UPDATE"
      + url                         = (known after apply)
      + version                     = "1.24"
    }
Plan: 1 to add, 0 to change, 0 to destroy.
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.
Thanks to the plan command, we can check what Terraform wants to create, modify, or remove.
The plan is okay for us, so let’s apply it:
$ terraform apply
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# ovh_cloud_project_kube.cluster will be created
  + resource "ovh_cloud_project_kube" "cluster" {
      + control_plane_is_up_to_date = (known after apply)
      + id                          = (known after apply)
      + is_up_to_date               = (known after apply)
      + kubeconfig                  = (sensitive value)
      + name                        = "my-super-cluster"
      + next_upgrade_versions       = (known after apply)
      + nodes_url                   = (known after apply)
      + region                      = "US-EAST-VA-1"
      + service_name                = "xxxxxxxxxxxxxxxxxxxx"
      + status                      = (known after apply)
      + update_policy               = "NEVER_UPDATE"
      + url                         = (known after apply)
      + version                     = "1.24"
    }
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
ovh_cloud_project_kube.cluster: Creating...
ovh_cloud_project_kube.cluster: Still creating... [10s elapsed]
ovh_cloud_project_kube.cluster: Still creating... [20s elapsed]
ovh_cloud_project_kube.cluster: Still creating... [30s elapsed]
ovh_cloud_project_kube.cluster: Still creating... [40s elapsed]
ovh_cloud_project_kube.cluster: Still creating... [50s elapsed]
ovh_cloud_project_kube.cluster: Still creating... [1m0s elapsed]
ovh_cloud_project_kube.cluster: Still creating... [1m10s elapsed]
ovh_cloud_project_kube.cluster: Still creating... [1m20s elapsed]
ovh_cloud_project_kube.cluster: Still creating... [1m30s elapsed]
ovh_cloud_project_kube.cluster: Still creating... [1m40s elapsed]
ovh_cloud_project_kube.cluster: Still creating... [1m50s elapsed]
ovh_cloud_project_kube.cluster: Still creating... [2m0s elapsed]
ovh_cloud_project_kube.cluster: Still creating... [2m10s elapsed]
ovh_cloud_project_kube.cluster: Still creating... [2m20s elapsed]
ovh_cloud_project_kube.cluster: Still creating... [2m30s elapsed]
ovh_cloud_project_kube.cluster: Still creating... [2m40s elapsed]
ovh_cloud_project_kube.cluster: Still creating... [2m50s elapsed]
ovh_cloud_project_kube.cluster: Still creating... [3m0s elapsed]
ovh_cloud_project_kube.cluster: Still creating... [3m10s elapsed]
ovh_cloud_project_kube.cluster: Still creating... [3m20s elapsed]
ovh_cloud_project_kube.cluster: Still creating... [3m30s elapsed]
ovh_cloud_project_kube.cluster: Still creating... [3m40s elapsed]
ovh_cloud_project_kube.cluster: Creation complete after 3m47s [id=76db2764-58d2-4384-b17f-ab38b0c7fc78]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Update
If you want to update the security policy, you can also do it through Terraform. Edit the ovh_kube_cluster.tf file with this content:
resource "ovh_cloud_project_kube" "cluster" {
service_name = var.service_name
name = "my-super-cluster"
region = "US-EAST-VA-1"
version = "1.24"
update_policy = "ALWAYS_UPDATE" # "ALWAYS_UPDATE" by default but you can also choose "MINIMAL_DOWNTIME" or "NEVER_UPDATE"
}
And apply it:
$ terraform apply
ovh_cloud_project_kube.cluster: Refreshing state... [id=xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# ovh_cloud_project_kube.cluster will be updated in-place
  ~ resource "ovh_cloud_project_kube" "cluster" {
        id            = "xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx"
        name          = "my-super-cluster"
      ~ update_policy = "NEVER_UPDATE" -> "ALWAYS_UPDATE"
        # (10 unchanged attributes hidden)
    }
Plan: 0 to add, 1 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
ovh_cloud_project_kube.cluster: Modifying... [id=xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx]
ovh_cloud_project_kube.cluster: Modifications complete after 1s [id=xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx]
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
Destroy
If you want to delete the Kubernetes cluster you added through Terraform, you have to execute the terraform destroy command:
$ terraform destroy
ovh_cloud_project_kube.cluster: Refreshing state... [id=xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
- destroy
Terraform will perform the following actions:
# ovh_cloud_project_kube.cluster will be destroyed
  - resource "ovh_cloud_project_kube" "cluster" {
      - control_plane_is_up_to_date = true -> null
      - id                          = "xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx" -> null
      - is_up_to_date               = true -> null
      - kubeconfig                  = (sensitive value)
      - name                        = "my-super-cluster" -> null
      - next_upgrade_versions       = [] -> null
      - nodes_url                   = "xxxxxx.nodes.c3.gra.k8s.ovh.net" -> null
      - region                      = "US-EAST-VA-1" -> null
      - service_name                = "xxxxxxxxxxxxxxxxx" -> null
      - status                      = "READY" -> null
      - update_policy               = "ALWAYS_UPDATE" -> null
      - url                         = "xxxxxx.c3.gra.k8s.ovh.net" -> null
      - version                     = "1.24" -> null
    }
Plan: 0 to add, 0 to change, 1 to destroy.
Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
ovh_cloud_project_kube.cluster: Destroying... [id=xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx]
ovh_cloud_project_kube.cluster: Still destroying... [id=xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx, 10s elapsed]
ovh_cloud_project_kube.cluster: Still destroying... [id=xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx, 20s elapsed]
ovh_cloud_project_kube.cluster: Still destroying... [id=xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx, 30s elapsed]
ovh_cloud_project_kube.cluster: Destruction complete after 37s
Destroy complete! Resources: 1 destroyed.