Learn how to collect logs from pods in a Kubernetes cluster and send them to Logs Data Platform.
Kubernetes has, in effect, become the standard for managing containerized applications on cloud platforms. It is open source, has a large ecosystem, and an ever-growing community. Kubernetes is great, but once your containers go live in the cloud, you still want to monitor their behavior. The more containers you have, the harder it is to navigate the logs and get a clear picture of what is happening. How can you centralize all your Kubernetes pods' logs in one place and analyze them easily? By using Logs Data Platform with the help of Fluent Bit. Fluent Bit is a fast and lightweight log processor and forwarder. It is open source, cloud-oriented, and part of the Fluentd ecosystem. This tutorial will help you configure it for Logs Data Platform, and you can of course apply it to our fully managed Kubernetes offer.
Requirements
- an activated Logs Data Platform account
- at least one Stream and its token
- a working Kubernetes cluster with some pods already logging to stdout
Preparation
Before we dive into this tutorial, it is important to understand how we will deploy Fluent Bit. The configuration of Fluent Bit will be similar to the one you can find in the official documentation. Fluent Bit will be deployed as a DaemonSet, running on every node of the Kubernetes cluster, through a Helm installation. Helm is a package manager for Kubernetes that simplifies the deployment of applications. By default, Fluent Bit will read, parse, and ship every log of every pod of your cluster. It will also enrich each log with precious metadata such as pod name and ID, container name and ID, labels, and annotations. As stated in the Fluent Bit documentation, a built-in Kubernetes filter uses the Kubernetes API to gather some of that information. This configuration has been tested with Kubernetes 1.30 and the Fluent Bit image 2.2.2.
Instructions
We will configure Fluent Bit with these steps:
- Create the logging namespace where the Fluent Bit deployment will live.
- Define the Helm values which will be used in the Fluent Bit configuration.
- Install the DaemonSet with Helm to launch Fluent Bit.
Namespace
Run the following command to create the namespace of Fluent Bit:
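As a sketch, the namespace named in the steps above (logging) can be created with:

```shell
kubectl create namespace logging
```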
Configuration
Once the namespace is created, we can proceed to the next step: defining a Secret holding your stream token, which will be sent as the X-OVH-TOKEN value of each log.
Token Secret creation
There are several methods to create a secret in Kubernetes but we will use the one-liner version of secret creation.
We will create a ldp-token secret containing a single key, also named ldp-token, whose value is your token. Replace the your-token-value placeholder with the value of your token.
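The one-liner described above could look like this (the secret and key names ldp-token come from the text; your-token-value is the placeholder to replace):

```shell
kubectl create secret generic ldp-token \
  --from-literal=ldp-token=your-token-value \
  --namespace logging
```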
Helm Values
The Helm installation is documented here. We will customize the values used by the Helm installation to change the configuration of Fluent Bit before its deployment. The default values in the configuration file of the Helm package are located here.
For brevity's sake, we will just detail the part where we change the default values. Make sure to check the default values in the whole file to adapt it to your Kubernetes configuration.
Look for the env: configuration in the file and add the following values to use your secret as an environment variable.
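A minimal sketch of that env: section, assuming you expose the secret under an environment variable named LDP_TOKEN (the variable name is an assumption; the secret name and key match the ldp-token Secret created earlier):

```yaml
# values.yaml (excerpt) -- expose the stream token to Fluent Bit
env:
  - name: LDP_TOKEN          # assumed variable name, referenced by the filters
    valueFrom:
      secretKeyRef:
        name: ldp-token      # Secret created in the previous step
        key: ldp-token
```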
We now need to add several filters to add the token and format log records for the GELF output of Fluent Bit.
- The first filter parses the incoming messages and adds all metadata information (pod, container, images).
- The second filter adds the X-OVH-TOKEN by using the environment variable we configured earlier.
- The third filter ensures all Kubernetes metadata are flattened on the first level of the record so that they can be used as fields.
- The fourth filter copies the name of the pod in the key host so that this value is properly set. You are free to use any other metadata value that will suit your needs here.
- The final filter ensures there is always a value for the field log. This field will be used as the short_message key for the GELF output and thus cannot be empty. Some software might have their log messages in another field than log. This will prevent these log messages from being lost.
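The five filters above can be sketched as follows in the filters section of the Helm values. This is an illustration using standard Fluent Bit filters (kubernetes, modify, nest); it assumes the token is available in an environment variable named LDP_TOKEN, and the exact match patterns may need adapting to your chart version:

```yaml
# values.yaml (excerpt) -- config.filters as a Fluent Bit configuration string
config:
  filters: |
    # 1. Parse incoming messages and attach pod/container metadata
    [FILTER]
        Name kubernetes
        Match kube.*
        Merge_Log On
        Keep_Log Off
        K8S-Logging.Parser On
        K8S-Logging.Exclude On
    # 2. Add the X-OVH-TOKEN from the environment variable
    [FILTER]
        Name modify
        Match *
        Add X-OVH-TOKEN ${LDP_TOKEN}
    # 3. Flatten the nested kubernetes metadata to top-level fields
    [FILTER]
        Name nest
        Match *
        Operation lift
        Nested_under kubernetes
        Add_prefix kubernetes_
    # 4. Copy the pod name into the host field
    [FILTER]
        Name modify
        Match *
        Copy kubernetes_pod_name host
    # 5. Guarantee the log field always exists (it feeds short_message)
    [FILTER]
        Name modify
        Match *
        Condition Key_does_not_exist log
        Add log unspecified
```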
Now we can provide the output configuration that will send logs to Logs Data Platform in GELF format.
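A sketch of that output section, using Fluent Bit's GELF output plugin. The graX.logs.ovh.com address is the placeholder used in this guide, and the port shown is an assumption to verify against your Logs Data Platform account details:

```yaml
# values.yaml (excerpt) -- config.outputs as a Fluent Bit configuration string
config:
  outputs: |
    [OUTPUT]
        Name gelf
        Match kube.*
        Host graX.logs.ovh.com   # replace with your own LDP address
        Port 12202               # assumed GELF TLS port; check your account
        Mode tls
        tls On
        tls.verify On
        Gelf_Short_Message_Key log
```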
In this GELF output configuration we use the address graX.logs.ovh.com. Please change it to the actual address of your Logs Data Platform account. The port used is the one for GELF, and TLS is activated. Note that Gelf_Short_Message_Key is set to log, because that is where the main Kubernetes log field is.
Launch Fluent Bit
You must first add the Helm repository with the following command:
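The official Fluent Bit charts live in the fluent Helm repository; adding it could look like this:

```shell
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update
```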
Then use the following helm command to deploy Fluent Bit on your platform. The values.yaml file contains the modified configuration.
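A sketch of the deployment command, assuming the release is named fluent-bit and your customized values are saved in values.yaml:

```shell
helm install fluent-bit fluent/fluent-bit \
  --namespace logging \
  -f values.yaml
```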
Verify that the pods are running correctly with the command:
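For example:

```shell
kubectl get pods --namespace logging
```

Since Fluent Bit runs as a DaemonSet, you should see one fluent-bit pod per node, each in the Running state.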
You can now fly to the stream interface to witness your beautifully structured logs.
And that's it. Your Kubernetes activity is now perfectly logged in one place!
Go further
For more information and tutorials, please see our other Logs Data Platform support guides or explore the guides for other OVHcloud products and services.