Learn how to successfully configure Managed Databases (also called Cloud Databases) for Kafka via the OVHcloud Control Panel.
Apache Kafka is an open-source and highly resilient event-streaming platform with three main capabilities:
- publish (write) and subscribe to (read) streams of events;
- store streams of events;
- process streams of events.
You can get more information on Kafka from the official Kafka website.
Subscribe to the service
Log in to your OVHcloud Control Panel and switch to
Public Cloud in the top navigation bar. After selecting your Public Cloud project, click on
Data Streaming in the left-hand navigation bar under Databases & Analytics.
Click the
Create a database instance button (click
Create a service if your project already contains databases).
Step 1: Select your database type
Click on the type of database you want to use and then select the version to install from the respective drop-down menu.
Step 2: Select a service plan
In this step, choose an appropriate service plan. If needed, you will be able to upgrade the plan after creation.
Please visit the capabilities page for detailed information on each plan's properties.
Step 3: Select a region
Choose the geographical region of the datacenter where your service will be hosted.
Step 4: Node type and cluster sizing
You can increase the number of nodes and choose the node template in this step. The minimum and maximum amount of nodes depends on the solution chosen in step 2.
Please visit the capabilities page for detailed information on hardware resources and other properties of the database installation.
Take note of the pricing information.
Step 5: Configure your options
You can decide to attach your database to a public or private network.
Step 6: Review and confirm
The panel on the right side of the page will display a summary of your order and the OVHcloud API and Terraform equivalents of creating this database instance.
In a matter of minutes, your new Apache Kafka service will be deployed. Messages in the OVHcloud Control Panel will inform you when the streaming tool is ready to use.
Configure the Apache Kafka service
Once the Cloud Databases for Kafka service is up and running, you will have to define at least one user and one authorized IP to fully connect to the service (as producer or consumer).
The General information tab should prompt you to create users and authorized IPs.
Step 1 (mandatory): Set up a user
Switch to the
Users tab. An admin user is preconfigured during the service installation. You can add more users by clicking the
Add User button.
Enter a username and confirm the creation.
Once the user is created, the password is generated. Please keep it secure as it will not be shown again.
Passwords can be reset for the admin user, or changed afterward for other users, in the Users tab.
Step 2 (mandatory): Configure authorized IPs
Switch to the
Authorized IPs tab. At least one IP address must be authorized here before you can connect to your database. It can be your laptop IP for example.
Clicking
+ Add an IP address or IP block (CIDR) opens a new window in which you can add single IP addresses or blocks to allow access to the database.
You can edit and remove database access via the more options
... button in the IP table.
If you don't know how to get your IP, please visit a website like www.WhatismyIP.com. Copy the IP address shown on this website and keep it for later.
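If you prefer the command line, you can also retrieve your public IP with curl. The ifconfig.me service used below is just one third-party example; any similar service works:

```shell
# Query a third-party service for your public IPv4 address.
# ifconfig.me is one example; api.ipify.org or similar also work.
curl -4 -s ifconfig.me
```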
Optionally, you can configure access control lists (ACL) for granular permissions and create topics, as described below.
Optional: Create Kafka topics
Topics can be seen as categories, allowing you to organize your Kafka records. Producers write to topics and consumers read from topics.
To create Kafka topics, click on the
Add a topic button:
In advanced configuration you can change the default value for the following parameters:
- Replication (3 brokers by default)
- Partitions (1 partition by default)
- Retention size in bytes (-1: no limitation by default)
- Retention time in hours (-1: no limitation by default)
- Minimum in-sync replica (2 by default)
- Deletion policy
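If you install the official Apache Kafka distribution, the same parameters can also be set when creating a topic from the command line. This is a sketch only: the broker address, topic name, and the client-ssl.properties file (holding your SSL settings) below are placeholders to replace with your own values.

```shell
# Create a topic with explicit replication, partition and retention
# settings, mirroring the options available in the Control Panel.
# --command-config points to a properties file with the SSL settings.
kafka-topics.sh --create \
  --bootstrap-server kafka-f411d2ae-f411d2ae.database.cloud.ovh.us:20186 \
  --command-config client-ssl.properties \
  --topic my-topic \
  --partitions 1 \
  --replication-factor 3 \
  --config retention.ms=-1 \
  --config min.insync.replicas=2
```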
Optional: Configure ACLs on topics
Cloud Databases for Kafka supports access control lists (ACLs) to manage permissions on topics. This approach allows you to limit the operations that are available to specific connections and to restrict access to certain data sets, which improves the security of your data.
By default the admin user has access to all topics with admin privileges. You can define some additional ACLs for all users/topics by clicking the
Add a new entry button:
For a particular user, and one topic (or all with '*'), define the ACL with the following permissions:
- admin: full access to APIs and topic
- read: allow only searching and retrieving data from a topic
- write: allow updating, adding, and deleting data from a topic
- readwrite: full access to the topic
When multiple rules match, they are applied in the order listed above. If no rules match, access is denied.
First CLI connection to your Kafka service
Check also that the user has been granted ACLs for the target topics.
Download server and user certificates
To connect to the Apache Kafka service, it is required to use server and user certificates.
1 - Server certificate
The server CA (Certificate Authority) certificate can be downloaded from the General information tab. Select the more options
... button and download the CA certificate.
2 - User certificate
The user certificate can be downloaded from the Users tab. Select the more options
... button and download the user certificate.
3 - User access key
Also, download the user access key.
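Once downloaded, you can optionally inspect the certificate files with openssl. The file names below assume you saved them as ca.pem and service.cert:

```shell
# Display the subject and validity period of the downloaded certificates.
openssl x509 -in ca.pem -noout -subject -dates
openssl x509 -in service.cert -noout -subject -dates
```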
Install an Apache Kafka CLI
As part of the Apache Kafka official installation, you will get different scripts that will also allow you to connect to Kafka in a Java 8+ environment: Apache Kafka Official Quickstart.
We propose to use a generic producer and consumer client instead: Kcat (formerly known as kafkacat). Kcat is more lightweight since it does not require a JVM.
For this client installation, please follow the instructions available in the official kcat GitHub repository.
Kcat configuration file
Let's create a configuration file to simplify the CLI commands to act as Kafka Producer and Consumer:
bootstrap.servers=kafka-f411d2ae-f411d2ae.database.cloud.ovh.us:20186
enable.ssl.certificate.verification=false
ssl.ca.location=/home/user/kafkacat/ca.pem
security.protocol=ssl
ssl.key.location=/home/user/kafkacat/service.key
ssl.certificate.location=/home/user/kafkacat/service.cert
In our example, the cluster address and port are kafka-f411d2ae-f411d2ae.database.cloud.ovh.us:20186 and the previously downloaded CA certificates are in the /home/user/kafkacat/ folder.
Change these values according to your own configuration.
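The step above can also be scripted. The sketch below writes the configuration file from shell variables; the host, port and certificate directory are example values to replace with your own:

```shell
#!/bin/sh
# Example values: replace with your own cluster address and cert paths.
KAFKA_HOST="kafka-f411d2ae-f411d2ae.database.cloud.ovh.us"
KAFKA_PORT="20186"
CERT_DIR="$HOME/kafkacat"

# Write the kcat configuration file referenced by -F in the commands below.
cat > kafkacat.conf <<EOF
bootstrap.servers=${KAFKA_HOST}:${KAFKA_PORT}
enable.ssl.certificate.verification=false
security.protocol=ssl
ssl.ca.location=${CERT_DIR}/ca.pem
ssl.key.location=${CERT_DIR}/service.key
ssl.certificate.location=${CERT_DIR}/service.cert
EOF
```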
For this first example, let's push a record with the key "test-message-key" and the value "test-message-content" to the "my-topic" topic.
echo test-message-content | kcat -F kafkacat.conf -P -t my-topic -k test-message-key
The data can be retrieved from "my-topic".
kcat -F kafkacat.conf -C -t my-topic -o -1 -e
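kcat's format option can also print the message keys alongside the values. A sketch, consuming the whole topic from the beginning:

```shell
# -C consume, -o beginning starts at the first offset, -e exits at end,
# -f format string: %k is the message key, %s the message payload.
kcat -F kafkacat.conf -C -t my-topic -o beginning -e -f 'Key: %k, Value: %s\n'
```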
Some UI tools for Kafka are also available.
Visit the GitHub examples repository to find out how to connect to your database with several languages.