This page provides the technical capabilities and limitations of the Managed Databases (also called Cloud Databases) for Kafka MirrorMaker offer.
We continuously improve our offers. You can follow and submit ideas to add to our roadmap.
Capabilities and limitations
To view Cloud Databases availability, please see our regions and availability webpage.
All nodes of a database instance must be located in the same region.
The Cloud Databases offer supports the following Kafka versions:
- Kafka MirrorMaker 2.0
You can follow Kafka Release Cycle on their official page.
You can use any of the Kafka-recommended clients to access your cluster.
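As a sketch, the connection settings for an SSL-secured cluster can be gathered into a configuration dictionary and passed to a client such as kafka-python's `KafkaProducer`. The hostname, port, and certificate file names below are placeholders; use the values shown for your own service:

```python
# Sketch of client connection settings for an SSL-secured Kafka cluster.
# Hostname, port, and certificate paths are placeholders, not real values.
def build_client_config(host, port, ca_file, cert_file, key_file):
    """Return keyword arguments accepted by kafka-python clients."""
    return {
        "bootstrap_servers": f"{host}:{port}",
        "security_protocol": "SSL",
        "ssl_cafile": ca_file,      # cluster CA certificate
        "ssl_certfile": cert_file,  # client access certificate
        "ssl_keyfile": key_file,    # client access key
    }

config = build_client_config(
    "my-cluster.example.net", 9092,  # placeholder service endpoint
    "ca.pem", "service.cert", "service.key",
)
# The dict could then be unpacked as: KafkaProducer(**config)
```

The same dictionary works for consumers, since kafka-python shares these security parameters across client types.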
Additionally, Kafka Connect is available at OVHcloud.
Three plans are available: Essential, Business, and Enterprise.

Here is an overview of the plans' capabilities:

| Plan | Number of nodes by default |
|------------|----------------------------|
| Essential | 1 |
| Business | 3 |
| Enterprise | 6 |

Your choice of plan affects the number of nodes your cluster can run as well as the SLA.
Kafka software is distributed under the Apache License 2.0, a permissive open-source license.
Here are the node types you can choose from:
Currently, all nodes of a given cluster must be of the same type and located in the same region.
Public as well as private networking (vRack) can be used for all the offers.
Ingress and egress traffic are included in the service plans and are unmetered.
The database service's IP address is subject to change periodically. Thus, it is advised not to rely on these IPs for any configuration, such as connection or egress policy. Instead, utilize the provided DNS record and implement CIDR-based egress policies for more robust and flexible network management.
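The recommendation above can be sketched with the standard library: resolve the service's DNS record at connection time instead of caching an IP, and express egress rules as CIDR blocks rather than individual addresses. The hostname and CIDR block below are illustrative placeholders:

```python
import ipaddress
import socket

# Illustrative egress allowlist expressed as CIDR blocks, not fixed IPs.
EGRESS_ALLOWED = [ipaddress.ip_network("203.0.113.0/24")]

def ip_allowed(ip: str) -> bool:
    """Check whether an address falls inside one of the allowed CIDR blocks."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in EGRESS_ALLOWED)

def resolve_service(hostname: str) -> str:
    """Resolve the service DNS record freshly rather than pinning an IP."""
    return socket.gethostbyname(hostname)

print(ip_allowed("203.0.113.42"))  # True: inside the allowed block
print(ip_allowed("198.51.100.7"))  # False: outside the allowed block
```

Because the check is range-based, the policy keeps working even when the service's underlying IP address rotates within the provider's range.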
Private network considerations
Here are some considerations to take into account when using a private network:
- Network ports are created in the private network of your choice. Thus, further operations on that network might be restricted - e.g. you won’t be able to delete the network if you didn’t stop the Cloud Databases services first.
- When connecting from an outside subnet, the OpenStack IP gateway must be enabled in the subnet used for the Database service. The customer is responsible for any other custom network setup.
- Subnet sizing should include considerations for service nodes, other co-located services within the same subnet, and an allocation of additional available IP addresses for maintenance purposes. Failure to adequately size subnets could result in operational challenges and the malfunctioning of services.
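A quick sanity check for the subnet-sizing point above can be written with the `ipaddress` module. The sizing rule used here (nodes, plus co-located services, plus a maintenance margin) is an illustrative assumption, not an official formula:

```python
import ipaddress

# Sketch: check that a candidate subnet leaves headroom for service nodes,
# co-located services, and maintenance. The margin of 4 spare addresses is
# an illustrative assumption.
def subnet_is_large_enough(cidr: str, nodes: int,
                           colocated: int = 0,
                           maintenance_margin: int = 4) -> bool:
    net = ipaddress.ip_network(cidr)
    usable = net.num_addresses - 2  # exclude network and broadcast addresses
    return usable >= nodes + colocated + maintenance_margin

print(subnet_is_large_enough("192.168.1.0/29", nodes=3))  # False: only 6 usable addresses
print(subnet_is_large_enough("192.168.1.0/27", nodes=3))  # True: 30 usable addresses
```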
Once your service is up and running, you will be able to specify IP addresses (or CIDR blocks) to authorize incoming traffic. Until then, your service will be unreachable.
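Before authorizing incoming traffic, it can help to normalize the entries you plan to submit. The sketch below treats a bare address as a single-host block and clears stray host bits from CIDR input; it is a local validation helper, not part of any official tooling:

```python
import ipaddress

# Sketch: normalize IP addresses or CIDR blocks intended for the
# incoming-traffic authorization list. A bare address becomes a /32
# (IPv4) or /128 (IPv6) block.
def normalise_authorised_entry(entry: str) -> str:
    """Return a canonical CIDR string, raising ValueError for invalid input."""
    return str(ipaddress.ip_network(entry, strict=False))

print(normalise_authorised_entry("203.0.113.10"))  # -> 203.0.113.10/32
print(normalise_authorised_entry("10.0.0.17/24"))  # -> 10.0.0.0/24 (host bits cleared)
```

Validating locally first avoids submitting malformed entries through the Control Panel or API.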
Kafka replication and data retention
You can select a Kafka source cluster and a Kafka destination cluster from the same Public Cloud project. External Kafka clusters are not currently supported.
You need at least two Kafka clusters to create replication flows.
The allowed parameters for replication flows are:
- Topics exclusion
- Sync group offset
- Sync interval in seconds (s)
- Heartbeats (true/false)
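A replication flow combining the parameters above could be assembled as follows. The field names in this sketch are illustrative assumptions, not the exact API schema:

```python
# Sketch of a replication flow definition using the allowed parameters.
# Field names are illustrative assumptions, not the exact API schema.
def build_replication_flow(source, target, excluded_topics=None,
                           sync_group_offsets=True, sync_interval_s=60,
                           heartbeats=True):
    return {
        "source_cluster": source,
        "target_cluster": target,
        "excluded_topics": excluded_topics or [],  # topics exclusion
        "sync_group_offsets": sync_group_offsets,  # sync group offset
        "sync_interval_seconds": sync_interval_s,  # sync interval in seconds
        "emit_heartbeats": heartbeats,             # heartbeats (true/false)
    }

flow = build_replication_flow("kafka-source", "kafka-target",
                              excluded_topics=["internal.*"])
```

Both clusters named here would need to exist in the same Public Cloud project, per the constraint above.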
Data retention is only limited by your cluster storage space.
Though advanced parameters are supported for Kafka, they are not supported for Kafka MirrorMaker.
Kafka is a streaming tool; Kafka data is therefore not backed up.
Logs and metrics
Logs and metrics are available through the Control Panel and the API. Additionally, cross-service integration can be configured to leverage your logs and metrics in other Cloud Database services. You could then view your Kafka MirrorMaker logs in OpenSearch and metrics in Grafana (metrics have to be exported first to a time-series-compatible engine such as PostgreSQL or M3db). See our Cloud Databases - Cross Service Integration guide for more information.
- Logs retention: 1000 lines of logs
- Metrics retention: 1 calendar month
Please note that if the database instance is deleted, logs and metrics are also automatically deleted.
Users and roles
Users can be created via the Control Panel and the API.
You can specify a username for each user; by default, the user's role is admin.