Learn how to use the mutualized inputs of Logs Data Platform to ingest your logs into the platform.
Inputs are the components of Logs Data Platform that you connect to in order to ingest your logs into the platform. In this guide, we focus on the mutualized inputs, which are available to everyone by default.
Requirements
Before reading this documentation, you should:
- have read our Introduction to Logs Data Platform
- have read the Quick Start to Logs Data Platform
- have created and configured a Logs Data Platform account
- have created a Logs Data Platform stream
Instructions
Why different inputs?
Logs Data Platform imposes a few constraints on how your logs are structured, to guarantee efficient indexing of the logs you send us. The different inputs are responsible for enforcing those constraints on whichever compatible format you use, and for converting your logs to a common format before storing them in the platform.
The log formats that Logs Data Platform accepts are the following:
- GELF: This is the native log format used by Graylog. This JSON format allows you to send logs easily. See the GELF Payload Specification. The GELF input only accepts a null (\0) delimiter.
- LTSV: This simple format is very efficient and human-readable. You can learn more about it here. LTSV has two inputs: one accepting a line delimiter and one accepting a null delimiter.
- RFC 5424: This format is commonly used by logs utilities such as syslog. It is extensible enough to allow you to send all your data. More information about it can be found here.
- Cap'n'Proto: The most efficient log format. This is a binary format that allows you to maintain a low footprint and high-speed performance. For more information, check out the official website: Cap'n'Proto.
- Beats: A secure and reliable protocol used by the Beats family of shippers in the Elasticsearch ecosystem (e.g. Filebeat, Metricbeat, Winlogbeat).
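To make the GELF constraints concrete, here is a minimal Python sketch of building a GELF record terminated by the null delimiter that the GELF input expects. The `_X-OVH-TOKEN` additional field and the token value are assumptions for illustration; check your OVHcloud Control Panel for the exact convention.

```python
import json

def gelf_payload(short_message, host, token):
    """Build a minimal GELF record for the mutualized GELF input,
    which only accepts null-delimited (\\0) records."""
    record = {
        "version": "1.1",            # GELF specification version
        "host": host,                # name of the emitting machine
        "short_message": short_message,
        # Additional field carrying the stream token (field name assumed,
        # check your OVHcloud Control Panel for the exact convention).
        "_X-OVH-TOKEN": token,
    }
    # The GELF input requires a null byte as the record delimiter.
    return json.dumps(record).encode("utf-8") + b"\0"

payload = gelf_payload("user logged in", "web-01", "your-stream-token")
```

Each payload built this way can then be written as-is to a TCP or TLS connection on the GELF port.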
Mutualized vs Dedicated inputs
This guide describes the mutualized inputs that are included in the platform by default. Before going on with the guide, here is a reminder of the characteristics that differentiate dedicated inputs from mutualized ones:
- They are optional and charged on a per-input basis.
- You can choose which port they listen to.
- You can filter IP addresses that are allowed to send logs.
- You can choose to run Logstash or Flowgger and can configure their sources as well as transform the logs they ingest before storing them in Logs Data Platform.
As you can see, dedicated inputs give you more flexibility in how you handle your logs at ingestion, as well as additional security features. If you need any of these features, you should look at the documentation for dedicated inputs.
Ingesting Logs
There are two main ways to ingest the logs generated by your systems or applications in Logs Data Platform:
- Use log exporter software that parses log files stored locally on your filesystem, formats the logs if necessary, and connects to Logs Data Platform's inputs to send them. We provide separate documentation for widely used exporters such as Filebeat, syslog-ng, and NXLog; the following documentation should allow you to configure any similar software.
- Use libraries in your own software to send logs directly to Logs Data Platform. We have documentation to help you through this process if you use Python 2, Python 3, or Rust, though any other library compatible with the OpenSearch API can be used.
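If you write your own sender, serializing records is straightforward. As an illustration, here is a sketch producing LTSV records (field names and token value are hypothetical placeholders); pass "\n" as the delimiter for the line-delimited LTSV input and "\0" for the null-delimited one.

```python
def to_ltsv(fields, delimiter="\n"):
    """Serialize a dict of labels/values as one LTSV record:
    tab-separated label:value pairs plus a record delimiter."""
    parts = []
    for label, value in fields.items():
        # LTSV values must not contain tabs or newlines; replace them.
        value = str(value).replace("\t", " ").replace("\n", " ")
        parts.append(f"{label}:{value}")
    return ("\t".join(parts) + delimiter).encode("utf-8")

record = to_ltsv({
    "host": "web-01",                    # hypothetical field
    "message": "user logged in",         # hypothetical field
    "X-OVH-TOKEN": "your-stream-token",  # placeholder for your stream token
})
```

The resulting bytes can be written directly to a connection on the matching LTSV port.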
Whichever option you choose, you will have to configure your software or libraries properly to send your logs to Logs Data Platform. The following section lists the information you need for that purpose.
Configuring your software
To configure your software, you need the following information:
- Input endpoint URL: The endpoint URL has the form XXX.logs.ovh.com, where XXX corresponds to the cluster you are assigned to. You can find it in your OVHcloud Control Panel, on the home page of your Logs Data Platform account.
- Input endpoint port: The port your software must connect to depends on the format of your logs and on whether you use a secured transport layer. The table below maps each log format to its port, and you will also find this information in your OVHcloud Control Panel.

| Protocol | Syslog RFC 5424 | GELF | LTSV line | LTSV nul | Cap'n'Proto | Beats |
|----------|-----------------|-------|-----------|----------|-------------|-------|
| TCP/TLS  | 6514            | 12202 | 12201     | 12200    | 12204       | 5044  |
| TCP      | 514             | 2202  | 2201      | 2200     | 2204        | ---   |
| UDP      | 514             | 2202  | 2201      | 2200     | 2204        | ---   |
- Certificate: If you use a secured transport layer, the certificates can also be found in the OVHcloud Control Panel.
- X-OVH-TOKEN: The X-OVH-TOKEN is used to dispatch the logs you ingest into Logs Data Platform to the correct log stream. You can find the X-OVH-TOKEN corresponding to your stream in the OVHcloud Control Panel, under the Data stream tab.
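Putting these pieces together, the sketch below builds an RFC 5424 line and shows how it could be sent over TCP/TLS to port 6514. The endpoint, token, application name, and the choice of carrying the token in a structured-data element are all assumptions for illustration; check your OVHcloud Control Panel for the exact values and conventions your input expects.

```python
import socket
import ssl
from datetime import datetime, timezone

ENDPOINT = "XXX.logs.ovh.com"  # replace XXX with your assigned cluster
PORT = 6514                    # RFC 5424 over TCP/TLS (see the port table)
TOKEN = "your-stream-token"    # placeholder: your stream's X-OVH-TOKEN

def rfc5424_frame(message, app="myapp", host="web-01"):
    """Build a minimal RFC 5424 syslog line. Carrying the token in a
    structured-data element is an assumption for this sketch."""
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    # <14> = facility user (1) * 8 + severity informational (6)
    return (
        f"<14>1 {ts} {host} {app} - - "
        f'[ldp@0 X-OVH-TOKEN="{TOKEN}"] {message}\n'
    ).encode("utf-8")

def send(frame):
    """Open a TLS connection to the input endpoint and send one frame."""
    ctx = ssl.create_default_context()
    with socket.create_connection((ENDPOINT, PORT)) as raw:
        with ctx.wrap_socket(raw, server_hostname=ENDPOINT) as tls:
            tls.sendall(frame)

frame = rfc5424_frame("deployment finished")
# send(frame)  # uncomment once ENDPOINT and TOKEN are set
```

Using the TLS port (with the certificate mentioned above) rather than plain TCP or UDP keeps the token and log contents encrypted in transit.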
If your software interacts directly with the OpenSearch API, please follow this documentation.
You should now have all the information you need to configure your software to ingest logs in Logs Data Platform.
Go further
For more information and tutorials, please see our other Logs Data Platform support guides or explore the guides for other OVHcloud products and services.