diff --git a/blog/release-0-7-2.md b/blog/release-0-7-2.md
index 33f30d92f..d5d6caf36 100644
--- a/blog/release-0-7-2.md
+++ b/blog/release-0-7-2.md
@@ -6,7 +6,7 @@ date: 2024-04-08
 
 Release date: April 08, 2024
 
-This is a patch release, containing a critial bug fix to avoid wrongly delete data files ([#3635](https://github.com/GreptimeTeam/greptimedb/pull/3635)).
+This is a patch release, containing a critical bug fix to avoid wrongly deleting data files ([#3635](https://github.com/GreptimeTeam/greptimedb/pull/3635)).
 
 **It's highly recommended to upgrade to this version if you're using v0.7.**
 
diff --git a/docs/user-guide/ingest-data/for-observerbility/kafka.md b/docs/user-guide/ingest-data/for-observerbility/kafka.md
index 3c0418231..5d5b9c61b 100644
--- a/docs/user-guide/ingest-data/for-observerbility/kafka.md
+++ b/docs/user-guide/ingest-data/for-observerbility/kafka.md
@@ -76,7 +76,7 @@ A pipeline processes the logs into structured data before ingestion into GreptimeDB.
 ### Logs with JSON format
 
 For logs in JSON format (e.g., `{"timestamp": "2024-12-23T10:00:00Z", "level": "INFO", "message": "Service started"}`),
-you can use the built-in [`greptime_identity`](/logs/manage-pipelines.md#greptime_identity) pipeline for direct ingestion.
+you can use the built-in [`greptime_identity`](/user-guide/logs/manage-pipelines.md#greptime_identity) pipeline for direct ingestion.
 This pipeline creates columns automatically based on the fields in your JSON log message.
 Simply configure Vector's `transforms` settings to parse the JSON message and use the `greptime_identity` pipeline as shown in the following example:
 
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/user-guide/ingest-data/for-observerbility/kafka.md b/i18n/zh/docusaurus-plugin-content-docs/current/user-guide/ingest-data/for-observerbility/kafka.md
index 48ed469c5..1cd1a3667 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/user-guide/ingest-data/for-observerbility/kafka.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/user-guide/ingest-data/for-observerbility/kafka.md
@@ -75,7 +75,7 @@ Pipeline 在写入到 GreptimeDB 之前将日志处理为结构化数据。
 ### JSON 格式的日志
 
 对于 JSON 格式的日志(例如 `{"timestamp": "2024-12-23T10:00:00Z", "level": "INFO", "message": "Service started"}`),
-你可以使用内置的 [`greptime_identity`](/logs/manage-pipelines.md#greptime_identity) pipeline 直接写入日志。
+你可以使用内置的 [`greptime_identity`](/user-guide/logs/manage-pipelines.md#greptime_identity) pipeline 直接写入日志。
 此 pipeline 根据 JSON 日志消息中的字段自动创建列。
 你只需要配置 Vector 的 `transforms` 设置以解析 JSON 消息,并使用 `greptime_identity` pipeline,如以下示例所示:
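
The Vector example the relinked docs refer to falls outside these hunks. For context, a minimal sketch of the setup those paragraphs describe might look like the following; the topic, endpoint, and table names are placeholders, while `kafka`, `remap`, and `greptimedb_logs` are standard Vector components:

```toml
# Hypothetical Vector configuration; names like "test_log_topic" and
# "demo_logs" are placeholders, not values from this patch.

[sources.log_mq]
# Consume raw log messages from Kafka.
type = "kafka"
group_id = "vector0"
topics = ["test_log_topic"]
bootstrap_servers = "kafka:9092"

[transforms.logs_json]
# Parse each message as JSON so its fields become structured data.
type = "remap"
inputs = ["log_mq"]
source = '''
. = parse_json!(.message)
'''

[sinks.sink_greptime_logs]
# Write the structured logs to GreptimeDB via the built-in
# greptime_identity pipeline, which creates columns from the JSON fields.
type = "greptimedb_logs"
inputs = ["logs_json"]
endpoint = "http://greptimedb:4000"
dbname = "public"
table = "demo_logs"
pipeline_name = "greptime_identity"
```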