openshift-docs
OBSDOCS-118
OBSDOCS-118: Configuring JSON log data for Elasticsearch is defined incorrectly

Aligned team: Observability
OCP version for cherry-picking:
JIRA issues: OBSDOCS-118
Preview pages: https://75444--ocpdocs-pr.netlify.app/openshift-enterprise/latest/observability/logging/log_collection_forwarding/cluster-logging-enabling-json-logging.html#cluster-logging-configuration-of-json-log-data-for-default-elasticsearch_cluster-logging-enabling-json-logging
SME review completed: @jcantrill
QE review completed: @anpingli
Peer review requested:
🤖 Wed Jun 19 10:56:27 - Prow CI generated the docs preview:
https://75444--ocpdocs-pr.netlify.app/openshift-dedicated/latest/observability/logging/log_collection_forwarding/cluster-logging-enabling-json-logging.html https://75444--ocpdocs-pr.netlify.app/openshift-enterprise/latest/observability/logging/log_collection_forwarding/cluster-logging-enabling-json-logging.html https://75444--ocpdocs-pr.netlify.app/openshift-rosa/latest/observability/logging/log_collection_forwarding/cluster-logging-enabling-json-logging.html
Line 26 should be `flat_labels`. Only "Preserve k8s Common Labels" can be shown in `kubernetes.labels`. Refer to https://issues.redhat.com/browse/LOG-2388:

```json
"kubernetes": {
  "flat_labels": [
    "logFormat=apache",
    ....
  ]
}
```
/lgtm
/retest
New changes are detected. LGTM label has been removed.
@smunje1: all tests passed!
Full PR test history. Your PR dashboard.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
/label peer-review-needed
/label peer-review-in-progress
Hello @smunje1 ,
I feel this does not explain well how to configure JSON forwarding. It should make clear the difference between the configuration needed to forward JSON to the internal log store (the `default` output) and the configuration for other outputs.

For the internal log store, use `outputDefaults`:

```yaml
outputDefaults:
  elasticsearch:
    structuredTypeKey: kubernetes.labels.managed
    structuredTypeName: nologformat
```

For other outputs, set the fields on the output itself:

```yaml
outputs:
  - name: elasticsearch-secure
    elasticsearch:
      structuredTypeKey: openshift.labels.unmanaged
      structuredTypeName: nologformat
```

As it stands, this is not clear, and it should be. This distinction is the real point of this documentation bug, and from a user's point of view the current text still does not address it.
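To illustrate the distinction, here is a minimal sketch of a complete `ClusterLogForwarder` that uses both forms. Names such as `elasticsearch-secure`, the secret `es-secret`, the external URL, and the pipeline names are hypothetical, not taken from the documentation:

```yaml
# Hypothetical sketch: outputDefaults applies to the internal "default"
# log store; the per-output elasticsearch block applies to the named
# external output. Secret, URL, and pipeline names are illustrative.
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputDefaults:
    elasticsearch:
      structuredTypeKey: kubernetes.labels.managed
      structuredTypeName: nologformat
  outputs:
    - name: elasticsearch-secure
      type: elasticsearch
      url: https://external-es.example.com:9200
      secret:
        name: es-secret
      elasticsearch:
        structuredTypeKey: openshift.labels.unmanaged
        structuredTypeName: nologformat
  pipelines:
    - name: to-default
      inputRefs: [application]
      outputRefs: [default]
      parse: json
    - name: to-external
      inputRefs: [application]
      outputRefs: [elasticsearch-secure]
      parse: json
```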
At the same time, this command is wrong:

```shell
$ oc delete pod --selector logging-infra=collector
```

To delete the collector pods, it should be as below. But really, any change to the `ClusterLogForwarder` should restart the collector pods automatically, so the command should not be needed at all; if the pods are not restarted, the `ClusterLogForwarder` status should be reviewed for an error that prevents the automatic restart:

```shell
$ oc -n openshift-logging delete pod -l component=collector
```
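One way to verify the restart behavior described above, assuming the default resource name `instance` (adjust for other setups; these commands require a live cluster):

```shell
# Confirm the collector pods were recreated after the ClusterLogForwarder change
oc -n openshift-logging get pods -l component=collector

# Inspect the ClusterLogForwarder status conditions for errors that would
# prevent the automatic restart of the collector pods
oc -n openshift-logging get clusterlogforwarder instance -o jsonpath='{.status.conditions}'
```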
And the last part: I do not understand the reason for separating the sections below, since "Configuring" and "Forwarding" mean the same thing here. If you configure it, you are forwarding it, so what is the point of having two different sections, where the second ("Forwarding...") does not even contain a complete `ClusterLogForwarder` example, and one is only present in the first section ("Configuring JSON...")?
@r2d2rnd We would like to deal with your feedback in a separate jira and merge this one, since it has already passed SME and QE review. Does that suit you?
Hello @briandooley ,
My comment has three parts. The first part is directly related to the documentation bug OBSDOCS-118, which is the reason for this ticket.
Until now, the documentation has not explained when to use `outputs`; only `outputDefaults` was explained. If we now introduce the `outputs` definition for JSON in the examples without clearly indicating when to use `outputDefaults` and when to use `outputs`, this will add more confusion.
So I would prefer that this part be fixed here, as it is directly related to this topic. For the other points I commented on, I will open a separate ticket.
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
/remove-lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closed this PR.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.