Error: Provider produced inconsistent result after apply ... produced an unexpected new value: .input[1].streams_json: inconsistent values for sensitive attribute.
Hi, I'm having some issues determining which inconsistent sensitive values this error is referring to.
I was only hoping to update processors. I have (I think) replicated the values of the other vars into Terraform:
resource "elasticstack_fleet_integration_policy" "kubernetes_policy_integration_policy" {
  name                = "kubernetes-1"
  namespace           = "default"
  description         = "kubernetes-1"
  agent_policy_id     = elasticstack_fleet_agent_policy.eck_agent_policy.policy_id
  integration_name    = elasticstack_fleet_integration.kubernetes_integration.name
  integration_version = elasticstack_fleet_integration.kubernetes_integration.version

  input {
    enabled  = false
    input_id = "audit-logs-filestream"
  }

  input {
    enabled  = true
    input_id = "container-logs-filestream"
    streams_json = jsonencode({
      "kubernetes.container_logs" : {
        "vars" : {
          "paths" : ["/var/log/containers/*$${kubernetes.container.id}.log"],
          "symlinks" : true,
          "containerParserStream" : "all",
          "containerParserFormat" : "auto",
          "data_stream.dataset" : "kubernetes.container_logs",
          "additionalParsersConfig" : "#",
          "custom" : "",
          "processors" : <<-YAML
            - if:
                equals.kubernetes.labels.log-json-decode: "true"
              then:
                - decode_json_fields:
                    fields: ["message"]
                    process_array: false
                    max_depth: 5
                    target: "custom_json"
                    overwrite_keys: true
                    add_error_key: true
          YAML
        }
      }
    })
  }

  input {
    enabled  = true
    input_id = "events-kubernetes/metrics"
  }

  input {
    enabled  = true
    input_id = "kube-apiserver-kubernetes/metrics"
  }

  input {
    enabled  = false
    input_id = "kube-controller-manager-kubernetes/metrics"
  }

  input {
    enabled  = true
    input_id = "kube-proxy-kubernetes/metrics"
  }

  input {
    enabled  = false
    input_id = "kube-scheduler-kubernetes/metrics"
  }

  input {
    enabled  = true
    input_id = "kube-state-metrics-kubernetes/metrics"
  }

  input {
    enabled  = true
    input_id = "kubelet-kubernetes/metrics"
  }
}
The agent policy, as returned by the Fleet API (excerpt):
"type": "filestream",
"policy_template": "container-logs",
"enabled": true,
"streams": [
{
"enabled": true,
"data_stream": {
"type": "logs",
"dataset": "kubernetes.container_logs",
"elasticsearch": {
"dynamic_dataset": true,
"dynamic_namespace": true
}
},
"vars": {
"paths": {
"value": [
"/var/log/containers/*${kubernetes.container.id}.log"
],
"type": "text"
},
"symlinks": {
"value": true,
"type": "bool"
},
"data_stream.dataset": {
"value": "kubernetes.container_logs",
"type": "text"
},
"containerParserStream": {
"value": "all",
"type": "text"
},
"containerParserFormat": {
"value": "auto",
"type": "text"
},
"condition": {
"type": "text"
},
"additionalParsersConfig": {
"value": "#",
"type": "yaml"
},
"processors": {
"value": """- if:
equals.kubernetes.labels.log-json-decode: "true"
then:
- decode_json_fields:
fields: ["message"]
process_array: false
max_depth: 5
target: "custom_json"
overwrite_keys: true
add_error_key: true
""",
"type": "yaml"
},
"custom": {
"value": "",
"type": "yaml"
}
},
"id": "filestream-kubernetes.container_logs-80ac6a45-8049-4aac-a77b-e6ba648bb27f",
"compiled_stream": {
"id": "kubernetes-container-logs-${kubernetes.pod.name}-${kubernetes.container.id}",
"paths": [
"/var/log/containers/*${kubernetes.container.id}.log"
],
"data_stream": {
"dataset": "kubernetes.container_logs"
},
"prospector": {
"scanner": {
"fingerprint.enabled": true,
"symlinks": true
}
},
"file_identity.fingerprint": null,
"parsers": [
{
"container": {
"stream": "all",
"format": "auto"
}
}
],
"processors": [
{
"add_fields": {
"target": "kubernetes",
"fields": {
"annotations.elastic_co/dataset": """${kubernetes.annotations.elastic.co/dataset|""}""",
"annotations.elastic_co/namespace": """${kubernetes.annotations.elastic.co/namespace|""}""",
"annotations.elastic_co/preserve_original_event": """${kubernetes.annotations.elastic.co/preserve_original_event|""}"""
}
}
},
{
"drop_fields": {
"fields": [
"kubernetes.annotations.elastic_co/dataset"
],
"when": {
"equals": {
"kubernetes.annotations.elastic_co/dataset": ""
}
},
"ignore_missing": true
}
},
{
"drop_fields": {
"fields": [
"kubernetes.annotations.elastic_co/namespace"
],
"when": {
"equals": {
"kubernetes.annotations.elastic_co/namespace": ""
}
},
"ignore_missing": true
}
},
{
"drop_fields": {
"fields": [
"kubernetes.annotations.elastic_co/preserve_original_event"
],
"when": {
"equals": {
"kubernetes.annotations.elastic_co/preserve_original_event": ""
}
},
"ignore_missing": true
}
},
{
"add_tags": {
"tags": [
"preserve_original_event"
],
"when": {
"and": [
{
"has_fields": [
"kubernetes.annotations.elastic_co/preserve_original_event"
]
},
{
"regexp": {
"kubernetes.annotations.elastic_co/preserve_original_event": "^(?i)true$"
}
}
]
}
}
},
{
"if": {
"equals.kubernetes.labels.log-json-decode": "true"
},
"then": [
{
"decode_json_fields": {
"fields": [
"message"
],
"process_array": false,
"max_depth": 5,
"target": "custom_json",
"overwrite_keys": true,
"add_error_key": true
}
}
]
}
]
}
}
]
},
{
"type": "filestream",
"policy_template": "audit-logs",
"enabled": false,
"streams": [
{
"enabled": false,
"data_stream": {
"type": "logs",
"dataset": "kubernetes.audit_logs"
},
"vars": {
"paths": {
"value": [
"/var/log/kubernetes/kube-apiserver-audit.log"
],
"type": "text"
},
"processors": {
"type": "yaml"
},
"condition": {
"type": "text"
}
},
"id": "filestream-kubernetes.audit_logs-80ac6a45-8049-4aac-a77b-e6ba648bb27f"
}
]
}
],
"revision": 15,
"created_at": "2024-10-24T09:27:10.192Z",
"created_by": "system",
"updated_at": "2024-10-31T15:02:18.203Z",
"updated_by": "elastic",
"vars": {}
}
Was anyone ever able to figure out what was causing this issue? I'm observing a similar error, and it doesn't provide much context.
@kaykhan @BenB196 I have solved this issue by adding the required variables to the Terraform streams_json. This error occurs when required variables of the Kibana Fleet API request are not defined in the Terraform streams_json.
In my case, I used the AWS integration and forgot to add the default_regions variable to the Terraform configuration.
In the above case, I think "useFingerprint": true (or false) needs to be added after "additionalParsersConfig": "#".
Please reference the following.
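In Terraform terms, the suggestion above amounts to something like this sketch. Note that useFingerprint is an assumption based on integration versions that define that var; check the var list for your version before adding it:

```terraform
streams_json = jsonencode({
  "kubernetes.container_logs" : {
    "vars" : {
      # ...existing vars as in the original post...
      "additionalParsersConfig" : "#",
      # Assumed var name: only add this if your integration version defines it.
      "useFingerprint" : true,
    }
  }
})
```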
@nandar-p how are you determining what the required fields are?
@kaykhan I used the following as reference.
By the way, which version of the Kubernetes integration are you using?
@nandar-p
Unfortunately I was preoccupied with other things, but now I'm back onto this.
Good point: we are using version 1.68.1 of the Kubernetes integration, and fingerprint is not there.
Do you mind providing your Terraform elasticstack_fleet_integration_policy block for additional comparison?
It looks like, despite setting all the required fields for my integration version, I am still getting those errors.
Am I missing something?
│ Error: Provider produced inconsistent result after apply
│
│ When applying changes to elasticstack_fleet_integration_policy.kubernetes_policy_integration_policy, provider
│ "provider[\"registry.terraform.io/elastic/elasticstack\"]" produced an unexpected new value: .input[1].streams_json: inconsistent values for sensitive
│ attribute.
│
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
╵
╷
│ Error: Provider produced inconsistent result after apply
│
│ When applying changes to elasticstack_fleet_integration_policy.kubernetes_policy_integration_policy, provider
│ "provider[\"registry.terraform.io/elastic/elasticstack\"]" produced an unexpected new value: .input[7].streams_json: inconsistent values for sensitive
│ attribute.
│
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
resource "elasticstack_fleet_integration_policy" "kubernetes_policy_integration_policy" {
  name                = "kubernetes-default-eks"
  namespace           = "default"
  description         = "kubernetes-default-eks"
  agent_policy_id     = elasticstack_fleet_agent_policy.eck_agent_policy.policy_id
  integration_name    = "kubernetes"
  integration_version = "1.68.1"

  input {
    enabled  = false
    input_id = "audit-logs-filestream"
  }

  input {
    enabled  = true
    input_id = "container-logs-filestream"
    streams_json = jsonencode({
      "kubernetes.container_logs" : {
        "vars" : {
          "paths" : [
            "/var/log/containers/*$${kubernetes.container.id}.log"
          ],
          "symlinks" : true,
          "data_stream.dataset" : "kubernetes.container_logs",
          "containerParserStream" : "all",
          "containerParserFormat" : "auto",
          "additionalParsersConfig" : "# - ndjson:\n# target: json\n# ignore_decoding_error: true\n# - multiline:\n# type: pattern\n# pattern: '^\\['\n# negate: true\n# match: after\n",
          "processors" : <<-YAML
            - if:
                equals.kubernetes.labels.log-json-decode: "true"
              then:
                - decode_json_fields:
                    fields: ["message"]
                    process_array: false
                    max_depth: 5
                    target: "acme"
                    overwrite_keys: true
                    add_error_key: true
          YAML
        }
      }
    })
  }
@kaykhan
Thank you for sharing your version and source code.
I suspect the input processors YAML format. Have you tried it without processors, and how did it go?
Hello @nandar-p @BenB196 I've spoken with @dimuon on our internal Elastic Slack, and we got a support case about this same issue.
If you want/need more info, please ping me internally so I can give you more info.
Thanks, Gabriel
Same issue here #1346
For anyone landing here later - nandar-p already nailed the fix above, but here's a clear summary:
The rule: Either specify NOTHING or specify EVERYTHING - partial config always triggers this error.
Option A - Don't need customization?
Omit streams_json and vars_json entirely. Just use input_id + enabled:
input {
  input_id = "container-logs-filestream"
  enabled  = true
  # no streams_json, no vars_json - integration defaults apply
}
No comparison happens, no error.
Option B - Need customization? Specify EVERY var the API returns, even empty ones. Miss one field = mismatch = error.
Easiest way to get the complete list:
# After first apply (even if it errors, the resource exists)
terraform state pull | jq '.resources[] | select(.type == "elasticstack_fleet_integration_policy") | .instances[0].attributes.input'
Copy all vars from there into your TF verbatim. Then plan shows "No changes".
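Applied to the Kubernetes container-logs input from this thread, a "complete" streams_json would look roughly like the sketch below, built from the vars in the agent policy dump above. Whether unset vars such as condition are best sent as empty strings or omitted may vary by provider version, so treat this as a starting point, not a definitive answer:

```terraform
streams_json = jsonencode({
  "kubernetes.container_logs" : {
    "vars" : {
      "paths" : ["/var/log/containers/*$${kubernetes.container.id}.log"],
      "symlinks" : true,
      "data_stream.dataset" : "kubernetes.container_logs",
      "containerParserStream" : "all",
      "containerParserFormat" : "auto",
      # Listed in the API response even though it has no value set:
      "condition" : "",
      "additionalParsersConfig" : "#",
      "custom" : "",
      "processors" : <<-YAML
        - if:
            equals.kubernetes.labels.log-json-decode: "true"
          then:
            - decode_json_fields:
                fields: ["message"]
      YAML
    }
  }
})
```

The point is that every key the API reports back appears here, so the provider's before/after comparison has nothing left to flag.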
tl;dr: The provider compares what you send vs what API returns. Partial ≠ complete = "inconsistent values". Either send nothing or send everything.