
SNMP input with tables appears to split some events into separate table rows for the same index.

DanSheps opened this issue 9 months ago • 2 comments

Logstash information:


  1. Logstash version: 8.12.0
  2. Logstash installation source: RPM
  3. How is Logstash being run: systemd

Plugins installed:

logstash-codec-avro (3.4.1)
logstash-codec-cef (6.2.7)
logstash-codec-collectd (3.1.0)
logstash-codec-dots (3.0.6)
logstash-codec-edn (3.1.0)
logstash-codec-edn_lines (3.1.0)
logstash-codec-es_bulk (3.1.0)
logstash-codec-fluent (3.4.2)
logstash-codec-graphite (3.0.6)
logstash-codec-json (3.1.1)
logstash-codec-json_lines (3.1.0)
logstash-codec-line (3.1.1)
logstash-codec-msgpack (3.1.0)
logstash-codec-multiline (3.1.1)
logstash-codec-netflow (4.3.2)
logstash-codec-plain (3.1.0)
logstash-codec-rubydebug (3.1.0)
logstash-filter-aggregate (2.10.0)
logstash-filter-anonymize (3.0.7)
logstash-filter-cidr (3.1.3)
logstash-filter-clone (4.2.0)
logstash-filter-csv (3.1.1)
logstash-filter-date (3.1.15)
logstash-filter-de_dot (1.0.4)
logstash-filter-dissect (1.2.5)
logstash-filter-dns (3.2.0)
logstash-filter-drop (3.0.5)
logstash-filter-elasticsearch (3.16.1)
logstash-filter-fingerprint (3.4.3)
logstash-filter-geoip (7.2.13)
logstash-filter-grok (4.4.3)
logstash-filter-http (1.5.0)
logstash-filter-json (3.2.1)
logstash-filter-kv (4.7.0)
logstash-filter-memcached (1.2.0)
logstash-filter-metrics (4.0.7)
logstash-filter-mutate (3.5.8)
logstash-filter-prune (3.0.4)
logstash-filter-ruby (3.1.8)
logstash-filter-sleep (3.0.7)
logstash-filter-split (3.1.8)
logstash-filter-syslog_pri (3.2.0)
logstash-filter-throttle (4.0.4)
logstash-filter-translate (3.4.2)
logstash-filter-truncate (1.0.6)
logstash-filter-urldecode (3.0.6)
logstash-filter-useragent (3.3.5)
logstash-filter-uuid (3.0.5)
logstash-filter-xml (4.2.0)
logstash-input-azure_event_hubs (1.4.5)
logstash-input-beats (6.7.2)
└── logstash-input-elastic_agent (alias)
logstash-input-couchdb_changes (3.1.6)
logstash-input-dead_letter_queue (2.0.0)
logstash-input-elastic_serverless_forwarder (0.1.4)
logstash-input-elasticsearch (4.19.1)
logstash-input-exec (3.6.0)
logstash-input-file (4.4.6)
logstash-input-ganglia (3.1.4)
logstash-input-gelf (3.3.2)
logstash-input-generator (3.1.0)
logstash-input-graphite (3.0.6)
logstash-input-heartbeat (3.1.1)
logstash-input-http (3.8.0)
logstash-input-http_poller (5.5.1)
logstash-input-imap (3.2.1)
logstash-input-jms (3.2.2)
logstash-input-pipe (3.1.0)
logstash-input-redis (3.7.0)
logstash-input-snmp (1.3.3)
logstash-input-snmptrap (3.1.0)
logstash-input-stdin (3.4.0)
logstash-input-syslog (3.7.0)
logstash-input-tcp (6.4.1)
logstash-input-twitter (4.1.1)
logstash-input-udp (3.5.0)
logstash-input-unix (3.1.2)
logstash-integration-aws (7.1.6)
 ├── logstash-codec-cloudfront
 ├── logstash-codec-cloudtrail
 ├── logstash-input-cloudwatch
 ├── logstash-input-s3
 ├── logstash-input-sqs
 ├── logstash-output-cloudwatch
 ├── logstash-output-s3
 ├── logstash-output-sns
 └── logstash-output-sqs
logstash-integration-elastic_enterprise_search (3.0.0)
 ├── logstash-output-elastic_app_search
 └── logstash-output-elastic_workplace_search
logstash-integration-jdbc (5.4.6)
 ├── logstash-input-jdbc
 ├── logstash-filter-jdbc_streaming
 └── logstash-filter-jdbc_static
logstash-integration-kafka (11.3.3)
 ├── logstash-input-kafka
 └── logstash-output-kafka
logstash-integration-logstash (1.0.1)
 ├── logstash-input-logstash
 └── logstash-output-logstash
logstash-integration-rabbitmq (7.3.3)
 ├── logstash-input-rabbitmq
 └── logstash-output-rabbitmq
logstash-output-csv (3.0.10)
logstash-output-elasticsearch (11.22.2)
logstash-output-email (4.1.3)
logstash-output-file (4.3.0)
logstash-output-graphite (3.1.6)
logstash-output-http (5.6.0)
logstash-output-lumberjack (3.1.9)
logstash-output-nagios (3.0.6)
logstash-output-null (3.0.5)
logstash-output-pipe (3.0.6)
logstash-output-redis (5.0.0)
logstash-output-stdout (3.1.4)
logstash-output-tcp (6.1.2)
logstash-output-udp (3.2.0)
logstash-output-webhdfs (3.1.0)
logstash-patterns-core (4.3.4)

JVM: Bundled

OS version (uname -a if on a Unix-like system): 3.10.0-1160.49.1.el7.x86_64 #1 SMP Tue Nov 30 15:51:32 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

Description of the problem including expected versus actual behavior:

I am using the SNMP input plugin with SNMP tables, querying the OIDs listed as `columns` in the config below.
The full table config is:

        tables => [
            {
                "name" => "aps"
                "columns" => [
                    "1.3.6.1.4.1.14179.2.2.1.1.1",
                    "1.3.6.1.4.1.14179.2.2.1.1.2",
                    "1.3.6.1.4.1.14179.2.2.1.1.3",
                    "1.3.6.1.4.1.14179.2.2.1.1.4",
                    "1.3.6.1.4.1.14179.2.2.1.1.6",
                    "1.3.6.1.4.1.14179.2.2.1.1.8",
                    "1.3.6.1.4.1.14179.2.2.1.1.9",
                    "1.3.6.1.4.1.14179.2.2.1.1.13",
                    "1.3.6.1.4.1.14179.2.2.1.1.16",
                    "1.3.6.1.4.1.14179.2.2.1.1.17",
                    "1.3.6.1.4.1.14179.2.2.1.1.19",
                    "1.3.6.1.4.1.14179.2.2.1.1.22",
                    "1.3.6.1.4.1.14179.2.2.1.1.26",
                    "1.3.6.1.4.1.14179.2.2.1.1.27",
                    "1.3.6.1.4.1.14179.2.2.1.1.28",
                    "1.3.6.1.4.1.14179.2.2.1.1.33",
                    "1.3.6.1.4.1.14179.2.2.1.1.37",
                    "1.3.6.1.4.1.9.9.513.1.1.1.1.32",
                    "1.3.6.1.4.1.9.9.513.1.1.1.1.104",
                    "1.3.6.1.4.1.9.9.513.1.1.1.1.106",
                    "1.3.6.1.4.1.9.9.513.1.1.1.1.105",
                    "1.3.6.1.4.1.9.9.513.1.1.1.1.38"
                ]
            }
        ]
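For context, a minimal sketch of how this table config plugs into an `snmp` input block is shown below; the host address, community, and interval here are placeholders, not the actual values from the environment:

```
input {
  snmp {
    # Placeholder target and credentials, not the actual controller
    hosts => [{ host => "udp:192.0.2.1/161" community => "public" }]
    interval => 300
    tables => [
      {
        "name" => "aps"
        "columns" => [
          "1.3.6.1.4.1.14179.2.2.1.1.1"
          # ... remaining columns exactly as listed above ...
        ]
      }
    ]
  }
}
```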

When querying some of our access points, a few of the MIB entries overflow into a new "index" object (the same index for the table, but a new object within the table array).

This means that if I `split {}` the `aps` field, it results in two partial events instead of one full event. I would expect all indexed MIB columns for the same index to stay together.

This may be related to data size, but I cannot confirm that. It only happens on 4 of the 489 Wireless Access Points in the table.
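As a possible (untested) workaround until the input behavior is fixed, the partial rows could be re-merged after the split by keying on the shared index, e.g. with the aggregate filter. This sketch assumes each row exposes its table index as `[aps][index]`; adjust the field path to the actual event layout. Note that the aggregate filter requires `pipeline.workers: 1`.

```
filter {
  split { field => "aps" }
  # Merge partial rows that share the same host + table index.
  # ASSUMPTION: the row index is available at [aps][index].
  aggregate {
    task_id => "%{host}_%{[aps][index]}"
    code => "
      map['aps'] ||= {}
      map['aps'].merge!(event.get('aps'))
      event.cancel
    "
    push_map_as_event_on_timeout => true
    timeout => 10
  }
}
```

The trade-off is that every merged row is held back until the timeout fires, so events are delayed by up to `timeout` seconds.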

Steps to reproduce:

  1. Configure an SNMP input with the above table settings
  2. `split {}` the table field into separate events
  3. Output the events
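Steps 2 and 3 correspond to a filter/output section along these lines (rubydebug is used here only to inspect the resulting events):

```
filter {
  # Step 2: one event per row of the "aps" table
  split { field => "aps" }
}
output {
  # Step 3: inspect the split events; the affected APs show up as
  # two partial events sharing the same index
  stdout { codec => rubydebug }
}
```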

DanSheps — May 16 '24 16:05