pfelk
Snort Dashboard - most data missing
Describe the bug
The snort dashboard is void of data in the following panels: Map, New Rules/Time, Rules-Classifications, Rules / Source Country, Org / Source Country, Classification_Heat Map, Priority.
It does have log entries in the discover panel
To Reproduce
Steps to reproduce the behavior:
- Go to snort dashboard
- Observe blank panels
Screenshots
Firewall System (please complete the following information):
- pfSense
- 23.09-RELEASE (amd64) built on Wed Nov 1 6:56:00 AEDT 2023 FreeBSD 14.0-CURRENT
Operating System (please complete the following information):
- OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
ID=debian
Installation method (manual, ansible-playbook, docker, script):
manual
Elasticsearch, Logstash, Kibana (please complete the following information):
- Version of ELK components (dpkg -l [elasticsearch]|[logstash]|[kibana]):
ii elasticsearch 8.13.1 arm64 Distributed RESTful search engine built for the cloud
ii kibana 8.13.1 arm64 Explore and visualize your Elasticsearch data
ii logstash 1:8.13.1-1 arm64 An extensible logging pipeline
Elasticsearch, Logstash, Kibana logs:
- Elasticsearch logs (tail -f /var/log/elasticsearch/[your-elk-cluster-name].log):
root@pfelk:/home/stephencooper# tail -50 /var/log/elasticsearch/elasticsearch.log
[2024-04-09T13:49:43,013][INFO ][o.e.x.a.APMPlugin ] [pfelk] APM ingest plugin is disabled
[2024-04-09T13:49:46,162][INFO ][o.e.t.n.NettyAllocator ] [pfelk] creating NettyAllocator with the following configs: [name=elasticsearch_configured, chunk_size=1mb, suggested_max_allocation_size=1mb, factors={es.unsafe.use_netty_default_chunk_and_page_size=false, g1gc_enabled=true, g1gc_region_size=4mb}]
[2024-04-09T13:49:46,305][INFO ][o.e.i.r.RecoverySettings ] [pfelk] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
[2024-04-09T13:49:46,573][INFO ][o.e.d.DiscoveryModule ] [pfelk] using discovery type [multi-node] and seed hosts providers [settings]
[2024-04-09T13:49:56,269][INFO ][o.e.n.Node ] [pfelk] initialized
[2024-04-09T13:49:56,273][INFO ][o.e.n.Node ] [pfelk] starting ...
[2024-04-09T13:49:56,412][INFO ][o.e.x.s.c.f.PersistentCache] [pfelk] persistent cache index loaded
[2024-04-09T13:49:56,416][INFO ][o.e.x.d.l.DeprecationIndexingComponent] [pfelk] deprecation component started
[2024-04-09T13:49:56,783][INFO ][o.e.t.TransportService ] [pfelk] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2024-04-09T13:50:03,667][WARN ][o.e.c.c.ClusterBootstrapService] [pfelk] this node is locked into cluster UUID [RYGTYdJTS0O1u7CE4IP3fg] but [cluster.initial_master_nodes] is set to [raspberrypi]; remove this setting to avoid possible data loss caused by subsequent cluster bootstrap attempts; for further information see https://www.elastic.co/guide/en/elasticsearch/reference/8.13/important-settings.html#initial_master_nodes
[2024-04-09T13:50:04,325][INFO ][o.e.c.s.MasterService ] [pfelk] elected-as-master ([1] nodes joined in term 6)[_FINISH_ELECTION_, {pfelk}{HZ9Vb6TqRDCBFvpuyUK-Yw}{NCyV1Y_wQBa30Hg1-sFLRw}{pfelk}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}{8.13.1}{7000099-8503000} completing election], term: 6, version: 403, delta: master node changed {previous [], current [{pfelk}{HZ9Vb6TqRDCBFvpuyUK-Yw}{NCyV1Y_wQBa30Hg1-sFLRw}{pfelk}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}{8.13.1}{7000099-8503000}]}
[2024-04-09T13:50:04,719][INFO ][o.e.c.s.ClusterApplierService] [pfelk] master node changed {previous [], current [{pfelk}{HZ9Vb6TqRDCBFvpuyUK-Yw}{NCyV1Y_wQBa30Hg1-sFLRw}{pfelk}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}{8.13.1}{7000099-8503000}]}, term: 6, version: 403, reason: Publication{term=6, version=403}
[2024-04-09T13:50:05,072][INFO ][o.e.c.f.AbstractFileWatchingService] [pfelk] starting file watcher ...
[2024-04-09T13:50:05,129][INFO ][o.e.c.f.AbstractFileWatchingService] [pfelk] file settings service up and running [tid=61]
[2024-04-09T13:50:05,193][INFO ][o.e.h.AbstractHttpServerTransport] [pfelk] publish_address {192.168.0.30:9200}, bound_addresses {[::]:9200}
[2024-04-09T13:50:05,215][INFO ][o.e.c.c.NodeJoinExecutor ] [pfelk] node-join: [{pfelk}{HZ9Vb6TqRDCBFvpuyUK-Yw}{NCyV1Y_wQBa30Hg1-sFLRw}{pfelk}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}{8.13.1}{7000099-8503000}] with reason [completing election]
[2024-04-09T13:50:05,362][INFO ][o.e.n.Node ] [pfelk] started {pfelk}{HZ9Vb6TqRDCBFvpuyUK-Yw}{NCyV1Y_wQBa30Hg1-sFLRw}{pfelk}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}{8.13.1}{7000099-8503000}{ml.allocated_processors_double=4.0, ml.allocated_processors=4, ml.machine_memory=8188366848, transform.config_version=10.0.0, xpack.installed=true, ml.config_version=12.0.0, ml.max_jvm_size=4093640704}
[2024-04-09T13:50:08,389][INFO ][o.e.x.s.a.Realms ] [pfelk] license mode is [basic], currently licensed security realms are [reserved/reserved,file/default_file,native/default_native]
[2024-04-09T13:50:08,403][INFO ][o.e.l.ClusterStateLicenseService] [pfelk] license [65231b51-28e0-4970-b609-f109d2da40cf] mode [basic] - valid
[2024-04-09T13:50:08,421][INFO ][o.e.g.GatewayService ] [pfelk] recovered [46] indices into cluster_state
[2024-04-09T13:50:11,045][INFO ][o.e.h.n.s.HealthNodeTaskExecutor] [pfelk] Node [{pfelk}{HZ9Vb6TqRDCBFvpuyUK-Yw}] is selected as the current health node.
[2024-04-09T13:50:11,049][ERROR][o.e.i.g.GeoIpDownloader ] [pfelk] exception during geoip databases update
org.elasticsearch.ElasticsearchException: not all primary shards of [.geoip_databases] index are active
at org.elasticsearch.ingest.geoip.GeoIpDownloader.updateDatabases(GeoIpDownloader.java:131) ~[?:?]
at org.elasticsearch.ingest.geoip.GeoIpDownloader.runDownloader(GeoIpDownloader.java:279) ~[?:?]
at org.elasticsearch.ingest.geoip.GeoIpDownloaderTaskExecutor.nodeOperation(GeoIpDownloaderTaskExecutor.java:160) ~[?:?]
at org.elasticsearch.ingest.geoip.GeoIpDownloaderTaskExecutor.nodeOperation(GeoIpDownloaderTaskExecutor.java:59) ~[?:?]
at org.elasticsearch.persistent.NodePersistentTasksExecutor$1.doRun(NodePersistentTasksExecutor.java:34) ~[elasticsearch-8.13.1.jar:?]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:984) ~[elasticsearch-8.13.1.jar:?]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) ~[elasticsearch-8.13.1.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
at java.lang.Thread.run(Thread.java:1570) ~[?:?]
[2024-04-09T13:50:14,710][INFO ][o.e.i.g.DatabaseNodeService] [pfelk] successfully loaded geoip database file [GeoLite2-Country.mmdb]
[2024-04-09T13:50:15,130][INFO ][o.e.x.t.t.TransformTask ] [pfelk] [endpoint.metadata_united-default-8.13.0] updating state for transform to [{"task_state":"started","indexer_state":"stopped","checkpoint":1,"progress":{"docs_indexed":0,"docs_processed":0},"should_stop_at_checkpoint":false,"auth_state":{"timestamp":1712576215988,"status":"green"}}].
[2024-04-09T13:50:15,133][INFO ][o.e.x.t.t.TransformTask ] [pfelk] [endpoint.metadata_current-default-8.13.0] updating state for transform to [{"task_state":"started","indexer_state":"stopped","checkpoint":1,"progress":{"docs_indexed":0,"docs_processed":0},"should_stop_at_checkpoint":false,"auth_state":{"timestamp":1712576216103,"status":"green"}}].
[2024-04-09T13:50:15,471][INFO ][o.e.i.g.DatabaseNodeService] [pfelk] successfully loaded geoip database file [GeoLite2-ASN.mmdb]
[2024-04-09T13:50:16,331][INFO ][o.e.x.t.t.TransformPersistentTasksExecutor] [pfelk] [endpoint.metadata_current-default-8.13.0] successfully completed and scheduled task in node operation
[2024-04-09T13:50:16,722][INFO ][o.e.x.t.t.TransformPersistentTasksExecutor] [pfelk] [endpoint.metadata_united-default-8.13.0] successfully completed and scheduled task in node operation
[2024-04-09T13:50:20,062][INFO ][o.e.i.g.DatabaseNodeService] [pfelk] successfully loaded geoip database file [GeoLite2-City.mmdb]
[2024-04-09T13:50:24,448][INFO ][o.e.c.r.a.AllocationService] [pfelk] current.health="YELLOW" message="Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.ds-logs-pfelk-suricata-2024.04.08-000001][0]]])." previous.health="RED" reason="shards started [[.ds-logs-pfelk-suricata-2024.04.08-000001][0]]"
[2024-04-09T17:08:48,151][INFO ][o.e.c.m.MetadataMappingService] [pfelk] [.ds-logs-pfelk-suricata-2024.04.08-000001/6j_K54-mRr653cpkUlnCuQ] update_mapping [_doc]
[2024-04-09T18:50:12,178][INFO ][o.e.c.m.MetadataMappingService] [pfelk] [.ds-logs-pfelk-suricata-2024.04.08-000001/6j_K54-mRr653cpkUlnCuQ] update_mapping [_doc]
[2024-04-09T19:46:18,513][INFO ][o.e.c.m.MetadataMappingService] [pfelk] [.ds-.logs-deprecation.elasticsearch-default-2024.04.08-000001/5IIoU5ktSqaR3ObJr6hqGg] update_mapping [_doc]
[2024-04-09T21:56:39,778][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [pfelk] updating index lifecycle policy [pfelk]
[2024-04-09T21:57:10,688][WARN ][o.e.c.m.MetadataIndexTemplateService] [pfelk] index template [pfelk-firewall] has index patterns [*-pfelk-firewall*] matching patterns from existing older templates [.monitoring-logstash,.monitoring-es,.monitoring-beats,.monitoring-kibana] with patterns (.monitoring-logstash => [.monitoring-logstash-7-*],.monitoring-es => [.monitoring-es-7-*],.monitoring-beats => [.monitoring-beats-7-*],.monitoring-kibana => [.monitoring-kibana-7-*]); this template [pfelk-firewall] will take precedence during new index creation
[2024-04-09T21:58:22,710][WARN ][o.e.c.m.MetadataIndexTemplateService] [pfelk] index template [pfelk-kea] has index patterns [*-pfelk-kea-dhcp*] matching patterns from existing older templates [.monitoring-logstash,.monitoring-es,.monitoring-beats,.monitoring-kibana] with patterns (.monitoring-logstash => [.monitoring-logstash-7-*],.monitoring-es => [.monitoring-es-7-*],.monitoring-beats => [.monitoring-beats-7-*],.monitoring-kibana => [.monitoring-kibana-7-*]); this template [pfelk-kea] will take precedence during new index creation
[2024-04-09T21:59:22,435][WARN ][o.e.c.m.MetadataIndexTemplateService] [pfelk] index template [pfelk-dhcp] has index patterns [*-pfelk-dhcp*] matching patterns from existing older templates [.monitoring-logstash,.monitoring-es,.monitoring-beats,.monitoring-kibana] with patterns (.monitoring-logstash => [.monitoring-logstash-7-*],.monitoring-es => [.monitoring-es-7-*],.monitoring-beats => [.monitoring-beats-7-*],.monitoring-kibana => [.monitoring-kibana-7-*]); this template [pfelk-dhcp] will take precedence during new index creation
[2024-04-09T21:59:54,739][WARN ][o.e.c.m.MetadataIndexTemplateService] [pfelk] index template [pfelk-unbound] has index patterns [*-pfelk-unbound*] matching patterns from existing older templates [.monitoring-logstash,.monitoring-es,.monitoring-beats,.monitoring-kibana] with patterns (.monitoring-logstash => [.monitoring-logstash-7-*],.monitoring-es => [.monitoring-es-7-*],.monitoring-beats => [.monitoring-beats-7-*],.monitoring-kibana => [.monitoring-kibana-7-*]); this template [pfelk-unbound] will take precedence during new index creation
[2024-04-09T22:00:21,330][WARN ][o.e.c.m.MetadataIndexTemplateService] [pfelk] index template [pfelk-other] has index patterns [*-pfelk-captive*, *-pfelk-snort*, *-pfelk-squid*] matching patterns from existing older templates [.monitoring-logstash,.monitoring-es,.monitoring-beats,.monitoring-kibana] with patterns (.monitoring-logstash => [.monitoring-logstash-7-*],.monitoring-es => [.monitoring-es-7-*],.monitoring-beats => [.monitoring-beats-7-*],.monitoring-kibana => [.monitoring-kibana-7-*]); this template [pfelk-other] will take precedence during new index creation
root@pfelk:/home/stephencooper#
- Logstash logs (tail -f /var/log/logstash/logstash-plain.log):
root@pfelk:/home/stephencooper# tail -f /var/log/logstash/logstash-plain.log
[2024-04-09T13:51:55,262][WARN ][logstash.filters.grok ][pfelk] ECS v8 support is a preview of the unreleased ECS v8, and uses the v1 patterns. When Version 8 of the Elastic Common Schema becomes available, this plugin will need to be updated
[2024-04-09T13:51:55,563][WARN ][logstash.filters.grok ][pfelk] ECS v8 support is a preview of the unreleased ECS v8, and uses the v1 patterns. When Version 8 of the Elastic Common Schema becomes available, this plugin will need to be updated
[2024-04-09T13:51:55,860][WARN ][logstash.filters.grok ][pfelk] ECS v8 support is a preview of the unreleased ECS v8, and uses the v1 patterns. When Version 8 of the Elastic Common Schema becomes available, this plugin will need to be updated
[2024-04-09T13:51:56,839][INFO ][logstash.javapipeline ][pfelk] Starting pipeline {:pipeline_id=>"pfelk", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/etc/pfelk/conf.d/01-inputs.pfelk", "/etc/pfelk/conf.d/02-firewall.pfelk", "/etc/pfelk/conf.d/05-apps.pfelk", "/etc/pfelk/conf.d/30-geoip.pfelk", "/etc/pfelk/conf.d/49-cleanup.pfelk", "/etc/pfelk/conf.d/50-outputs.pfelk"], :thread=>"#<Thread:0x4717b3ea /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[2024-04-09T13:52:08,483][INFO ][logstash.javapipeline ][pfelk] Pipeline Java execution initialization time {"seconds"=>11.64}
[2024-04-09T13:52:08,628][INFO ][logstash.javapipeline ][pfelk] Pipeline started {"pipeline.id"=>"pfelk"}
[2024-04-09T13:52:08,646][INFO ][logstash.inputs.syslog ][pfelk][pfelk-firewall-0001] Starting syslog udp listener {:address=>"0.0.0.0:5140"}
[2024-04-09T13:52:08,646][INFO ][logstash.inputs.syslog ][pfelk][pfelk-firewall-0001] Starting syslog tcp listener {:address=>"0.0.0.0:5140"}
[2024-04-09T13:52:08,682][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:pfelk], :non_running_pipelines=>[]}
[2024-04-09T13:52:11,160][INFO ][logstash.inputs.syslog ][pfelk][pfelk-firewall-0001] new connection {:client=>"192.168.1.1:3665"}
- Kibana logs (journalctl -u kibana.service):
Apr 08 20:25:35 raspberrypi systemd[1]: Started kibana.service - Kibana.
Apr 08 20:25:35 raspberrypi kibana[4540]: Kibana is currently running with legacy OpenSSL providers enabled! For details and instructions on how to disable see https://www.elastic.co/guide/en/kibana/8.13/production.html#openssl-legacy-provider
Apr 08 20:25:37 raspberrypi kibana[4540]: {"log.level":"info","@timestamp":"2024-04-08T10:25:37.664Z","log.logger":"elastic-apm-node","ecs.version":"8.10.0","agentVersion":"4.4.0","env":{"pid":4540,"proctitle":"/usr/share/kibana/bin/../node/bin/node","os":"linux 6.6.20+rpt-rpi-v8","arch":"arm64","host":"raspberrypi","timezone":"UTC+10>
Apr 08 20:25:38 raspberrypi kibana[4540]: Native global console methods have been overridden in production environment.
Apr 08 20:25:41 raspberrypi kibana[4540]: [2024-04-08T20:25:41.805+10:00][INFO ][root] Kibana is starting
Apr 08 20:25:41 raspberrypi kibana[4540]: [2024-04-08T20:25:41.969+10:00][INFO ][node] Kibana process configured with roles: [background_tasks, ui]
Apr 08 20:26:20 raspberrypi kibana[4540]: [2024-04-08T20:26:20.081+10:00][INFO ][plugins-service] The following plugins are disabled: "cloudChat,cloudExperiments,cloudFullStory,profilingDataAccess,profiling,securitySolutionServerless,serverless,serverlessObservability,serverlessSearch".
Apr 08 20:26:20 raspberrypi kibana[4540]: [2024-04-08T20:26:20.502+10:00][INFO ][http.server.Preboot] http server running at http://0.0.0.0:5601
Apr 08 20:26:21 raspberrypi kibana[4540]: [2024-04-08T20:26:21.615+10:00][INFO ][plugins-system.preboot] Setting up [1] plugins: [interactiveSetup]
Apr 08 20:26:21 raspberrypi kibana[4540]: [2024-04-08T20:26:21.733+10:00][INFO ][preboot] "interactiveSetup" plugin is holding setup: Validating Elasticsearch connection configuration…
Apr 08 20:26:21 raspberrypi kibana[4540]: [2024-04-08T20:26:21.954+10:00][INFO ][root] Holding setup until preboot stage is completed.
Apr 08 20:26:22 raspberrypi kibana[4540]: i Kibana has not been configured.
Apr 08 20:26:22 raspberrypi kibana[4540]: Go to http://0.0.0.0:5601/?code=263193 to get started.
Apr 08 20:27:36 raspberrypi kibana[4540]: Your verification code is: 263 193
Apr 08 20:28:24 raspberrypi kibana[4540]: [2024-04-08T20:28:24.513+10:00][INFO ][cli] Reloading Kibana configuration (reason: configuration might have changed during preboot stage).
Apr 08 20:28:25 raspberrypi kibana[4540]: [2024-04-08T20:28:24.584+10:00][INFO ][cli] Reloaded Kibana configuration (reason: configuration might have changed during preboot stage).
Apr 08 20:28:25 raspberrypi kibana[4540]: [2024-04-08T20:28:25.031+10:00][WARN ][config.deprecation] The default mechanism for Reporting privileges will work differently in future versions, which will affect the behavior of this cluster. Set "xpack.reporting.roles.enabled" to "false" to adopt the future behavior before upgrading.
Apr 08 20:28:45 raspberrypi kibana[4540]: [2024-04-08T20:28:45.274+10:00][INFO ][plugins-system.standard] Setting up [149] plugins: [devTools,translations,share,screenshotMode,usageCollection,telemetryCollectionManager,telemetryCollectionXpack,taskManager,kibanaUsageCollection,cloud,newsfeed,savedObjectsFinder,noDataPage,monitoringCol>
Apr 08 20:28:47 raspberrypi kibana[4540]: [2024-04-08T20:28:47.113+10:00][INFO ][plugins.taskManager] TaskManager is identified by the Kibana UUID: a762be3c-c910-44a8-97b6-d39f07817d9d
Apr 08 20:28:49 raspberrypi kibana[4540]: [2024-04-08T20:28:49.716+10:00][INFO ][custom-branding-service] CustomBrandingService registering plugin: customBranding
Apr 08 20:28:53 raspberrypi kibana[4540]: [2024-04-08T20:28:53.685+10:00][WARN ][plugins.screenshotting.config] Chromium sandbox provides an additional layer of protection, but is not supported for Linux Debian 12 OS. Automatically setting 'xpack.screenshotting.browser.chromium.disableSandbox: true'.
Apr 08 20:28:56 raspberrypi kibana[4540]: [2024-04-08T20:28:56.402+10:00][WARN ][plugins.security.config] Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
Apr 08 20:28:56 raspberrypi kibana[4540]: [2024-04-08T20:28:56.406+10:00][WARN ][plugins.security.config] Session cookies will be transmitted over insecure connections. This is not recommended.
Apr 08 20:28:56 raspberrypi kibana[4540]: [2024-04-08T20:28:56.689+10:00][WARN ][plugins.security.config] Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
Additional context
Attach the pfELK Error Log (error.pfelk) for Better Assistance*
- Do not copy/paste log; attach as a file
Hi, I was wondering if you had any thoughts on this?
@a3ilson I am happy to work with you to resolve this...
Check Discover for the pfelk-other* logs. I'm hoping snort logs are being collected. If so, and you can provide those logs, I'll check whether the parsers are working and go from there.
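A quick way to double-check outside of Discover is to count the documents directly; this is only a sketch, and the host, credentials, and index pattern are assumptions to adjust for your install (snort events may land under the pfelk-other template's patterns instead):

# count documents in any index/data stream matching the snort pattern
curl -k -u elastic "https://localhost:9200/*-pfelk-snort*/_count?pretty"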
Hopefully this is related to #527
I can see events in the snort Discover part of the dashboard, but when I search in the Elasticsearch interface, the string you mentioned returns no results.
@thetravellor I was hoping to have created mappings from the snort grok pattern. However, there might be additional fields that require mapping.
- Delete the pfelk-other template
- Reload the pfelk-other template (new/updated)
- Load the pfelk-snort template (new)
- May have to delete the pfelk-snort data view (Kibana) and reload it
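For steps 1-3, if you'd rather use the API than Kibana Dev Tools, a rough sketch follows; the host, user, and local template filenames are assumptions, so point them at your install and the updated template JSON downloaded from the repo:

# remove the old pfelk-other index template
curl -k -u elastic -X DELETE "https://localhost:9200/_index_template/pfelk-other"
# load the updated pfelk-other template and the new pfelk-snort template
curl -k -u elastic -X PUT "https://localhost:9200/_index_template/pfelk-other" \
  -H 'Content-Type: application/json' -d @pfelk-other.json
curl -k -u elastic -X PUT "https://localhost:9200/_index_template/pfelk-snort" \
  -H 'Content-Type: application/json' -d @pfelk-snort.json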
After the above has been completed, let it run and/or allow for snort logs to be generated, then check back to see if the dashboard is working. If not, please provide a few "event.original" messages from the pfelk-snort Discover view.
At step 4 I deleted the pfelk-snort data view, and now I get "Could not find the data view: id-snort" in the snort dashboard, which I guess is logical. What do I need to do to recreate it? (I've no practical experience with Kibana at this level before.)
Download the snort dashboard, saving the file as anything.ndjson. Next, navigate to Stack Management>>Kibana>>Saved Objects and click Import. Drag in or select the previously downloaded snort dashboard file (i.e., the .ndjson).
Once imported, all saved snort objects should be available.
Ref: https://github.com/pfelk/pfelk/blob/main/install/templates.md#a-manual-method
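If the UI import gives you trouble, the same file can be loaded through Kibana's saved-objects import API; this is a sketch that assumes Kibana on localhost:5601 and the elastic user, so adjust as needed:

# import the downloaded dashboard file, overwriting any existing snort objects
curl -u elastic -X POST "http://localhost:5601/api/saved_objects/_import?overwrite=true" \
  -H "kbn-xsrf: true" --form file=@anything.ndjson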
Each log can be expanded by clicking the diagonal/opposing arrows at the far left. Once expanded, provide a screenshot, the text of the event.original, or the JSON [preferred], which will allow me to better understand how the data is stored.
(Attached: Event Sample 1.json, Event Sample 2.json)
Please confirm the following:
- Select "syslog RFC5424, with RFC 3339..." as the Log Messaging Format
- The provided log appears to be in RFC 3164 (default)...the other format will allow for additional fidelity.
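For reference, the two message formats look roughly like this (illustrative lines only; the hostname and priority values are made up):

RFC 3164 (default):   <134>Apr  9 13:52:11 pfsense snort[12345]: [1:2100498:7] ...
RFC 5424 / RFC 3339:  <134>1 2024-04-09T13:52:11.123456+10:00 pfsense snort 12345 - - [1:2100498:7] ...

The RFC 5424 form carries a full timestamp with timezone plus separate app-name and process-ID fields, which is the extra fidelity referred to above.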
Otherwise, looking over the provided JSON files, most if not all of the Snort visualizations should be working.
In addition to the above, let's confirm a few more items:
- Navigate to your "Index Templates"
- Navigate to Discover and select *-pfelk-snort*
  - My reference image isn't the best reference as I do not use Snort (OPNsense supports Suricata)
- Navigate to "Stack Management>>Kibana>>Data Views", click on *-pfelk-snort*, and provide a screenshot of the defined fields.
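The same checks can be done over the APIs if that's easier than screenshots; a sketch, with the host and credentials assumed as before:

# confirm the snort index template is loaded and inspect its mappings
curl -k -u elastic "https://localhost:9200/_index_template/pfelk-snort?pretty"
# list Kibana data views and confirm one titled *-pfelk-snort* exists
curl -u elastic "http://localhost:5601/api/data_views" -H "kbn-xsrf: true"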
Hi there, I tried to change the syslog setting as suggested and ended up breaking every dashboard (unbound, dhcp, etc.), so I reverted that.
I don't know exactly what did it, but after that reversal much of the snort dashboard started working (along with all the other dashboards). Go figure.
I had also installed the extra syslog package, which I had since removed; I reinstalled it for no other scientific reason than to return pfSense and pfelk to the previously known working state.
The only element that still shows nothing is the map of the world.
Apologies; I've experienced similar issues in the past, typically when updating and/or configuring while OPNsense was already sending messages, which seemed to mess things up (i.e., logs arriving before the templates were installed, etc.).
I took a look at the snort dashboard (raw - I'm not running snort) and the geo fields appear to be correct, but that dashboard was built circa 2022 (Elastic v7.11). The map may require rebuilding - what's your comfort level with building visualizations?
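One quick thing worth checking before rebuilding anything: the Maps panel needs a geo_point field, so it's worth confirming the snort indices actually map the location as geo_point. A sketch, assuming the usual ECS field name source.geo.location and the same host/credentials as above:

# return the mapping for just this field across the snort indices
curl -k -u elastic "https://localhost:9200/*-pfelk-snort*/_mapping/field/source.geo.location?pretty"

If this comes back empty, or as anything other than "type": "geo_point", the mapping/template is the problem rather than the visualization.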
My apologies for the delay in response. I know very little about the inner workings of ELK (kudos to you that you do), but I do work in IT in my day job and I can probably figure it out if you provide the steps involved.
Looks to have some errors.
@thetravellor - are you able to provide:
- a few event.original outputs of the snort logs?
- a screenshot of the Discover (*-pfelk-snort) view depicting the parsed fields?
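If it's easier than copying from Discover, a few recent event.original values can also be pulled straight from the API; sketch only, with the same assumed host, user, and index pattern as before:

# return the 5 newest snort documents, keeping only event.original
curl -k -u elastic "https://localhost:9200/*-pfelk-snort*/_search?size=5&pretty" \
  -H 'Content-Type: application/json' \
  -d '{"_source":["event.original"],"sort":[{"@timestamp":{"order":"desc"}}]}'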