Fixed Graylog alerts
Graylog changed the API call for querying alerts and events. Added a graph showing alerts and events.
Thank you for your interest in contributing to Checkmk! Please have a look at the README for details on the process.
General information
Since Graylog version 6.0, the API for querying events and alerts has changed. Currently no information is shown for this service.
Bug reports
I have tested with Graylog 6.0.7 and Checkmk 2.3.0p16 and 2.3.0p19. No information is shown for the Graylog Cluster Alerts service. Both are installed on Debian 12 with the latest updates.
New agent output
<<<graylog_alerts:sep(0)>>>
{"alerts": {"num_of_events": 0, "num_of_alerts": 0}}
<<<graylog_cluster_stats:sep(0)>>>
{"stream_count": 3, "stream_rule_count": 0, "stream_rule_count_by_stream": {"000000000000000000000001": 0, "000000000000000000000002": 0, "000000000000000000000003": 0}, "user_count": 1, "output_count": 0, "output_count_by_type": {}, "dashboard_count": 1, "input_count": 1, "global_input_count": 0, "input_count_by_type": {"org.graylog.plugins.beats.Beats2Input": 1}, "extractor_count": 0, "extractor_count_by_type": {}, "elasticsearch": {"cluster_name": "graylog", "cluster_version": "2.15.0", "status": "Green", "cluster_health": {"number_of_nodes": 1, "number_of_data_nodes": 1, "active_shards": 11, "relocating_shards": 0, "active_primary_shards": 11, "initializing_shards": 0, "unassigned_shards": 0, "timed_out": false, "pending_tasks": 0, "pending_tasks_time_in_queue": []}, "nodes_stats": {"total": 1, "master_only": -1, "data_only": -1, "master_data": -1, "client": -1}, "indices_stats": {"index_count": 11, "store_size": 1806622179, "field_data_size": 0, "id_cache_size": 0}}, "mongo": {"servers": ["localhost:27017"], "build_info": {"version": "7.0.15", "git_version": "57939cc60865b0ce431c7e08c2589fa266a1a740", "sys_info": "deprecated", "loader_flags": null, "compiler_flags": null, "allocator": "tcmalloc", "version_array": [7, 0, 15, 0], "javascript_engine": "mozjs", "bits": 64, "debug": false, "max_bson_object_size": 16777216}, "host_info": {"system": {"current_time": "2024-10-30T09:05:52.775Z", "hostname": "karl", "cpu_addr_size": 64, "mem_size_mb": 11944, "num_cores": 4, "cpu_arch": "x86_64", "numa_enabled": false}, "os": {"type": "Linux", "name": "PRETTY_NAME=\"Debian GNU/Linux 12 (bookworm)\"", "version": "Kernel 6.1.0-26-amd64"}, "extra": {"version_string": "Linux version 6.1.0-26-amd64 ([email protected]) (gcc-12 (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40) #1 SMP PREEMPT_DYNAMIC Debian 6.1.112-1 (2024-09-30)", "libc_version": "2.36", "kernel_version": "6.1.0-26-amd64", "cpu_frequency_mhz": "2808.074", "cpu_features": "fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 xsaves arat umip md_clear flush_l1d arch_capabilities", "scheduler": null, "page_size": 4096, "num_pages": 3057851, "max_open_files": 64000}}, "server_status": {"host": "karl", "version": "7.0.15", "process": "mongod", "pid": 556, "uptime": 9013, "uptime_millis": 9010654, "uptime_estimate": 9010, "local_time": "2024-10-30T11:32:42.374Z", "connections": {"current": 12, "available": 51188, "total_created": 15}, "network": {"bytes_in": 71806939, "bytes_out": 86296644, "num_requests": 121103}, "memory": {"bits": 64, "resident": 208, "virtual": 572, "supported": true, "mapped": -1, "mapped_with_journal": -1}, "storage_engine": {"name": "wiredTiger"}}, "database_stats": {"db": "graylog", "collections": 52, "objects": 598, "avg_obj_size": 556.9096989966555, "data_size": 333032, "storage_size": 1327104, "num_extents": null, "indexes": 128, "index_size": 2932736, "file_size": null, "ns_size_mb": null, "extent_free_list": null, "data_file_version": null}}}
<<<graylog_cluster_traffic:sep(0)>>>
{"from": "2024-10-29T00:00:00.000Z", "to": "2024-10-30T11:32:42.386Z", "input": {"2024-10-30T09:00:00.000Z": 277888598, "2024-10-30T10:00:00.000Z": 19697252, "2024-10-30T11:00:00.000Z": 10554791}, "output": {"2024-10-30T09:00:00.000Z": 337773683, "2024-10-30T10:00:00.000Z": 21097867, "2024-10-30T11:00:00.000Z": 11268993}, "decoded": {"2024-10-30T09:00:00.000Z": 324348467, "2024-10-30T10:00:00.000Z": 20518171, "2024-10-30T11:00:00.000Z": 10959681}}
<<<graylog_failures:sep(0)>>>
{"count": 0, "failures": [], "total": 0, "ds_param_since": 1800}
<<<graylog_jvm:sep(0)>>>
{"jvm.memory.heap.used": 238671848, "jvm.memory.heap.committed": 1073741824, "jvm.memory.heap.init": 1073741824, "jvm.memory.heap.max": 1073741824, "jvm.memory.heap.usage": 0.22228047996759415}
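As a quick sanity check on the sample output above, the reported "jvm.memory.heap.usage" value is simply heap.used divided by heap.max. A minimal sketch, with the values copied from the agent output:

```python
# Values copied from the <<<graylog_jvm>>> sample output above.
heap_used = 238671848
heap_max = 1073741824  # 1 GiB

# The usage ratio reported by Graylog is used / max.
usage = heap_used / heap_max
print(round(usage, 4))  # roughly 0.2223, matching "jvm.memory.heap.usage"
```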
Proposed changes
I have updated the check to use the new API request so the needed information is retrieved again. The URL and the query were updated to the new version. It has been tested with 5 different customers of my company; after these changes the service works again.
This patch updates the Graylog files in master. These changes should also be applied to 2.3.0pXX.
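For reviewers, the shape of the change can be sketched as follows. This is a minimal illustration, not the actual patch: the endpoint path (`/api/events/search`), the payload field names, and the helper functions are assumptions based on the Graylog 6.x events API and are not taken from this PR; an HTTP client such as `requests` would stand in for Checkmk's special-agent infrastructure.

```python
import json

def build_events_payload(alerts_filter: str, range_seconds: int = 86400) -> dict:
    """Hypothetical request body for Graylog's newer events search endpoint
    (POST /api/events/search). Field names are assumptions, not copied from
    the actual patch."""
    return {
        "query": "",
        "page": 1,
        "per_page": 1,  # only the total count is needed
        "filter": {"alerts": alerts_filter},  # e.g. "only" or "include"
        "timerange": {"type": "relative", "range": range_seconds},
    }

def section_from_totals(num_events: int, num_alerts: int) -> str:
    """Render the <<<graylog_alerts>>> section body shown above from the
    totals returned by two such queries."""
    return json.dumps(
        {"alerts": {"num_of_events": num_events, "num_of_alerts": num_alerts}}
    )

# Counts here are placeholders for totals taken from the API responses:
print("<<<graylog_alerts:sep(0)>>>")
print(section_from_totals(num_events=0, num_alerts=0))
```

The two queries (one counting all events, one counting only alerts) would replace the pre-6.0 alerts call that no longer returns data.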
All contributors have signed the CLA ✍️ ✅
Posted by the CLA Assistant Lite bot.
I have read the CLA Document and I hereby sign the CLA or my organization already has a signed CLA.
Hello, any update on this? Customers are waiting for the fix to be applied. Thanks.
Hi @sven-ruess
Sorry it took so long to get back to you about your open pull request. I want to manage expectations: this PR will not make it into Checkmk 2.5. I understand this isn't the news you were hoping for.
The reason is that this pull request is quite large in terms of the changes and new functionality it introduces. While we previously streamlined our process to handle small, concise pull requests much faster, we don't yet have a process for PRs of this size. They require a significant technical analysis and development effort that we cannot currently plan for; such larger pull requests require a dedicated process.
We truly appreciate the contributions you and other users make, and we want to make the most of this valuable input, into which you have invested great effort. Building a process to handle this is on our agenda as we strive to bring in these larger contributions.
We will keep this PR open and will be getting back to you as soon as we can.