Timeperiod exclusions don't work as expected
Describe the bug
In our config, we use two timeperiods, one excluding the other, and the exclusion doesn't seem to work when a time period contains multiple sub-timeperiods.
To Reproduce
Here is a configuration sample:
object TimePeriod "24x7" {
  ranges = {
    tuesday = "00:00-23:59"
  }
}

object TimePeriod "onDuty" {
  // object attributes
  excludes = []
  ranges = {
    tuesday = "09:00-09:30"
  }
}

object TimePeriod "offDuty" {
  excludes = ["onDuty"]
  ranges = {
    tuesday = "00:00-23:59"
  }
}

object User "me" {
  email = "[email protected]"
  enable_notifications = true
  groups = []
  period = "24x7"
}

object Host "test" {
  address = "127.0.0.1"
  zone = "fr"
  check_command = "check_host_alive"
  check_interval = 20s
  check_period = "offDuty"
  enable_active_checks = 1
  enable_event_handler = 0
  enable_flapping = 0
  enable_notifications = 0
  enable_passive_checks = 0
  max_check_attempts = 3
  retry_interval = 5s
}

apply Service "test-me" {
  host_name = "test"
  display_name = "test"
  check_command = "dummy"
  check_interval = 20s
  retry_interval = 5s
  check_period = "offDuty"
  enable_notifications = 1
  vars.dummy_state = 2
  vars.dummy_text = "TEST KO"
  vars.is_test = "True"
  vars.notification_interval = 60
  vars.notification_options_states = [Critical, Unknown, Warning]
  vars.notification_options_types = [Problem, Recovery]
  vars.notification_period = "offDuty"
  assign where host.name == "test"
}

apply Notification "service-bysms" to Service {
  // object attributes
  command = "notify-service-by-sms"
  period = "24x7"
  users = ["me"]
  states = service.vars.notification_options_states
  types = service.vars.notification_options_types
  if (service.vars["notification_interval"]) {
    interval = service.vars["notification_interval"]
  } else {
    interval = 1800
  }
  if (service.vars["notification_period"]) {
    period = service.vars["notification_period"]
  }
  assign where host.name == "test"
}
Expected behavior
To me, in this example, the effective offDuty period should be 00:00-09:00 and 09:30-23:59, i.e. the full Tuesday range minus the onDuty window.
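The expectation above amounts to plain interval subtraction. A minimal sketch (illustrative Python using minutes since midnight, not Icinga's actual implementation):

```python
def subtract(base, excludes):
    """Subtract each exclude interval from the base interval.

    Intervals are (start, end) tuples in minutes since midnight.
    Returns the remaining segments in order.
    """
    segments = [base]
    for ex_start, ex_end in excludes:
        out = []
        for start, end in segments:
            # keep the part before the exclusion...
            if start < ex_start:
                out.append((start, min(end, ex_start)))
            # ...and the part after it
            if end > ex_end:
                out.append((max(start, ex_end), end))
        segments = out
    return segments

def hhmm(minutes):
    """Format minutes since midnight as HH:MM."""
    return f"{minutes // 60:02d}:{minutes % 60:02d}"

# offDuty tuesday 00:00-23:59 minus onDuty tuesday 09:00-09:30
result = subtract((0, 23 * 60 + 59), [(9 * 60, 9 * 60 + 30)])
print([f"{hhmm(a)}-{hhmm(b)}" for a, b in result])
# -> ['00:00-09:00', '09:30-23:59']
```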
Your Environment
- Version used (icinga2 --version):
icinga2 - The Icinga 2 network monitoring daemon (version: r2.10.5-1)
Copyright (c) 2012-2019 Icinga GmbH (https://icinga.com/)
License GPLv2+: GNU GPL version 2 or later <http://gnu.org/licenses/gpl2.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
System information:
Platform: Debian GNU/Linux
Platform version: 9 (stretch)
Kernel: Linux
Kernel version: 4.14.119
Architecture: x86_64
Build information:
Compiler: GNU 6.3.0
Build host: cb654124b660
Application information:
General paths:
Config directory: /etc/icinga2
Data directory: /var/lib/icinga2
Log directory: /var/log/icinga2
Cache directory: /var/cache/icinga2
Spool directory: /var/spool/icinga2
Run directory: /run/icinga2
Old paths (deprecated):
Installation root: /usr
Sysconf directory: /etc
Run directory (base): /run
Local state directory: /var
Internal paths:
Package data directory: /usr/share/icinga2
State path: /var/lib/icinga2/icinga2.state
Modified attributes path: /var/lib/icinga2/modified-attributes.conf
Objects path: /var/cache/icinga2/icinga2.debug
Vars path: /var/cache/icinga2/icinga2.vars
PID path: /run/icinga2/icinga2.pid
- Operating System and version:
# lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 9.9 (stretch)
Release: 9.9
Codename: stretch
- Enabled features (icinga2 feature list):
Disabled features: command compatlog debuglog elasticsearch gelf graphite influxdb opentsdb perfdata statusdata syslog
Enabled features: api checker ido-pgsql livestatus mainlog notification
- Config validation (icinga2 daemon -C):
...
[2019-08-06 12:51:25 +0200] information/ConfigItem: Instantiated 5293 Services.
[2019-08-06 12:51:25 +0200] information/ConfigItem: Instantiated 2 LivestatusListeners.
[2019-08-06 12:51:25 +0200] information/ConfigItem: Instantiated 1 IcingaApplication.
[2019-08-06 12:51:25 +0200] information/ConfigItem: Instantiated 2070 Hosts.
[2019-08-06 12:51:25 +0200] information/ConfigItem: Instantiated 1 EventCommand.
[2019-08-06 12:51:25 +0200] information/ConfigItem: Instantiated 1 FileLogger.
[2019-08-06 12:51:25 +0200] information/ConfigItem: Instantiated 6 NotificationCommands.
[2019-08-06 12:51:25 +0200] information/ConfigItem: Instantiated 22087 Notifications.
[2019-08-06 12:51:25 +0200] information/ConfigItem: Instantiated 1 NotificationComponent.
[2019-08-06 12:51:25 +0200] information/ConfigItem: Instantiated 154 HostGroups.
[2019-08-06 12:51:25 +0200] information/ConfigItem: Instantiated 1 ApiListener.
[2019-08-06 12:51:25 +0200] information/ConfigItem: Instantiated 1 CheckerComponent.
[2019-08-06 12:51:25 +0200] information/ConfigItem: Instantiated 4 Zones.
[2019-08-06 12:51:25 +0200] information/ConfigItem: Instantiated 6 Endpoints.
[2019-08-06 12:51:25 +0200] information/ConfigItem: Instantiated 4 ApiUsers.
[2019-08-06 12:51:25 +0200] information/ConfigItem: Instantiated 15 Users.
[2019-08-06 12:51:25 +0200] information/ConfigItem: Instantiated 247 CheckCommands.
[2019-08-06 12:51:25 +0200] information/ConfigItem: Instantiated 1 IdoPgsqlConnection.
[2019-08-06 12:51:25 +0200] information/ConfigItem: Instantiated 94 UserGroups.
[2019-08-06 12:51:25 +0200] information/ConfigItem: Instantiated 14 TimePeriods.
[2019-08-06 12:51:25 +0200] information/ScriptGlobal: Dumping variables to file '/var/cache/icinga2/icinga2.vars'
[2019-08-06 12:51:25 +0200] information/cli: Finished validating the configuration file(s).
Thanks!
Do you have the debug logs from Thursday 09:00 till 10:00 to help narrow down the problem? Obviously the TimePeriod object is used as check_period, not as the notification period, so trace the scheduled checks and their results.
Thanks for your reply.
Hum, unfortunately no. I will try to get some logs today but I will have to spawn a dedicated instance.
The timeperiod is used as both check and notification period, look at period = service.vars["notification_period"] in the notification apply rule. By the way, do you have a smarter way of migrating notification_period, notification_interval and notification_options from a Nagios object (I mean something that can be automated)?
Here is a configuration which reproduces the problem on a fresh Icinga install on Debian Stretch with v2.10.5-1 (same environment as my production):
# cat /etc/icinga2/conf.d/test.conf
/*object TimePeriod "24x7" {
  ranges = {
    monday = "00:00-23:59"
    tuesday = "00:00-23:59"
    wednesday = "00:00-23:59"
    thursday = "00:00-23:59"
    friday = "00:00-23:59"
    saturday = "00:00-23:59"
    sunday = "00:00-23:59"
  }
}*/

object TimePeriod "onDuty" {
  // object attributes
  excludes = []
  ranges = {
    monday = "09:00-10:00"
    tuesday = "09:00-10:00"
    wednesday = "09:00-10:00"
    thursday = "09:00-10:00"
    friday = "09:00-10:00"
    saturday = "09:00-10:00"
    sunday = "09:00-10:00"
  }
}

object TimePeriod "offDuty" {
  excludes = ["onDuty"]
  ranges = {
    monday = "00:00-23:59"
    tuesday = "00:00-23:59"
    wednesday = "00:00-23:59"
    thursday = "00:00-23:59"
    friday = "00:00-23:59"
    saturday = "00:00-23:59"
    sunday = "00:00-23:59"
  }
}

object User "me" {
  email = "[email protected]"
  enable_notifications = true
  groups = []
  period = "24x7"
}

object Host "test" {
  address = "127.0.0.1"
  check_command = "hostalive"
  check_interval = 20s
  check_period = "offDuty"
  enable_active_checks = 1
  enable_event_handler = 0
  enable_flapping = 0
  enable_notifications = 0
  enable_passive_checks = 0
  max_check_attempts = 3
  retry_interval = 5s
}

apply Service "test-me" {
  host_name = "test"
  display_name = "test"
  check_command = "dummy"
  check_interval = 20s
  retry_interval = 5s
  check_period = "offDuty"
  enable_notifications = 1
  vars.dummy_state = 2
  vars.dummy_text = "TEST KO"
  vars.is_test = "True"
  vars.notification_interval = 60
  vars.notification_options_states = [Critical, Unknown, Warning]
  vars.notification_options_types = [Problem, Recovery]
  vars.notification_period = "offDuty"
  assign where host.name == "test"
}

apply Notification "service-bysms" to Service {
  // object attributes
  command = "mail-service-notification"
  period = "24x7"
  users = ["me"]
  states = service.vars.notification_options_states
  types = service.vars.notification_options_types
  if (service.vars["notification_interval"]) {
    interval = service.vars["notification_interval"]
  } else {
    interval = 1800
  }
  if (service.vars["notification_period"]) {
    period = service.vars["notification_period"]
  }
  assign where host.name == "test"
}
Logs:
[2019-08-08 09:41:04 +0200] debug/CheckerComponent: Scheduling info for checkable 'test!test-me' (2019-08-08 09:41:04 +0200): Object 'test!test-me', Next Check: 2019-08-08 09:41:04 +0200(1.56525e+09).
[2019-08-08 09:41:04 +0200] debug/CheckerComponent: Executing check for 'test!test-me'
[2019-08-08 09:41:04 +0200] debug/Checkable: Update checkable 'test!test-me' with check interval '20' from last check time at 2019-08-08 09:40:43 +0200 (1.56525e+09) to next check time at 2019-08-08 09:41:22 +0200(1.56525e+09).
[2019-08-08 09:41:04 +0200] debug/Checkable: Update checkable 'test!test-me' with check interval '20' from last check time at 2019-08-08 09:41:04 +0200 (1.56525e+09) to next check time at 2019-08-08 09:41:22 +0200(1.56525e+09).
[2019-08-08 09:41:04 +0200] debug/DbEvents: add checkable check history for 'test!test-me'
[2019-08-08 09:41:04 +0200] debug/CheckerComponent: Check finished for object 'test!test-me'
[2019-08-08 09:41:05 +0200] notice/NotificationComponent: Attempting to send reminder notification 'test!test-me!service-bysms'
[2019-08-08 09:41:05 +0200] notice/Notification: Attempting to send reminder notifications for notification object 'test!test-me!service-bysms'.
[2019-08-08 09:41:05 +0200] information/Notification: Sending reminder 'Problem' notification 'test!test-me!service-bysms' for user 'me'
[2019-08-08 09:41:05 +0200] debug/DbEvents: add notification history for 'test!test-me'
[2019-08-08 09:41:05 +0200] debug/DbEvents: add contact notification history for service 'test!test-me' and user 'me'.
[2019-08-08 09:41:05 +0200] notice/Process: Running command '/etc/icinga2/scripts/mail-service-notification.sh' '-4' '127.0.0.1' '-6' '' '-b' '' '-c' '' '-d' '2019-08-08 09:41:05 +0200' '-e' 'test-me' '-l' 'test' '-n' 'test' '-o' 'TEST KO' '-r' '[email protected]' '-s' 'CRITICAL' '-t' 'PROBLEM' '-u' 'test': PID 8820
[2019-08-08 09:41:05 +0200] debug/DbEvents: add log entry history for 'test!test-me'
[2019-08-08 09:41:05 +0200] information/Notification: Completed sending 'Problem' notification 'test!test-me!service-bysms' for checkable 'test!test-me' and user 'me'.
[2019-08-08 09:41:05 +0200] notice/Process: PID 8820 ('/etc/icinga2/scripts/mail-service-notification.sh' '-4' '127.0.0.1' '-6' '' '-b' '' '-c' '' '-d' '2019-08-08 09:41:05 +0200' '-e' 'test-me' '-l' 'test' '-n' 'test' '-o' 'TEST KO' '-r' '[email protected]' '-s' 'CRITICAL' '-t' 'PROBLEM' '-u' 'test') terminated with exit code 0
The notification period set through service vars is applied correctly:
# icinga2 object list --type Notification --name 'test!test-me!service-bysms'
Object 'test!test-me!service-bysms' of type 'Notification':
% declared in '/etc/icinga2/conf.d/test.conf', lines 86:1-86:45
* __name = "test!test-me!service-bysms"
* command = "mail-service-notification"
% = modified in '/etc/icinga2/conf.d/test.conf', lines 88:2-88:38
* command_endpoint = ""
* host_name = "test"
% = modified in '/etc/icinga2/conf.d/test.conf', lines 86:1-86:45
* interval = 60
% = modified in '/etc/icinga2/conf.d/test.conf', lines 96:10-96:57
* name = "service-bysms"
* package = "_etc"
% = modified in '/etc/icinga2/conf.d/test.conf', lines 86:1-86:45
* period = "offDuty"
% = modified in '/etc/icinga2/conf.d/test.conf', lines 89:2-89:16
% = modified in '/etc/icinga2/conf.d/test.conf', lines 102:10-102:53
* service_name = "test-me"
% = modified in '/etc/icinga2/conf.d/test.conf', lines 86:1-86:45
* source_location
* first_column = 1
* first_line = 86
* last_column = 45
* last_line = 86
* path = "/etc/icinga2/conf.d/test.conf"
* states = [ "Critical", "Unknown", "Warning" ]
% = modified in '/etc/icinga2/conf.d/test.conf', lines 93:2-93:50
* templates = [ "service-bysms" ]
% = modified in '/etc/icinga2/conf.d/test.conf', lines 86:1-86:45
* times = null
* type = "Notification"
* types = [ "Problem", "Recovery" ]
% = modified in '/etc/icinga2/conf.d/test.conf', lines 94:2-94:48
* user_groups = null
* users = [ "me" ]
% = modified in '/etc/icinga2/conf.d/test.conf', lines 91:2-91:15
* vars = null
* zone = ""
To me, in this example, the notification should not have occurred between 09:00 and 10:00. I also tried playing with the prefer_includes attribute, but it doesn't change anything :-/
I prefer keeping the details inside the notification objects and apply rules; I don't like the old way of stashing everything into hosts and services for notifications. That's also why the notification objects exist, I am one of the architects of that feature. Anyhow, your approach looks sufficient to me; I just didn't understand its intention until now.
Logs will help to narrow this down further. You can also create a dummy TimePeriod configuration with the above excludes and then query the REST API at /v1/objects/timeperiods for the is_inside attribute at that very moment, e.g. with a loop checking it every second.
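Such a polling loop could look like the following sketch (Python, stdlib only). The host, port, and credentials are placeholders for your setup, and TLS verification is disabled here purely because a lab instance typically uses a self-signed API certificate:

```python
import base64
import json
import ssl
import time
import urllib.request

# Placeholder endpoint and credentials -- adjust to your environment.
URL = "https://localhost:5665/v1/objects/timeperiods/offDuty?attrs=is_inside"
USER, PASSWORD = "root", "icinga"

def extract_is_inside(body):
    """Parse an API response body and return the is_inside attribute.

    Assumes the usual Icinga 2 API envelope: {"results": [{"attrs": {...}}]}.
    """
    return json.loads(body)["results"][0]["attrs"]["is_inside"]

def poll(url, user, password, interval=1.0):
    # Lab setup only: skip TLS verification for the self-signed API cert.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    while True:
        req = urllib.request.Request(url)
        req.add_header("Authorization", "Basic " + token)
        with urllib.request.urlopen(req, context=ctx) as resp:
            print(time.strftime("%H:%M:%S"), extract_is_inside(resp.read()))
        time.sleep(interval)

if __name__ == "__main__":
    poll(URL, USER, PASSWORD)
```

Running this across the boundary of the excluded window should show is_inside flipping from true to false if the exclusion is applied.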
Yes, I understand :-) For now I need my old prod to run alongside Icinga in order to validate that everything works as expected. That's why I did this "bad thing" to make my migration script work. Once Icinga has replaced my old monitoring infrastructure I'll rewrite my config the Icinga way :-)
OK for the is_inside test, will do.
OK, so very strange :-) I managed to reproduce it several times. Here is the scenario:
- add the above configuration (the onDuty timeperiod times were changed for testing purposes; see the PS below)
- start Icinga -> it won't work
- enable API and restart Icinga
- it works! I have the following logs:
[2019-08-08 14:03:32 +0200] notice/CheckerComponent: Skipping check for object 'test!test-me': not in check period 'offDuty'
[2019-08-08 14:03:32 +0200] debug/CheckerComponent: Checks for checkable 'test!test-me' are disabled. Rescheduling check.
[2019-08-08 14:03:32 +0200] debug/Checkable: Update checkable 'test!test-me' with check interval '20' from last check time at 2019-08-08 14:01:39 +0200 (1.56527e+09) to next check time at 2019-08-08 14:03:51 +0200(1.56527e+09).
Timeperiod logs:
- with API disabled:
[2019-08-08 14:00:23 +0200] debug/TimePeriod: Removing segment 'Thu Aug 8 14:00:23 2019' <-> 'Fri Aug 9 14:00:23 2019' from TimePeriod '24x7'
[2019-08-08 14:00:23 +0200] debug/TimePeriod: Adding segment 'Thu Aug 8 00:00:00 2019' <-> 'Fri Aug 9 00:00:00 2019' to TimePeriod '24x7'
[2019-08-08 14:00:23 +0200] debug/TimePeriod: Adding segment 'Fri Aug 9 00:00:00 2019' <-> 'Sat Aug 10 00:00:00 2019' to TimePeriod '24x7'
[2019-08-08 14:00:23 +0200] debug/TimePeriod: Removing segment 'Thu Aug 8 14:00:23 2019' <-> 'Fri Aug 9 14:00:23 2019' from TimePeriod 'never'
[2019-08-08 14:00:23 +0200] debug/TimePeriod: Removing segment 'Thu Aug 8 14:00:23 2019' <-> 'Fri Aug 9 14:00:23 2019' from TimePeriod 'offDuty'
[2019-08-08 14:00:23 +0200] debug/TimePeriod: Adding segment 'Thu Aug 8 00:00:00 2019' <-> 'Thu Aug 8 23:59:00 2019' to TimePeriod 'offDuty'
[2019-08-08 14:00:23 +0200] debug/TimePeriod: Adding segment 'Fri Aug 9 00:00:00 2019' <-> 'Fri Aug 9 23:59:00 2019' to TimePeriod 'offDuty'
[2019-08-08 14:00:23 +0200] debug/TimePeriod: Merge TimePeriod 'offDuty' with 'onDuty' Method: exclude
[2019-08-08 14:00:23 +0200] debug/TimePeriod: Removing segment 'Thu Aug 8 12:35:00 2019' <-> 'Thu Aug 8 13:00:00 2019' from TimePeriod 'offDuty'
[2019-08-08 14:00:23 +0200] debug/TimePeriod: Removing segment 'Fri Aug 9 09:00:00 2019' <-> 'Fri Aug 9 10:00:00 2019' from TimePeriod 'offDuty'
[2019-08-08 14:00:23 +0200] debug/TimePeriod: Removing segment 'Thu Aug 8 14:00:23 2019' <-> 'Fri Aug 9 14:00:23 2019' from TimePeriod 'onDuty'
[2019-08-08 14:00:23 +0200] debug/TimePeriod: Adding segment 'Thu Aug 8 12:35:00 2019' <-> 'Thu Aug 8 14:05:00 2019' to TimePeriod 'onDuty'
[2019-08-08 14:00:23 +0200] debug/TimePeriod: Adding segment 'Fri Aug 9 09:00:00 2019' <-> 'Fri Aug 9 10:00:00 2019' to TimePeriod 'onDuty'
- with API enabled:
[2019-08-08 14:03:14 +0200] debug/TimePeriod: Removing segment 'Thu Aug 8 14:03:14 2019' <-> 'Fri Aug 9 14:03:14 2019' from TimePeriod 'onDuty'
[2019-08-08 14:03:14 +0200] debug/TimePeriod: Adding segment 'Thu Aug 8 12:35:00 2019' <-> 'Thu Aug 8 14:05:00 2019' to TimePeriod 'onDuty'
[2019-08-08 14:03:14 +0200] debug/TimePeriod: Adding segment 'Fri Aug 9 09:00:00 2019' <-> 'Fri Aug 9 10:00:00 2019' to TimePeriod 'onDuty'
[2019-08-08 14:03:14 +0200] debug/TimePeriod: Removing segment 'Thu Aug 8 14:03:14 2019' <-> 'Fri Aug 9 14:03:14 2019' from TimePeriod 'never'
[2019-08-08 14:03:14 +0200] debug/TimePeriod: Removing segment 'Thu Aug 8 14:03:14 2019' <-> 'Fri Aug 9 14:03:14 2019' from TimePeriod 'offDuty'
[2019-08-08 14:03:14 +0200] debug/TimePeriod: Adding segment 'Thu Aug 8 00:00:00 2019' <-> 'Thu Aug 8 23:59:00 2019' to TimePeriod 'offDuty'
[2019-08-08 14:03:14 +0200] debug/TimePeriod: Adding segment 'Fri Aug 9 00:00:00 2019' <-> 'Fri Aug 9 23:59:00 2019' to TimePeriod 'offDuty'
[2019-08-08 14:03:14 +0200] debug/TimePeriod: Merge TimePeriod 'offDuty' with 'onDuty' Method: exclude
[2019-08-08 14:03:14 +0200] debug/TimePeriod: Removing segment 'Thu Aug 8 12:35:00 2019' <-> 'Thu Aug 8 14:05:00 2019' from TimePeriod 'offDuty'
[2019-08-08 14:03:14 +0200] debug/TimePeriod: Removing segment 'Fri Aug 9 09:00:00 2019' <-> 'Fri Aug 9 10:00:00 2019' from TimePeriod 'offDuty'
[2019-08-08 14:03:14 +0200] debug/TimePeriod: Removing segment 'Thu Aug 8 14:03:14 2019' <-> 'Fri Aug 9 14:03:14 2019' from TimePeriod '24x7'
[2019-08-08 14:03:14 +0200] debug/TimePeriod: Adding segment 'Thu Aug 8 00:00:00 2019' <-> 'Fri Aug 9 00:00:00 2019' to TimePeriod '24x7'
[2019-08-08 14:03:14 +0200] debug/TimePeriod: Adding segment 'Fri Aug 9 00:00:00 2019' <-> 'Sat Aug 10 00:00:00 2019' to TimePeriod '24x7'
[2019-08-08 14:08:14 +0200] debug/TimePeriod: Removing segment 'Fri Aug 9 14:03:14 2019' <-> 'Fri Aug 9 14:08:14 2019' from TimePeriod 'never'
[2019-08-08 14:08:14 +0200] debug/TimePeriod: Removing segment 'Fri Aug 9 14:03:14 2019' <-> 'Fri Aug 9 14:08:14 2019' from TimePeriod 'onDuty'
If you grep for 12:35, for instance, the segment addition/removal order is not the same. So it has something to do with the API. In my production setup I have a distributed architecture, so that is probably yet another case.
PS: the onDuty timeperiod used in this example:

object TimePeriod "onDuty" {
  // object attributes
  excludes = []
  ranges = {
    monday = "09:00-10:00"
    tuesday = "09:00-10:00"
    wednesday = "09:00-10:00"
    thursday = "12:35-14:05"
    friday = "09:00-10:00"
    saturday = "09:00-10:00"
    sunday = "09:00-10:00"
  }
}
Sounds like #7239 if you always need to restart twice.
You're right, the double restart did the trick, but in a distributed architecture it seems that is_inside is always set to true. I tried to restart both masters twice but it doesn't work :-/
We're a bit busy with RC testing, so debugging this may take a while. Writing from my iPad here. Meanwhile, I'd suggest analysing the debug logs to see whether there's an update on TimePeriods between the masters. I doubt it, but why else would it be true all the time? Also, check that the TimePeriod object has paused=false and is active. I'm not sure why that would influence the segment calculation, but that's code I am not familiar with.
If you're brave, use the centos7-dev vagrant box, compile icinga2 and add breakpoints in gdb for these calculations. The development docs have more insights.
OK, I understand, no problem. I actually have a workaround: defining a timeperiod with the offDuty ranges computed manually :-)
The paused value is true. Here is the complete output of the console:
<1> => DateTime()
{
type = "DateTime"
value = 1565673496.336826
}
<2> => get_time_period("offDuty")
{
__name = "offDuty"
active = true
display_name = "offDuty"
excludes = [ "onDuty" ]
extensions = {
DbObject = {
type = "Object"
}
}
ha_mode = 0.000000
includes = [ ]
is_inside = true
name = "offDuty"
original_attributes = null
package = "shinken_migration"
pause_called = false
paused = true
prefer_includes = true
ranges = {
friday = "00:00-23:59"
monday = "00:00-23:59"
thursday = "00:00-23:59"
tuesday = "00:00-23:59"
wednesday = "00:00-23:59"
}
resume_called = false
segments = [ {
begin = 1565647200.000000
end = 1565733540.000000
}, {
begin = 1565733600.000000
end = 1565819940.000000
} ]
source_location = {
first_column = 1.000000
first_line = 93.000000
last_column = 27.000000
last_line = 93.000000
path = "/var/lib/icinga2/api/packages/shinken_migration/35629376-2ba1-476b-8dd6-173084a7a540/zones.d/global-templates/timeperiods.conf"
}
start_called = true
state_loaded = true
stop_called = false
templates = [ "offDuty", "legacy-timeperiod" ]
type = "TimePeriod"
update = {
arguments = [ "tp", "begin", "end" ]
deprecated = false
name = "Internal#LegacyTimePeriod"
side_effect_free = false
type = "Function"
}
valid_begin = 1565647200.000000
valid_end = 1565819940.000000
vars = null
version = 0.000000
zone = "global-templates"
}
<3> => get_time_period("onDuty")
{
__name = "onDuty"
active = true
display_name = "onDuty"
excludes = [ ]
extensions = {
DbObject = {
type = "Object"
}
}
ha_mode = 0.000000
includes = [ ]
is_inside = true
name = "onDuty"
original_attributes = null
package = "shinken_migration"
pause_called = false
paused = true
prefer_includes = true
ranges = {
"2018-05-10" = "00:00-23:59"
"2019-01-01" = "00:00-23:59"
"2019-05-01" = "00:00-23:59"
"2019-05-08" = "00:00-23:59"
"2019-05-30" = "00:00-23:59"
"2019-07-14" = "00:00-23:59"
"2019-08-15" = "00:00-23:59"
"2019-11-01" = "00:00-23:59"
"2019-11-11" = "00:00-23:59"
"2019-12-25" = "00:00-23:59"
"2020-01-01" = "00:00-23:59"
"2020-04-13" = "00:00-23:59"
"2020-05-01" = "00:00-23:59"
"2020-05-08" = "00:00-23:59"
"2020-05-21" = "00:00-23:59"
"2020-07-14" = "00:00-23:59"
"2020-08-15" = "00:00-23:59"
"2020-11-01" = "00:00-23:59"
"2020-11-11" = "00:00-23:59"
"2020-12-25" = "00:00-23:59"
"2021-01-01" = "00:00-23:59"
"2021-04-05" = "00:00-23:59"
"2021-05-01" = "00:00-23:59"
"2021-05-08" = "00:00-23:59"
"2021-05-13" = "00:00-23:59"
"2021-07-14" = "00:00-23:59"
"2021-08-15" = "00:00-23:59"
"2021-11-01" = "00:00-23:59"
"2021-11-11" = "00:00-23:59"
"2021-12-25" = "00:00-23:59"
"2022-04-18" = "00:00-23:59"
"2022-05-26" = "00:00-23:59"
"2023-04-10" = "00:00-23:59"
"2023-05-18" = "00:00-23:59"
"2024-04-01" = "00:00-23:59"
"2024-05-09" = "00:00-23:59"
"2025-04-21" = "00:00-23:59"
"2025-05-29" = "00:00-23:59"
"2026-04-06" = "00:00-23:59"
"2026-05-14" = "00:00-23:59"
"2027-03-29" = "00:00-23:59"
"2027-05-06" = "00:00-23:59"
"2028-04-17" = "00:00-23:59"
"2028-05-25" = "00:00-23:59"
"2029-04-02" = "00:00-23:59"
"2029-05-10" = "00:00-23:59"
"2030-04-22" = "00:00-23:59"
"2030-05-30" = "00:00-23:59"
"2031-04-14" = "00:00-23:59"
"2031-05-22" = "00:00-23:59"
"august 15" = "00:00-23:59"
"december 25" = "00:00-23:59"
friday = "00:00-09:59,18:01-23:59"
"january 1" = "00:00-23:59"
"july 14" = "00:00-23:59"
"may 1" = "00:00-23:59"
"may 8" = "00:00-23:59"
monday = "00:00-09:59,18:01-23:59"
"november 1" = "00:00-23:59"
"november 11" = "00:00-23:59"
saturday = "00:00-23:59"
sunday = "00:00-23:59"
thursday = "00:00-09:59,18:01-23:59"
tuesday = "00:00-09:59,18:01-23:59"
wednesday = "00:00-09:59,18:01-23:59"
}
resume_called = false
segments = [ {
begin = 1565647200.000000
end = 1565683140.000000
}, {
begin = 1565712060.000000
end = 1565733540.000000
}, {
begin = 1565733600.000000
end = 1565769540.000000
}, {
begin = 1565798460.000000
end = 1565819940.000000
} ]
source_location = {
first_column = 1.000000
first_line = 116.000000
last_column = 26.000000
last_line = 116.000000
path = "/var/lib/icinga2/api/packages/shinken_migration/35629376-2ba1-476b-8dd6-173084a7a540/zones.d/global-templates/timeperiods.conf"
}
start_called = true
state_loaded = true
stop_called = false
templates = [ "onDuty", "legacy-timeperiod" ]
type = "TimePeriod"
update = {
arguments = [ "tp", "begin", "end" ]
deprecated = false
name = "Internal#LegacyTimePeriod"
side_effect_free = false
type = "Function"
}
valid_begin = 1565647200.000000
valid_end = 1565819940.000000
vars = null
version = 0.000000
zone = "global-templates"
}
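The two dumps above can be cross-checked mechanically: if the exclusion had been applied, offDuty's segments would not intersect onDuty's, yet they overlap substantially. A quick check (illustrative Python, using the segment epochs copied from the console output):

```python
# Segments copied from the two console dumps above (epoch seconds).
OFF_DUTY = [(1565647200, 1565733540), (1565733600, 1565819940)]
ON_DUTY = [(1565647200, 1565683140), (1565712060, 1565733540),
           (1565733600, 1565769540), (1565798460, 1565819940)]

def overlap(a, b):
    """Length of the intersection of two (begin, end) segments, in seconds."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

# Total time that is inside both offDuty and onDuty; anything > 0 means
# the exclude was never applied to offDuty's segments.
total = sum(overlap(a, b) for a in OFF_DUTY for b in ON_DUTY)
print(total, "seconds of overlap =", round(total / 3600, 1), "hours")
```

With these numbers the overlap is non-zero, which matches the observation that is_inside is true for both objects at the same time.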
Unfortunately I won't have the time to debug this for the moment :-/
It might be related to the serializer fixes for the segments, discussed and fixed with @Elias481 earlier. Though I'm in the midst of 2.11 testing and Icinga meetup Linz preparations, so it will take a while.
OK no problem I understand :-)
I don't think it is related to the serializer fixes in this case; at least I don't see any connection. And I have no idea how it could be in the "start_called" state without being initialized and without a timer having been started to refresh the segments. Also, I don't see exactly what paused means (but as far as I can tell it cannot have the effect that start_called is set while the time ranges are not initialized). I don't have an HA setup in place for testing and no time currently.
@darkweaver87 Please could you test whether the snapshot packages are still affected. If yes, please also test the packages from here – pick your OS from the "binary" column and then "Download" the "Job artifacts".
@Al2Klimov: OK, I will do, but I won't have the time in the next 2 months (at least), unfortunately. Might it be better to cover this in a unit test suite?
Don't worry, we have time. 🙂
OK anyway thanks for trying to fix it :-)
If the "Job artifacts" are already gone while you're going to test them, please let me know – I'll re-create them.
PING @darkweaver87
Hello,
Sorry but I don't work for my previous company anymore, thus I don't have a working environment to validate/invalidate this.
Rémi
@darkweaver87 Please could you test whether the snapshot packages are still affected. If yes, please also test the packages from here – pick your OS from the "binary" column and then "Download" the "Job artifacts".
I am experiencing the same issue in icinga2 2.10.3-2+deb10u1 (Debian Buster). Could I verify the fix?
Yes, with these packages:
https://git.icinga.com/packaging/deb-icinga2/-/jobs/153741/artifacts/download
ref/IP/44756