openhab-core
Rule file provider
- Create rule file provider for `JSON` files found in `conf/automation/rules`
- Create rule file provider for `YAML` files in accordance with the "new YAML format" (#3666)
- Create rule template provider for `YAML` files in accordance with the "new YAML format" (#3666)
- Fix `DELETE` event bug in `FolderObserver`
Description
This PR has changed quite a bit from when it was originally created, so I've updated the description.
The primary task of this PR is to provide file-based YAML and JSON rules. The JSON parser supports multiple rules in one file if they are supplied within an array, which means enclosed in `[` and `]` and separated by `,`.
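For illustration, a minimal file with two rules in such an array could look like this (the uids, Item names, and scripts are hypothetical; the module types follow the sample rules quoted later in this thread):

```json
[
  {
    "uid": "example:rule-1",
    "name": "First example rule",
    "triggers": [
      {
        "id": "1",
        "configuration": { "itemName": "DemoSwitch" },
        "type": "core.ItemStateChangeTrigger"
      }
    ],
    "conditions": [],
    "actions": [
      {
        "id": "2",
        "configuration": {
          "script": "console.log('rule 1 fired');",
          "type": "application/javascript"
        },
        "type": "script.ScriptAction"
      }
    ]
  },
  {
    "uid": "example:rule-2",
    "name": "Second example rule",
    "triggers": [],
    "conditions": [],
    "actions": []
  }
]
```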
YAML files also support rule templates; both rules and rule templates follow the "new YAML format" developed in #3666. There has been a separate RFC in #4797 discussing how to define the syntax for rules and rule templates. Rules use the `rules` element, while rule templates use the `ruleTemplates` element.
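As a hedged sketch of what such a YAML file might look like: the `rules` and `ruleTemplates` element names come from this thread, the `version` key and the uid-keyed maps reflect my reading of the #3666 format, and all names and values are hypothetical:

```yaml
version: 1
rules:
  example_rule:                 # key is the rule uid (hypothetical)
    name: Example rule
    triggers:
      - id: "1"
        type: core.ItemStateChangeTrigger
        configuration:
          itemName: DemoSwitch
    actions:
      - id: "2"
        type: script.ScriptAction
        configuration:
          type: application/javascript
          script: |
            console.log('rule fired');
ruleTemplates:
  example_template:             # same structure as a rule, plus
    label: Example template     # configDescriptions for the placeholders
```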
This PR is very loosely coupled to #4591 in that they are both split out as PRs from the same "work branch". The FolderObserver file watcher DELETE fix has been included here, since it was discovered after #4591 was submitted.
It should also be mentioned that the documentation currently claims support for JSON based rules, but no evidence that this exists, or has existed in previous versions, has been found in the code:
The automation engine reads rule json files from the `{openhab-dir}/automation/*.json` directory.
The documentation should probably be updated to specify `{openhab-dir}/automation/rules/*.json`
Original PR description below:
Rule file provider
- Create rule file provider for `JSON` and `YAML` files
- Fix `DELETE` event bug in `FolderObserver`

Description
The primary task of this PR is to provide file-based `YAML` and `JSON` rules. It supports multiple rules in one file if they are supplied within an array, which means enclosed in `[` and `]` and separated by `,`.

This PR is very loosely coupled to #4591 in that they are both split out as PRs from the same "work branch", and this PR uses the `AbstractJacksonYAMLParser` introduced in #4591. The `FolderObserver` file watcher `DELETE` fix has been included here, since it was discovered after #4591 was submitted.

There is one issue that might need to be discussed, and that is the location to look for the `YAML` and `JSON` files. There are two "obvious candidates", `conf/automation/rules` and `conf/rules`. The latter is the folder already in use for Rules DSL files, while the former isn't currently used (or automatically created when OH is installed). In either case, some minor changes to the documentation should be made as well, but it's hard to write that until the final location has been decided.

Currently, this PR uses `conf/rules`, but that's very easy to change. From a user's perspective, this might be the most logical, as they are still rules, although the format is different. It doesn't seem like this causes any conflicts, but a debug log message is created both from the Rules DSL parser (that it ignores the `.yaml` or `.json` file) and from the `AbstractFileProvider` (that it doesn't have a parser for `.rules` files). None of these are shown by default. The involved classes could be modified not to log these cases, as they are expected, but both are "generic in nature", and introducing exceptions for concrete file extensions doesn't really "fit".

If `conf/automation/rules` is used instead, the "conflict" of two parsers monitoring the same folder would be avoided, but it might be more confusing for the end user to have to place the rule files in two different locations. It should also be considered whether this folder should be created when the installation is set up, like many of the other folders.

It should also be mentioned that the documentation currently claims support for `JSON` based rules, but no evidence that this exists, or has existed in previous versions, has been found in the code:

The automation engine reads rule json files from the `{openhab-dir}/automation/*.json` directory.

If this actually is wrong, the documentation should be corrected.
The primary task of this PR is to provide file-based `YAML` and `JSON` rules.
We are already in the process of adding YAML for different kinds of things, with the ability to put everything in the same file. I don't know what you are doing exactly with this PR, but we have to take care to have something globally consistent, and also use the same YAML library. In case it makes no sense to consider the other discussions about YAML in the current context, you can forget my message.
We are already in the process of adding YAML for different kinds of things, with the ability to put everything in the same file. I don't know what you are doing exactly with this PR, but we have to take care to have something globally consistent, and also use the same YAML library.
I'm not familiar with this effort and exactly what it will end up doing, but this PR adds a rule file provider for both JSON and YAML. The YAML parser is the same as the one already used for rule templates, and it is the same YAML parser (Jackson) that I've found in other places where YAML is parsed in Core.
This isn't related to any UI or how MainUI deals with YAML. It simply deserializes an "object" in either YAML or JSON into an "OH Rule object".
What I need is some feedback on which folder to use. If conf/rules is preferred, this PR is ready. If not, I need to make some modifications.
You complain about no feedback on your PRs, but I see I already answered you. I am still thinking, and I am not alone, that we should have a global approach for YAML; the idea is the ability to put all sorts of things in our new YAML file format, including items, things, but also rules. If you propose to create another YAML file, I am not sure it is the best idea. Typically, we should simply add a "rules" element to our new YAML format. That may be the reason you got no other feedback.
Regarding your last question about the folder, sorry, I don't have the answer; rule management is one of the OH parts I know the least.
You complain about no feedback on your PRs, but I see I already answered you. I am still thinking, and I am not alone, that we should have a global approach for YAML; the idea is the ability to put all sorts of things in our new YAML file format, including items, things, but also rules. If you propose to create another YAML file, I am not sure it is the best idea. Typically, we should simply add a "rules" element to our new YAML format. That may be the reason you got no other feedback.
If that's the case, it should be made much more clear. I interpreted your comment more as a "please remember to think of this", not as a blocker.
To explain, this is a part of a larger work that I have isolated out as a separate PR to make things more manageable to review. So, I have other code that builds on the capabilities this provides.
Currently, there are only two sources of rules in standard OH: Rules DSL files and managed rules created in the UI. In addition, add-ons can of course inject rules any way they like, but I don't think this is done much except by the scripting add-ons.
This means that there is no way to provide unmanaged rules unless they are written in Rules DSL. The Rules DSL syntax doesn't support rule templates, so if you want to use rule templates, you must use managed rules only. This PR provides a way to create unmanaged rules of any type; it's not bound to Rules DSL (although the rules CAN use Rules DSL or any other scripting language) and it supports rule templates.
According to the current documentation, this already exists for JSON - but I've both tested it and searched everywhere in core for a trace of this, and it's simply not there. So, this in fact provides functionality that's already claimed to be there.
The reason for adding YAML in addition to JSON is because there's a "unique problem" with JSON when dealing with scripts. Almost all rules will contain at least one script, and since JSON doesn't allow multi-line strings, you have to provide the whole script as one long line separated by \n. That's not really practical, which is why YAML is very helpful, because it does have multi-line string support. If it wasn't for that, I'd rather see YAML burn in a certain place, because I just can't stand white-space based formats. I think they are highly "volatile" to work with, and not any easier to read or write. It takes very little time to get used to a properly formatted syntax, and people should invest that little time instead of going down the path of all the problems white-space based formats cause.
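To make the contrast concrete, here is the same (hypothetical) two-line script as a rule action, first as it must be written in JSON and then using YAML's literal block scalar:

```yaml
# In JSON, the whole script must be a single line with embedded \n escapes:
#   "script": "var x = items.DemoSwitch.state;\nconsole.log('state: ' + x);\n"
#
# In YAML, a literal block scalar (|) keeps the script readable:
configuration:
  type: application/javascript
  script: |
    var x = items.DemoSwitch.state;
    console.log('state: ' + x);
```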
The YAML parser used for this is the same YAML parser already used for rule templates. Rules and rule templates are "almost the same thing", a rule template is a rule with some placeholders not yet filled in. So, it would be very logical, to me at least, that they support the same syntax and have the same possibilities.
I tried to find information about this "global approach for YAML", and I didn't really find it. I read through some PRs that I thought might be related, I searched the forum, but I still don't know what the idea really is. As such, it's difficult for me to know how these things would work together or conflict.
If the idea is that you can have one huge YAML file with all the different kinds of configuration in it, I'd say that I personally wouldn't find that attractive. It's comparable to writing a program as one huge class/unit of continuous code instead of separating it into a logical structure that separates unrelated things. I can see that it can be handy in some cases; it might be handy to be able to define a Thing and the corresponding Item(s) in the same file. That's fine, but to go from there to forcing "everything" into the same file seems strange to me. I don't know if that's the idea, though.
But, in any case, this new "global parser" would have to implement a huge number of Providers. As such, it would have to implement RuleProvider to support rules. There's no conflict with having a standalone rule provider and one "combined provider" that also provides rules. Are you planning to remove all the existing standalone providers for different things when this "combined provider" is in place? If so, the rule provider in this PR could be removed along with all the others. I still don't quite see the conflict.
This means that there is no way to provide unmanaged rules unless they are written in Rules DSL. The Rules DSL syntax doesn't support rule templates, so if you want to use rule templates, you must use managed rules only. This PR provides a way to create unmanaged rules of any type; it's not bound to Rules DSL (although the rules CAN use Rules DSL or any other scripting language) and it supports rule templates. ... The YAML parser used for this is the same YAML parser already used for rule templates. Rules and rule templates are "almost the same thing"; a rule template is a rule with some placeholders not yet filled in. So it would be very logical, to me at least, that they support the same syntax and have the same possibilities.
The DSL rule syntax should be enhanced to support templates. As you explained, there is almost no difference between a rule and a rule template. This is something I could have a look at (but not soon).
I tried to find information about this "global approach for YAML", and I didn't really find it. I read through some PRs that I thought might be related, I searched the forum, but I still don't know what the idea really is. As such, it's difficult for me to know how these things would work together or conflict.
There is mainly a very big issue/RFC #3666, if I am not wrong, you even posted a message in it.
If the idea is that you can have one huge YAML file with all the different kind of configuration in it, I'd say that I personally wouldn't find that attractive. It's comparable to write a program as one huge class/unit of continuous code instead of separating it into a logical structure separate unrelated things. I can see that it can be handy in some cases, it might be handy to be able to define a Thing and the corresponding Item(s) in the same file. That's fine, but to go from there to forcing "everything" into the same file seems strange to me. I don't know if that's the idea though.
Putting several kinds of things in the same YAML file will just be an option, of course. You could also have one file per item, per thing, or per rule, if that is your preference. I am pinging @rkoshak as I believe he is one of the big sponsors of a YAML file format that could contain different kinds of information, and I believe his idea also covered rules.
But, in any case, this new "global parser" would have to implement a huge number of Providers. As such, it would have to implement RuleProvider to support rules. There's no conflict with having a standalone rule provider and one "combined provider" that also provides rules. Are you planning to remove all the existing standalone providers for different things when this "combined provider" is in place? If so, the rule provider in this PR could be removed along with all the others. I still don't quite see the conflict.
No, we add something new but there is no plan to remove existing providers.
My point is that you want to define a new YAML file format dedicated to rules and rule templates; I am just not sure it should be done outside our YAML model repository. And the syntax should be discussed to make it as user friendly as possible. I see in your PR that you made it aligned with the JSON object. @rkoshak: what is your feeling about that?
My suggestion at this time would be to just consider a JSON provider plus a potential extension of the DSL rules to support rule templates. But once again, that is only MY suggestion and not a decision against your proposal. Maybe there is something I do not understand well, as I am not familiar with rule templates.
These days I am finishing the support for items/metadata/channel links in the new YAML file format. The next step is probably support for rules and rule templates. The syntax will certainly be discussed in #3666.
The DSL rule syntax should be enhanced to support templates. As you explained, there is almost no difference between a rule and a rule template. This is something I could have a look at (but not soon).
I don't think that would be very practical, but I guess I'd have to make some examples to show what I mean. However, this PR allows the use of templates both for Rules DSL and other scripts. The reason that I don't think it's "practical" is that there's nowhere to store the "rule stub"/"rule configuration" in Rules DSL. This is stored in the configuration object of the rule itself, which doesn't "exist" in a Rules DSL context. Remember that a Rules DSL file is parsed into a Rule, and during that parsing, the content is split into trigger(s) and an action. The Rules DSL format is quite limited in that it doesn't support conditions and doesn't support multiple actions, for example. Neither does it allow storage of, for example, the configuration object. But the resulting object that is created by parsing a Rules DSL script does. With this PR, you can supply such an "already parsed" Rules DSL rule, where you can have as many triggers, conditions and actions as you want, in addition to said configuration.
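As a hedged sketch of what such an "already parsed" rule could carry (all module types, configuration keys, and Item names here are illustrative, not taken from the PR): a configuration object, a condition, and two actions, none of which can be expressed in a `.rules` file rule:

```yaml
configuration:          # rule-level configuration object: no Rules DSL equivalent
  threshold: 2
triggers:
  - id: "1"
    type: core.ItemStateChangeTrigger
    configuration:
      itemName: PowerItem
conditions:             # conditions are not supported by the Rules DSL syntax
  - id: "2"
    type: core.ItemStateCondition
    configuration:
      itemName: PowerItem
      operator: "<"
      state: "2"
actions:                # multiple actions; the first one is itself a DSL script
  - id: "3"
    type: script.ScriptAction
    configuration:
      type: application/vnd.openhab.dsl.rule
      script: |
        logInfo("demo", "below threshold")
  - id: "4"
    type: core.ItemCommandAction
    configuration:
      itemName: AlertSwitch
      command: "ON"
```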
There is mainly a very big issue/RFC https://github.com/openhab/openhab-core/issues/3666, if I am not wrong, you even posted a message in it.
OK, then I failed to grasp the full intention of #3666. I tried to read it, but it is extremely long, so I couldn't keep "full focus" all the way.
Putting several kinds of things in the same YAML file will just be an option, of course. You could also have one file per item, per thing, or per rule, if that is your preference. I am pinging @rkoshak as I believe he is one of the big sponsors of a YAML file format that could contain different kinds of information, and I believe his idea also covered rules.
Yes, but fundamentally, to make it possible to put several things in the same YAML file, you must define some syntax that allows them to be parsed as different objects. This PR does no such thing; it only tries to deserialize a Rule from either YAML or JSON. As such, I still don't think there is much conflict, although you could maybe argue that this provider might become redundant when #3666 is in place, depending on how it is actually implemented. But I still think this provider has its purpose as a complement to the rule template parser that is already there, not to mention that the documentation states that this functionality (minus YAML) already exists in OH.
My point is that you want to define a new YAML file format dedicated to rules and rule templates,
Again, I don't see it as "defining a format", as it's just a "raw mapping" of the actual structure of a Rule. There is no syntax that has any meaning specific to this provider, there is nothing for anybody to "learn". You can copy what you find in the "Code" tab in the UI and paste it in a file that will then be picked up by this provider.
My suggestion at this time would be to just consider a JSON provider plus a potential extension of the DSL rules to support rule templates.
The problem with that, as I see it, is how terribly inconvenient it is to work with scripts in JSON, having to edit/maintain an entire script that is one line, separated by \n. Humans aren't really suitable for that, so it would probably mean to have to write it in YAML or some other format, and then do the conversion to JSON manually, outside of OH. It makes much more sense that OH will do that conversion for you.
OK, then I failed to grasp the full intention of https://github.com/openhab/openhab-core/issues/3666. I tried to read it, but it is extremely long, so I couldn't keep "full focus" all the way.
#3666 is creating a standardized way to define everything that is currently definable via DSL (i.e. .items, .things, .persist, and .rules) using YAML. It's coupled with other PRs which add the ability to have OH translate between the formats, and UI updates which allow one to see the DSL version of a Thing (for example) and not just a YAML representation of the JSON.
Anything done on this PR needs to be reconciled with what is going on in #3666, because we don't want to inadvertently introduce multiple YAML approaches.
You can copy what you find in the "Code" tab in the UI and paste it in a file that will then be picked up by this provider.
With #3666 and related PRs the YAML you find in the code tab will not necessarily be a straight mapping of the JSON any more. It might not even be YAML any more.
The problem with that, as I see it, is how terribly inconvenient it is to work with scripts in JSON, having to edit/maintain an entire script that is one line, separated by \n. Humans aren't really suitable for that, so it would probably mean to have to write it in YAML or some other format, and then do the conversion to JSON manually, outside of OH. It makes much more sense that OH will do that conversion for you.
I think @lolodomo's intent is to suggest that the YAML part of this waits until more progress is made on #3666 and then support for YAML templates would be added as part of #3666 when they get to that point.
I am pinging @rkoshak as I believe he is one of the big sponsors of a YAML file format that could contain different kinds of information, and I believe his idea also covered rules.
I definitely want rules to be included as part of #3666, and I would very much like the ability to define, load, and use rule templates as well.
My point is that you want to define a new YAML file format dedicated to rules and rule templates; I am just not sure it should be done outside our YAML model repository. And the syntax should be discussed to make it as user friendly as possible. I see in your PR that you made it aligned with the JSON object. @rkoshak: what is your feeling about that?
I agree. I think we have an opportunity to take advantage of both PRs and come to a unified solution.
I think the main problem here is that the two efforts have been going on in parallel without coordination. I don't think we necessarily need to wait for one to finish before working on the other, but I think more coordination is needed to come out with a unified approach.
Anything done on this PR needs to be reconciled with what is going on in #3666, because we don't want to inadvertently introduce multiple YAML approaches.
I get that, I'm just not sure I understand exactly how to "reconcile" it. #3666 is about a new format, based on YAML, if I understand it correctly, while this PR merely converts YAML to JSON - just like what is already done with rule templates. I think there should be parity between rules and rule templates as well.
Are there any examples of the formats that have already been implemented by #3666, so that I can get a better understanding of the idea?
The current implementation for rule templates doesn't really "bind" much, since rule templates as a whole are undocumented. As such, very few know that this even exists or how to use it. #3666 could probably simply replace the YAML parsing for rule templates without actually "breaking" anything out there. The question is how far in the future this is.
It could be argued that the same could be done with this: merge it now with the "raw mapping" and then remove the YAML parsing when the result of #3666 is ready. As long as it's not documented, it won't see much use anyway, but it will at least be "usable" without having to do YAML -> JSON using an external converter for every change to the rules.
These are some of the test files I've used; it's the same two rules in JSON and YAML. See how utterly useless JSON is for scripts...
JSON:
[
{
"actions": [
{
"configuration": {
"script": "// Version 1.0\nvar {TimerMgr, helpers} = require('openhab_rules_tools');\nconsole.loggerName = 'org.openhab.automation.rules_tools.TimeStateMachine';\n//osgi.getService('org.apache.karaf.log.core.LogService').setLevel(console.loggerName, 'DEBUG');\n\nhelpers.validateLibraries('4.2.0', '2.0.3');\n\nconsole.debug('Starting state machine in ten seconds...');\n\n// Properties\nvar STATE_ITEM = \"DemoDateTime\";\nvar DT_GROUP = \"DemoSwitchGroup\";\nvar DAY_TYPES = ['custom', 'holiday', 'dayset', 'weekend', 'weekday', 'default'];\nvar NAMESPACE = 'tsm';\nvar USAGE = 'Time Based State Machine Usage:\\n'\n + 'All date times must be a member of ' + DT_GROUP + '.\\n'\n + 'Each member of the Group must have ' + NAMESPACE + ' Item metadata of the following format:\\n'\n + ' .items file: ' + NAMESPACE +'=\"STATE\"[type=\"daytype\", set=\"dayset\", file=\"uri\"]\\n'\n + \" UI YAML: use '\" + NAMESPACE + \"' for the namespace and metadata format:\\n\"\n + ' value: STATE\\n'\n + ' config:\\n'\n + ' type: daytype\\n'\n + ' set: dayset\\n'\n + ' file: uri\\n'\n + 'Where \"STATE\" is the state machine state that begins at the time stored in that Item, '\n + '\"daytype\" is one of \"default\", \"weekday\", \"weekend\", \"dayset\", \"holiday\", or \"custom\". '\n + 'If \"dayset\" is chosen for the type, the \"set\" property is required indicating the name of the '\n + 'custom dayset configured in Ephemeris. If \"custom\" is chosen as the type, the \"file\" property '\n + 'is required and should be the fully qualified path the the Ephemeris XML file with the custom '\n + 'holidays defined. 
The \"set\" and \"file\" properties are invalid when choosing any of the other '\n + '\"types\".';\n\n/**\n * Validates the passed in Item has valid NAMESPACE metadata.\n *\n * @param {string} itemName name of the Item to check\n * @throws exception if the metadata doesn't exist or is invalid\n */\nvar validateItemConfig = (itemName) => {\n const md = items[itemName].getMetadata()[NAMESPACE];\n\n if(md.value === undefined || md.value === null || md.value === '') {\n throw itemName + ' has malformed ' + NAMESPACE + ' metadata, no value found!';\n }\n\n const dayType = md.configuration['type'];\n if(!dayType) {\n throw itemName + ' has malformed ' + NAMESPACE + ' metadata, required \"type\" property is not found!';\n }\n\n if(dayType == 'dayset' && !md.configuration['set']) {\n throw itemName + ' has malformed ' + NAMESPACE + ' metadata, type is \"dayset\" but required \"set\" property is not found!';\n }\n\n if(dayType == 'custom' && !md.configuration['file']) {\n throw itemName + ' has malformed ' + NAMESPACE + ' metadata, type is \"custom\" but required \"file\" property is not found!';\n }\n\n if(!items[itemName].type.startsWith('DateTime')) {\n throw itemName + ' is not a DateTime Item!';\n }\n\n if(items[itemName].isUninitialized) {\n throw itemName + \" is not initialized!: \" + items[itemName].state;\n }\n\n console.debug(itemName+ ' is valid');\n};\n\n/**\n * Return all members of the DT_GROUP that has a \"type\" metadata configuration property that\n * matches the passed in type.\n *\n * @param {string} type the day type defined in the metadata we want to get the Items for\n * @returns {Array} all the Items with the matching type in the metadata\n */\nvar getItemsOfType = (type) => {\n const allItems = items[DT_GROUP].members;\n return allItems.filter( item => item.getMetadata()[NAMESPACE].configuration['type'] == type);\n};\n\n/**\n * Returns true if all the Items of the given type have a unique \"state\" value\n * in the metadata.\n *\n * @param {string} 
the day type\n * @returns {boolean} true if all states are unique, false otherwise\n */\nvar checkUniqueStates = (type) => {\n const allItems = getItemsOfType(type);\n const states = new Set(allItems.map(i => { return i.getMetadata()[NAMESPACE].value; }));\n return !allItems.length || allItems.length == states.size;\n};\n\n/**\n * Check that all Items are configured correctly.\n */\nvar validateAllConfigs = () => {\n console.debug('Validating Item types, Item metadata, and Group membership');\n\n // Check that all members of the Group have metadata\n const itemsWithMD = items[DT_GROUP].members.filter(item => item.getMetadata(NAMESPACE)).length;\n if(itemsWithMD != items[DT_GROUP].members.length) {\n const noMdItems = items[DT_GROUP].members.filter(item => !item.getMetadata(NAMESPACE));\n console.warn('The following Items do not have required ' + NAMESPACE + ' metadata: ' + noMdItems.map(item => item.name).join(', '));\n return false; // no sense on performing any additional tests\n }\n\n // Check each Item's metadata\n let isGood = helpers.checkGrpAndMetadata(NAMESPACE, DT_GROUP, validateItemConfig, USAGE);\n\n // Check the state item\n if(!items[STATE_ITEM]){\n console.warn('The state Item ' + STATE_ITEM + ' does not exist!');\n isGood = false;\n }\n\n if(!items[STATE_ITEM].type.startsWith('String')) {\n console.warn('The state Item ' + STATE_ITEM + ' is not a String Item!');\n isGood = false;\n }\n\n // Check to see if we have a default set of Items\n if(!getItemsOfType('default')) {\n console.warn('There are no \"default\" day type Items defined! 
Make sure you have all day types covered!');\n // we'll not invalidate if there are no \"default\" items\n }\n\n // Check that each data set has a unique state for each Item\n DAY_TYPES.forEach(type => {\n if(!checkUniqueStates(type)) {\n console.warn('Not all the metadata values for Items of type ' + type + ' are unique!');\n isGood = false;\n }\n })\n\n // Report if all configs are good or not\n if(isGood) {\n console.debug('All ' + NAMESPACE + ' Items are configured correctly');\n }\n return isGood;\n};\n\n/**\n * Pull the set of Items for today based on Ephemeris. The Ephemeris hierarchy is\n * - custom\n * - holiday\n * - dayset\n * - weeekend\n * - weekday\n * - default\n *\n * If there are no DateTime Items defined for today's type, null is returned.\n */\nvar getTodayItems = () => {\n // Get all the DateTime Items that might apply to today given what type of day it is\n // For example, if it's a weekend, there will be no weekday Items pulled. Whether or not\n // the entry in this dict has an array of Items determines whether today is of that day\n // type.\n const startTimes = [\n { 'type': 'custom', 'times' : getItemsOfType('custom').filter(item => actions.Ephemeris.isBankHoliday(0, item.getMetadata()[NAMESPACE].configuration['file'])) },\n { 'type': 'holiday', 'times' : (actions.Ephemeris.isBankHoliday()) ? getItemsOfType('holiday') : [] },\n { 'type': 'dayset', 'times' : getItemsOfType('dayset').filter(item => actions.Ephemeris.isInDayset(items.getMetadata()[NAMESPACE].configuration['set'])) },\n { 'type': 'weekend', 'times' : (actions.Ephemeris.isWeekend()) ? getItemsOfType('weekend') : [] },\n { 'type': 'weekday', 'times' : (!actions.Ephemeris.isWeekend()) ? 
getItemsOfType('weekday') : [] },\n { 'type': 'default', 'times' : getItemsOfType('default') }\n ];\n\n // Go through startTimes in order and choose the first one that has a non-empty list of Items\n const dayStartTimes = startTimes.find(dayset => dayset.times.length);\n\n if(dayStartTimes === null) {\n console.warn('No DateTime Items found for today');\n return null;\n }\n else {\n console.info('Today is a ' + dayStartTimes.type + ' day.');\n return dayStartTimes.times;\n }\n};\n\n/**\n * Returns a function called to transition the state machine from one state to the next\n *\n * @param {string} state the new state to transition to\n * @param {function} the function that transitions the state\n */\nvar stateTransitionGenerator = (state) => {\n return function() {\n console.info('Transitioning Time State Machine from ' + items[STATE_ITEM].state + ' to ' + state);\n items[STATE_ITEM].sendCommand(state);\n }\n}\n\n/**\n * Returns a function that generates the timers for all the passed in startTimes\n *\n * @param {Array} startTimes list of today's state start times\n * @param {timerMgr.TimerMgr} timers collection of timers\n * @returns {function} called to generate the timers to transition between the states\n */\nvar createTimersGenerator = (timers) => {\n return function() {\n\n if(validateAllConfigs()) {\n\n // Cancel the timers, skipping the debounce timer\n console.debug('Cancelling existing timers');\n timers.cancelAll();\n\n // Get the set of Items for today's state machine\n console.debug(\"Acquiring today's state start times\");\n const startTimes = getTodayItems();\n\n // Get the state and start time, sort them ignoring the date, skip the ones that have\n // already passed and create a timer to transition for the rest.\n console.debug('Creating timers for times that have not already passed');\n var mapped = startTimes.map(i => { return { 'state': i.getMetadata()[NAMESPACE].value,\n 'time' : time.toZDT(i.state).toToday() } });\n mapped.sort((a,b) => {\n 
if(a.time.isBefore(b.time)) return -1;\n else if(a.time.isAfter(b.time)) return 1;\n else return 0;\n })\n .filter(tod => tod.time.isAfter(time.toZDT()))\n .forEach(tod => {\n // TODO: see if we can move to rules instead of timers\n console.debug('Creating timer for ' + tod.state + ' at ' + tod.time);\n timers.check(tod.state, tod.time.toString(), stateTransitionGenerator(tod.state));\n });\n\n // Figure out the current time of day and move to that state if necessary\n var beforeTimes = mapped.sort((a,b) => {\n if(a.time.isAfter(b.time)) return -1;\n else if(a.time.isBefore(b.time)) return 1;\n else return 0;\n })\n .filter(tod => tod.time.isBefore(time.toZDT()));\n if(!beforeTimes.length) {\n console.debug(\"There is no date time for today before now, we can't know what the current state is, keeping the current time of day state of \" + items[STATE_ITEM].state + \".\");\n }\n else {\n const currState = beforeTimes[0].state\n const stateItem = items[STATE_ITEM];\n console.info('The current state is ' + currState);\n if(stateItem.state != currState) stateItem.sendCommand(currState)\n }\n }\n else {\n console.warn('The config is not valid, cannot proceed!');\n }\n\n };\n};\n\nvar timers = cache.private.get('timers', () => TimerMgr());\n\n// Wait a minute after the last time the rule is triggered to make sure all Items are done changing (e.g.\n// Astro Items) before calculating the new state.\ntimers.check('debounce',\n 'PT10S',\n createTimersGenerator(timers),\n true,\n () => { console.debug('Flapping detected, waiting before creating timers for today'); });\n",
"type": "application/javascript"
},
"id": "3",
"inputs": {
},
"type": "script.ScriptAction"
}
],
"conditions": [
],
"configDescriptions": [
{
"context": "item",
"description": "String Item that holds the current time of day's state.",
"filterCriteria": [
{
"name": "type",
"value": "String"
}
],
"label": "Time of Day State Item",
"name": "timeOfDay",
"required": true,
"type": "TEXT"
},
{
"context": "item",
"description": "Has as members all the DateTime Items that define time of day states.",
"filterCriteria": [
{
"name": "type",
"value": "Group"
}
],
"label": "Times of Day Group",
"name": "timesOfDayGrp",
"required": true,
"type": "TEXT"
},
{
"description": "The Item metadata namespace (e.g. \"tsm\").",
"label": "Time of Day Namespace",
"name": "namespace",
"required": true,
"type": "TEXT"
}
],
"description": "Creates timers to transition a state Item to a new state at defined times of day.",
"name": "Time Based State Machine rule",
"triggers": [
{
"configuration": {
"groupName": "DemoSwitchGroup"
},
"id": "1",
"type": "core.GroupStateChangeTrigger"
},
{
"configuration": {
"startlevel": 100
},
"id": "2",
"type": "core.SystemStartlevelTrigger"
},
{
"configuration": {
"time": "00:05"
},
"id": "4",
"type": "timer.TimeOfDayTrigger"
}
],
"uid": "rules_tools:tsm4"
},
{
"actions": [
{
"configuration": {
"script": "var from = parseFloat(oldState.toString().split(' ')[0]);\nvar to = parseFloat(newState.toString().split(' ')[0]);\n\nprint(from + '>' + to);\n\nif (to < 2 && from >= 2) {\n events.sendCommand('DemoSwitch', 'LSELECT');\n}\n",
"type": "application/javascript"
},
"id": "2",
"inputs": {
},
"type": "script.ScriptAction"
}
],
"conditions": [
],
"configDescriptions": [
{
"context": "item",
"description": "Item that holds the power (in watts) of the washing machine. Can be a quantity type (Number:Power).",
"label": "Power Item",
"name": "powerItem",
"required": true,
"type": "TEXT"
},
{
"defaultValue": 2,
"description": "When the power measurement was at or above the threshold and crosses below it, trigger the alert.",
"label": "Threshold",
"name": "threshold",
"required": true,
"type": "DECIMAL"
},
{
"context": "item",
"description": "Item to send a command to when the measured power gets below the threshold. For instance, a Hue light advanced Alert channel.",
"label": "Alert Item",
"name": "alertItem",
"required": true,
"type": "TEXT"
},
{
"defaultValue": "LSELECT",
"description": "Command to send to the alert item (for an item linked to a Hue light alert channel, LSELECT will flash the light for a few seconds).",
"label": "Alert Command",
"name": "alertCommand",
"required": true,
"type": "TEXT"
}
],
"description": "This will monitor the power consumption of a washing machine and send an alert command when it gets below a threshold, meaning it has finished.",
"name": "Alert when Washing Machine Finished rule",
"triggers": [
{
"configuration": {
"itemName": "CurrentPower",
"state": ""
},
"id": "1",
"type": "core.ItemStateChangeTrigger"
}
],
"uid": "ysc:washing_machine_alert2.4"
}
]
YAML:
- actions:
- configuration:
script: >
// Version 1.0
var {TimerMgr, helpers} = require('openhab_rules_tools');
console.loggerName = 'org.openhab.automation.rules_tools.TimeStateMachine';
//osgi.getService('org.apache.karaf.log.core.LogService').setLevel(console.loggerName, 'DEBUG');
helpers.validateLibraries('4.2.0', '2.0.3');
console.debug('Starting state machine in ten seconds...');
// Properties
var STATE_ITEM = "DemoDateTime";
var DT_GROUP = "DemoSwitchGroup";
var DAY_TYPES = ['custom', 'holiday', 'dayset', 'weekend',
'weekday', 'default'];
var NAMESPACE = 'tsm';
var USAGE = 'Time Based State Machine Usage:\n'
+ 'All date times must be a member of ' + DT_GROUP + '.\n'
+ 'Each member of the Group must have ' + NAMESPACE + ' Item metadata of the following format:\n'
+ ' .items file: ' + NAMESPACE +'="STATE"[type="daytype", set="dayset", file="uri"]\n'
+ " UI YAML: use '" + NAMESPACE + "' for the namespace and metadata format:\n"
+ ' value: STATE\n'
+ ' config:\n'
+ ' type: daytype\n'
+ ' set: dayset\n'
+ ' file: uri\n'
+ 'Where "STATE" is the state machine state that begins at the time stored in that Item, '
+ '"daytype" is one of "default", "weekday", "weekend", "dayset", "holiday", or "custom". '
+ 'If "dayset" is chosen for the type, the "set" property is required indicating the name of the '
+ 'custom dayset configured in Ephemeris. If "custom" is chosen as the type, the "file" property '
+ 'is required and should be the fully qualified path to the Ephemeris XML file with the custom '
+ 'holidays defined. The "set" and "file" properties are invalid when choosing any of the other '
+ '"types".';
/**
* Validates the passed in Item has valid NAMESPACE metadata.
*
* @param {string} itemName name of the Item to check
* @throws exception if the metadata doesn't exist or is invalid
*/
var validateItemConfig = (itemName) => {
const md = items[itemName].getMetadata()[NAMESPACE];
if(md.value === undefined || md.value === null || md.value === '') {
throw itemName + ' has malformed ' + NAMESPACE + ' metadata, no value found!';
}
const dayType = md.configuration['type'];
if(!dayType) {
throw itemName + ' has malformed ' + NAMESPACE + ' metadata, required "type" property is not found!';
}
if(dayType == 'dayset' && !md.configuration['set']) {
throw itemName + ' has malformed ' + NAMESPACE + ' metadata, type is "dayset" but required "set" property is not found!';
}
if(dayType == 'custom' && !md.configuration['file']) {
throw itemName + ' has malformed ' + NAMESPACE + ' metadata, type is "custom" but required "file" property is not found!';
}
if(!items[itemName].type.startsWith('DateTime')) {
throw itemName + ' is not a DateTime Item!';
}
if(items[itemName].isUninitialized) {
throw itemName + " is not initialized!: " + items[itemName].state;
}
console.debug(itemName+ ' is valid');
};
/**
* Return all members of the DT_GROUP that have a "type" metadata configuration property that
* matches the passed in type.
*
* @param {string} type the day type defined in the metadata we want to get the Items for
* @returns {Array} all the Items with the matching type in the metadata
*/
var getItemsOfType = (type) => {
const allItems = items[DT_GROUP].members;
return allItems.filter( item => item.getMetadata()[NAMESPACE].configuration['type'] == type);
};
/**
* Returns true if all the Items of the given type have a unique "state" value
* in the metadata.
*
* @param {string} type the day type
* @returns {boolean} true if all states are unique, false otherwise
*/
var checkUniqueStates = (type) => {
const allItems = getItemsOfType(type);
const states = new Set(allItems.map(i => { return i.getMetadata()[NAMESPACE].value; }));
return !allItems.length || allItems.length == states.size;
};
/**
* Check that all Items are configured correctly.
*/
var validateAllConfigs = () => {
console.debug('Validating Item types, Item metadata, and Group membership');
// Check that all members of the Group have metadata
const itemsWithMD = items[DT_GROUP].members.filter(item => item.getMetadata(NAMESPACE)).length;
if(itemsWithMD != items[DT_GROUP].members.length) {
const noMdItems = items[DT_GROUP].members.filter(item => !item.getMetadata(NAMESPACE));
console.warn('The following Items do not have required ' + NAMESPACE + ' metadata: ' + noMdItems.map(item => item.name).join(', '));
return false; // no sense on performing any additional tests
}
// Check each Item's metadata
let isGood = helpers.checkGrpAndMetadata(NAMESPACE, DT_GROUP, validateItemConfig, USAGE);
// Check the state item
if(!items[STATE_ITEM]){
console.warn('The state Item ' + STATE_ITEM + ' does not exist!');
isGood = false;
}
if(items[STATE_ITEM] && !items[STATE_ITEM].type.startsWith('String')) {
console.warn('The state Item ' + STATE_ITEM + ' is not a String Item!');
isGood = false;
}
// Check to see if we have a default set of Items
if(!getItemsOfType('default')) {
console.warn('There are no "default" day type Items defined! Make sure you have all day types covered!');
// we'll not invalidate if there are no "default" items
}
// Check that each data set has a unique state for each Item
DAY_TYPES.forEach(type => {
if(!checkUniqueStates(type)) {
console.warn('Not all the metadata values for Items of type ' + type + ' are unique!');
isGood = false;
}
})
// Report if all configs are good or not
if(isGood) {
console.debug('All ' + NAMESPACE + ' Items are configured correctly');
}
return isGood;
};
/**
* Pull the set of Items for today based on Ephemeris. The Ephemeris hierarchy is
* - custom
* - holiday
* - dayset
* - weekend
* - weekday
* - default
*
* If there are no DateTime Items defined for today's type, null is returned.
*/
var getTodayItems = () => {
// Get all the DateTime Items that might apply to today given what type of day it is
// For example, if it's a weekend, there will be no weekday Items pulled. Whether or not
// the entry in this dict has an array of Items determines whether today is of that day
// type.
const startTimes = [
{ 'type': 'custom', 'times' : getItemsOfType('custom').filter(item => actions.Ephemeris.isBankHoliday(0, item.getMetadata()[NAMESPACE].configuration['file'])) },
{ 'type': 'holiday', 'times' : (actions.Ephemeris.isBankHoliday()) ? getItemsOfType('holiday') : [] },
{ 'type': 'dayset', 'times' : getItemsOfType('dayset').filter(item => actions.Ephemeris.isInDayset(item.getMetadata()[NAMESPACE].configuration['set'])) },
{ 'type': 'weekend', 'times' : (actions.Ephemeris.isWeekend()) ? getItemsOfType('weekend') : [] },
{ 'type': 'weekday', 'times' : (!actions.Ephemeris.isWeekend()) ? getItemsOfType('weekday') : [] },
{ 'type': 'default', 'times' : getItemsOfType('default') }
];
// Go through startTimes in order and choose the first one that has a non-empty list of Items
const dayStartTimes = startTimes.find(dayset => dayset.times.length);
if(dayStartTimes === undefined) {
console.warn('No DateTime Items found for today');
return null;
}
else {
console.info('Today is a ' + dayStartTimes.type + ' day.');
return dayStartTimes.times;
}
};
/**
* Returns a function called to transition the state machine from one state to the next
*
* @param {string} state the new state to transition to
* @returns {function} the function that transitions the state
*/
var stateTransitionGenerator = (state) => {
return function() {
console.info('Transitioning Time State Machine from ' + items[STATE_ITEM].state + ' to ' + state);
items[STATE_ITEM].sendCommand(state);
}
}
/**
* Returns a function that generates the timers for all the passed in startTimes
*
* @param {Array} startTimes list of today's state start times
* @param {timerMgr.TimerMgr} timers collection of timers
* @returns {function} called to generate the timers to transition between the states
*/
var createTimersGenerator = (timers) => {
return function() {
if(validateAllConfigs()) {
// Cancel the timers, skipping the debounce timer
console.debug('Cancelling existing timers');
timers.cancelAll();
// Get the set of Items for today's state machine
console.debug("Acquiring today's state start times");
const startTimes = getTodayItems();
// Get the state and start time, sort them ignoring the date, skip the ones that have
// already passed and create a timer to transition for the rest.
console.debug('Creating timers for times that have not already passed');
var mapped = startTimes.map(i => { return { 'state': i.getMetadata()[NAMESPACE].value,
'time' : time.toZDT(i.state).toToday() } });
mapped.sort((a,b) => {
if(a.time.isBefore(b.time)) return -1;
else if(a.time.isAfter(b.time)) return 1;
else return 0;
})
.filter(tod => tod.time.isAfter(time.toZDT()))
.forEach(tod => {
// TODO: see if we can move to rules instead of timers
console.debug('Creating timer for ' + tod.state + ' at ' + tod.time);
timers.check(tod.state, tod.time.toString(), stateTransitionGenerator(tod.state));
});
// Figure out the current time of day and move to that state if necessary
var beforeTimes = mapped.sort((a,b) => {
if(a.time.isAfter(b.time)) return -1;
else if(a.time.isBefore(b.time)) return 1;
else return 0;
})
.filter(tod => tod.time.isBefore(time.toZDT()));
if(!beforeTimes.length) {
console.debug("There is no date time for today before now, we can't know what the current state is, keeping the current time of day state of " + items[STATE_ITEM].state + ".");
}
else {
const currState = beforeTimes[0].state
const stateItem = items[STATE_ITEM];
console.info('The current state is ' + currState);
if(stateItem.state != currState) stateItem.sendCommand(currState)
}
}
else {
console.warn('The config is not valid, cannot proceed!');
}
};
};
var timers = cache.private.get('timers', () => TimerMgr());
// Wait ten seconds after the last time the rule is triggered to make sure all Items
// are done changing (e.g. Astro Items) before calculating the new state.
timers.check('debounce',
'PT10S',
createTimersGenerator(timers),
true,
() => { console.debug('Flapping detected, waiting before creating timers for today'); });
type: application/javascript
id: '3'
inputs: {}
type: script.ScriptAction
conditions: []
configDescriptions:
- context: item
description: String Item that holds the current time of day's state.
filterCriteria:
- name: type
value: String
label: Time of Day State Item
name: timeOfDay
required: true
type: TEXT
- context: item
description: Has as members all the DateTime Items that define time of day states.
filterCriteria:
- name: type
value: Group
label: Times of Day Group
name: timesOfDayGrp
required: true
type: TEXT
- description: The Item metadata namespace (e.g. "tsm").
label: Time of Day Namespace
name: namespace
required: true
type: TEXT
description: >-
Creates timers to transition a state Item to a new state at defined times of
day.
name: Time Based State Machine rule
triggers:
- configuration:
groupName: DemoSwitchGroup
id: '1'
type: core.GroupStateChangeTrigger
- configuration:
startlevel: 100
id: '2'
type: core.SystemStartlevelTrigger
- configuration:
time: '00:05'
id: '4'
type: timer.TimeOfDayTrigger
uid: 'rules_tools:tsm5'
- actions:
- configuration:
script: |
var from = parseFloat(oldState.toString().split(' ')[0]);
var to = parseFloat(newState.toString().split(' ')[0]);
print(from + '>' + to);
if (to < 2 && from >= 2) {
events.sendCommand('DemoSwitch', 'LSELECT');
}
type: application/javascript
id: '2'
inputs: {}
type: script.ScriptAction
conditions: []
configDescriptions:
- context: item
description: >-
Item that holds the power (in watts) of the washing machine. Can be a
quantity type (Number:Power).
label: Power Item
name: powerItem
required: true
type: TEXT
- defaultValue: 2
description: >-
When the power measurement was at or above the threshold and crosses
below it, trigger the alert.
label: Threshold
name: threshold
required: true
type: DECIMAL
- context: item
description: >-
Item to send a command to when the measured power gets below the
threshold. For instance, a Hue light advanced Alert channel.
label: Alert Item
name: alertItem
required: true
type: TEXT
- defaultValue: LSELECT
description: >-
Command to send to the alert item (for an item linked to a Hue light
alert channel, LSELECT will flash the light for a few seconds).
label: Alert Command
name: alertCommand
required: true
type: TEXT
description: >-
This will monitor the power consumption of a washing machine and send an
alert command when it gets below a threshold, meaning it has finished.
name: Alert when Washing Machine Finished rule
triggers:
- configuration:
itemName: CurrentPower
state: ''
id: '1'
type: core.ItemStateChangeTrigger
uid: 'ysc:washing_machine_alert2.5'
Are there any examples of the formats that have already been implemented by https://github.com/openhab/openhab-core/issues/3666, so that I can get a better understanding of the idea?
I think Items and Things are done but rules haven't been started yet AFAIK.
The question is how far into the future this is.
The goal is that #3666 will be part of OH 5.0. That seems achievable.
Are there any examples of the formats that have already been implemented by https://github.com/openhab/openhab-core/issues/3666, so that I can get a better understanding of the idea?
Yes, you can find it in #4691 and #4776.
The question is how far into the future this is.
The goal is that #3666 will be part of OH 5.0. That seems achievable.
Yes, but you will not have everything supported in the YAML format in OH 5.0. The timeframe is too short.
It could be argued that the same could be done with this. Merge it now with the "raw mapping" and then remove the YAML parsing when the result of https://github.com/openhab/openhab-core/issues/3666 is ready. As long as it's not documented, it won't see much use anyway, but it will at least be "usable" without having to do YAML -> JSON using an external converter for any change to the rules.
That is a possible option.
Yes, you can find it in #4691 and #4776.
Thanks.
I'm trying to debug YamlModelRepositoryImpl to see how the files are parsed (I find that easier than browsing through related and unrelated code to get my bearings), but I can't get it to start (breakpoints don't trigger, there's nothing logged by it in the log). Is it disabled somehow in current main, and if so, how do I "enable" it?
In Eclipse, you need to add org.openhab.core.model.yaml. Probably something to change so it is included by default.
In Eclipse, you need to add org.openhab.core.model.yaml
Add where? To the "demo app" run requirements?
The best is to discuss the syntax first to get an agreement.
The best is to discuss the syntax first to get an agreement.
I just want to run/debug to understand how it works, including how the files are parsed.
I just want to run/debug to understand how it works, including how the files are parsed.
Files are parsed in class YamlModelRepositoryImpl and the resulting DTOs for each type of element (things, items, tags, ...) are "distributed" to providers. So to add a new kind of element, you mainly need to add a DTO class plus a provider in charge of handling the DTOs and providing the final elements (things, items, ...) to one of our existing registries.
Files are parsed in class YamlModelRepositoryImpl and the resulting DTOs for each type of element (things, items, tags, ...) are "distributed" to providers. So to add a new kind of element, you mainly need to add a DTO class plus a provider in charge of providing the elements to a registry.
Yes, I've gotten the basic idea of this. There are some details that aren't clear to me though, which I think I will figure out quickly using the debugger. That is, things like the "watch folder" where these files are picked up, and how the object itself is deserialized. I mean, whether it only supports "declared" fields, or whether it will support anything that can be deserialized. Likewise, whether there is some kind of default mechanism for unspecified fields.
When it comes to supporting e.g. rules, using @YamlElementName("rules") is in itself quite obvious, so I'm more interested in the details. I usually find that figuring those things out using a debugger is more efficient than asking lots and lots of questions 😉
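The "declared fields only vs. anything deserializable" question above can be illustrated outside of Jackson. The following is only an analogy in Python, not what YamlModelRepositoryImpl actually does: it contrasts a strict policy (unknown keys are an error) with a lenient one (unknown keys are dropped, and unspecified fields fall back to defaults). The `RuleDto` field names here are illustrative, taken from the rule examples in this thread.

```python
from dataclasses import dataclass, field, fields

@dataclass
class RuleDto:
    # Declared fields; unspecified ones fall back to defaults.
    name: str = ""
    visibility: str = "VISIBLE"
    config: dict = field(default_factory=dict)

def from_mapping(data: dict, strict: bool) -> RuleDto:
    declared = {f.name for f in fields(RuleDto)}
    unknown = set(data) - declared
    if unknown and strict:
        # "Declared fields only": unknown keys are an error.
        raise ValueError(f"unknown keys: {sorted(unknown)}")
    # Lenient: silently drop unknown keys, keep defaults for missing ones.
    return RuleDto(**{k: v for k, v in data.items() if k in declared})

# Lenient mode ignores "bogus" and applies the default visibility.
lenient = from_mapping({"name": "Basic Rule", "bogus": 1}, strict=False)
```

Which of the two policies the YAML model repository follows (and how defaults are handled) is exactly the kind of detail the debugger should reveal.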
Adding it to "Run Requirements" did it, thanks.
I now see that it's watching the whole of conf for .yaml files, except the automation subfolder. Has this been discussed and decided to be the "best" solution, or is it somewhat preliminary?
I'm thinking that it might lead to some potential trouble if this parser claims "all" YAML files, given that YAML is used for so many things these days. There can be e.g. add-ons that want to use YAML for configuration or similar. Given that this is a "specific format" built upon YAML, it might be better to come up with an entirely new extension and leave .yaml to general use. That way, this parser could "more rightfully" require that the specific structure with element names etc. is followed for these files, instead of throwing a tantrum over files that are meant for other things.
I now see that it's watching the whole of conf for .yaml files, except the automation subfolder. Has this been discussed and decided to be the "best" solution, or is it somewhat preliminary?
It was done because YAML files could be encountered with one of the rule engines, I believe, but I don't remember the details. My idea was rather to watch the usual sub-folders like conf/tags, conf/items, conf/things, ... but as there was a wish to potentially put different stuff in the same YAML file, a YAML file containing items and things for example could be put anywhere in the conf folder.
Normally, this conf folder should contain only configuration files for openHAB and we know which ones.
There can be e.g addons that want to use YAML for configuration or similar.
But add-ons never read from the conf folder. Is it something required for automation engines?
Given that this is a "specific format" built upon YAML, it might be better to come up with an entirely new extension and leave .yaml to general use.
I am not in favour of that because having the yaml extension has many advantages, like being properly supported by editors.
Normally, this conf folder should contain only configuration files for openHAB and we know which ones.
Yes, my point is that this isn't YAML the "format", it's a particular form of YAML with very specific rules. By claiming all YAML files in all the configuration tree, you basically exclude using YAML for any other kind of configuration in the future.
But add-ons never read from the conf folder. Is it something required for automation engines?
Some certainly do - I don't have a great overview of add-ons, but I've seen several of the scripting languages read from conf subfolders (which is probably why the automation folder was excluded). But, in addition to what exists now, why make it so that this won't be a possibility in the future either?
I am not in favour of that because having the yaml extension has many advantages, like being properly supported by editors.
I see that point, although most editors can easily be told what format to "interpret as" when they come across an unknown extension. But maybe a dedicated subfolder of conf where files read by this parser would live instead? Users could create further subfolders there to organize things as they like.
I've hacked together a very rudimentary rule parser for the "new" YAML system. It's not finished by a long shot, but it does work.
@lolodomo I've used the Thing files as a starting point, and tweaked from there. As such, there is a lot of stuff I need to remove. I'm currently wondering about the ReadyTracker implementation. Is that there merely to know when the ThingTypes are ready, or does it serve further purpose(s)? The rule provider doesn't need the ThingTypes, so if that's the case, I can just remove ReadyTracker.
Yes, certainly. I did not use it for items for example.
You could have looked at the YamlSemanticTagProvider, which is much simpler.
I think I have the basics in place now. This file works fine:
version: 2
rules:
basicrule:
name: Basic Rule
visibility: VISIBLE
config: {}
triggers:
- id: "1"
config:
startlevel: 100
type: core.SystemStartlevelTrigger
- id: "2"
config:
time: "14:05"
type: timer.TimeOfDayTrigger
conditions:
- inputs: {}
id: "5"
config:
offset: 0
type: ephemeris.WeekdayCondition
actions:
- inputs: {}
id: "3"
config:
itemName: SleepSetTemperature
command: "21.0"
type: core.ItemCommandAction
- inputs: {}
id: "4"
config:
sink: webaudio
text: The sleep temperature has been set
type: media.SayAction
Thus far I've used field names from the Core Java objects, except for configuration which I have changed to config. I did this because it seems to be what has been agreed upon for the other YAML formats..?
I've read and read in #3666, but there's just so much text there that it's hard to find things. Is there a list somewhere of words that should be shortened? Or is this decided on a "per case" basis? Personally, I think it complicates things a lot that the same fields should have different names in different places, but I guess it's deemed too much work to type out some of these things. What about e.g. configurationDescriptions? Is there a "lazy" form of that already in use somewhere?
Except for figuring out exactly which words should be translated/mapped, like configuration -> config, I think the only things missing are the validation (what should constitute a "valid" rule) and the formalities (JavaDocs, Spotless compliance etc.).
I've seen mention of this "generator" to convert between formats, but as far as I can understand, it doesn't apply to rules. You can't "convert" a rule between Rules DSL and some other scripting language. You can of course "convert" a .rules file into a YAML file where the "action" is still Rules DSL code, but that is effectively done when the rule is parsed by the Rules DSL provider. But, since this format is different from what would be produced by YAMLification of the Java object (config != configuration, uid being dropped and placed as the "map key" instead), there should probably be code somewhere to produce "compliant code" as well. I'm not sure where this is though.
Triggers, conditions and actions all have their own IDs, and are currently presented as an array of objects where the identifier is a field in the object. Should this also be transformed into a "map structure" where the identifier is extracted and placed a level above the rest? Is the idea that this should be done at every level down the hierarchy?
Should this also be transformed into a "map structure" where the identifier is extracted and placed a level above the rest? Is the idea that this should be done on any level down the hierarchy?
yes - if it's the unique id / name it should be a map.
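Applied to the triggers of the "basicrule" example above, one possible reading of that would be the fragment below. This is hypothetical, not a confirmed syntax from #3666: whether nested module ids really become map keys, and whether they stay strings, are assumptions here.

```yaml
triggers:
  "1":
    config:
      startlevel: 100
    type: core.SystemStartlevelTrigger
  "2":
    config:
      time: "14:05"
    type: timer.TimeOfDayTrigger
```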
rules:
basicrule:
name: Basic Rule
Is "basicrule" the type or the id of the rule? Or is "name" the unique identifier?
Is "basicrule" the type or the id of the rule? Or is "name" the unique identifier?
It's the UID.
yes - if it's the unique id / name it should be a map.
There's something I really don't understand here. How are users supposed to make the YAML code? Making it this way means that they can't copy/paste what's in the "Code" tab, so as far as I can tell, the only option is to construct it by hand. Is that really "user friendly"?
When looking at the code in latest main for Things, the "Code" tab still produces the "array design", not the "map design". So, how exactly are users supposed to figure this out?
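The gap between the two designs is at least mechanical. A minimal sketch (Python, purely for illustration; the function name `to_map_design` and the assumption that only two transformations are needed - id extracted as the map key, and configuration renamed to config - are mine, not part of #3666) of converting the "Code" tab array design into the map design:

```python
def to_map_design(modules: list) -> dict:
    """Convert the UI "Code" tab array design ([{id, configuration, ...}])
    into the map design keyed by id, renaming configuration -> config."""
    result = {}
    for module in modules:
        module = dict(module)  # shallow copy so the input is not mutated
        key = module.pop("id")
        if "configuration" in module:
            module["config"] = module.pop("configuration")
        result[key] = module
    return result

# Array design, as currently produced by the UI "Code" tab.
triggers = [
    {"id": "1", "configuration": {"startlevel": 100}, "type": "core.SystemStartlevelTrigger"},
    {"id": "2", "configuration": {"time": "14:05"}, "type": "timer.TimeOfDayTrigger"},
]
map_design = to_map_design(triggers)
```

If the "Code" tab keeps emitting the array design, something like this transformation would have to live either in the UI or in a converter tool before users can copy/paste rules into the new file format.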