grules
new version
I know this is a dramatic rewrite, but it still includes the same concepts. Feel free to just be like "nah dog" and that will be fine.
But here are the problems in the current version that needed changing:
Problem 1
No support for using a path to evaluate arrays of objects.
Example:
```json
{
  "friends": [
    {"first": "stephen", "last": "stanton"},
    {"first": "jared", "last": "piedt"},
    {"first": "andrew", "last": "coulter"}
  ]
}
```
Solution
You can now build a rule that runs your comparators against any of the fields in your array (utilizing gjson syntax).
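To make that concrete, here is a quick sketch of what a gjson `#` path resolves to against the example document above (using gjson directly; this isn't grules code):

```go
package main

import (
	"fmt"

	"github.com/tidwall/gjson"
)

func main() {
	doc := `{"friends": [
		{"first": "stephen", "last": "stanton"},
		{"first": "jared", "last": "piedt"},
		{"first": "andrew", "last": "coulter"}
	]}`

	// "#" fans out over every element of the array, so a comparator
	// can be checked against that field on each friend.
	firsts := gjson.Get(doc, "friends.#.first")
	fmt.Println(firsts) // ["stephen","jared","andrew"]
}
```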
Problem 2
No ability to build a multi-legged decision tree.
If you wanted something that said "evaluate to true IF (this OR that is true) OR (that AND this is true)", you couldn't express it. The composites in the current library only evaluate as an AND, forcing you to create multiple engines and do the OR yourself.
You also could not have any depth to your rules.
Solution
Rules can now be composed of other rules, allowing a true binary decision tree capable of whatever you wanna build.
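For illustration, a nested rule expressing "(this OR that) OR (that AND this)" might look something like the sketch below. The field names (`operator`, `rules`, `comparator`, `path`, `value`) are only my guess at a shape, not necessarily the exact schema in this PR:

```json
{
  "operator": "or",
  "rules": [
    {
      "operator": "or",
      "rules": [
        { "comparator": "eq", "path": "friends.0.first", "value": "stephen" },
        { "comparator": "eq", "path": "friends.0.first", "value": "jared" }
      ]
    },
    {
      "operator": "and",
      "rules": [
        { "comparator": "eq", "path": "friends.0.last", "value": "stanton" },
        { "comparator": "eq", "path": "friends.0.first", "value": "stephen" }
      ]
    }
  ]
}
```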
Extras
Fixing these problems required breaking changes anyway, so I went ahead and added a few extras that help.
No more manual marshalling
Before, the user had to pass in a `map[string]interface{}`; this is no longer required, and no engine needs to be built. One simple function is all that is required:

```go
func Evaluate(json string, rule string) (bool, string)
```
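A minimal usage sketch (the module path assumes the upstream repo, and the rule body is an illustrative placeholder rather than the exact schema):

```go
package main

import (
	"fmt"

	"github.com/huttotw/grules"
)

func main() {
	doc := `{"friends": [{"first": "stephen", "last": "stanton"}]}`

	// Placeholder rule body; see the readme for the real schema.
	rule := `{"comparator": "eq", "path": "friends.0.first", "value": "stephen"}`

	ok, reason := grules.Evaluate(doc, rule)
	if !ok {
		// reason explains WHY the evaluation failed (see Feedback below).
		fmt.Println("rule failed:", reason)
	}
}
```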
Feedback
Previously, when you evaluated, it would just return a boolean. You'll notice in the signature above that a string also gets returned. If false is returned, the reason will say WHY the evaluation was false. This will be super helpful for debugging and for creating new rules.
You'll notice it isn't implemented yet, though. I didn't have time to dig into that, but it shouldn't take long. I just added it to the signature now so there won't be another breaking change when it is implemented.
gjson
This isn't an extra that I added, but this rewrite is built on the gjson library, which is super feature rich and can be leveraged alongside this library.
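For example, queries and counts come along for free with gjson paths. A small sketch, again using gjson directly:

```go
package main

import (
	"fmt"

	"github.com/tidwall/gjson"
)

func main() {
	doc := `{"friends": [
		{"first": "stephen", "last": "stanton"},
		{"first": "jared", "last": "piedt"}
	]}`

	// A query selects array elements matching a predicate,
	// and a bare "#" returns the array length.
	fmt.Println(gjson.Get(doc, `friends.#(first=="jared").last`)) // piedt
	fmt.Println(gjson.Get(doc, "friends.#"))                      // 2
}
```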
Final Words
This is a dramatic rewrite, so feel free to say no to it. But I do believe this could be really useful and powerful.
~~I also haven't updated the readme yet.~~ Updated.
@thestephenstanton thanks for the effort here, I'm still reviewing.
Could you add benchmarks for all comparators and evaluation steps? I think several people used this library for performance reasons so I am cautious to accept a change when the performance impact is not understood.
Furthermore, is there a reason for changing the terminology from `Comparators` to `Compares`? Nitpick, but it seems like there is not really a need to change this, and it may confuse upgraders.
Lastly, forcing the user to parse the json and the rule on every call to evaluate seems unnecessarily expensive. Imagine the case where the same rule is being checked against millions of events, we should parse the rule once, then parse the event each time. Naively, I think this would reduce the amount of work by 50%.
> Could you add benchmarks for all comparators and evaluation steps? I think several people used this library for performance reasons so I am cautious to accept a change when the performance impact is not understood.
I didn't change any of the comparators, so their benchmarks are the same. After looking back at the readme just now, I noticed that you did have benchmarks for the engine that I missed before. I can run those, show the differences in a comment here, and then add them to the readme for sure. I know gjson is super fast, but we can talk more about that when I get those benchmarks done.
I also must have accidentally gotten rid of the file containing the benchmarks. Will add that back.
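For reference, a minimal benchmark sketch for the new entry point, assuming the `Evaluate` signature above (the rule body is a placeholder):

```go
package grules

import "testing"

// BenchmarkEvaluate measures a full parse-and-evaluate cycle
// through the new entry point. The rule body is illustrative.
func BenchmarkEvaluate(b *testing.B) {
	doc := `{"friends": [{"first": "stephen", "last": "stanton"}]}`
	rule := `{"comparator": "eq", "path": "friends.0.first", "value": "stephen"}`

	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		Evaluate(doc, rule)
	}
}
```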
> Furthermore, is there a reason for changing the terminology from `Comparators` to `Compares`? Nitpick, but it seems like there is not really a need to change this, and it may confuse upgraders.
No, I think I just changed it early on as a "hey, maybe this is better." But you're right: if this is merging into this library, I'll change it back to Comparators to reduce the number of things changing.
> Lastly, forcing the user to parse the json and the rule on every call to evaluate seems unnecessarily expensive. Imagine the case where the same rule is being checked against millions of events, we should parse the rule once, then parse the event each time. Naively, I think this would reduce the amount of work by 50%.
I think I was too focused on the use case we are using it for, which only requires us to run it once. But I do see what you mean about letting someone unmarshal the rule once and pass it in, in case that is their use case. So I will add that.
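Just to sketch the idea (names like `NewRule` here are hypothetical, not necessarily what the final API will be), a parse-once flow could look like this:

```go
package grules

import "encoding/json"

// Rule is a hypothetical pre-parsed rule, so the rule JSON is
// unmarshalled once no matter how many events are checked.
type Rule struct {
	tree map[string]interface{} // placeholder for the parsed rule tree
}

// NewRule parses the rule JSON a single time. (Hypothetical API.)
func NewRule(ruleJSON string) (*Rule, error) {
	var tree map[string]interface{}
	if err := json.Unmarshal([]byte(ruleJSON), &tree); err != nil {
		return nil, err
	}
	return &Rule{tree: tree}, nil
}

// Evaluate applies the already-parsed rule to a single event,
// so only the event JSON is parsed per call. (Body elided.)
func (r *Rule) Evaluate(eventJSON string) (bool, string) {
	_ = eventJSON // real evaluation omitted in this sketch
	return false, "evaluation not implemented in this sketch"
}
```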
@huttotw
> Could you add benchmarks for all comparators and evaluation steps? I think several people used this library for performance reasons so I am cautious to accept a change when the performance impact is not understood.
Added benchmarks. With the big change to `Evaluate`, there was actually almost no difference in speed compared to the old benchmark from v1.
> Furthermore, is there a reason for changing the terminology from `Comparators` to `Compares`? Nitpick, but it seems like there is not really a need to change this, and it may confuse upgraders.
Changed it back.
> Lastly, forcing the user to parse the json and the rule on every call to evaluate seems unnecessarily expensive. Imagine the case where the same rule is being checked against millions of events, we should parse the rule once, then parse the event each time. Naively, I think this would reduce the amount of work by 50%.
Fixed it.
Also, I found one interesting problem when doing a path like `obj.arrayOfObjs.#.anotherObj.anotherArrayOfObjs.#.field`.
~~This is really the only problem remaining.~~ And now it is fixed.
I have nothing to do with this PR; why did I get mentioned?
@thestephenstanton Could you merge or close this PR? I don't want to get mentioned. This seems to be a problem on GitHub's end; I already sent a message to GitHub about it, but they seem to think nothing is wrong, so I am asking for your help.
@FateXRebirth Nobody mentioned or tagged you... and nobody is commenting on this PR except you
I totally agree with what you said. But this PR showed up on my GitHub dashboard (as the screenshot I provided below shows), and I am also very confused about why this happened. I asked GitHub for help, but they told me everything is normal and there are no problems. That is why I want you to do me a favor: just close this PR directly if it won't be used in the future.