Hello everyone, I am evaluating different rule engines for our production use case and want very low latency (think of a promotions service in an e-commerce store). I did a PoC with Golang's GRule and it was fine, but the documentation is not that great/helpful, and personally I have also not been able to pick up development speed with Go. Now I want to explore Clara and am wondering if someone has already done some documentation around benchmarking. GRule has some good details (impressive numbers) on its benchmarking page: https://github.com/hyperjumptech/grule-rule-engine/blob/master/docs/Benchmarking_en.md
There are a few benchmarks, although a lot of performance optimisation has been done against non-public data and rulesets; more public benchmarks/performance tests would be useful if anyone has something to add. This one was done a while back and is a bit out of date in terms of current performance concerns: https://github.com/rbrush/clara-benchmark. In terms of the performance of running rules, I don't think the number of rules or facts is particularly predictive of performance problems; at this point it tends to be particularly bad patterns in how rules interact with each other that cause problems.
The time to create a session will be more closely correlated with the number of rules, but this should only need to be done once in your JVM per set of rules, since sessions are immutable.
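To illustrate that pattern, here is a minimal sketch of creating one Clara session per JVM and reusing it per request; the `promo.rules` namespace and the `Order`/`Discount` facts are made up for the example, not anything Clara ships with:

```clojure
(ns promo.rules
  (:require [clara.rules :refer [defrule defquery mk-session
                                 insert insert! fire-rules query]]))

;; Illustrative fact types for a promotions service.
(defrecord Order [total])
(defrecord Discount [percent])

(defrule large-order-discount
  "Orders over 100 earn a 10% discount."
  [Order (> total 100)]
  =>
  (insert! (->Discount 10)))

(defquery get-discounts
  []
  [?discount <- Discount])

;; Pay the rule-count-proportional cost of mk-session once...
(def base-session (mk-session 'promo.rules))

;; ...then reuse it on every request. Sessions are immutable, so
;; inserting facts returns a new session and base-session is untouched.
(defn discounts-for [order]
  (-> base-session
      (insert order)
      (fire-rules)
      (query get-discounts)))

(comment
  (discounts-for (->Order 150))
  ;; => ({:?discount #promo.rules.Discount{:percent 10}})
  )
```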
Anything similar for Clara would be of great help, and then I can push my team to use Clojure in prod.
Is there any information around how performance scales with the number of rules defined? I am trying to decide between loading several thousand rules for different customer accounts up front vs. loading the rules for a customer account at the time of the request.
I don’t think we have ever had any documented benchmark information.
Clara has been optimized heavily to support cases with large numbers of rules and facts in a production environment, but those sorts of details were all internal and never generalized.
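For the per-account question above, here is a hedged sketch of the lazy-loading option; `rules-for-account`, `promo.sessions`, and the caching scheme are illustrative assumptions, not part of Clara's API, and it builds on the `promo.rules` namespace sketched earlier:

```clojure
(ns promo.sessions
  (:require [clara.rules :refer [mk-session]]))

(defn rules-for-account
  "Placeholder lookup of Clara rule sources for one account; a real
  system would return the account's own rule namespaces or structures."
  [account-id]
  ['promo.rules])

(defonce session-cache (atom {}))

(defn session-for-account
  "Return the cached base session for account-id, building it on first
  use. The rule-count-proportional cost of mk-session is then paid once
  per account rather than on every request, and the immutable session
  is safe to share across threads."
  [account-id]
  (or (@session-cache account-id)
      (let [session (apply mk-session (rules-for-account account-id))]
        (swap! session-cache assoc account-id session)
        session)))
```

With something like this, the up-front cost of loading several thousand rules is spread across accounts, at the price of a slower first request per account.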