clara

http://www.clara-rules.org/
devn 2019-01-09T05:44:59.020400Z

I think they’re saying what I’d assume to be true: large numbers of facts coupled with large numbers of rules are likely to create memory pressure as they scale. But I’ll go on record as saying I doubt physical limits are the bottleneck for Clara’s performance. Any mechanical sympathists in the audience?

mikerod 2019-01-09T15:07:40.031100Z

Agreed with (1), with the proviso that one could create multiple sessions and get parallelization that way. Regarding (2), in part I'd have to give the annoying-but-true answer that much depends on the structure and size of your rule/fact set. For many use cases, particularly on clj since it has gotten most of the perf optimization, perf is likely good enough to be a non-issue. CPU can be a concern, especially for rulesets with lots of truth maintenance work. When you say memory bandwidth, do you just mean memory use, or the actual speed of retrieving data from RAM/the CPU cache? It isn't clear to me how you'd optimize the latter effectively running on top of the JVM, but TBH I haven't thought about that level of optimization in the context of Clara. I suspect there's still much lower-hanging fruit to work on in terms of perf improvements.
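
To illustrate the multiple-sessions point, here's a minimal sketch, assuming your facts can be partitioned so that no rule needs to join across partitions. The namespace, rule, query, and fact shape (`example.parallel`, `big-order`, `all-orders`, `Order`) are all hypothetical:

```clojure
(ns example.parallel
  (:require [clara.rules :refer [defrule defquery mk-session
                                 insert-all fire-rules query]]))

;; Hypothetical fact type and rules; substitute your own.
(defrecord Order [id total])

(defrule big-order
  [?order <- Order (> total 1000)]
  =>
  (println "big order:" (:id ?order)))

(defquery all-orders []
  [?order <- Order])

;; Sessions are immutable, so one base session can be shared and
;; "forked" once per partition of facts.
(def base-session (mk-session 'example.parallel))

(defn run-partition
  "Run one independent batch of facts through its own session."
  [facts]
  (-> base-session
      (insert-all facts)
      (fire-rules)
      (query all-orders)))

;; pmap gives coarse-grained parallelism: one session per partition.
(defn run-all [fact-partitions]
  (doall (pmap run-partition fact-partitions)))
```

Whether this helps depends entirely on the partitioning: if rules ever need to join facts across partitions, separate sessions will quietly compute the wrong answers.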

eraserhd 2019-01-09T15:09:13.032400Z

@mikerod by memory bandwidth, I mean memory bus speed. If the working set is larger than cache, I assume the speed constraint is loading the data to the CPU, not number crunching.

eraserhd 2019-01-09T15:10:28.033100Z

And the way to optimize it is to choose a different AWS instance type :D

mikerod 2019-01-09T15:25:59.033400Z

I see. Thanks for clarifying

mikerod 2019-01-09T15:26:44.034700Z

Yeah, all of this can contribute. The underlying hardware solution would likely be a last resort, though. But if you are seeing a perf issue and can get profiler sampling data, that could be useful.
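
For the profiler step, one option among many (any JVM sampling profiler such as VisualVM or YourKit also works) is clj-async-profiler. A rough sketch, reusing the hypothetical `base-session` from the earlier snippet; `facts` is a placeholder for a real batch:

```clojure
;; Assumes the com.clojure-goes-fast/clj-async-profiler dependency and
;; -Djdk.attach.allowAttachSelf in the JVM options.
(require '[clj-async-profiler.core :as prof]
         '[clara.rules :refer [insert-all fire-rules]])

;; Sample a representative fire-rules run; the resulting flamegraph
;; shows where time goes inside Clara's network evaluation.
(prof/profile
  (-> base-session
      (insert-all facts)
      (fire-rules)))

;; In recent clj-async-profiler versions, this serves the generated
;; flamegraphs at http://localhost:8080
(prof/serve-ui 8080)
```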

mikerod 2019-01-09T15:30:50.036300Z

There are also some ways to look at tracing output to get a sense of how many times parts of the network were evaluated. Sometimes you can spot outliers there.
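
A sketch of that using clara.tools.tracing, again reusing the hypothetical `base-session` and `facts` from above; tallying event types from the trace is one way to surface outliers:

```clojure
(require '[clara.rules :refer [insert-all fire-rules]]
         '[clara.tools.tracing :as trace])

;; Attach the tracing listener, run the session, then tally how often
;; each kind of network operation occurred. Outliers in the counts can
;; point at expensive joins or rules with heavy truth-maintenance churn.
(let [traced (-> base-session
                 (trace/with-tracing)
                 (insert-all facts)
                 (fire-rules))]
  (->> (trace/get-trace traced)
       (map :type)
       (frequencies)
       (sort-by val >)))
```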

2019-01-09T15:53:23.041200Z

Keep in mind that the CPU cache will hold more than just what you think of as the data, e.g. routines of the JVM itself, methods from the Clojure runtime, Clara itself, etc. If you were trying to make cache use efficient, that code might well crowd out the actual “data” you’re working with. I’m a bit skeptical that that’s your dominating performance factor though; agreed with Mike that a profiler snapshot showing what’s taking time in Clara would be helpful.