Interesting. And yeah, it seems we’re talking about the same thing with update/modify. I’m only slightly familiar with it from Drools.
I searched the Slack history at clojureverse and wasn’t able to find your explanation 😞. Sounds very intriguing. I’ll keep looking and thinking about this over the holiday. Thanks so much for your responses; they give me a lot to think about 🙂
I think I’ll just make a blog post on the external update thing
Yeah, Drools had an update mechanism. I think they had some interesting optimizations around it. Even more so in 6+, when they did a large overhaul of how fact propagation through the rule network worked, I think.
I used to spend quite a bit of time trying to parse through the Drools codebase. It’s pretty “intense” though 😛
Have you read Clara’s source? 😄
Haha. Oh yes. I’m all caught up on that one.
Is 6+ with the Phreak algorithm instead of Rete?
Sorry… I should just google that… :)
Yep
I don’t know that I fully consider it separate, but they do. It does have a lot of modifications over Rete. However, all modern Rete impls do.
There are some good ideas there, though. Still, I think Clara is competitive in many cases with the optimizations it has.
Drools 6 added a lot of things at once compared to 5, which makes it hard to know which ones are paying off the most, at least from what is public. I think the most important optimization was just the batched fact propagation across the network. That is done in Drools 6 Phreak via a “lazy” sort of evaluation of rules.
Clara could potentially do a bit better with lazy propagation. However, just having the batched propagation seems to work pretty well already.
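To make the batching point concrete, here’s a rough Clara sketch (the `Temperature`/`HighTemperature` facts and rule names are just made up for illustration): facts inserted externally are queued on the session and only propagate through the network together when `fire-rules` runs, rather than one at a time.
```clojure
(ns batching-example
  (:require [clara.rules :refer [defrule defquery insert insert!
                                 fire-rules mk-session query]]))

;; Hypothetical fact types, purely for illustration.
(defrecord Temperature [value location])
(defrecord HighTemperature [location])

(defrule detect-high-temperature
  "Derive a HighTemperature fact for any reading over 90."
  [Temperature (> value 90) (= ?loc location)]
  =>
  (insert! (->HighTemperature ?loc)))

(defquery high-temperatures
  "Return all derived HighTemperature facts."
  []
  [?high <- HighTemperature])

;; The three external inserts below are pending on the session and are
;; propagated through the network as a batch when fire-rules is called.
(-> (mk-session 'batching-example)
    (insert (->Temperature 95 "MCI")
            (->Temperature 102 "ORD")
            (->Temperature 80 "SFO"))
    (fire-rules)
    (query high-temperatures))
;; => two matches, one for "MCI" and one for "ORD"
```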
I thought Phreak was more eager
Overall I’m under the impression Clara was written with distributed computing in mind…I don’t know if other rule engines compete with that, and to me it seems like a ridiculously powerful feature
Hmm. It should be more lazy, hah. There are lots of blogs on it by the Drools people. Good blogs, really. They put out some good content that is helpful more generally.
But yes Clara does a good job in terms of its flexibility to be used in different ways.
Some good abstractions in place.
Have you used clara-storm or have any desire to run Clara across multiple instances?
That was something Ryan put together early on with Clara. I believe he said it was mostly an experiment or a demonstration of how it might be done, not a production-worthy product.
And early on I think he had some thoughts about where to apply it but went in a different direction instead. I haven’t had the need, but I’ve spent a little time thinking about it. I think it’d be cool to see Clara used across something like distributed processes.
Yeah, Storm support was just an experiment for a need that never really materialized so we didn't pursue it further. The internals of Clara were designed to support a distributed working memory, so it could be something we pick up at some point.
All tests are passing last I checked 🙂
I’ve been interested in two applications related to it. Mainly stream processing with Kafka but also web workers in the browser
The test coverage in that project is pretty weak, though. We're using Clara in Spark now, but are keeping working memory local to each worker... basically our workload lets us do groupBy operations to group the needed facts into a common process and then use Clara there.
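For what it’s worth, that grouping pattern can be sketched without any Spark specifics — here plain `group-by` stands in for the Spark shuffle/groupBy step that routes all facts for one key to the same worker, and each group gets its own local Clara session (the `Reading`/`Alert` names are hypothetical):
```clojure
(ns grouped-example
  (:require [clara.rules :refer [defrule defquery insert-all insert!
                                 fire-rules mk-session query]]))

;; Hypothetical fact types, purely for illustration.
(defrecord Reading [device-id value])
(defrecord Alert [device-id value])

(defrule high-reading
  "Raise an Alert for any reading above 100."
  [Reading (> value 100) (= ?id device-id) (= ?value value)]
  =>
  (insert! (->Alert ?id ?value)))

(defquery alerts []
  [?alert <- Alert])

(defn run-rules-on-group
  "Run a fresh, local Clara session over one group of facts."
  [facts]
  (-> (mk-session 'grouped-example)
      (insert-all facts)
      (fire-rules)
      (query alerts)))

;; group-by stands in for the distributed groupBy: all facts for a given
;; key end up in one place, so working memory never has to be shared.
(defn process-readings [readings]
  (->> readings
       (group-by :device-id)
       vals
       (mapcat run-rules-on-group)))

(process-readings
 [(->Reading "a" 120) (->Reading "a" 50) (->Reading "b" 200)])
;; => alerts for devices "a" and "b"
```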