@hiredman thanks! The README says in-memory only, but mentions plans for persistence with an RDF triple store or graph database. I'll look into it more
@wparker That was poorly phrased, but basically yeah. More specifically, I'm trying to figure out whether Clara, or Rete in general, could be implemented as a hybrid of in-memory and persistent storage, like Datomic: using Rete both for in-memory processing and for querying persistent data that can never fit into memory.
Having rules write to persistent storage would be key too
@alex-dixon seems doable
but not without a bunch of effort if you were going to try to implement a memory that performs well doing that
Long ago, Ryan did an experimental Storm cluster-based implementation of Clara's memory. It did show how memory could potentially be extended
Thanks. Yeah, I've been looking at that. It's not clear to me what's going on there. The best I could figure out is that Storm is doing a lot of the work
yeah, it'd require understanding Storm, I'm sure
and it isn’t really the same thing
You could certainly try to hook up a memory impl that keeps facts in slower storage; you'd just have to deal with the serialization stuff, and also the concern of how to make it not super slow
also, accumulators are an issue
if you're dealing with numbers of facts too large for memory, but then you try to use an accumulator that needs to see them all at once
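To make the accumulator concern concrete, here's a rough Python sketch (not Clara's actual API; `stream_facts` is a made-up stand-in for facts paged in from disk). Some accumulators can fold over a stream in constant memory, while others inherently need every fact materialized at once:

```python
def stream_facts():
    """Hypothetical stand-in for facts paged in from persistent storage."""
    for i in range(1_000_000):
        yield i

# Foldable accumulator: a running max carries only one value of state,
# so it works fine even when the fact set doesn't fit in memory.
running_max = None
for fact in stream_facts():
    running_max = fact if running_max is None else max(running_max, fact)

# Holistic accumulator: an exact median has to see all facts at once,
# which is exactly what breaks when they don't fit in RAM.
def exact_median(facts):
    ordered = sorted(facts)          # materializes the entire fact set
    return ordered[len(ordered) // 2]
```

So a hybrid memory could likely support fold-style accumulators over disk-resident facts, but holistic ones would need some other strategy (spilling, approximation, etc.).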
I watched the Naga talk before too. It was interesting. It seemed a bit more special-purpose than Clara, but still cool to think about
I’m thinking of using RocksDB specifically https://github.com/facebook/rocksdb/wiki/Performance-Benchmarks
never saw that one
looks like it could be interesting to experiment with the idea though
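For what it's worth, the shape of the experiment might look roughly like this: a working-memory fact store behind a key-value interface, so an embedded store like RocksDB could be swapped in later. This is a hypothetical Python sketch with a plain dict standing in for the RocksDB handle; the class and method names are made up for illustration, not Clara's memory protocol:

```python
import pickle

class KVBackedFactStore:
    """Hypothetical fact store over a key-value interface."""

    def __init__(self):
        self._kv = {}  # stand-in for an embedded store like RocksDB

    def put_fact(self, fact_id, fact):
        # The serialization concern from above: facts must round-trip
        # through bytes to live outside process memory.
        self._kv[fact_id] = pickle.dumps(fact)

    def get_fact(self, fact_id):
        raw = self._kv.get(fact_id)
        return None if raw is None else pickle.loads(raw)

store = KVBackedFactStore()
store.put_fact(1, {"type": "Temperature", "value": 99})
```

The hard part wouldn't be the storage interface itself so much as keeping the Rete network's constant lookups against it from being super slow, per the earlier point.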