@whilo I'm almost done with Carmine. Should be done by the end of tomorrow. I'll tackle couchdb over the weekend.
@whilo Done with Redis. Added tests. And added a GitHub action. https://github.com/alekcz/konserve-carmine
I'm just passing by, but does having a backend for replikativ mean having a backend for datahike?
@jeroenvandijk yeah it does. One will need to set up the datahike "connector". But it's a super trivial task once you have a backend. This was the entire connector for the firebase store I put together.
(ns datahike-firebase.core
  (:require [datahike.store :refer [empty-store delete-store connect-store scheme->index]]
            [hitchhiker.tree.bootstrap.konserve :as kons]
            [konserve-fire.core :as fire]
            [superv.async :refer [<?? S]]))

(defmethod empty-store :fire [config]
  (kons/add-hitchhiker-tree-handlers
    (<?? S (fire/new-fire-store (:db config) :env (:env config) :root (:root config)))))

(defmethod delete-store :fire [config]
  (let [store (<?? S (fire/new-fire-store (:db config) :env (:env config) :root (:root config)))]
    (fire/delete-store store)))

(defmethod connect-store :fire [config]
  (<?? S (fire/new-fire-store (:db config) :env (:env config) :root (:root config))))

(defmethod scheme->index :fire [_]
  :datahike.index/hitchhiker-tree)
@alekcz360 that’s really cool. Any downsides/upsides between these backends? Or does it all depend on the underlying backend?
@jeroenvandijk I'm not an expert on the topic by any stretch of the imagination. As far as I understand, datahike flushes the hitchhiker-tree to the backend at regular intervals asynchronously, so the store speed doesn't particularly affect datahike's performance.
@jeroenvandijk @alekcz360 Yes, flushing is decoupled, but not asynchronous. That would definitely be doable, one way to achieve it right now is to use Redis or the filestore without fsync'ing.
@whilo could probably give a more precise answer on that.
I'd need to understand datahike a bit better to give you pros and cons of using a particular store in relation to datahike.
If datahike is out of the picture and you're just using konserve as a store then it really depends on underlying backend.
Basically all backends boil down to being used as key-value stores. So it depends on how quickly you can store a binary blob in your backend or get it back out. That can include the speed of network IO in your system. Since the indices are persistent, fragments can also be cached locally in-memory on each peer, independent of the backend.
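(A minimal sketch of that key-value usage, assuming konserve's core.async-based API of that era; the bundled in-memory store stands in here for a real backend, which would differ only in the constructor. The key and value are made up for illustration.)

(require '[konserve.memory :refer [new-mem-store]]
         '[konserve.core :as k]
         '[clojure.core.async :refer [<!!]])

;; Constructors return a channel, so we block-take the store off it.
(def store (<!! (new-mem-store)))

;; Values are serialized to binary blobs behind the scenes.
(<!! (k/assoc-in store [:fragment-id] {:node :data}))
(<!! (k/get-in store [:fragment-id]))
(<!! (k/exists? store :fragment-id))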
One thing to keep in mind is that some backends, like CouchDB, LevelDB or Redis, still use threads to unblock core.async. This can produce an overhead in system load if you write a lot to Datahike. Datahike also should write all fragments at once in its flush procedure, not sequentially; that is straightforward to fix, but we did not manage to do it yet: https://github.com/replikativ/hitchhiker-tree/blob/master/src/hitchhiker/tree.cljc#L484
@whilo Done with CouchDB. Added tests. And added a github action. https://github.com/alekcz/konserve-clutch
I'll need to make changes to both Redis and CouchDB once we've added -get-version
to the protocol and pushed it to clojars
With datahike is it all the same? Like how Datomic has the same characteristics across its different backends?