This discussion is really great! It is really cool that the circle around local-first software is closing now. When I started working on replikativ in 2013 I tried to establish automerge for the full Clojure ecosystem, including backends, and in 2015 the inkandswitch people contacted me about it. I have talked to Martin Kleppmann, and he has done some cool work on implementing CRDTs in Datalog that seems better to me than using their object-oriented interface: https://speakerdeck.com/ept/data-structures-as-queries-expressing-crdts-using-datalog?slide=22. I implemented some of them last year, in addition to @niko963’s implementation, but have not yet managed to help Martin write the paper. I think local-first is only part of the story, though: there is value in different levels of consistency, and it is difficult to generalize between them. By moving in the direction of a more restricted language like Datalog, I think it becomes easier to describe the different semantics and execute them efficiently.
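To make the Datalog angle concrete, here is a rough sketch, in datascript/datahike-style Datalog, of the multi-value-register idea from Martin's slides: an assignment is the current value unless a causally later assignment to the same key overwrites it. The attribute names (`:assign/key`, `:assign/value`, `:op/precedes`) and the use of negation inside rules are my assumptions for illustration, not replikativ's actual encoding.

```clojure
;; Sketch: CRDT register semantics as Datalog rules (attribute names assumed).
(def mvr-rules
  '[[(overwritten ?op)
     [?op  :assign/key ?k]
     [?op2 :assign/key ?k]
     [?op  :op/precedes ?op2]]          ; a later op assigned the same key
    [(current-value ?k ?v)
     [?op :assign/key ?k]
     [?op :assign/value ?v]
     (not (overwritten ?op))]])         ; keep only non-overwritten assignments

;; Querying the current register contents, e.g. with datascript or datahike:
(comment
  (d/q '[:find ?k ?v
         :in $ %
         :where (current-value ?k ?v)]
       @conn mvr-rules))
```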
@whilo Yes! It feels like the circle is beginning to close... but we're not there yet. Was there a recording made of Martin's talk? Also, I'm embarrassed to ask, but can you recommend a concise resource from which I can learn the mathematical notation used in these slides?
Looks like this is a good reference: https://plato.stanford.edu/entries/set-theory/basic-set-theory.html
That is a good starting point for set theory, which is the foundation of most of modern mathematics. We collected quite a bit of material a few years ago, together with @metasoarous and @mail524, here: https://github.com/metasoarous/datsync/wiki/Literature. I think this is a good introduction to Datalog: http://www.nowpublishers.com/articles/foundations-and-trends-in-databases/DBS-017
Thank you! Will dig into those...
@alekcz360 I have finally had the time to write down our updated backend specification: https://github.com/replikativ/konserve/blob/feature_metadata_support/doc/backend.org. Do you think you could provide your example repository for it? The addition of metadata makes the protocols slightly less simple, and I am not sure how approachable our design is for outsiders. We need the metadata to support concurrent garbage collection. The =feature_metadata_support= branch already contains the memory store, which I would consider a reference implementation, since it is understandable through Clojure's semantics.
@whilo This is great news. I'll give the new spec a bash. Then build out the example repo.
@whilo I think I've managed to port my lib to konserve 0.6.0
Any chance you could push the snapshot to Clojars? I want to update my pipeline. Once that's working I'll get to work on the template library.
@whilo Regarding this: "Be aware that you must not use blocking IO operations in go-routines." What do you think about a thread-try macro?
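For example, a minimal sketch of what I have in mind (the name `thread-try` and the error-as-value convention are just a proposal): the macro moves the potentially blocking body onto a dedicated `core.async/thread` instead of a go-routine, and returns caught exceptions as values on the result channel so callers can inspect them.

```clojure
(ns example.thread-try
  (:require [clojure.core.async :as async]))

;; Run blocking IO on a real thread, never on the go-routine thread pool.
;; Exceptions are returned as values on the channel instead of being lost.
(defmacro thread-try
  [& body]
  `(async/thread
     (try
       ~@body
       (catch Exception e#
         e#))))

;; Usage inside a go block: park on the result, then branch on its type.
(comment
  (async/go
    (let [res (async/<! (thread-try (slurp "config.edn")))]
      (if (instance? Exception res)
        (println "IO failed:" (.getMessage res))
        (println "read" (count res) "chars")))))
```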
@steedman87 This is really cool! How does Mochi compare to Roam? I have been thinking about our frontend programming model a lot lately, and I hope I can write down an evaluation with proposals tomorrow night. Do you know https://airtable.com/ ? It looks like a successful frontend programming model, but I have not used it yet. Other candidates I am looking into are fulcro, http://witheve.com and PouchDB. One thing we are struggling with is how to map the indices of the backend database onto the frontends. I have the idea of replicating the indices in p2p style https://lambdaforge.io/2019/12/08/replicate-datahike-wherever-you-go.html, but this only works if every peer is roughly interested in the same Datoms (and should have read access to them).
It’s funny you mention Airtable, Roam, Fulcro, and PouchDB/RxDB — a few of us are currently discussing those (and more) off-channel! We’re all thinking about the same issues.
Thanks @whilo I believe Roam is more focused on explorative note-taking, while I built Mochi to scratch a specific itch with spaced repetition flash cards (it also supports long-form notes and card linking).
when you say airtable looks to have a successful frontend programming model, are you talking about their tech stack, or the product itself?
I use pouchdb with Mochi, but I feel that it’s not particularly well suited for clojure/script, and that there could be better homegrown clojure native solutions that fill a similar role
Mochi started out as a single-user application, so couchdb’s db-per-user model worked pretty well. I simply replicate the entire database locally and load it in memory as a re-frame app-db. Obviously things get trickier when you introduce features where users are sharing data
For example, when querying for data, sometimes you’ll need to query the local data (e.g. documents created by the user, stored in IndexedDB), and sometimes a remote db (e.g. documents published by other users).
so in re-frame you would need to write an fx that sends an xhr request to the remote db, and in response load that data into the global app-db atom
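A rough sketch of that fx pattern, using the day8.re-frame/http-fx effect (the endpoint URL, event names, and app-db paths here are made up for illustration):

```clojure
(ns example.remote-docs
  (:require [re-frame.core :as rf]
            [ajax.core :as ajax]))

;; Event that fires an XHR against a remote (CouchDB-style) endpoint.
(rf/reg-event-fx
 ::fetch-remote-docs
 (fn [_ [_ user-id]]
   {:http-xhrio {:method          :get
                 :uri             (str "https://example.com/published/" user-id)
                 :response-format (ajax/json-response-format {:keywords? true})
                 :on-success      [::remote-docs-loaded]
                 :on-failure      [::remote-docs-failed]}}))

;; On success, merge the remote documents into the global app-db,
;; alongside the locally replicated ones.
(rf/reg-event-db
 ::remote-docs-loaded
 (fn [db [_ docs]]
   (update db :remote-docs merge
           (into {} (map (juxt :_id identity)) docs))))
```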
Also as Adam mentioned, I’ve started a little group chat off-channel to discuss these concepts as I didn’t want to spam the datahike channel with things that weren’t directly related to the project. I wonder if we should create a new channel or bring the discussion back into #datahike? (We’ve gone quite off topic a number of times haha)
@steedman87 possibly a channel for general discussion of local-first apps?
I guess one question I have is whether whilo et al. envisage datahike as eventually supporting the local-first approach (or even a superset of that), as a piece in a larger local-first stack, or if the approach requires something totally new.
Hey all, really interesting conversation on multiple fronts. I’m currently working on https://github.com/athensresearch/athens/. Roam, which uses datascript, lets you export all your data as datoms, so up until now I’ve only been doing UI development with my exported data. But now I’m looking into building the backend. I was considering Datomic + Datsync, but datahike seems like a solid choice too. P.S. Would love to drop into the group chat or new channel when it’s created.
@tangj1122 It is great to meet you. I am using the Zettelkasten method inside of emacs and thought about mapping org-mode parse trees onto a schema like Roam (basically having emacs run a query on the Roam database to create the file to edit first and transact it back on file save events). There is also https://org-roam.readthedocs.io/en/latest/ which uses SQLite, but maybe the author would like this idea as well 🙂.
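As a rough sketch, the emacs side could run something like this against a Roam-like schema to assemble a page's blocks before rendering them into an org-mode buffer. The attribute names (`:node/title`, `:block/page`, `:block/order`, `:block/string`) are assumptions based on Roam's exported datoms, not a spec:

```clojure
;; Fetch a page's blocks, ordered, ready to render as org-mode text.
(comment
  (->> (d/q '[:find ?order ?string
              :in $ ?title
              :where
              [?p :node/title ?title]     ; the page entity
              [?b :block/page ?p]         ; blocks belonging to it
              [?b :block/order ?order]
              [?b :block/string ?string]]
            @conn "Zettelkasten")
       (sort-by first)
       (map second)))
```

On file save, the edited tree would be parsed and transacted back as the inverse of this query.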
Oh interesting, you’re doing Zettelkasten in Emacs but not with org-roam? I’m not great with elisp, but I think the queries would be more fun with datalog than elisp 🙂