I don’t know. For my implementation it depends on the underlying database, since it’s done in a single SPARQL query with a property path, and the graphs I’m working with are small enough to live in memory anyway. I would expect the query to be memory bound, though that will depend on the implementation. For my data, cycles are a very rare edge case, and one of the main reasons for testing for them is to invalidate certain graphs and avoid running algorithms that may not terminate when the data contains cycles.
I’m interested in trying Asami and would like to know some typical setups. I haven’t worked with in-memory-only databases much, so… do you import from disk/DB (or wherever) on start and go from there, or what is a typical use case? 🙂
At work we load from a whole lot of JSON or EDN
For in-memory stores, when I’m saving a lot of data, I’ll do an export-data to a file, and import-data when I restart
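A rough sketch of that round trip (assuming an in-memory connection and a scratch database name; check `export-data`/`import-data` and the `#a/n` data-reader setup against your Asami version):

```clojure
(require '[asami.core :as d]
         '[clojure.edn :as edn])

(def conn (d/connect "asami:mem://scratch"))

;; on shutdown: dump every statement to a file
(spit "backup.edn" (pr-str (d/export-data conn)))

;; on restart: read it back and load it in one transaction
;; (the #a/n node tags need Asami's data readers, hence :readers *data-readers*)
(d/import-data conn
               (edn/read-string {:readers *data-readers*}
                                (slurp "backup.edn")))
```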
Thanks, great 🙂 So, I haven’t looked yet, but is there built-in import/export functionality?
Btw, I read most of your background articles. Fascinating and a great read. Thanks for sharing so much, really interesting! 😄
thank you
For loading from JSON, XML, or other messy files, that would be regular parsing and a lot of transacts, right?
I haven’t tried from XML, but so long as you have a seq of objects it’s fine
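For what it’s worth, here’s a sketch of getting from XML to a seq of maps with just `clojure.xml` (the XML shape and names here are hypothetical; the resulting maps could then go to `(d/transact conn {:tx-data people})`):

```clojure
(require '[clojure.xml :as xml])
(import '(java.io ByteArrayInputStream))

;; hypothetical input, standing in for a real file or stream
(def doc
  "<people><person><first-name>Betty</first-name><last-name>Rubble</last-name></person><person><first-name>Bam-Bam</first-name><last-name>Rubble</last-name></person></people>")

(defn person->map
  "Collapse a parsed <person> element into a flat map keyed by tag."
  [{:keys [content]}]
  (into {} (map (fn [{:keys [tag content]}] [tag (first content)]) content)))

(def people
  (->> (xml/parse (ByteArrayInputStream. (.getBytes doc "UTF-8")))
       :content
       (map person->map)))

(prn people)
;; ({:first-name "Betty", :last-name "Rubble"} {:first-name "Bam-Bam", :last-name "Rubble"})
```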
You can insert statements (triples) easily enough, but we tend to not do that. Instead we go with objects. These are deconstructed into triples, and they all go in together in a transaction
Here’s a copy paste from a session I was using on Friday…
(require '[asami.core :as d])
(require '[cheshire.core :as json])
(require '[clojure.java.io :as io])
(def bundle-dir (io/file "/data/bundle"))
(def bundles (map (comp json/parse-stream io/reader) (.listFiles bundle-dir)))
(def tx2 @(d/transact conn {:tx-data bundles}))
That was Clojure (not ClojureScript) of course
each file in the bundle directory held a single map object, and it was several MB of data
> the graphs I’m working with are small enough to live in memory anyway
ah okay, that definitely tips it in favour of just doing things outside of whatever durable store you use 🙂
To load triples, I tend to just write them as edn, and load them that way.
e.g. triples.edn
[[#a/n[123] :type :person]
[#a/n[123] :first-name "Betty"]
[#a/n[123] :last-name "Rubble"]
[#a/n[123] :spouse #a/n[124]]
[#a/n[123] :child #a/n[125]]
[#a/n[124] :type :person]
[#a/n[124] :first-name "Barney"]
[#a/n[124] :last-name "Rubble"]
[#a/n[124] :spouse #a/n[123]]
[#a/n[124] :child #a/n[125]]
[#a/n[125] :type :person]
[#a/n[125] :first-name "Bam-Bam"]
[#a/n[125] :last-name "Rubble"]]
(require '[clojure.edn :as edn])
(def data (edn/read-string (slurp "triples.edn")))
(def tx @(d/transact conn {:tx-triples data}))
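Just to add: once that’s in, a quick query (a sketch; Asami’s query syntax follows Datomic’s) should bring back one binding per person:

```clojure
(d/q '[:find ?first ?last
       :where
       [?p :type :person]
       [?p :first-name ?first]
       [?p :last-name ?last]]
     (d/db conn))
```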
Fantastic, thank you so much. This helps me get going without (as much) stumbling around first… 👍😊
Maybe I should write a wiki page with the above
Please do. 🙂