Not sure why I wasn't expecting to find the Stakeholders list under Community. :woman-facepalming: Thanks! I'd love to know about more LOD projects... understanding the surface area will help make more informed decisions as the project plods forward.
rdf4j .... offers an easy-to-use API that can be connected to all leading RDF database solutions.
-- I'm still finding this whole space a bit confusing. It seems like Jena actually does all the database work itself (maybe?) but RDF4J relies exclusively on a third-party triplestore? https://en.wikipedia.org/wiki/Comparison_of_triplestores sheds a little light on the situation, but I'm still a bit unclear on who/what is writing data to disk in these different setups.
@steven427 Ok I asked some friends and colleagues about some of the bigger projects in the heritage/museums/library/arts/humanities space, here's some more:
The British Museum's catalog (this is the one I remembered but couldn't quite find): https://www.britishmuseum.org/collection/object/W_1892-0516-351-a (looks like they've hidden or removed their public SPARQL endpoint, but the structure of the collections is clearly SKOS - I have it on good authority they're also using CIDOC that I mentioned in the thread)
Another big one, the Library of Congress: https://id.loc.gov/
Europeana, which is a huge cross-Europe collaboration to connect the European heritage sector through various projects based around linked data etc… https://pro.europeana.eu/ e.g. see here for one of their many big projects: https://www.europeana.eu/en try searching for e.g. "van gogh" or historiana here: https://historiana.eu/ plus many others…
Also the British Library: https://bnb.data.bl.uk/
Also all UK legislation is represented/managed as a repository of linked data, giving URI identifiers for everything on the official site here: https://www.legislation.gov.uk/developer/uris
Ok asked another friend who works for a client of ours, who used to work at the BBC on their linked data platforms > 4 years ago… This is what he said about heritage orgs that he knows of who ha(d|ve) linked data projects in the UK at least:
> BBC themselves, The National Archives, British Museum, National Library Wales, National Library Scotland, Rijksmuseum, Getty (thesauri for artists, geography and others), Wellcome, Archaeology Data Service, People's Collection Wales, Science Museum, University of Manchester Image Collection, Tate Gallery, BFI Archive Collections, Nature…
BBC was a big one obviously… their news publishing and editorial processes use linked data so journalists can cross-reference articles and topics when writing them, and also IIRC the Olympics, and I think football coverage was/is done on linked data… though I could be wrong about the footy.
@rickmoynihan This is a great list! Thanks so much for taking the time to compile this. Before it's lost to the sands of Slack-is-the-worst-service-possible-for-something-like-Clojurians (:face_with_rolling_eyes:) do you know if this channel is logged somewhere?
It's logged here: https://clojurians-log.clojureverse.org/rdf
@simongray Sorry I was not online yesterday. I've only just seen your comments now.
In general, I like @samir's responses.
"foo"
and "foo"@en
are different literals. In fact, for RDF 1.0, there were 3 distinct types of string:
ā¢ "foo"
was a Simple Literal
ā¢ "foo"^^<xml:string>
was a Typed Literal
ā¢ "foo"@en
was a simple literal with a Language Tag
All 3 were distinct, and I canāt tell you the grief this caused. It was a relief when RDF 1.1 was introduced and gave all simple literals (that didnāt have a language tag) a datatype of xml:string
. Those with a tag are now rdf:langString
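A minimal SPARQL sketch of the RDF 1.1 situation (runnable against an empty graph; assumes an RDF 1.1 compliant store):
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
SELECT
  # true under RDF 1.1: a plain literal and an explicit xsd:string are the same term
  ( sameTerm("foo", "foo"^^xsd:string) AS ?plainIsTyped )
  # false: the language tag makes it a different term, an rdf:langString
  ( sameTerm("foo", "foo"@en) AS ?plainIsTagged )
WHERE { }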
In terms of SPARQL stores, there are requirements on correctness, but performance may be terrible. In general, Jena worked hard for correctness, but typically did so with naïve code. Over time, this was reimplemented for better performance.
Generally, most stores are indexed around triples, and not strings. The store I worked on (originally called Tucana, then Kowari, and finally Mulgara) had the option to use a Lucene index on strings, and extended the query language to allow for Lucene lookups. But SPARQL was intentionally technology agnostic, so how you might implement string indexing is not considered.
For instance, a Patricia index may be used for all strings, and then any query that includes a regex could convert that operation into an index lookup. However, I'm not aware of anyone who did that (we started on Tucana, but lost funding). Consequently, I think that most regex queries are managed exclusively as filters… and that will never scale.
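For illustration, a query like this (hypothetical data, reusing the foaf:name predicate from later in the thread) is what most stores evaluate as a scan-and-filter, even though the anchored prefix could in principle be answered by a range scan over a sorted or Patricia string index:
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?person ?name
WHERE {
  ?person foaf:name ?name .
  # "^Sim" is an anchored prefix; a string index could serve it as a
  # range lookup, but a naive engine filters every binding instead
  FILTER( regex(str(?name), "^Sim") )
}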
As for languages… the idea of tagging is to provide semantics for a group of letters. The simple literal "chat" is just a sequence of 4 unicode characters. However, "chat"@en has a semantic that means a conversation, and "chat"@fr has a semantic that means a male cat. These semantics were considered important to capture.
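For example, a query can select just one of those meanings by filtering on the tag (a sketch; rdfs:label is an assumed predicate):
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?s ?label
WHERE {
  ?s rdfs:label ?label .
  # keep only French literals, i.e. "chat"@fr (the cat, not the conversation)
  FILTER( langMatches(lang(?label), "fr") )
}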
I still find it weird and not very ergonomic that in a system where knowledge is otherwise defined using named relations, for some reason this particular information has to be hardcoded into strings. 🙂 but thank you for the in-depth history lesson.
@quoll btw Paula, if I may ask, what is the end goal of Asami? The readme says it is inspired by RDF, but it doesn't really mention RDF otherwise. If I wanted to use it as a triplestore for an existing dataset, I guess I would have to develop code for importing RDF files and other necessary functionality?
That's right, you would. Though I have an old project that would get you some of the way there
Ummm… the end goal. I only have vague notions right now. I can tell you why I started and where it's going 🙂
Please do 🙂
It was written for Naga. Naga was designed to be an agnostic rule engine for graph databases. Implement a protocol for a graph database, and Naga could execute rules for it
I thought I would start with Datomic, then implement something for SPARQL, OrientDB… etc.
But I made the mistake of showing my manager, and he got excited, and asked me to develop it for work instead of evenings and weekends. I agreed, so long as it stayed open source, which he was good with
But then he said that he wanted it to all be open source, and he wasn't keen on Datomic for that reason. So could I write a simple database to handle it? Sure. I had only stopped working on Mulgara because I don't like Java, so restarting with Clojure sounded like a good idea (second systems effect be damned!) 🙂
Initially, Asami only did 3 things:
• indexed data
• inner joins
• query optimizing
hah, ok, so it's mainly because your manager dislikes closed source software? That is a fantastic 1st world problem to have.
yup
But I did it in about a week, so it wasn't a big deal
nice
The majority of that was the query planner
you could argue that it wasn't needed (Datomic doesn't have one), but: a) I'd done it before, b) rules could potentially create queries that were in suboptimal form. I've been bitten by this in the past
Some time later, he called me and asked me to port it to ClojureScript. So it moved into the browser
Since then, I've been getting more requests for more features. Right now it handles a LOT
That's when I started a new pet project (evenings and weekends)
It seems like a lot of work is happening in this space at the moment with Asami, Datalevin, Datahike, Datascript. Kind of exciting.
This is for backend storage. It is loosely based on Mulgara, but with a lot of innovations and a new emphasis
Honestly, if I'd known about Datascript (which had started), then I would have just used that
Anyway… I mentioned the backend storage, and several managers all got excited about it. So THAT is now my job
HAHA
And for the first time, they've given me someone else to help
He's doing the ClojureScript implementation (over IndexedDB)
I'm doing the same thing on memory-mapped files. But it's behind a set of protocols which makes it all look the same to the index code
I also hope to include other options, like S3 buckets. These will work, because everything is immutable (durable, persistent, full history, etc)
Do you see a future where a common protocol like Ring can be developed for all of these Datomic-like databases? So much work is happening in parallel.
That was actually exactly the perspective that Naga has!
The protocol that Naga asks databases to implement is oriented specifically to Naga's needs, but it works pretty well
I see. So perhaps it's just a question of willingness to integrate.
Well, the way I've done it in Naga has been as a set of package directories which implement the protocol for each database. Unfortunately, I've been busy, so I only have directories for Asami and Datomic
But they both work 🙂
I imagine that it wouldn't be hard to do Datascript
The main thing that Datascript/Datomic miss is a query API that allows you to do an INSERT/SELECT (which SPARQL has)
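In SPARQL 1.1 Update terms that's the INSERT ... WHERE form, which derives and writes new triples from a query in a single operation (a sketch over assumed vocabulary):
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX ex:   <http://example.org/>

# for every match of the WHERE pattern, insert a derived triple
INSERT { ?a foaf:knows ?b }
WHERE  { ?a ex:worksWith ?b }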
I need to get some real work done before heading "home" for today, i.e. moving from the desk to the sofa. Thanks for an interesting conversation. I'm keeping an eye on Asami (and now Naga). Really interesting projects.
Thank you
They look quiet right now because I'm working on the storage branch
@quoll: Sounds like you've both had a very interesting career, and currently have a dream job. Most managers would never entertain the need to implement a new database; though it sounds like you've done it many times. :thumbsup:
@eric.d.scott spoke here a while back about doing something that sounded similar; providing some common abstraction across RDF and other graph stores / libraries. I definitely see the appeal; but I don't really understand the real world use case. Why is it necessary for your business? Swapping out an RDF database for a different RDF one can be enough work as it is (due to radically different performance profiles), let alone moving across ecosystems. Or am I misunderstanding the purpose of the abstraction; is it to make more backends look like graphs? Which is a use case I totally get 🙂. Regardless I'd love to hear more about your work
only twice: Mulgara and now Asami
At work, there is no impetus to be able to swap things out 🙂
but any libraries that use a graph database have motivation to do it
particularly if the library is supposed to have broader appeal than for just the team developing it
For instance… there is no need for Asami to have a SPARQL front end, but it's a ticket, because I'd like to make it more accessible to people
yeah ok that's fair
Besides, if I don't implement a SPARQL front end, it will be embarrassing!!!
lol
For anyone reading… I was on the SPARQL committee
I don't know how you could live with yourself… 🙂
exactly!
ahh well in that case… I don't know how you could live with yourself 🙂
If you don't mind me asking: if you could re-live being on that committee, knowing what you do now, what would you do differently?
Well, it was a learning experience for me. A number of interests were on the committee to push the standard in a direction that most suited their existing systems. So rather than introducing technical changes, or working against specific things, I would have focused more on communication with each member of the committee. Not that I think I did a terrible job, but I could have done better
From a technical perspective, I would have liked to see a tighter definition around aggregates, with algorithmic description.
But that's just because there's a bit of flexibility in some of the edge cases there. Also, having a default way to handle things, even if they're not the ideal optimized approach, would have been nice to have
That said, that's essentially what Jena sets out to do. They try to be the reference implementation, and they most certainly don't take the optimized approach
The early versions of Jena saved triples as a flat list, and resolved patterns as filters against them 🙂
Andy had some long conversations with me about Mulgara's storage while he was planning out Fuseki
Also @rickmoynihan:
> Sounds like you've both had a very interesting career, and currently have a dream job
Yes! I have certainly been spoiled! I honestly don't know how I have managed to keep coming back to these things, but I'm happy that I have. Of course, I've done other things in between, but even those can be informative (for instance, I've had opportunities to work with both Datomic and OrientDB)
Oh! I just thought of something I could have mentioned in the SPARQL committee that continues to frustrate me… transactions!
It's possible to send several operations through at once, e.g. an insert; an insert/select; a delete. But there are limits on what you can manage there. There are occasions where transactions are important.
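For example, SPARQL 1.1 Update does let you chain operations in one request (a sketch with assumed data), and each operation sees the effects of the previous one, but there is no standard way to group reads and writes into a transaction across requests:
PREFIX ex: <http://example.org/>

DELETE DATA { ex:alice ex:status ex:pending } ;
INSERT DATA { ex:alice ex:status ex:active } ;
# the third operation runs against the already-updated store
INSERT { ?x ex:notified true }
WHERE  { ?x ex:follows ex:alice }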
Datomic is frustrating that way too, because Naga needs it. (I manage it by using a with database, and once I'm done, I replay the accumulated transactions with transact)
@quoll: fascinating, I agree it would have been nice to have a standard for transactions.
Especially when the original intent of RDF was to provide semantic linkages (hence the name, "Semantic Web")
Also, on some specific questions:
> implemented languages as equality-distorting aspects of string literals
Languages change the value. You can consider that as "equality-distorting", but it can be avoided. For instance…
> If I am to query for my own name in an RDF resource how should I refer to it? "Simon"@en, "Simon"@da, and 6000 other entries?
Your query could include:
WHERE { ?me foaf:name ?name . FILTER(str(?name) = "Simon") }
A good implementation (and I'm not saying that your SPARQL store will be) could turn that FILTER operation into an index lookup
Jena never used to do that, but they may have updated lately. This might be an excuse for me to check in and see how Andy is doing 🙂
To push the argument further, the concept of equality is quite complex, as you can see in https://clojure.org/guides/equality . RDF gives equality no special treatment: AFAIK two terms are equal when they have identical long notation. SPARQL, being a query language, makes some decisions regarding equality in some functions. To me it feels like a good compromise, as the goal of the semantic web is to enable the articulation of arbitrary knowledge and data domains
this is a great example
Yeah it's the lexical form that, strictly speaking, should be used, in combination with the datatype URI, lang string etc
Though some stores will do some implicit coercions, e.g. Stardog will by default canonicalise various numeric types, e.g. xsd:byte into xsd:integer, unless you switch that off.
https://www.w3.org/TR/rdf-concepts/#section-Literal-Equality
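To make that concrete, here's a small sketch of value equality vs term equality (runnable against an empty graph; a store that canonicalises numeric literals may answer the second differently):
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
SELECT
  # '=' on numerics compares values after type promotion, so this is true
  ( "05"^^xsd:byte = "5"^^xsd:integer AS ?equalValues )
  # sameTerm compares lexical form + datatype, so this is false
  ( sameTerm("05"^^xsd:byte, "5"^^xsd:integer) AS ?identicalTerms )
WHERE { }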