rdf

EmmanuelOga 2020-04-30T03:27:43.056500Z

how so?

EmmanuelOga 2020-04-30T03:37:21.059100Z

last I checked Roam did not have any way of exporting content that I could find

EmmanuelOga 2020-04-30T03:37:50.059500Z

for that kind of tool, that's a feature that should exist from day one

teodorlu 2020-04-30T07:30:05.060200Z

There's working export to both JSON and markdown. Perhaps it's been added since you checked?

teodorlu 2020-04-30T07:31:57.062Z

I'm asking because I feel like creating a good information structure within roam seems to require skills in technology like RDF. It's also the first time I've felt that creating and consuming content as a user is as efficient as I'd like, without throwing possibility for automation overboard.

EmmanuelOga 2020-04-30T08:06:00.062300Z

cool, that means I need to give it a second look

2020-04-30T09:20:21.062400Z

Yeah, this is the problem with the broad semantic web vision. It’s hard enough to create consistent, highly curated knowledge bases anyway; let alone when disparate groups of disconnected people try to do so. This was one of the things we were eventually hoping to make a meaningful (academic) contribution towards solving in my early days developing multi-agent systems with defeasible logic. Essentially we had an agent communication language, based around two primary primitives: inform and request. The semantics of inform were then based around a concept of mutual belief… so if you informed me of something (assuming a level of trust) I would then believe that you believed it, and might then choose to believe it myself via other defeasible rules. In terms of security you could also then introduce notions of trust, credulity, lying etc; and have further defeasible rules to reason about such things… Those rules might include not believing you if you said inconsistent things, or if others said you’d said things that you then contradicted to me etc etc… It was an interesting way to think about protocol design.
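[A toy sketch of the inform semantics described above: receiving an inform always establishes "I believe that you believe it", and adopting the belief yourself is a separate, defeasible step gated on trust and consistency. All names here (`Agent`, `receive_inform`, etc.) are hypothetical, not from the actual research system being described.]

```python
# Toy model of "inform" with mutual belief and a defeasible adoption rule.
# Purely illustrative; the real system used defeasible logic, not Python sets.

class Agent:
    def __init__(self, name):
        self.name = name
        self.beliefs = set()             # propositions this agent believes
        self.believed_by_others = {}     # sender name -> propositions they asserted
        self.trusted = set()             # names of agents whose informs we may adopt

    def trust(self, other):
        self.trusted.add(other.name)

    def receive_inform(self, sender, proposition):
        # An inform always yields a belief about the sender's belief...
        self.believed_by_others.setdefault(sender.name, set()).add(proposition)
        # ...and we defeasibly adopt it ourselves only if we trust the sender
        # and it doesn't contradict something we already believe.
        if sender.name in self.trusted and ("not " + proposition) not in self.beliefs:
            self.beliefs.add(proposition)


alice, bob = Agent("alice"), Agent("bob")
bob.trust(alice)
bob.receive_inform(alice, "sky is blue")
```

After this, `bob` both records that alice asserted the proposition and, because he trusts her and holds no contradicting belief, adopts it himself; an untrusted sender would only get the first effect.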

2020-04-30T13:40:40.062600Z

Interesting! I gather it didn't pay the bills?

2020-04-30T15:28:13.062800Z

The joys of overly academic startups 😁

2020-04-30T15:35:20.063Z

I feel your pain.

2020-04-30T15:44:18.063200Z

My first interesting job was doing cognitive modeling in Common Lisp, on a US gov't research grant. That was a great few years; then it was all venture-capital-driven from there, and things got significantly less interesting.

Cliff Wulfman 2020-04-30T16:25:19.064800Z

I'm very interested in learning how to use Grafter and Stardog. Any pointers to getting started with Grafter 2?

2020-04-30T17:10:14.065Z

I really need to provide clearer docs around usage etc… It has been on the list of jobs for a long time… What is it that you want to do? Grafter is mainly focussed on RDF I/O; essentially treating immutable triples as lazy-sequences — with some light tooling around SPARQL queries etc. There’s also https://github.com/Swirrl/matcha for querying RDF graphs in memory (which is roughly equivalent to datascript but for RDF). Other tools include: https://github.com/Swirrl/csv2rdf which is more or less a complete implementation of the W3C standards for CSVW (CSV on the web) — which can be used for transforming CSV files into RDF, via a jsonld metadata file.
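[A rough Python analogy for the "immutable triples as lazy sequences" model mentioned above: triples as `(subject, predicate, object)` tuples produced by a generator and filtered lazily. This is not Grafter's API (Grafter is Clojure); the names and example data here are made up purely to illustrate the idea.]

```python
# Triples as a lazy stream of (subject, predicate, object) tuples.
# In Grafter these would come from parsing RDF without loading it all into memory.

def triples():
    yield ("ex:alice", "foaf:knows", "ex:bob")
    yield ("ex:bob", "foaf:name", "Bob")
    yield ("ex:alice", "foaf:name", "Alice")

def with_predicate(pred, ts):
    # Lazy filter: nothing is consumed until the caller iterates.
    return (t for t in ts if t[1] == pred)

names = [obj for (_, _, obj) in with_predicate("foaf:name", triples())]
# names == ["Bob", "Alice"]
```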

Cliff Wulfman 2020-04-30T17:21:25.065400Z

This is very helpful, @rickmoynihan; thanks! I have a couple of projects that involve various sorts of prosopographical and bibliographic knowledge, and I've been using CIDOC-CRM & PRESSoo to capture it; now I want to start building actual knowledge-based systems. I'm also beginning to investigate ways to implement Web Annotations with IIIF. I find myself falling down a tooling rabbit-hole every time I look at this! I'd be happy to help out with documentation, if you're looking for volunteers...