I’m particularly interested in how to apply the concept of views (SQL) to datahike.
The API provides [filter](https://cljdoc.org/d/io.replikativ/datahike/0.3.2/api/datahike.api#filter).
This seems useful for narrowing down queries for authorization. But in SQL you get to use views for other purposes as well, for example to simplify queries after applying business logic to a more generic data model.
My intuition is that you would use a combination of filter and [db-with](https://cljdoc.org/d/io.replikativ/datahike/0.3.2/api/datahike.api#db-with).
Am I generally on the right track or am I missing something?
It sounds to me like you are on the right track. First you filter to get a view; then, if needed, you manipulate that view with db-with.
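Something like this, perhaps (a minimal sketch against the 0.3.2 API, assuming the DataScript-style two-argument filter predicate; the `:secret/*` and `:demo/*` attributes and the `conn` var are made up for illustration):

```clojure
(require '[datahike.api :as d])

;; filter returns a view of the db containing only datoms the predicate
;; accepts, e.g. hiding attributes the current user may not see
(def view
  (d/filter @conn
            (fn [_db datom]
              (not= "secret" (namespace (:a datom))))))

;; db-with layers additional facts on top of that view without transacting them
(def view-with-context
  (d/db-with view [{:demo/name "Alice" :demo/likes "Sushi"}]))

;; query the derived view like any other db value
(d/q '[:find ?name ?food
       :where
       [?e :demo/name ?name]
       [?e :demo/likes ?food]]
     view-with-context)
```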
I’m eager to explore how much filter and db-with, applied with runtime “facts” (events, route parameters, that kind of thing), can handle in terms of business logic. I assume that since Datalog has rules, predicates and so on, we can do a ton of work declaratively before any lower-level data manipulation is necessary.
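For example, a rule plus a built-in predicate in one query could look like this (a hedged sketch reusing the hypothetical `view-with-context` value from above; the `likes-food` rule and the `?min-len` parameter are invented):

```clojure
;; rules let you name and reuse logic inside queries
(def rules
  '[[(likes-food ?e ?food)
     [?e :demo/likes ?food]]])

;; runtime "facts" (?food from a route parameter, say) come in via :in,
;; and predicates like (<= ...) filter bindings declaratively
(d/q '[:find ?name
       :in $ % ?food ?min-len
       :where
       (likes-food ?e ?food)
       [?e :demo/name ?name]
       [(count ?name) ?len]
       [(<= ?min-len ?len)]]
     view-with-context rules "Pizza" 3)
```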
I am not very experienced with Datalog, but I have seen exceptionally large queries, and since Datalog is a logic programming language it should be possible to do most of the logic in the database. :thumbsup:
Hi, when I do datahike dumps these days, a lot of the transactions seem to be missing. There should be :db/txInstant data on them, and I always put the transacting user on that too. Did I miss a change in the store config or something? When I export a database that has been running for a year, the old entries seem fine, but newer ones are missing the transaction data.
And yes, I deleted the (not= (:a d) :db/txInstant) part from export-db, so it shouldn't be that.
Ah, I take that back. It works on the current db.
I was still calling the export from the migrate namespace instead of my own.
But another question: if I import an export that contains transactions, won't the old transaction IDs collide with the importing connection's IDs? And if I put the transacting user on the transaction, does that have any chance of surviving a migration?
Hmm. My tests say it does. Sorry, I am confused.
After export and import, the datoms seem to keep their original transactions, but the import itself produces a transaction that does not seem to be linked to anything, judging from yet another export.
```clojure
#datahike/Datom [1 :db/cardinality :db.cardinality/one 536870913 true]
#datahike/Datom [1 :db/ident :demo/name 536870913 true]
#datahike/Datom [1 :db/valueType :db.type/string 536870913 true]
#datahike/Datom [2 :demo/likes "Pizza" 536870916 true]
#datahike/Datom [2 :demo/name "Mike" 536870914 true]
#datahike/Datom [3 :db/cardinality :db.cardinality/many 536870915 true]
#datahike/Datom [3 :db/ident :demo/likes 536870915 true]
#datahike/Datom [3 :db/valueType :db.type/string 536870915 true]
#datahike/Datom [536870913 :db/txInstant #inst "2020-12-04T13:24:07.569-00:00" 536870913 true]
#datahike/Datom [536870914 :db/txInstant #inst "2020-12-04T13:24:14.868-00:00" 536870914 true]
#datahike/Datom [536870915 :db/txInstant #inst "2020-12-04T13:24:20.918-00:00" 536870915 true]
#datahike/Datom [536870916 :db/txInstant #inst "2020-12-04T13:24:25.745-00:00" 536870916 true]
#datahike/Datom [536870917 :db/txInstant #inst "2020-12-04T15:18:42.030-00:00" 536870917 true]
#datahike/Datom [536870918 :db/txInstant #inst "2020-12-04T15:19:11.147-00:00" 536870918 true]
```
Seems like a bug in the import function, thanks for bringing it up. There should be a check for the tx meta-data.
Are you planning to preserve meta-data when migrating?
Since we need to export and import when we upgrade datahike, it would be nice if the meta-data survived.
Do you also need the historical data?
I am torn on that. It is the best way to record which user transacted what into the database. Implementing the transacting user in the business domain would be a real pain; it is much simpler if I can put it on the transaction. And if I query for "who the heck put this in there", the timestamp should be there too.
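For reference, putting the user on the transaction entity can look like this (a sketch assuming DataScript-style :db/current-tx support in datahike; :tx/user is a hypothetical attribute that a schema-on-write config would need to declare):

```clojure
;; attach the transacting user to the tx entity itself
(d/transact conn [{:demo/name "Mike" :demo/likes "Pizza"}
                  {:db/id :db/current-tx :tx/user "markus"}])

;; "who the heck put this in there", with the timestamp for free
(d/q '[:find ?name ?user ?inst
       :where
       [?e :demo/name ?name ?tx]
       [?tx :tx/user ?user]
       [?tx :db/txInstant ?inst]]
     @conn)
```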
And I would love to be able to use export-db as a backup.
On the other hand, I see why one could argue that this is business-domain stuff and should be built there, not in the db. But that would force me to reify the transaction within the business domain.
So if there are no good reasons not to have the meta-data, I would love to at least have the option to keep it.
Also, all my automatic routines that run over the db and update things put a mark in the metadata. So if one of them messes up, I can query for that mark and retract its writes. It would suck if that happened before the backup and was therefore no longer accessible.
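Finding and retracting a routine's writes via that mark could look roughly like this (a sketch assuming a hypothetical :tx/source attribute set by the routine):

```clojure
;; everything a given routine wrote, found via its mark on the tx entity
(def suspect-datoms
  (d/q '[:find ?e ?a ?v
         :where
         [?tx :tx/source :bot/price-updater]
         [?e ?a ?v ?tx]]
       @conn))

;; undo it
(d/transact conn (mapv (fn [[e a v]] [:db/retract e a v]) suspect-datoms))
```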
Understood, for that we could extend the import/export with options to track history and meta data as well.
On the last datahike update (0.3.2, I think) the db suddenly threw null-pointer exceptions on write. Export and import cleaned that up, but a disk backup is rendered useless by this. I would rather not have to worry about losing the meta-data in that case.
OK, we are planning to add database migrations between datahike versions and are evaluating different approaches there.
If I import a whole db, losing the transaction meta of the import itself is totally OK compared to losing all the meta on what is imported.
The meta of the import would only be valuable if it were a partial import.
Great!!!
At the moment I solve that like this: https://markusgraf.net/2020-12-03-Datahike-export-schema-sort.html
Reads well, thanks for taking the time to figure that out. 🙂 If you like, I will let you know as soon as we have a new import/export ready to test.
Great!! Thanks!!!