Is it possible to save rules to a datomic database? I've noticed that datalog rules seem to only be used (in the examples in the docs) when scoped to a single query request
; use recursive rules to implement a graph traversal
; (copied from learndatalogtoday)
(d/q {:query '{:find  [?sequel]
               :in    [$ % ?title]
               :where [[?m :movie/title ?title]
                       (sequels ?m ?s)
                       [?s :movie/title ?sequel]]}
      ; the rules vector is bound to the % input, so it has to be supplied with every call
      :args [@loaded-db
             '[[(sequels ?m1 ?m2) [?m1 :movie/sequel ?m2]]
               [(sequels ?m1 ?m2) [?m :movie/sequel ?m2] (sequels ?m1 ?m)]]
             "Mad Max"]})
Is it possible to save a rule to a database so that requests do not need to specify all of their rules like that? I'm looking at modelling programming languages in datalog, so there will be a lot of foundational rules that need to be added and then higher-level ones that build on top of those.
@conrad.p.dean you may want to read about the perils of stored procedures. But AFAICT, for your use case, you don't really need durable storage for rules, you merely need calling convenience. I suggest you either put all your rules in a Clojure Var, or use a library like https://github.com/vvvvalvalval/datalog-rules (shameless plug).
All that being said, datalog rules are just EDN data; nothing keeps you from storing them, e.g. in :db.type/string attributes.
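A minimal sketch of both suggestions, assuming a Datomic client connection conn and made-up attribute names (:myapp/rule-set, :myapp/rules-edn); none of these names come from the conversation or from any library, they are only illustrative.

; hedged sketch only; conn, :myapp/rule-set and :myapp/rules-edn are hypothetical
(require '[clojure.edn :as edn])

; option 1: keep the rules in a Clojure Var and pass them as the % input of each query
(def rules
  '[[(sequels ?m1 ?m2) [?m1 :movie/sequel ?m2]]
    [(sequels ?m1 ?m2) [?m :movie/sequel ?m2] (sequels ?m1 ?m)]])

; option 2: since rules are plain EDN, serialize them to a string attribute
; and read them back before querying
(def rules-schema
  [{:db/ident       :myapp/rule-set
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one
    :db/unique      :db.unique/identity}
   {:db/ident       :myapp/rules-edn
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one}])

; write the rules once (client API shown)
(d/transact conn {:tx-data [{:myapp/rule-set  "movie-rules"
                             :myapp/rules-edn (pr-str rules)}]})

; read them back and use them exactly like an inline rules vector
(def stored-rules
  (edn/read-string
    (ffirst (d/q {:query '[:find ?edn
                           :where
                           [?e :myapp/rule-set "movie-rules"]
                           [?e :myapp/rules-edn ?edn]]
                  :args  [@loaded-db]}))))

(d/q {:query '{:find  [?sequel]
               :in    [$ % ?title]
               :where [[?m :movie/title ?title]
                       (sequels ?m ?s)
                       [?s :movie/title ?sequel]]}
      :args [@loaded-db stored-rules "Mad Max"]})

Either way the rules still travel with each query as the % input; what changes is only where they live between queries.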
gotcha so it's idiomatic to just collect rules that define various bits of business logic on the application side as a large vec or something and then ship that per request?
also -- i would love to read anything you recommend about the perils of stored procedures! I've gone back and forth quite a bit during my career about relying on a database to process your data, but since i now sit firmly on the side of "process your data with a database", i don't feel like discounting them wholesale. but in any case, since datalog rules are more closely related to views than stored procs, i kinda want them to be stored in the database the way that table views can be defined in a database. but, i'd love to read anything you have about how that feature might be bad and if it's better to force clients to supply their table views.
I have added a composite tuple to my schema in Datomic and marked it as unique to provide a composite unique constraint on the data. The :db/cardinality is set to :db.cardinality/one and the :db/unique is set to :db.unique/identity. When :db.unique/identity is set on a single attribute and a transaction is executed against an existing entity, upsert is enabled as described at https://docs.datomic.com/cloud/schema/schema-reference.html#db-unique-identity. I would have expected the behavior to be the same for a composite unique constraint, provided the :db/unique was set to :db.unique/identity. However, that does not appear to be the case: when I try to commit a transaction against an entity that already exists with the specified composite unique constraint, a unique conflict exception is thrown. AFAIK, this is what would happen in the single-attribute case if the :db/unique was set to :db.unique/value. Am I missing something or misunderstanding how things are working? I'm new to Datomic and I'm assuming this is just a misunderstanding on my part.
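To make the question concrete, here is a minimal sketch of the kind of schema being described; the attribute names (:inventory/sku, :inventory/store, :inventory/sku+store, :inventory/qty) are hypothetical and used only for illustration.

(def composite-schema
  [{:db/ident       :inventory/sku
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one}
   {:db/ident       :inventory/store
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one}
   ; an ordinary attribute, only used in the later update example
   {:db/ident       :inventory/qty
    :db/valueType   :db.type/long
    :db/cardinality :db.cardinality/one}
   ; the composite tuple: its value is maintained by Datomic from the two
   ; component attributes, and it is marked :db.unique/identity
   {:db/ident       :inventory/sku+store
    :db/valueType   :db.type/tuple
    :db/tupleAttrs  [:inventory/sku :inventory/store]
    :db/cardinality :db.cardinality/one
    :db/unique      :db.unique/identity}])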
Resolving tempids to entity ids occurs before adjusting composite indexes, so by the time the composite tuple datom is added to the datom set the transaction processor has already decided on the entity id for that datom
To get the behavior you want, you would need to reassert the composite value and its components explicitly every time you updated them
The reason it's like this is that there's a circular dependency: to know which composite tuple to update, the transaction processor needs to know the entity (so it can get the component values and compute the tuple), but to detect a conflict it needs to know the tuple value first.
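As a hedged illustration of the consequence, reusing the hypothetical schema above: since the composite does not drive upsert, the application has to identify the existing entity itself, for example by computing the tuple value in application code and addressing the entity with a lookup ref on the unique composite attribute.

; sketch only, client API shown; the tuple value ["SKU-1" "store-7"] is computed
; by the application from the component values it already knows
(d/transact conn
  {:tx-data [{:db/id         [:inventory/sku+store ["SKU-1" "store-7"]]
              :inventory/qty 42}]})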
philosophically datomic is very much on the side of databases being "dumb" and loosely constrained and having smarts in an application layer. The stored-procedure-like features that exist are there mostly to manage concurrent updates safely, not to enforce business logic. (attribute predicates being a possible, late, narrow exception)
(at least IMHO, I don't speak for cognitect)
ah, that makes sense. It is relatively trivial to handle the exception and, in the application I'm working on, it is perfectly acceptable to just return an error indicating that the entity already exists. Any individual attributes on the entity that need to be updated can be done as separate operations.
Thanks for the explanation.
If that's the case, consider using only :db.unique/value instead of identity to avoid possibly surprising upserting in the future.
Just so I'm clear, that is under the assumption that the behavior we discussed above changes such that upserting works with composite unique constraints?
That makes sense to me, just want to make sure Iām understanding correctly.
I guess that's possible, but I just mean :db.unique/identity is IMHO a footgun in general
if you donāt need upserting, donāt turn it on
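In schema terms (again using the hypothetical composite attribute from above), the suggestion is simply to declare the tuple with :db.unique/value, which still enforces the uniqueness constraint but never enables upsert:

{:db/ident       :inventory/sku+store
 :db/valueType   :db.type/tuple
 :db/tupleAttrs  [:inventory/sku :inventory/store]
 :db/cardinality :db.cardinality/one
 ; uniqueness is still enforced, but colliding asserts fail instead of upserting
 :db/unique      :db.unique/value}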
gotcha. thanks.
Hi there. I was looking for a more straightforward doc on how to scale up my primary group nodes for my Datomic Cloud production topology. Could any of you help me with that?
@schultzkaue do you mean make your instance(s) larger or add more of them?
I wanted more nodes
https://docs.datomic.com/cloud/operation/howto.html#update-parameter ^ this is how you choose a larger instance size: change the instance type parameter. For increasing the # of nodes, see https://docs.datomic.com/cloud/operation/scaling.html#database-scaling: edit the AutoScaling Group for your primary compute group and set it larger.
same approach as is used here: https://docs.datomic.com/cloud/tech-notes/turn-off.html#org7fdb7ff but you set it higher instead of setting it down to 0
neat! Thank you
yeah i'm finding a lot of clever things about its ideas of the data layer -- like, most large scale data systems do well when they enshrine immutability. the fact that datomic does that probably resolves a lot of issues around concurrency/transaction management when you allow append-only accretion of data and have applications know at what point in time a fact was true
it'd be nice to know whether my guess about the reason for not storing datalog rules in the database is accurate, but maybe keeping rules (and the complicated business logic they could implement) out of the database means you avoid problems where a change to a rule breaks an old client while a newer client expects the change. tracing data provenance when the definition of a view is allowed to change makes it difficult to reason about where a number is coming from. by forcing the responsibility of interpretation onto the client, you let clients manage the complicated parts and keep the extremely boring fact-persistence/data-observations in one place