Ask questions on the official Q&A site at https://ask.datomic.com!
cpdean 2020-07-30T13:31:09.010100Z

Is it possible to save rules to a Datomic database? I've noticed that datalog rules seem to be used (in the examples in the docs) only when scoped to a single query request:

; use recursive rules to implement a graph traversal
; (copied from learndatalogtoday)
(d/q {:query '{:find [?sequel]
               :in [$ % ?title]
               :where [[?m :movie/title ?title]
                       (sequels ?m ?s)
                       [?s :movie/title ?sequel]]}
      :args [@loaded-db
             '[[(sequels ?m1 ?m2) [?m1 :movie/sequel ?m2]]
               [(sequels ?m1 ?m2) [?m :movie/sequel ?m2] (sequels ?m1 ?m)]]
             "Mad Max"]})
Is it possible to save a rule to a database so that requests do not need to specify all of their rules like that? I'm looking at modelling programming languages in datalog and so there will be a lot of foundational rules that need to be added and then higher-level ones that build on top of those.

val_waeselynck 2020-07-30T14:47:57.010200Z

@conrad.p.dean you may want to read about the perils of stored procedures 🙂 But AFAICT, for your use case, you don't really need durable storage or rules, you merely need calling convenience. I suggest you either put all your rules in a Clojure Var, or use a library like https://github.com/vvvvalvalval/datalog-rules (shameless plug).
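A minimal sketch of the rules-in-a-Var idea, with hypothetical names like `base-rules` (not from the thread): foundational rule sets live in ordinary Vars, higher-level sets are built by concatenating them, and each query passes one combined value as the `%` input.

```clojure
;; foundational rules, defined once on the application side
(def base-rules
  '[[(sequels ?m1 ?m2) [?m1 :movie/sequel ?m2]]
    [(sequels ?m1 ?m2) [?m :movie/sequel ?m2] (sequels ?m1 ?m)]])

;; higher-level rules built on top, by concatenating rule sets
(def derived-rules
  (into base-rules
        '[[(related ?m1 ?m2) (sequels ?m1 ?m2)]
          [(related ?m1 ?m2) (sequels ?m2 ?m1)]]))

;; at query time, ship the combined set as the % input, e.g.
;; (d/q '{:find [?sequel] :in [$ % ?title] :where [...]}
;;      db derived-rules "Mad Max")
```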

val_waeselynck 2020-07-30T14:50:38.010500Z

All that being said, datalog rules are just EDN data, nothing keeps you from storing them e.g in :db.type/string attributes.
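For concreteness, a sketch of that string-storage idea, assuming a hypothetical `:rules/edn` attribute:

```clojure
(require '[clojure.edn :as edn])

;; hypothetical attribute for holding a rule set as an EDN string
(def rules-schema
  [{:db/ident       :rules/edn
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one}])

;; serialize the rules for storage in that attribute...
(def sequels-rules-str
  (pr-str '[[(sequels ?m1 ?m2) [?m1 :movie/sequel ?m2]]
            [(sequels ?m1 ?m2) [?m :movie/sequel ?m2] (sequels ?m1 ?m)]]))

;; ...and read them back at query time to pass as the % input
(def sequels-rules (edn/read-string sequels-rules-str))
```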

cpdean 2020-07-30T16:34:20.010700Z

gotcha so it's idiomatic to just collect rules that define various bits of business logic on the application side as a large vec or something and then ship that per request?

cpdean 2020-07-30T16:41:22.010900Z

also -- i would love to read anything you recommend about the perils of stored procedures! i've gone back and forth quite a bit during my career about relying on a database to process your data, and since i now sit firmly on the side of "process your data with a database", i don't feel like discounting them wholesale. in any case, since datalog rules are more closely related to views than stored procs, i kinda want them to be stored in the database the way table views can be defined in a database. but i'd love to read anything you have about how that feature might be bad, and whether it's better to force clients to supply their table views.

mafcocinco 2020-07-30T16:52:49.017Z

I have added a composite tuple to my schema in Datomic and marked it as unique to provide a composite unique constraint on the data. The :db/cardinality is set to :db.cardinality/one and :db/unique is set to :db.unique/identity. When :db.unique/identity is set on a single attribute and a transaction is executed against an existing entity, upsert is enabled as described at https://docs.datomic.com/cloud/schema/schema-reference.html#db-unique-identity. I would have expected the behavior to be the same for a composite unique constraint, provided :db/unique was set to :db.unique/identity. However, that does not appear to be the case: when I try to commit a transaction against an entity that already exists with the specified composite unique constraint, a unique conflict exception is thrown. AFAIK, this is what would happen in the single-attribute case if :db/unique were set to :db.unique/value. Am I missing something or misunderstanding how things are working? I’m new to Datomic and I’m assuming this is just a misunderstanding on my part.
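(For concreteness, a hypothetical schema reproducing the setup described, with illustrative attribute names:)

```clojure
;; two ordinary attributes plus a composite tuple over them,
;; marked :db.unique/identity in the hope of getting upsert behavior
(def schema
  [{:db/ident       :order/customer-id
    :db/valueType   :db.type/long
    :db/cardinality :db.cardinality/one}
   {:db/ident       :order/product-id
    :db/valueType   :db.type/long
    :db/cardinality :db.cardinality/one}
   {:db/ident       :order/customer+product
    :db/valueType   :db.type/tuple
    :db/tupleAttrs  [:order/customer-id :order/product-id]
    :db/cardinality :db.cardinality/one
    :db/unique      :db.unique/identity}])
```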

favila 2020-07-30T17:05:56.017100Z

Resolving tempids to entity ids occurs before adjusting composite indexes, so by the time the composite tuple datom is added to the datom set, the transaction processor has already decided on the entity id for that datom.

favila 2020-07-30T17:06:29.017300Z

To get the behavior you want, you would need to reassert the composite value and its components explicitly every time you updated them.
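(A sketch of that explicit workaround, assuming the hypothetical attribute names above; the key point is asserting the tuple value yourself, matching the components, so it can act as the upsert key:)

```clojure
;; assert the composite value alongside its components on every update
(def tx-data
  [{:order/customer-id      42
    :order/product-id       7
    ;; reasserted explicitly, consistent with the two components
    :order/customer+product [42 7]}])

;; then transact as usual:
;; (d/transact conn {:tx-data tx-data})
```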

favila 2020-07-30T17:07:56.017500Z

The reason it’s like this is a circular dependency: to know what the composite tuple value should be, the transaction processor needs to know the entity (to get its component values and compute the tuple), but to know there’s a conflict it needs to know the tuple value.

favila 2020-07-30T17:13:09.017700Z

philosophically datomic is very much on the side of databases being “dumb” and loosely constrained and having smarts in an application layer. The stored-procedure-like features that exist are there mostly to manage concurrent updates safely, not to enforce business logic. (attribute predicates being a possible, late, narrow exception)
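(For reference, attribute predicates look roughly like this; the namespace and predicate names here are hypothetical. A fully qualified symbol is attached to the attribute via :db.attr/preds and checked at transaction time:)

```clojure
;; a plain predicate function, resolvable by the transactor
(defn short-name?
  "True when the proposed value is at most 64 characters."
  [s]
  (<= (count s) 64))

;; schema attaching the predicate to an attribute
(def schema
  [{:db/ident       :user/name
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one
    :db.attr/preds  'my.app.preds/short-name?}])
```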

favila 2020-07-30T17:14:37.017900Z

(at least IMHO, I don’t speak for cognitect)

mafcocinco 2020-07-30T17:26:00.018100Z

ah, that makes sense. It is relatively trivial to handle the exception and, in the application I’m working on, it is perfectly acceptable to just return an error indicating that the entity already exists. Any individual attributes on the entity that need to be updated can be done as separate operations.

mafcocinco 2020-07-30T17:26:07.018300Z

Thanks for the explanation.

favila 2020-07-30T17:31:07.018500Z

If that’s the case, consider using only :db.unique/value instead of identity to avoid possibly surprising upserting in the future.

mafcocinco 2020-07-30T17:32:14.018700Z

Just so I’m clear, that is under the assumption that the behavior we discussed above changes such that upserting works with composite unique constraints?

mafcocinco 2020-07-30T17:32:23.018900Z

That makes sense to me, just want to make sure I’m understanding correctly.

favila 2020-07-30T17:35:26.019100Z

I guess that’s possible, but I just mean :db.unique/identity is IMHO a footgun in general

favila 2020-07-30T17:35:37.019300Z

if you don’t need upserting, don’t turn it on

mafcocinco 2020-07-30T17:57:45.020800Z

gotcha. thanks.

kschltz 2020-07-30T17:57:50.021100Z

Hi there. I was looking for a more straightforward doc on how to scale up my primary group nodes for my datomic cloud production topology, any of you guys could help me on that?

marshall 2020-07-30T18:25:14.022100Z

@schultzkaue do you mean make your instance(s) larger or add more of them?

kschltz 2020-07-30T18:25:35.022600Z

I wanted more nodes

marshall 2020-07-30T18:26:37.023600Z

https://docs.datomic.com/cloud/operation/howto.html#update-parameter ^ this is how you choose a larger instance size (change the instance type parameter). For increasing the # of nodes, see https://docs.datomic.com/cloud/operation/scaling.html#database-scaling: edit the Auto Scaling group for your primary compute group and set it larger.

marshall 2020-07-30T18:27:23.024100Z

same approach as is used here: https://docs.datomic.com/cloud/tech-notes/turn-off.html#org7fdb7ff but you set it higher instead of setting it down to 0

kschltz 2020-07-30T18:27:43.024400Z

neat! Thank you

cpdean 2020-07-30T18:53:08.024700Z

yeah i'm finding a lot of clever ideas in how it treats the data layer -- like, most large-scale data systems do well when they enshrine immutability. the fact that datomic does that probably resolves a lot of issues around concurrency/transaction management, since you allow append-only accretion of data and applications know at what point in time a fact was true

cpdean 2020-07-30T19:08:01.024900Z

it'd be nice to know whether my guess about the reason for not storing datalog rules in the database is accurate. but maybe keeping rules, and the complicated business logic they could implement, out of the database means you avoid problems where a change to a rule breaks an old client while a newer client expects the change. tracing data provenance when the definition of a view is allowed to change makes it difficult to reason about where a number is coming from. forcing the responsibility of interpretation onto the client lets clients manage the complicated parts and keeps the extremely boring fact-persistence/data-observations in one place