@lanejo01 About a week ago, you asked for feedback regarding the use of dev-local. It gave me an easy on-ramp to work with Datomic in the development of an app for the first time - one with an open-ended future that can easily be upgraded to a solo or production topology. Clojure and Datomic are generally regarded as tools for experienced programmers. What I'm finding is that Datomic is a heck of a lot simpler to use and understand than SQL and relational databases - once you get through the initial learning curve, which isn't that steep at all. And that's my experience already, after a few weeks of working with it a few hours a day, with a lot I haven't explored yet. I'm not smart enough to deal with the incidental complexity that relational databases introduce. The issues that folks run into with time and versioning in a relational database are well known. But what I'm experiencing is Datomic's simplicity - intelligent, well-designed simplicity - at a more fundamental level. Relationships are mapped in a line, point to point. My monkey brain can grok that: swing from tree to tree to get to the bananas. So suddenly, I have the impression that Datomic is the right database for beginners, or anyone like me who has difficulty modelling complex, arbitrarily designed 2D/3D/4D relationships in their head. It is really hard for me to imagine going back to relational databases at this point, if I have any say in the matter. I should not have to think that hard to get to the bananas. So, dev-local is an on-ramp - that you know. But my feedback is that it could be a very broad on-ramp, simply because Datomic with Datalog and pull is so much easier than a relational database with SQL.
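To make "point to point" concrete, here's the kind of navigation I mean - just a sketch, with made-up attribute names (`:order/id`, `:order/customer`, `:customer/name`), assuming an existing `db` value and that `:order/id` is a unique attribute:

```clojure
;; pull walks the relationship line directly: order -> customer -> name.
;; attribute names here are hypothetical, for illustration only.
(d/pull db
        [:order/id {:order/customer [:customer/name]}]
        [:order/id 42])
;; the SQL equivalent would be a SELECT with an explicit JOIN across two tables,
;; plus schema knowledge of which foreign key points where.
```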
On-prem, is there ever a situation where transactions submitted with datomic.api/transact that have been waiting longer than the peer's specified timeout are purged before processing by the transactor, in order to make room for new transactions? Or is the queue effectively unbounded, with the transactor continuing to work through the queue of transactions potentially much longer than peers will wait on them? From the docs:
> When a transaction times out, the peer does not know whether the transaction succeeded, and will need to query a recent value of the database to discover what happened.
... which makes sense, given that a transaction could succeed just after a peer timeout set close to its actual processing time. But it doesn't seem to clarify whether old transactions that peers no longer care about might ever be dropped from the queue and never processed by the transactor.
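For context, a sketch of the recovery pattern the quoted docs describe - tag the transaction with a unique value so a later query against a recent db can tell whether it applied. `:tx/request-id` is a hypothetical attribute you'd have to install yourself; this is not from the docs:

```clojure
(require '[datomic.api :as d])

;; :tx/request-id is a hypothetical attribute, installed beforehand,
;; used only to make this transaction identifiable after the fact.
(let [req-id (java.util.UUID/randomUUID)
      fut    (d/transact conn [{:db/id "datomic.tx" :tx/request-id req-id}
                               {:order/type :a :order/customer 1}])]
  (try
    @fut
    (catch Exception _
      ;; on timeout the peer doesn't know the outcome;
      ;; query a recent value of the database to discover what happened.
      (seq (d/q '[:find ?tx
                  :in $ ?id
                  :where [?tx :tx/request-id ?id]]
                (d/db conn) req-id)))))
```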
Hi, the security/compliance dept requires that all S3 buckets are server-side encrypted and that all CMK keys have rotation enabled. By default, the Datomic template does not do this. Can anyone confirm my expectation that making these changes to the CFTs ourselves will not break anything?
@matthijs.van.der.meij I treat the resources inside the CloudFormation stacks as implementation details. I would expect that things will break if you make this change.
What are valid values for this property? The documentation is not clear, given this error message: https://docs.datomic.com/on-prem/system-properties.html#backup-properties
Caused by: java.lang.IllegalArgumentException: :db.error/invalid-config-value Invalid value '1' for system property 'datomic.backupPaceMsec'
Hey @tvaughan!! How goes it?
Sent you a DM. Sorry I can't help with this issue
We've learned that any value >= 5 is valid. 0, the documented "default", is not valid.
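For the archives, the shape of what ended up working for us - the command form is from the on-prem backup docs, the value 100 is arbitrary (anything >= 5 appears to be accepted), and the URIs are placeholders:

```
# pass the pacing property as a JVM system property to the backup command
bin/datomic -Ddatomic.backupPaceMsec=100 backup-db \
    datomic:dev://localhost:4334/my-db file:/backups/my-db
```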
Hi @jeremy!
Can anyone restore my faith in reality?
;; let's play a little bit: create an empty database and get a connection
(d/create-database (db-uri "test-database"))
(def dev-conn (d/connect (db-uri "test-database")))

;; transact some schema
(d/transact dev-conn [{:db/ident       :order/type
                       :db/valueType   :db.type/keyword
                       :db/cardinality :db.cardinality/one}
                      {:db/ident       :order/customer
                       :db/valueType   :db.type/long
                       :db/cardinality :db.cardinality/one}])

;; transact some data
(d/transact dev-conn [{:order/type :a :order/customer 1}
                      {:order/type :b :order/customer 1}
                      {:order/type :b :order/customer 1}])

(def db (d/db dev-conn))

;; let's make a trivial query
(d/q '[:find ?o ?c ?is-b
       :where
       [?o :order/customer ?c]
       [?o :order/type ?o-type]
       [(= :b ?o-type) ?is-b]]
     db)
;; => #{[17592186045420 1 true] [17592186045419 1 true]}
;; this is not what I expected: the :order/type :a entity is missing entirely

;; let's "cure" it by... changing = to clojure.core/=
(d/q '[:find ?o ?c ?is-b
       :where
       [?o :order/customer ?c]
       [?o :order/type ?o-type]
       [(clojure.core/= :b ?o-type) ?is-b]]
     db)
;; => #{[17592186045420 1 true] [17592186045419 1 true] [17592186045418 1 false]}

;; much better! now let's "cure" it instead by changing the :where clause order
(d/q '[:find ?o ?c ?is-b
       :where
       [?o :order/type ?o-type]
       [?o :order/customer ?c]
       [(= :b ?o-type) ?is-b]]
     db)
;; => #{[17592186045420 1 true] [17592186045419 1 true] [17592186045418 1 false]}
;; also fine!

;; how do you like it, Elon Musk?
;; Am I stupid? Or is there some rule I violated?
;; How can I sleep after this and believe all my other queries?
I'm actually surprised the 2nd and 3rd examples work. I would've expected [(clojure.core/= :b ?o-type) ?is-b] to fail on unification when false
yep, com.datomic/datomic-pro "0.9.6045"
Can you submit this as a support ticket to support@cognitect.com?
I would like to look into this tomorrow
Yep, thanks, I'll send the example to that email
Thanks
[Datomic Cloud]: The documentation seems to indicate that encryption at rest is automatic, and I do see that the DynamoDB table is set to have encryption enabled. However, I've noticed that the S3 bucket and EFS instances that are created are not set to be encrypted. Am I missing something, like a parameter somewhere? Or do I need to manually enable encryption for some of these other resources?
Everything in storage is encrypted using a CMK (customer master key) automatically
This is done by Datomic itself, rather than through the specific AWS services
Extending on this: if we would also like to have SSE enabled on the S3 buckets, from a company-policy perspective, can Datomic support this? Would it affect the way Datomic performs? I've run it in a sandbox environment and it looks like Datomic can work with the SSE bucket and objects. Can you maybe confirm this @marshall? Our security department would like to see that all buckets are encrypted by default, as that makes things slightly easier from an auditing perspective. Altering the template is something we unfortunately already have to do to run Datomic in our managed accounts, since we are required to implement a role boundary on our IAM roles (which works perfectly fine, handled with an automated script).
We haven't tested the effects of enabling SSE
thanks @marshall, that's good to know - and I can see that CMK now too. So does this Datomic-side encryption happen for both S3 and EFS then?
Yep
perfect
Also, I noticed there is just a single CMK named datomic, but I have 2 Datomic systems in that region - I'm assuming they're both sharing that same one? Is this something that would be a concern if I'm trying to keep those two systems strictly separate (in terms of access with IAM roles)?