LOL! Sounds like me... I've been meaning to learn Datomic for "ages" but now that dev-local is available, I think I actually might.
I'm not yet at the stage of working with anything that complex, but so far I'm very happy, probably just because it all makes so much sense ...
How can I move the identity :test/id from the entity 17592186045418 to the new entity (referenced by :test/ref)? Do I have to do this in two separate transactions? All I want to do is move the identity to a new entity in a single transaction. I understand why the temp ID resolution is taking place and resolving the temp ID to a conflict, but how can I avoid it? How can I force a new entity here?
I imagine the same ref-resolution phase code applies... I don't know the exact implementation details of course, but that's the picture I have in my head xD It would basically resolve every datom against the immutable database value from just prior to the transaction
interestingly, this implementation also implies that you can't provide two values of a :cardinality/one field in the same transaction:
(d/transact (:conn datomic)
  [[:db/add "temp" :ent/displayname "hello"]
   [:db/add "temp" :ent/displayname "bye"]])
:db.error/datoms-conflict Two datoms in the same transaction conflict
{:d1 [17592186045457 :ent/displayname "hello" 13194139534352 true],
 :d2 [17592186045457 :ent/displayname "bye" 13194139534352 true]}
since it can't imply the db/retract for the "hello" value
/s/field/attribute, of course
Yeah, the application of transactions is unordered, so if you say add twice for the same attribute of cardinality one it cannot know which one you meant so it rejects the transaction.
ah, I see - so by that constraint, the same applies to retracting and to re-using an identity on a new entity
what version of datomic are you using?
and cloud or on-prem?
@marshall It's actually datomic-free-0.9.5703.21
so maybe this isn't a problem elsewhere
i believe this was fixed in https://docs.datomic.com/on-prem/changes.html#0.9.5390
but it's possible this is unrelated
hehe, that description seems to fit my problem very well. oh well. Thanks for letting me know.
@john.leidegren do you have a starter license? can you try it in starter and/or with Cloud?
i can also look at trying to reproduce
Thanks but I'm just fooling around. As long as I know this isn't the intended behavior that's fine. I know what to do now.
:thumbsup: we'll look into it anyway
Hey @john.leidegren, Marshall tasked me with looking into this and I wanted to clarify that this is indeed intended behavior and not related to the fix Marshall described. You already have the rough reason here:
> Yeah, the application of transactions is unordered, so if you say add twice for the same attribute of cardinality one it cannot know which one you meant so it rejects the transaction.
You cannot transact on the same datom twice and have it mean separate things in the same transaction. You have to split the transactions up: retract the entity, then assert the new identity. Ultimately, what you're doing here is cleaning up a modeling decision. In addition to separating your retraction and add transactions, you could alternatively model a new identity and use that identity going forward, preserving the initial decision.
I know you were already past this problem, but I hope that clears things up.
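For concreteness, a minimal sketch of that split - the attribute name, the identity value "old-id", and the entity ids are assumptions standing in for your actual data:

;; sketch only: assumes :test/id is the :db.unique/identity attribute
;; and "old-id" is the value currently on the old entity

;; tx 1: retract the identity datom from the old entity
(d/transact conn [[:db/retract 17592186045418 :test/id "old-id"]])

;; tx 2: the value is now free, so assert it on a brand-new entity
;; ("new" is a tempid that resolves to a fresh entity id)
(d/transact conn [[:db/add "new" :test/id "old-id"]])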
@jaret Oh, thanks for getting back to me. I really appreciate it.
Interesting problem! I'm interested to see the solution for this... I expect you'd have to split it into two transactions since you're working with db.unique/identity though
Yeah. That's what I did. I don't like it because now there's a point in time where the database sort of has an inconsistent state. It's not the end of the world, but I really want it to commit as a single transaction. For this to actually go through, the transactor would have to somehow react to the fact that the identity is being retracted during the transaction and, because of that, it mustn't be allowed to partake in temp ID resolution. (either that, or you tag the temp ID as unresolvable to force a new entity...)
it seems like a modelling problem if you need to get to a state where an entity has an 'identity', then loses it and gives it to another entity - so I could see why this could be unsupported behaviour
I'm fixing a data problem, or rather, I'm doing this because I'm revising the data model. I ran into this as part of an upgrade script I wrote.
I know Marshall has commented in the past on some of these transactional isolation behaviours and why it might need to work this way, but I'm curious what the reasoning for it is. I can see a way to program around it, but I can also understand that you might not want to just do that.
You could argue that I'm going against https://blog.datomic.com/2017/01/the-ten-rules-of-schema-growth.html or some such.
I suspect that it's a bit of a trade-off - if you have this behaviour, it's simpler to reason about transactions, since it's likely implemented as a ref-resolution phase followed by an actual write phase
but if you have clever transactions where you effectively mutate the state for each fact, then things get trickier to accurately reason about
I had this kind of issue for new schema, and schema that used the new schema:
[{:db/ident :ent/displayname}
 {:db/ident :ent/something
  :ent/displayname "Hello"}]
would complain that :ent/displayname is not part of the schema yet
so I had to write a function that checks the existence of the properties and then splits the schema assertion into multiple phases
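A minimal sketch of that phased approach - :ent/displayname's value type and cardinality are assumptions here, since the original snippet doesn't declare them:

;; phase 1: install the attribute definition on its own
;; (value type and cardinality are assumed for illustration)
(d/transact conn [{:db/ident       :ent/displayname
                   :db/valueType   :db.type/string
                   :db/cardinality :db.cardinality/one}])

;; phase 2: other schema entities can now use the freshly installed attribute
(d/transact conn [{:db/ident        :ent/something
                   :ent/displayname "Hello"}])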
Yeah, so this rule applies to attributes, which I sort of understand - you cannot refer to schema before it exists. But for data, though, are the same constraints equally valid?
Hello, I'm new to Clojure and Datomic. I'm using the min aggregate to find the lowest-priced product, but can't seem to figure out how to get the entity ID of the product along with it -
;; schema
(def product-offer-schema
  [{:db/ident :product-offer/product
    :db/valueType :db.type/ref
    :db/cardinality :db.cardinality/one}
   {:db/ident :product-offer/vendor
    :db/valueType :db.type/ref
    :db/cardinality :db.cardinality/one}
   {:db/ident :product-offer/price
    :db/valueType :db.type/long
    :db/cardinality :db.cardinality/one}
   {:db/ident :product-offer/stock-quantity
    :db/valueType :db.type/long
    :db/cardinality :db.cardinality/one}])
(d/transact conn product-offer-schema)
;; add data
(d/transact conn
  [{:db/ident :vendor/Alice}
   {:db/ident :vendor/Bob}
   {:db/ident :product/BunnyBoots}
   {:db/ident :product/Gum}])
(d/transact conn
  [{:product-offer/vendor :vendor/Alice
    :product-offer/product :product/BunnyBoots
    :product-offer/price 9981 ;; $99.81
    :product-offer/stock-quantity 78}
   {:product-offer/vendor :vendor/Alice
    :product-offer/product :product/Gum
    :product-offer/price 200 ;; $2.00
    :product-offer/stock-quantity 500}
   {:product-offer/vendor :vendor/Bob
    :product-offer/product :product/BunnyBoots
    :product-offer/price 9000 ;; $90.00
    :product-offer/stock-quantity 15}])
;; This returns the lowest price for bunny boots as expected, $90:
(def cheapest-boots-q '[:find (min ?p) .
                        :where
                        [?e :product-offer/product :product/BunnyBoots]
                        [?e :product-offer/price ?p]])
(d/q cheapest-boots-q db)
;; => 9000
;; However I also need the entity ID for the lowest-priced offer, and
;; when I try adding it, I get the $99.81 boots:
(def cheapest-boots-q '[:find [?e (min ?p)]
                        :where
                        [?e :product-offer/product :product/BunnyBoots]
                        [?e :product-offer/price ?p]])
(d/q cheapest-boots-q db)
;; => [17592186045423 9981]
I think I might see what's going on - it's grouping on entity ID, and returning a (min ?p) aggregate for each one (so basically useless). But I'm not sure how else to get the entity ID in the result tuple... should I not be using an aggregate at all for this?
datalog doesn't support this kind of aggregation (neither does sql!)
you can do this with a subquery that finds the max, then find the e with a matching max in the outer query; or, do it in clojure
:find ?e ?p
then (apply max-key peek results)
(for example)
the reason datalog and sql don't do this is because the aggregation is uncorrelated: suppose multiple ?e values have the same max value: which ?e is selected? the aggregation demands only one row for the grouping
(you still have that problem BTW--you may need to add some other selection criteria)
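For the cheapest-boots case concretely, a minimal sketch of the Clojure-side approach - swapping max-key for min-key since we want the lowest price; the tie-breaking caveat above still applies:

;; run the plain (non-aggregating) query, then pick the cheapest tuple
(def boots-offers-q '[:find ?e ?p
                      :where
                      [?e :product-offer/product :product/BunnyBoots]
                      [?e :product-offer/price ?p]])

;; peek returns the last element of each [?e ?p] tuple, i.e. the price;
;; min-key then selects the tuple with the lowest price
;; (assumes at least one offer exists; ties pick an arbitrary tuple)
(apply min-key peek (d/q boots-offers-q db))
;; => [<entity-id> 9000]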
Ah I see, thank you!
Hello, I am playing with dev-local datomic. When I try to create a database I get an error.
java.nio.file.NoSuchFileException: "/resources/dev/quizzer/db.log"
....
Here is the full code
(ns quizzer.core
  (:require
   [datomic.client.api :as d]))

(def client (d/client {:server-type :dev-local
                       :storage-dir "/resources"
                       :system "dev"}))

;; Creating a database
(defn make-conn [db-name]
  (d/create-database client {:db-name db-name})
  (d/connect client {:db-name db-name}))

(comment
  (d/create-database client {:db-name "quizzer"}))
Any ideas?
does /resources/dev/quizzer exist?
or more simply, does /resources exist?
I placed it in the root directory. Here is project structure
.
├── README.md
├── deps.edn
├── resources
│   └── dev
│       └── quizzer
│           └── db.log
├── src
│   └── quizzer
│       └── core.clj
└── test
    └── quizzer
        └── core_test.clj

7 directories, 5 files
And the deps.edn structure
{:paths ["src" "resources" "test"]
 :deps {org.clojure/clojure {:mvn/version "1.10.1"}
        com.datomic/dev-local {:mvn/version "0.9.195"}}
 :aliases {:server {:main-opts ["-m" "quizzer.core"]}
           :test {:extra-paths ["test/quizzer"]
                  :extra-deps {lambdaisland/kaocha {:mvn/version "0.0-529"}
                               lambdaisland/kaocha-cloverage {:mvn/version "1.0.63"}}
                  :main-opts ["-m" "kaocha.runner"]}}}
"/resources" is an absolute path
I assume that's in your ~/.datomic/dev-local.edn
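A sketch of the fix - the path below is a placeholder for wherever your project actually lives; :storage-dir should point at a directory that really exists on disk:

;; assumption: substitute the real absolute path to your project's
;; resources directory (or any other existing directory)
(def client (d/client {:server-type :dev-local
                       :storage-dir "/home/me/quizzer/resources"
                       :system "dev"}))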
Absolute path was the problem, thank you @alexmiller
I have a datomic cloud production topology, which shows the correct number of datoms in the corresponding CloudWatch dashboard panel... however, the datoms panel for my other solo topology never shows any datoms, no matter how many I transact into the system
I know solo reports a subset of the metrics, but according to https://docs.datomic.com/cloud/operation/monitoring.html#metrics solo should report that datoms metric.
> Note: In order to reduce cost, the Solo Topology reports only a small subset of the metrics listed above: Alerts, Datoms, HttpEndpointOpsPending, JvmFreeMb, and HttpEndpointThrottled.
Not sure what's going on. I'm seeing the same on our solo stacks though @jake.shelby.
Even the solo https://docs.datomic.com/cloud/operation/monitoring.html#dashboards shows the datoms metric.
thanks for checking your system @kenny, what version is yours? (I just launched mine last week, so it's the latest version)
▶ datomic cloud list-systems
[{"name":"core-dev", "storage-cft-version":"704", "topology":"solo"},
{"name":"core-prod",
"storage-cft-version":"704",
"topology":"production"},
Same version
Does anyone have an example of how tx-report-queue is used?
https://docs.datomic.com/on-prem/transactions.html#monitoring-transactions
See also: https://docs.datomic.com/on-prem/javadoc/datomic/Connection.html#txReportQueue--
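For a quick illustration, a minimal on-prem sketch, assuming an existing peer connection conn - the queue blocks, so it's usually drained on its own thread:

(require '[datomic.api :as d])

;; each transaction against conn puts a report map on this queue
(def queue (d/tx-report-queue conn))

;; drain the queue on a background thread; .take blocks until
;; the next transaction report arrives
(future
  (loop []
    (let [{:keys [tx-data]} (.take ^java.util.concurrent.BlockingQueue queue)]
      (println "novelty:" tx-data)
      (recur))))

;; when you no longer want reports:
;; (d/remove-tx-report-queue conn)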