Does Datomic Cloud support attribute type :db.type/bytes?
I don’t see it in the valueTypes https://docs.datomic.com/cloud/schema/schema-reference.html#db-valuetype
I considered that. What I ended up doing was using tuples for session key / value pairs.
Unfortunately, db.type/bytes is not supported in Cloud or analytics. In supporting this value type in On-Prem we saw a number of problems due to the Java semantics, which we discuss here: https://docs.datomic.com/on-prem/schema.html#bytes-limitations
Alright, thanks.
If this is a feature you need, I'd be happy to share the use case with the team if you want to provide details. If we can't provide that type, perhaps we can provide another solution that meets your needs.
I’m using a Java lib for managing sessions and I’d like to store them in Datomic. The session instances have an attribute map of <Object, Object>. I wanted to be able to serialise the attribute map and store that in a session entity.
Basically a container of data that is semantically opaque. 😛
Might have to look at using a different storage mechanism for sessions.
Unless you have a different suggestion @jaret
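One possible workaround, as a minimal sketch: serialise the attribute map to a string and store it under a :db.type/string attribute. The :session/id and :session/attributes attribute names below are hypothetical, :session/id is assumed to be :db/unique identity, and this assumes the map is EDN-serialisable (arbitrary Java objects would need a binary serialiser like Nippy instead).

```
(require '[clojure.edn :as edn]
         '[datomic.client.api :as d])

;; Hypothetical schema: :session/id is a unique-identity string,
;; :session/attributes is a plain :db.type/string holding the serialised map.
(defn save-session!
  "Store a session's attribute map as an opaque EDN string."
  [conn session-id attrs]
  (d/transact conn {:tx-data [{:session/id session-id
                               :session/attributes (pr-str attrs)}]}))

(defn load-session-attrs
  "Read the attribute map back from the session entity."
  [db session-id]
  (-> (d/pull db [:session/attributes] [:session/id session-id])
      :session/attributes
      edn/read-string))
```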
Hello Datomic/Clojure experts, I am trying to pull all the relevant information regarding employees in one query. First I get a vector of all the employee maps. Then, using specter/transform or clojure.walk/postwalk, I process the vector of employee maps and expand them into full maps using their :db/id's. The ref attributes are not defined as component attributes, but I need similar functionality. For this I call (d/pull db '[*] db-id) inside the specter transform function (or a postwalk function). But fetching the whole employee maps with this pull takes nearly 10 seconds or more. My questions are: 1 - Why is it taking so much time? I have maybe 200 employees at the moment, on a Solo stack. 2 - Is there a better/faster way to get the full maps from the :db/id's? Thank you for any suggestions. See the code below; I have removed irrelevant lines.

```
(let [employees
      [#:employee{:email "haroon_789@yahoo.com"
                  :last-name "smith"
                  :emplid "PLM0015"
                  :job #:db{:id 101155069757724}
                  :full-time? true
                  :first-name "Haroon"
                  :employee-type #:db{:id 79164837202211}
                  :gender-type #:db{:id 92358976735520}}
       #:employee{:email "frazer765@yahoo.com"
                  :last-name "smith"
                  :emplid "PLM0025"
                  :job #:db{:id 10115506975245}
                  :full-time? true
                  :first-name "Farhan"
                  :employee-type #:db{:id 79164837202211}
                  :gender-type #:db{:id 92358976735520}}
       ...]]
  ;; job :db/id is 101155069757724
  ;; (d/pull db '[*] 101155069757724)
  (specter/transform
   [ALL]
   (fn [each-map]
     (let [db-id (:db/id each-map)]
       (d/pull db '[*] db-id)))
   employees))
;; I apply the above logic only to the map values with :db/id's.
```
If this is Datomic Cloud, this is slow because it is 600 blocking request+responses in a row.
This looks like employees already came out of a pull. Why not just pull everything in one go?
[* {:employee/job [*] :employee/employee-type [*] :employee/gender-type [*]}]
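For instance, a minimal sketch assuming the :employee/* attribute names from the snippet above: one query returns fully expanded maps in a single round trip, instead of one blocking pull per entity.

```
(require '[datomic.client.api :as d])

;; Pull pattern that expands the ref attributes in place.
(def employee-pattern
  '[* {:employee/job [*]
       :employee/employee-type [*]
       :employee/gender-type [*]}])

;; One query, one round trip: every employee comes back fully expanded.
(defn all-employees [db]
  (map first
       (d/q '[:find (pull ?e pattern)
              :in $ pattern
              :where [?e :employee/emplid]]
            db employee-pattern)))
```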
I'm facing a few recurring issues with Datomic Cloud write latencies and index memory usage. In our current setup we are transacting one event at a time, throttling them to avoid overloading our transactor. I was wondering if we would benefit from grouping our events before transacting, or is that not necessarily the case?
Rule of thumb is to aim for transaction sizes of 1000-2000 datoms, if you can actually control how changes are grouped.
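For example, a minimal batching sketch, where event->tx-data is a hypothetical function turning one event into its tx-data, and batch-size is chosen from your average datoms per event:

```
(require '[datomic.client.api :as d])

(defn transact-batched
  "Group events so each transaction lands near the 1000-2000 datom target.
   event->tx-data is assumed to return the tx-data for a single event."
  [conn events batch-size]
  (doseq [batch (partition-all batch-size events)]
    (d/transact conn {:tx-data (into [] (mapcat event->tx-data) batch)})))
```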
when you say “issues”, what problem are you facing?
transactions failing from time to time due to "busy indexing"
Could your transactor just be undersized for your rate of novelty? Is this a regular thing, or something you only encounter during bulk operations?
We are running the biggest machines available
and ensuring a delay of 50ms between each transact call
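If the intermittent failures themselves are the main pain point, retrying with backoff can smooth them over while you tune batch sizes. A minimal sketch, assuming the client surfaces "busy indexing" as a :cognitect.anomalies/busy anomaly in the exception's ex-data:

```
(require '[datomic.client.api :as d])

(defn transact-with-retry
  "Retry on :cognitect.anomalies/busy with exponential backoff;
   rethrow anything else, or after max-retries attempts."
  [conn tx-data & {:keys [max-retries] :or {max-retries 5}}]
  (loop [attempt 0]
    (let [result (try
                   (d/transact conn {:tx-data tx-data})
                   (catch Exception e
                     (if (and (< attempt max-retries)
                              (= :cognitect.anomalies/busy
                                 (:cognitect.anomalies/category (ex-data e))))
                       ::retry
                       (throw e))))]
      (if (= ::retry result)
        (do (Thread/sleep (* 200 (long (Math/pow 2 attempt))))
            (recur (inc attempt)))
        result))))
```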