Ask questions on the official Q&A site at https://ask.datomic.com!
stuartrexking 2021-01-14T12:43:10.000500Z

Does Datomic Cloud support attribute type :db.type/bytes?

stuartrexking 2021-01-14T12:44:24.000900Z

I don’t see it in the valueTypes https://docs.datomic.com/cloud/schema/schema-reference.html#db-valuetype

stuartrexking 2021-01-19T12:19:17.027500Z

I considered that. What I ended up doing was using tuples for session key / value pairs.
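
A minimal sketch of what that could look like: session key/value pairs modeled as a heterogeneous tuple attribute with cardinality many. The attribute names (`:session/id`, `:session/kv`) and the shape of the schema are assumptions for illustration, not taken from this thread.

```clojure
;; Hypothetical schema sketch: each session key/value pair stored as a
;; [string string] tuple on a cardinality-many attribute.
(def session-schema
  [{:db/ident       :session/id
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one
    :db/unique      :db.unique/identity}
   {:db/ident       :session/kv
    :db/valueType   :db.type/tuple
    :db/tupleTypes  [:db.type/string :db.type/string]
    :db/cardinality :db.cardinality/many}])

;; Transacting a session with two key/value pairs (conn assumed):
(comment
  (d/transact conn {:tx-data [{:session/id "abc-123"
                               :session/kv [["user-id" "42"]
                                            ["locale"  "en-AU"]]}]}))
```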

jaret 2021-01-14T12:49:47.001Z

Unfortunately, :db.type/bytes is not supported in Cloud or analytics. When we supported this value type in On-Prem we saw a number of problems due to the Java semantics of byte arrays, which we discuss here: https://docs.datomic.com/on-prem/schema.html#bytes-limitations

stuartrexking 2021-01-14T12:51:53.001200Z

Alright thanks.

jaret 2021-01-14T12:52:25.001400Z

If this is a feature you need I'd be happy to share the use case with the team if you want to provide details. If we can't provide that type perhaps we can provide another solution that meets your needs.

stuartrexking 2021-01-14T12:54:30.001600Z

I’m using a java lib for managing sessions and I’d like to store them in datomic. The sessions instances have an attribute map <object, object>. I wanted to be able to serialise the attribute map and store that in a session entity.

stuartrexking 2021-01-14T12:55:13.001800Z

Basically a container of data that is semantically opaque. 😛
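
One way to sketch that opaque container, given that `:db.type/bytes` is unavailable in Cloud: Java-serialize the session's attribute map and store the result as a Base64 string in a `:db.type/string` attribute. This is an assumption-laden workaround, not an official recommendation, and it requires every value in the map to be `java.io.Serializable`.

```clojure
(ns example.session-serde
  (:import (java.io ByteArrayInputStream ByteArrayOutputStream
                    ObjectInputStream ObjectOutputStream)
           (java.util Base64)))

;; Serialize an arbitrary Serializable object (e.g. a Map<Object,Object>)
;; to a Base64 string suitable for a :db.type/string attribute.
(defn serialize->string ^String [obj]
  (let [baos (ByteArrayOutputStream.)]
    (with-open [oos (ObjectOutputStream. baos)]
      (.writeObject oos obj))
    (.encodeToString (Base64/getEncoder) (.toByteArray baos))))

;; Decode and deserialize it back into an object.
(defn string->object [^String s]
  (with-open [ois (ObjectInputStream.
                   (ByteArrayInputStream.
                    (.decode (Base64/getDecoder) s)))]
    (.readObject ois)))
```

The usual caveats of Java serialization apply (class compatibility across versions, untrusted-input risks), so a format like EDN, Transit, or Nippy may be a better fit if the map's contents allow it.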

stuartrexking 2021-01-14T12:55:38.002Z

Might have to look at using a different storage mechanism for sessions.

stuartrexking 2021-01-14T12:56:57.002200Z

Unless you have a different suggestion @jaret

hkrish 2021-01-14T20:48:18.006200Z

Hello Datomic/Clojure experts, I am trying to pull all the relevant information regarding Employees in one query. First I get a vector of all the Employee maps. Then, using specter/transform or clojure.walk/postwalk, I process the vector of Employee maps and get the full maps using the :db/id's. The ref attributes are not defined as component attributes, but I need similar functionality. For this, I use a `(d/pull db '[*] db-id)` inside the Specter transform function (or with a postwalk function). But my pull with the above pull statement takes nearly 10 seconds or more to fetch the whole employee maps. The questions are:

1. Why is it taking so much time? I have maybe 200 employees at the moment. It is a Solo stack.
2. Is there a better/faster way to get the full maps from the :db/id's?

Thank you for any suggestions. See the code below; I have removed irrelevant lines.

```clojure
(let [employees
      [#:employee{:email "haroon_789@yahoo.com"
                  :last-name "smith"
                  :emplid "PLM0015"
                  :job #:db{:id 101155069757724}
                  :full-time? true
                  :first-name "Haroon"
                  :employee-type #:db{:id 79164837202211}
                  :gender-type #:db{:id 92358976735520}}
       #:employee{:email "frazer765@yahoo.com"
                  :last-name "smith"
                  :emplid "PLM0025"
                  :job #:db{:id 10115506975245}
                  :full-time? true
                  :first-name "Farhan"
                  :employee-type #:db{:id 79164837202211}
                  :gender-type #:db{:id 92358976735520}}
       ...]]
  ;; job :db/id is 101155069757724
  ;; (d/pull db '[*] 101155069757724)
  ;; I apply the logic below only to the map values with :db/id's.
  (specter/transform
   [ALL]
   (fn [each-map]
     (let [db-id (:db/id each-map)]
       (d/pull db '[*] db-id)))
   employees))
```

favila 2021-01-14T21:19:25.006300Z

If this is datomic cloud, this is slow because it is 600 blocking request+responses in a row

favila 2021-01-14T21:20:04.006500Z

This looks like `employees` already came out of a pull. Why not just pull everything in one go?

favila 2021-01-14T21:21:21.006700Z

`[* {:employee/job [*] :employee/employee-type [*] :employee/gender-type [*]}]`
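
Putting that nested pattern to work, a sketch of fetching all employees with their refs expanded in a single round trip. The `:employee/emplid` attribute is taken from the question; `db` is an assumed database value, and using it as the `:where` anchor is my guess at how employee entities are identified.

```clojure
;; One query instead of one pull per entity: the nested pattern expands
;; each ref attribute inline, so no follow-up pulls are needed.
(def employee-pattern
  '[* {:employee/job           [*]
       :employee/employee-type [*]
       :employee/gender-type   [*]}])

(comment
  (d/q '[:find (pull ?e pattern)
         :in $ pattern
         :where [?e :employee/emplid]]
       db employee-pattern))
```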

steveb8n 2021-01-14T21:23:47.007100Z

I'm with John on this. If it was a reversible flag in case some lib doesn't work, that would add a lot of confidence when trying this migration

kschltz 2021-01-14T21:58:33.011500Z

I'm facing a few recurring issues with Datomic Cloud write latencies and index memory usage. In our current setup we are transacting one event at a time, throttling them to avoid overloading our transactor. I was wondering whether we would benefit from grouping our events before transacting, or whether that is not necessarily the case?

favila 2021-01-14T23:36:21.011700Z

Rule of thumb is to aim for transaction sizes of 1000-2000 datoms if you actually can control how changes are grouped
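
A minimal sketch of that grouping, assuming the events have already been turned into tx-data maps. The batch size here counts tx-data entries, not datoms, so it would need tuning toward the 1000-2000 datom rule of thumb; `conn` and the `datomic.client.api` alias `d` are assumed bindings.

```clojure
;; Pure helper: chunk tx-data into fixed-size batches.
(defn batches [batch-size tx-data]
  (map vec (partition-all batch-size tx-data)))

;; Transact each batch serially, one request per batch instead of
;; one request per event. `conn` is an assumed Datomic connection.
(defn transact-in-batches! [conn batch-size tx-data]
  (doseq [b (batches batch-size tx-data)]
    (d/transact conn {:tx-data b})))
```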

favila 2021-01-14T23:40:03.011900Z

when you say “issues”, what problem are you facing?

kschltz 2021-01-14T23:40:41.012100Z

transactions failing from time to time due to "busy indexing"

favila 2021-01-14T23:58:35.012300Z

could your transactor just be undersized for your rate of novelty? is this a regular thing or something you only encounter during bulk operations?

kschltz 2021-01-14T23:59:11.012500Z

We are running the biggest machines available

kschltz 2021-01-14T23:59:33.012700Z

and ensuring a delay of 50ms between each transact call