datomic

Ask questions on the official Q&A site at https://ask.datomic.com!
wegi 2021-05-06T08:14:30.469200Z

Thanks for the answers :thumbsup:

danm 2021-05-06T08:26:22.473400Z

Can anyone give any rough expected timings on Datomic Cloud operations (primarily datomic.client.api/q and datomic.client.api/transact)? We're seeing about 60ms for q and 40ms for transact in the best case, which doesn't seem unreasonable given the overhead of an HTTP connection from the client to the transactor etc., especially since we're going over a VPC boundary via a VPC endpoint pointing to the NLB. (We didn't want to have to recreate all our existing infra, the Datomic Cloud templates require creating a new VPC, and we're expecting to need to distribute the DBs across multiple clusters in future anyway.) But it'd be useful to know if that's in the right ballpark or if most people are seeing much lower timings, especially if the VPC endpoint link is a probable cause.
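For comparison, a minimal timing wrapper over the Client API can isolate the round-trip latency of q and transact. This is a sketch only: the client config, db-name, and query here are placeholders, and it assumes an already-provisioned Datomic Cloud system.

```clojure
(require '[datomic.client.api :as d])

;; Placeholder config -- substitute your own system/region/endpoint.
(def client (d/client {:server-type :cloud
                       :region      "us-east-1"
                       :system      "my-system"
                       :endpoint    "https://entry.my-system.us-east-1.datomic.net:8182"}))

(defn timed-ms
  "Run f, returning {:result ... :elapsed-ms ...}."
  [f]
  (let [start  (System/nanoTime)
        result (f)]
    {:result result
     :elapsed-ms (/ (- (System/nanoTime) start) 1e6)}))

(comment
  (let [conn (d/connect client {:db-name "my-db"})
        db   (d/db conn)]
    ;; Warm up first; the initial call pays connection-setup cost.
    (timed-ms #(d/q {:query '[:find (count ?e) :where [?e :db/ident]]
                     :args  [db]}))))
```

Running the measurement a few times after a warm-up call matters: the first q or transact on a fresh connection includes TLS and routing setup and is not representative of steady-state latency.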

hanDerPeder 2021-05-06T10:05:00.475300Z

what's the idiomatic way of modelling an ordered collection? an attribute with cardinality-many does not enforce order, right? do you model each item in the collection as an entity with next/prev attributes? any helpers for this or do people just roll their own linked list?

tatut 2021-05-06T10:14:08.475400Z

there’s a thread about this https://forum.datomic.com/t/handling-ordered-lists/305/3

1🙏
tatut 2021-05-06T10:15:00.475800Z

but I don’t think there’s a “one size fits all” solution, it depends on what you need

hanDerPeder 2021-05-06T10:16:35.476Z

thanks, just wanted to double check I wasn’t reinventing the wheel here

tatut 2021-05-06T10:17:24.476200Z

I find that users usually want things either alphabetically or chronologically (or sorted on some column for table listings)… so explicit order needs are luckily rare in my experience

hanDerPeder 2021-05-06T10:20:27.476400Z

my use case is a task-list the user has prioritised herself. so order is kind of the point 🙂 storing an index/priority attribute seems to be the way to go
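The index/priority-attribute approach mentioned above can be sketched as schema plus a sorted read. All names here (:task/title, :task/priority) are illustrative, not from any official example:

```clojure
(require '[datomic.client.api :as d])

;; Illustrative schema: each task carries an explicit priority number.
(def task-schema
  [{:db/ident       :task/title
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one}
   {:db/ident       :task/priority        ; lower number = higher in the list
    :db/valueType   :db.type/long
    :db/cardinality :db.cardinality/one}])

(defn tasks-in-order
  "Return [eid title priority] tuples sorted by the user's ordering."
  [db]
  (->> (d/q {:query '[:find ?e ?title ?p
                      :where
                      [?e :task/title ?title]
                      [?e :task/priority ?p]]
             :args [db]})
       (sort-by last)))
```

One practical trade-off: reordering an item means retransacting priorities. Spacing the numbers out (10, 20, 30, ...) lets you insert between two items without renumbering the whole list most of the time.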

joshkh 2021-05-06T11:40:41.483100Z

i've stood up a new Query Group to be used for Datomic Analytics, but after the stack updated i no longer see my catalogue in presto. we were previously using the default Analytics Endpoint and our catalogue was available for queries.
1. created a new Query Group called test-analytics
2. opened the compute stack CF template and entered the Query Group name test-analytics for the Analytics Endpoint field value
3. saved the stack and waited for the deployment to complete

presto> select * from system.metadata.catalogs;
 catalog_name | connector_id
--------------+--------------
 system       | system
(1 row)

the catalogue existed when the Analytics Endpoint value was empty (it defaults to the system name), and queries worked fine. i've also tried restarting the gateway and resynchronizing the metaschema with no luck. i can see that the gateway itself is running via SHOW TABLES FROM system.runtime; any tips for troubleshooting this? thanks!

Joe Lane 2021-05-06T11:54:02.484200Z

@joshkh You need to pass the endpoint url for the QG not the name of the QG.

joshkh 2021-05-06T12:28:32.484300Z

thanks, i was thrown off by the parameter description that says to provide the query group name:
> Provide the name of a query group if you'd like analytic queries to go to a different endpoint. Defaults to system name.
the endpoint url is the EndpointAddress parameter from the query group CF output, right?

entry.<query-group-name>.<region>.datomic.net:8182
i've tried this as the Analytics Endpoint value in the compute stack template and still have the same problem of the missing catalogues.

Joe Lane 2021-05-06T12:36:43.485400Z

Have you done the CLI sync dance? Those nodes wouldn't have the catalogs in them unless you did.

joshkh 2021-05-06T12:37:32.485600Z

i'll try again, maybe i did things out of step this time 🙂

joshkh 2021-05-06T12:49:53.485800Z

we tried resyncing the metaschema again, no luck

xceno 2021-05-06T12:58:41.492Z

I need some advice regarding storage: I've got some multi-dimensional vectors holding integers or doubles (dtype-next tensors, to be specific) and I wonder how to save those in Datomic. What I thought about is saving a tuple of the dimensions, say :some-ns/shape [3 5 4], and storing the raw byte-buffer of a tensor along with it. I don't need to query the contents and would disable history for it too. Is that a viable idea, or should I rather serialize it and store the blob? If it's the latter: are there examples of using an S3 bucket from inside a Datomic ion?

xceno 2021-05-07T14:24:03.003Z

Okay I went with S3 and aws-api. That was way easier than I thought it would be, thanks!
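For anyone following along, the S3 + aws-api route can look roughly like this. A hedged sketch: bucket and key names are placeholders, and inside an ion you'd rely on the node's IAM role for credentials rather than configuring any yourself.

```clojure
(require '[cognitect.aws.client.api :as aws])

;; aws-api picks up credentials from the default provider chain
;; (the ion node's instance role, in the Datomic Cloud case).
(def s3 (aws/client {:api :s3}))

(defn put-tensor!
  "Store a tensor's raw bytes under key k. Bucket/key are placeholders."
  [bucket k ^bytes buf]
  (aws/invoke s3 {:op :PutObject
                  :request {:Bucket bucket :Key k :Body buf}}))

(defn get-tensor
  "Fetch the stored bytes back; :Body in the response is an InputStream."
  [bucket k]
  (-> (aws/invoke s3 {:op :GetObject
                      :request {:Bucket bucket :Key k}})
      :Body))
```

The shape tuple (e.g. :some-ns/shape [3 5 4]) and the S3 key can then live as ordinary attributes on the Datomic entity, keeping the large blob itself out of the database.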

Joe Lane 2021-05-06T13:09:41.492100Z

Hmm. Can you open a support case so we can track this on official channels?

joshkh 2021-05-06T13:12:17.492300Z

absolutely, thanks for the first attempt Joe!

kenny 2021-05-06T15:01:47.492600Z

The canonical answer here is "it depends" 🙂