Ask questions on the official Q&A site at https://ask.datomic.com!
tatut 2021-05-07T05:22:24.493400Z

the Java SDK or Cognitect's aws-api both work for S3 access

tatut 2021-05-07T05:22:49.493600Z

just give the compute group's EC2 role permissions on the bucket

danm 2021-05-07T09:21:23.497600Z

Are there docs anywhere about the expected CPU use of queries vs transactions? Our current setup doesn't yet have query groups, and we're performing a lot more writes (i.e. transacts) than we are queries. I'm seeing CPU hitting 98+% on the transactors, and then everything falls over. I'm curious if creating a query group to offload the queries could/would drop CPU on the transactors a lot more than the ratio of queries/transacts would suggest, because maybe queries are a lot more CPU intensive?

danm 2021-05-07T10:47:00.499500Z

Also, is there documentation anywhere on all the standard graphs on the Datomic Cloud dashboard? Like, TxBytes. Is that a per second average or an aggregate of all the data transmitted since the last datapoint? I'm assuming the latter, as changing the dashboard period, and therefore the interval between datapoints, alters the value significantly.

danieroux 2021-05-07T13:27:27.002900Z

A wish question (I wish-and-hope-this-exists): Does anyone have something that allows me to edit a Datomic cloud database as a spreadsheet? Or as a simple CRUD app? We have a bunch of static information that we display to the internal users on Metabase - and they want to change the values they see.

xceno 2021-05-07T14:24:03.003Z

Okay I went with S3 and aws-api. That was way easier than I thought it would be, thanks!

respatialized 2021-05-07T14:41:40.004600Z

https://github.com/hyperfiddle/hyperfiddle This may be what you're looking for!

mafcocinco 2021-05-07T14:43:34.006200Z

In Datomic, what is the best-practice way to model this relationship: object A contains references (i.e. many instances) of object B and we want a field in object B to be unique within the context of object A. From the documentation, it does not seem like :db/unique (either with :db.unique/identity or :db.unique/value), by itself, is appropriate. Wondering how to correctly model this constraint within the Datomic Schema.

Joe Lane 2021-05-07T14:55:39.006600Z

@mafcocinco Look into using :db.unique/identity tuples for this, either heterogeneous or composite. Also, depending on how many "many instances" is, maybe B should point to A ?
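Joe's composite-tuple suggestion could be sketched roughly like this (the attribute names :b/parent and :b/name are hypothetical, standing in for the ref back to A and the field that must be unique within A):

```clojure
;; Hypothetical schema sketch: scope uniqueness of :b/name to its parent A
;; by having Datomic maintain a composite tuple of [parent, name].
[{:db/ident       :b/parent
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/one}
 {:db/ident       :b/name
  :db/valueType   :db.type/string
  :db/cardinality :db.cardinality/one}
 ;; Composite tuple, kept in sync by Datomic from the two attrs above.
 {:db/ident       :b/parent+name
  :db/valueType   :db.type/tuple
  :db/tupleAttrs  [:b/parent :b/name]
  :db/cardinality :db.cardinality/one
  :db/unique      :db.unique/identity}]
```

Transacting two B entities with the same :b/name under the same :b/parent would then conflict on :b/parent+name, while the same name under different parents remains fine.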

mafcocinco 2021-05-07T14:57:00.006900Z

True. It doesn't matter which direction the index points, and that would probably be easier.

Joe Lane 2021-05-07T14:58:26.007100Z

How many is "many instances"? The answer to which direction it should go depends on the required selectivity of the access patterns. Again, all predicated on "many instances" 🙂

kenny 2021-05-07T17:53:39.007600Z

Is there a way for me to know which Datomic Cloud query group node a client api request went to?

ghadi 2021-05-07T17:56:38.007900Z

xy problem

ghadi 2021-05-07T17:57:08.008500Z

groans "what are you actually trying to solve?"

kenny 2021-05-07T17:58:51.009500Z

Actually lol'ed 🙂 Knew this was coming.

Joe Lane 2021-05-07T17:59:03.010200Z

I'm sensing a new precursor to "Everybody drink"

kenny 2021-05-07T18:00:55.011600Z

We are receiving ~20 datomic client timeouts all on the exact same d/pull call within a 3 minute window, which is surprising because that call doesn't actually pull that much data. I was curious if the node those client api requests went to was overwhelmed.

Joe Lane 2021-05-07T18:02:24.012100Z

Check your dashboard, do you have any throttle events?

kenny 2021-05-07T18:02:47.012500Z

Not at that time. The query is set to a 15s timeout and it's hitting that on every one of those calls.

Joe Lane 2021-05-07T18:03:58.012700Z

I thought it was a pull?

kenny 2021-05-07T18:04:51.013200Z

It's a query with a pull. e.g.,

(d/q {:query   '[:find (pull ?p [* {::props-v1/filter-set [*]}])
                 :where [_ :customer/prop-group1s ?p]]
      :args    [db]
      :timeout 15000})

Joe Lane 2021-05-07T18:09:29.014700Z

Were these against the same database?

kenny 2021-05-07T18:11:09.015300Z

All but 2.

ghadi 2021-05-07T18:11:22.015800Z

does that same exact pull call happen at other times of the day?

kenny 2021-05-07T18:11:57.016600Z

That query will always return a seq of 3 maps with < 20 total datoms.

ghadi 2021-05-07T18:12:23.017400Z

how long does it ordinarily take outside the problem window?

kenny 2021-05-07T18:12:40.017700Z

< 200ms

ghadi 2021-05-07T18:12:53.018Z

cool cool...

kenny 2021-05-07T18:12:53.018200Z

avg maybe 50ms.

ghadi 2021-05-07T18:14:39.019500Z

can you launch that pull concurrently (futures / threads) and reproduce the issue?
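Ghadi's concurrent repro could look something like this sketch (query-map stands in for the d/q argument map from the thread above, and a running Datomic client connection is assumed):

```clojure
;; Fire the same query from n threads at once and tally the outcomes;
;; query-map is assumed to be the d/q argument map discussed above.
(require '[datomic.client.api :as d])

(let [n     20
      calls (doall (repeatedly n
                     #(future
                        (try (do (d/q query-map) :ok)
                             (catch Exception _ :timeout)))))]
  (frequencies (map deref calls)))
```

If the timeouts only reproduce under concurrency, that points at node saturation rather than the query itself.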

Joe Lane 2021-05-07T18:16:27.020500Z

Try ^^ against a different QG of size 1 and look at its dashboard.

ghadi 2021-05-07T18:17:04.021Z

one of the lovable perks of infinite read scaling

ghadi 2021-05-07T18:17:37.021300Z

that will at least tell you if the synchronicity is significant

Joe Lane 2021-05-07T18:20:40.023500Z

Maybe your on-demand DDB table wasn't provisioned for that demand?

kenny 2021-05-07T18:50:27.026700Z

From looking at the query group dashboard, I can see that the group was overwhelmed at the time: min CPU of 99 & max of 100. There were only 2 nodes in the group. I also observe that at least one other query resulted in a 50.4k result count. The overwhelmed system simply manifests itself in those frequent but small queries. Thinking the fix is to scale the system up at the time of the 50.4k query. Separately, does the Query Result Counts graph show the number of datoms a query returns or something else?

Joe Lane 2021-05-07T18:53:19.027400Z

That graph shows the number of results, not datoms. A result can be many datoms.

Joe Lane 2021-05-07T18:53:56.028300Z

Instead of scaling the qg up, can you make a separate QG for that other query so they don't affect each other?

kenny 2021-05-07T18:54:37.028400Z

So if that query is pull'ing in the :find, it could actually be some scalar * the reported number?

Joe Lane 2021-05-07T18:56:10.029500Z

Assuming all the results are uniform, yes, that many datoms would be returned. Datoms isn't really the right measurement here though.

kenny 2021-05-07T18:56:19.029900Z

Yes, that is an option. I'd like a bit more data on which queries are causing that huge result set. I have a couple ideas but need more data to know how to split. Why would you tend to prefer splitting over scaling?

Joe Lane 2021-05-07T18:57:06.030200Z

Yep, but beyond that, these sound like different kinds of workloads.

kenny 2021-05-07T18:57:42.030400Z

"that many" is scalar * reported number, assuming uniform?

mafcocinco 2021-05-07T18:58:32.030600Z

10 or less.

Joe Lane 2021-05-07T18:59:05.030800Z

If I know each pull returns exactly 3 datoms, then the returned datoms is: reported number * 3 = "that many datoms"

mafcocinco 2021-05-07T18:59:28.031Z

as a guess. A is an environment for our testing platform and B is the metadata for each service that will be tested in that environment. Our platform currently consists of ~8 services and I don't see that number going up significantly.

kenny 2021-05-07T18:59:39.031200Z

Yeah, they kind of are.

Joe Lane 2021-05-07T19:00:43.031800Z

Then performance doesn't matter here and you should do whatever is most convenient for you. That entire dataset will fit in memory, yay!

kenny 2021-05-07T19:01:02.032300Z

Another option I've been considering is "filling out" my query group with spot instances. It's likely that would solve this problem as well, at a fraction of the cost.

Joe Lane 2021-05-07T19:01:14.032400Z

Is one of them a scheduled batch job? You can always spin the QG up just for that job 🙂

Joe Lane 2021-05-07T19:01:59.032800Z

"this problem" <- you know what I'm going to ask.

kenny 2021-05-07T19:04:49.033800Z

Getting timeouts due to hitting peak capacity.

kenny 2021-05-07T19:05:19.034500Z

e.g., cpu spikes to near 100, some small number of queries timeout, then the event is over.

Joe Lane 2021-05-07T19:09:33.037Z

> Getting timeouts due to hitting peak capacity

^^ That is a symptom, and we still don't know why it occurred, do we? FWIW, a shorter timeout on your pulls with a retry wrapped around it would also alleviate the above symptom, because the request would (eventually, but how unlucky can you be?) be routed to a different node.
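A minimal sketch of that retry idea, assuming the Datomic client throws on timeout (the helper name and default values here are illustrative, not from the thread):

```clojure
;; Retry a d/q with a short per-attempt timeout; a retried request may be
;; routed to a different, less-loaded query group node.
(require '[datomic.client.api :as d])

(defn q-with-retry
  [query-map & {:keys [attempts timeout-ms] :or {attempts 3 timeout-ms 2000}}]
  (loop [remaining attempts]
    (let [res (try
                {:value (d/q (assoc query-map :timeout timeout-ms))}
                (catch Exception e
                  (if (> remaining 1) ::retry {:error e})))]
      (cond
        (= res ::retry) (recur (dec remaining))
        (:error res)    (throw (:error res))
        :else           (:value res)))))
```

Note the trade-off: worst-case total latency is attempts × timeout-ms, so both knobs need tuning against the caller's deadline.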

kenny 2021-05-07T19:11:26.037800Z

So I can reproduce the query result by calling count on the result of d/q?

kenny 2021-05-07T19:11:39.038200Z

Fair. My hypothesis is those 50.4k queries. I'm betting there are multiple of them.

kenny 2021-05-07T19:12:20.039Z

& there's only 2 nodes in the group at the event time. So if both nodes are processing 1+ 50.4k queries, perhaps pretty unlucky.

Joe Lane 2021-05-07T19:14:16.040100Z

So there are only 2 nodes in the QG and there are 2 queries returning 50.4k results being issued at the same time?

kenny 2021-05-07T19:50:25.041100Z

I don't know for certain since I don't have that data instrumented right now, but yes, it is likely. There are up to 5 queries of that size that could all run in the same 10s window.

kenny 2021-05-07T19:50:49.041200Z

d/pull is not included then?

kenny 2021-05-07T23:40:27.042200Z

Datomic Cloud currently uses the older launch configuration setup in creating ASGs so a mixed group of Spot & On-Demand is not possible 😢 I created a feature request here: https://ask.datomic.com/index.php/607/use-launch-template-instead-of-launch-configuration.