graphql

Matt Butler 2020-04-28T12:11:42.055500Z

Is there a way to get a better idea of what's happening during execute-query, other than the :timings key? Its returned value appears to be a lot shorter than the overall execution time (based on the gap between logs from the pedestal/query-executor interceptor and check-terminators) for large data sizes.

hlship 2020-04-28T15:02:07.056700Z

You're not seeing the time to convert EDN data to a JSON byte stream, for example.

Matt Butler 2020-04-28T15:12:28.057400Z

Does that not happen inside the json-response interceptor?

hlship 2020-04-28T15:32:31.059300Z

When query-executor is done, you have a large data structure in memory; I don't believe you'll have any unresolved lazy seqs left at that point, because Lacinia is walking lists to perform validation. In any case, that's the time that Lacinia takes to execute, but another chunk of processing, which won't show up in timings, is how long it takes to convert that chunk of data into a bytestream and pump it down to the client.
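To see how much of the wall-clock time that encoding step accounts for, one option is to time the JSON conversion in isolation at the REPL. A minimal sketch, assuming cheshire (the JSON library lacinia-pedestal uses) is on the classpath; the function name is illustrative:

```clojure
(require '[cheshire.core :as json])

;; Time just the EDN -> JSON string conversion for a query result.
;; `result` is whatever Lacinia's execute returned.
(defn timed-json-encode [result]
  (let [start (System/nanoTime)
        s     (json/generate-string result)
        ms    (/ (- (System/nanoTime) start) 1e6)]
    (println "JSON encoding took" ms "ms for" (count s) "chars")
    s))
```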

Matt Butler 2020-04-28T16:39:37.060700Z

Is there a way to measure just the query executor time then?

Matt Butler 2020-04-28T16:41:49.063900Z

My suspicion is that because I've got 100k objects, each with multiple fields, I'm losing time on, as you say, field validation etc. But it would be nice to prove it, rather than just assume that's the difference between the :timings key value and the time between pedestal's query-executor interceptor log and the following interceptor's log.
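One way to prove it is to call Lacinia directly, bypassing Pedestal and the JSON encoding entirely. A minimal REPL sketch, assuming a compiled schema is at hand; com.walmartlabs.lacinia/execute takes the schema, query string, variables, and context:

```clojure
(require '[com.walmartlabs.lacinia :as lacinia])

;; Measure Lacinia's execution (resolution plus selection/validation)
;; in isolation, with no interceptors or JSON encoding involved.
(defn time-execute [compiled-schema query-string]
  (let [start  (System/nanoTime)
        result (lacinia/execute compiled-schema query-string nil nil)
        ms     (/ (- (System/nanoTime) start) 1e6)]
    (println "execute took" ms "ms")
    result))
```

Comparing this number against the :timings resolver entry would isolate the per-field selection overhead from the resolver's own cost.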

Matt Butler 2020-04-28T16:43:29.066Z

Unless I'm misunderstanding and Lacinia itself is doing some json conversion (rather than a lacinia.pedestal interceptor).

hlship 2020-04-28T17:30:07.066700Z

lacinia does not do any IO or encoding, beyond what's necessary to convert scalar values back and forth.

hlship 2020-04-28T17:31:43.068500Z

In our stack, we have a logging interceptor around the lacinia pipeline (which also includes a few other things, like some auth), and the numbers seem to line up pretty nicely. The thing with timings is that not everything is captured (by design, since that would be a lot of noise), and if things are happening in parallel, the way they should, then the elapsed times in the timings will not add up to the total entry-to-exit time.

Matt Butler 2020-04-28T19:13:53.075800Z

So in my case there is only 1 custom resolver, which returns a long seq of maps (100k entries). Therefore the only resolver I see in the timings is my custom one, which says it takes around 500ms, whereas the total time the query appears to be executing for is ~3s (measuring from the lacinia-pedestal interceptors). Is it reasonable to assume this is Lacinia doing type checking on the return value and the associated overhead, or are there other overheads one should consider when dealing with large data structures in Lacinia?

Matt Butler 2020-04-28T19:14:00.076Z

Thanks for the help btw 👍 🙂

Matt Butler 2020-04-28T19:14:06.076200Z

Much appreciated

hlship 2020-04-28T23:29:49.076900Z

You should probably put your own interceptor in, to take the measurements you need.
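A minimal sketch of such a measuring interceptor, using io.pedestal.interceptor; the ::timing name and context key are made up, and where you inject it into the lacinia-pedestal chain (e.g. via com.walmartlabs.lacinia.pedestal/inject, around the query-executor interceptor) determines what the measurement covers:

```clojure
(require '[io.pedestal.interceptor :refer [interceptor]])

;; Records wall-clock time from :enter to :leave. Placed just before
;; query-executor, it measures Lacinia execution; placed at the outside
;; of the chain, it measures the full request including JSON encoding.
(def timing-interceptor
  (interceptor
    {:name ::timing
     :enter (fn [context]
              (assoc context ::start-ns (System/nanoTime)))
     :leave (fn [context]
              (let [ms (/ (- (System/nanoTime) (::start-ns context)) 1e6)]
                (println "elapsed:" ms "ms"))
              context)}))
```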

hlship 2020-04-28T23:30:44.077900Z

Like I said, once Lacinia is done with its work, there's a bit more work to turn the result into an HTTP response. Depending on where you do your measuring, you can figure out which part is where.

hlship 2020-04-28T23:31:27.078600Z

Generally speaking, once Lacinia is done resolving and selecting, there's nothing left to be lazy (all lazy seqs will have been fully walked for various reasons).