We use it just to look at timings by hand; it's not out in production just yet.
I hope the new version, in 0.36.0-alpha-1, will be even easier to use. If I get a chance, I want to try to create a visualizer script for the data.
Would there be any way to log the various timings in a structured way, so I could use, for example, CloudWatch or some other log parser to see meaningful data? Would that be something I could/should add to a Pedestal interceptor?
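Something like this might work as a starting point — a rough sketch, assuming an earlier interceptor has stashed the Lacinia result map on the context as `:graphql-result` (a made-up key), and that tracing was enabled so the Apollo-format data sits under `[:extensions :tracing]`. `io.pedestal.log` emits key/value pairs, which a JSON log appender could forward to CloudWatch as structured fields:

```clojure
(require '[io.pedestal.interceptor :refer [interceptor]]
         '[io.pedestal.log :as log])

(def log-tracing-interceptor
  (interceptor
    {:name ::log-graphql-tracing
     :leave (fn [context]
              ;; :graphql-result is an assumed context key; adjust to
              ;; wherever your pipeline attaches the Lacinia result map.
              (when-let [tracing (get-in context [:graphql-result :extensions :tracing])]
                ;; Structured key/value logging; a JSON appender can ship
                ;; these fields to CloudWatch for querying.
                (log/info :event ::graphql-tracing
                          :duration-ns (:duration tracing)
                          :tracing tracing))
              context)}))
```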
I am using the timing. We track the total time for GraphQL operations as sort of the key metric for this app, and when that spikes my first question is: are all GraphQL operations spiking, or just some big outliers? So far it mostly hasn't been big outliers, and the big spikes are the result of other issues, but I am tracking the timings.
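For reference, that measurement is just wall-clocking the whole execution — a minimal sketch, where `record-metric!` is a hypothetical hook for whatever metrics backend the app uses, not a Lacinia API:

```clojure
(require '[com.walmartlabs.lacinia :as lacinia])

(defn timed-execute
  "Executes the query and reports total operation time in ms.
  record-metric! is a hypothetical metrics hook."
  [schema query variables context record-metric!]
  (let [start (System/nanoTime)
        result (lacinia/execute schema query variables context)
        elapsed-ms (/ (- (System/nanoTime) start) 1e6)]
    (record-metric! :graphql/operation-time-ms elapsed-ms)
    result))
```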
We're mostly interested in seeing if resolution is async the way it should be.
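For anyone unfamiliar, "async resolution" on the resolver side looks roughly like this — a sketch where the resolver returns a ResolverResult promise immediately and delivers the value from another thread; `fetch-user` is a hypothetical blocking data-fetch function:

```clojure
(require '[com.walmartlabs.lacinia.resolve :as resolve])

(defn make-user-resolver
  "Builds a field resolver that resolves asynchronously instead of
  blocking the execution thread. fetch-user is hypothetical."
  [fetch-user]
  (fn [context args value]
    (let [result (resolve/resolve-promise)]
      (future
        ;; Deliver the value once the (possibly slow) fetch completes.
        (resolve/deliver! result (fetch-user (:id args))))
      result)))
```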
I haven't noticed any blocking, but I have the callback executor bound to a 16-thread executor, so I am not sure I would notice.
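For anyone following along, that binding looks roughly like this — a sketch assuming `com.walmartlabs.lacinia.resolve/*callback-executor*` is the dynamic var in play (it accepts any `java.util.concurrent.Executor`); `f` stands in for the app's own query-execution code:

```clojure
(require '[com.walmartlabs.lacinia.resolve :as resolve])
(import '(java.util.concurrent Executors))

;; A fixed pool so resolver callbacks are dispatched off the thread
;; that delivered the value.
(def callback-pool (Executors/newFixedThreadPool 16))

(defn with-callback-pool
  "Runs f (the app's query-execution code, hypothetical) with Lacinia's
  callback executor bound to the 16-thread pool."
  [f]
  (binding [resolve/*callback-executor* callback-pool]
    (f)))
```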
I've been chasing a race condition all day that appears actually to have been a bug in our tests, where a callback was executed on an already-terminated thread pool executor.
After a series of yak shaves, I've ended up with an interesting change to Lacinia that supports setting a timeout on the execution. I'm interested in what people think of this PR: https://github.com/walmartlabs/lacinia/pull/301
For the execution changes: other than the timeout, I more or less run Lacinia like this already, because I am calling execute-parsed-query-async directly and not immediately derefing the returned value.
But that also means the timeout feature is added at a layer above how I am interacting with the library, so really no change for me, which is great 🙂
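For context, the pattern being described — calling `execute-parsed-query-async` and attaching a callback rather than blocking — looks roughly like this. The promise-plus-`deref`-timeout at the end is just a generic way a layer above could impose a timeout, not necessarily the mechanism from the PR:

```clojure
(require '[com.walmartlabs.lacinia :as lacinia]
         '[com.walmartlabs.lacinia.resolve :as resolve])

(defn execute-async-with-timeout
  "Executes an already-parsed query without blocking, then waits up to
  timeout-ms for delivery; returns ::timeout if nothing arrives in time.
  A generic sketch, not the PR's implementation."
  [parsed-query variables context timeout-ms]
  (let [p (promise)
        result (lacinia/execute-parsed-query-async parsed-query variables context)]
    ;; on-deliver! invokes the callback once the result map is ready.
    (resolve/on-deliver! result (fn [value] (deliver p value)))
    (deref p timeout-ms ::timeout)))
```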