Hi everyone 👋
I have a query that takes 4ms to run on Postgres (according to EXPLAIN ANALYZE) yet my server that queries the db says it takes 600ms to run the same query. Both are hosted on GCP in the same region (`us-central`) although on separate projects. Any thoughts on why there's such a large discrepancy? It's unlikely to be network time right?
Thanks in advance 🙏
PS: Our server simply times the start and end of `next.jdbc/execute!`, so there could be other stuff going on contributing to the discrepancy:
```clojure
(jdbc/execute!
  ds
  query
  {:builder-fn njrs/as-unqualified-lower-maps})
```
@ronny463 Does it return a large number of rows? What is `ds`? Are you using connection pooling or just making that datasource from a db-spec hash map?
I'm sure you're not surprised to hear this but connection pooling took off 480 out of the 600ms. Thanks again for your help!
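For readers following along, a minimal sketch of enabling pooling via next.jdbc's HikariCP support (`com.zaxxer/HikariCP` must be on the classpath; the db-spec values here are illustrative, and note HikariCP wants `:username` rather than `:user`):

```clojure
(require '[next.jdbc.connection :as connection])
(import 'com.zaxxer.hikari.HikariDataSource)

;; Illustrative db-spec; HikariCP expects :username instead of :user.
(def db-spec {:dbtype   "postgresql"
              :dbname   "mydb"
              :host     "10.0.0.5"
              :username "app"
              :password "secret"})

;; ->pool builds a pooled datasource: connections are reused instead of
;; being opened and torn down on every execute! call.
(def ds (connection/->pool HikariDataSource db-spec))

;; The pool should be closed on application shutdown:
;; (.close ^HikariDataSource ds)
```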
@ronny463 I am not surprised 🙂 Also, if you are able to rewrite your process as a `reduce` or transduction over a call to `plan` instead of `execute!`, you will be able to avoid all the overhead of building Clojure hash maps from `ResultSet` rows.
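A sketch of what that could look like, assuming a query that selects an `amount` column we want to sum (the table and column names are hypothetical):

```clojure
(require '[next.jdbc :as jdbc])

;; plan returns a reducible: rows are streamed from the ResultSet, and
;; when the reducing function only does keyword lookups on each row,
;; no Clojure hash maps are ever built.
(transduce
  (map :amount)   ; read just the :amount column from each row
  +
  0
  (jdbc/plan ds ["SELECT amount FROM orders"]))  ; hypothetical query
```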
Hi Sean, we're returning about 1400 rows, around 150kb.
`ds` is created from `jdbc/get-datasource`
To be honest I'm not sure if we're using connection pooling
What is `get-datasource` called on?
If you're not using connection pooling, then the `execute!` call is going to have to set up a database connection, run the query, convert all the rows to Clojure hash maps, and then tear down the database connection.
ah, we either use the db-spec hash map or we generate a JDBC url (depending on the environment)
in this particular case I was using the db-spec hash map
So that's probably what the extra time is being spent on.
okay, got it. We were leaning towards the time being spent on converting the rows but it sounds like the more likely culprit is creating the connection?
Easy enough to make the connection first and then just time the execute call 🙂
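For example, reusing `ds` and `query` from above: open the connection with `jdbc/get-connection` outside the timed section, so `time` measures only query execution plus row building (a sketch, not the thread's exact code):

```clojure
(require '[next.jdbc :as jdbc]
         '[next.jdbc.result-set :as njrs])

;; Connection setup/teardown happens outside the timed form, so the
;; printed elapsed time excludes connection overhead.
(with-open [conn (jdbc/get-connection ds)]
  (time
    (jdbc/execute! conn query
                   {:builder-fn njrs/as-unqualified-lower-maps})))
```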