sql

All things SQL and JDBC...
Ronny Li 2021-04-28T20:16:02.125600Z

Hi everyone 👋 I have a query that takes 4ms to run on Postgres (according to EXPLAIN ANALYZE), yet my server that queries the db says it takes 600ms to run the same query. Both are hosted on GCP in the same region (`us-central`), although on separate projects. Any thoughts on why there's such a large discrepancy? It's unlikely to be network time, right? Thanks in advance 🙂 PS: Our server simply times the start and end of `next.jdbc/execute!`, so there could be other stuff going on contributing to the discrepancy.

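;; (aliases assumed: [next.jdbc :as jdbc] and [next.jdbc.result-set :as njrs])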
(jdbc/execute!
  ds
  query
  {:builder-fn njrs/as-unqualified-lower-maps})

seancorfield 2021-04-28T20:23:22.126400Z

@ronny463 Does it return a large number of rows? What is `ds`? Are you using connection pooling or just making that datasource from a db-spec hash map?

Ronny Li 2021-04-30T01:46:16.132800Z

I'm sure you're not surprised to hear this, but connection pooling shaved 480ms off the 600ms. Thanks again for your help!

seancorfield 2021-04-30T03:35:30.133Z

@ronny463 I am not surprised 🙂 Also, if you are able to rewrite your process as a reduce or transduction over a call to `plan` instead of `execute!`, you will be able to avoid all the overhead of building Clojure hash maps from ResultSet rows.

🤯 1
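A minimal sketch of the `plan` approach Sean describes, assuming a hypothetical `orders` table with `status` and `total` columns; `plan` returns a reducible, so no per-row hash maps are built unless you realize them.

(require '[next.jdbc :as jdbc])

(defn completed-total
  "Sums :total over completed orders, streaming rows via plan."
  [ds]
  (reduce
    (fn [acc row]
      ;; keyword access here reads the column straight off the ResultSet
      (if (= "completed" (:status row))
        (+ acc (:total row))
        acc))
    0
    (jdbc/plan ds ["select status, total from orders"])))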
Ronny Li 2021-04-28T20:24:32.126500Z

Hi Sean, we're returning about 1,400 rows, around 150KB. `ds` is created from `jdbc/get-datasource`. To be honest, I'm not sure if we're using connection pooling.

seancorfield 2021-04-28T20:26:17.126700Z

What is `get-datasource` called on?

seancorfield 2021-04-28T20:27:23.126900Z

If you're not using connection pooling, then the `execute!` call is going to have to set up a database connection, run the query, convert all the rows to Clojure hash maps, and then tear down the database connection.

👍 1
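For context, a hedged sketch of connection pooling with next.jdbc, assuming HikariCP (com.zaxxer/HikariCP) is on the classpath; the db-spec values are placeholders.

(require '[next.jdbc :as jdbc]
         '[next.jdbc.connection :as connection])
(import 'com.zaxxer.hikari.HikariDataSource)

;; Placeholder credentials; note HikariCP expects :username rather than :user.
(def db-spec
  {:dbtype "postgresql" :dbname "mydb"
   :host "127.0.0.1" :username "app" :password "secret"})

;; Build the pool once at startup and reuse it; each execute! then borrows
;; a warm connection instead of opening (and tearing down) a new one.
(def ^HikariDataSource pooled-ds
  (connection/->pool HikariDataSource db-spec))

(jdbc/execute! pooled-ds ["select 1"])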
Ronny Li 2021-04-28T20:27:24.127100Z

Ah, we either use the db-spec hash map or we generate a JDBC URL (depending on the environment).

Ronny Li 2021-04-28T20:27:32.127300Z

In this particular case I was using the db-spec hash map.
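For reference, a minimal db-spec of the kind being discussed (placeholder values); calling `get-datasource` on a plain map like this returns an unpooled datasource, so each `execute!` opens a fresh connection.

(require '[next.jdbc :as jdbc])

(def db-spec
  {:dbtype   "postgresql"
   :dbname   "mydb"
   :host     "127.0.0.1"
   :user     "app"
   :password "secret"})

(def ds (jdbc/get-datasource db-spec))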

seancorfield 2021-04-28T20:27:57.127500Z

So that's probably what the extra time is being spent on.

🙏 1
Ronny Li 2021-04-28T20:28:55.127700Z

Okay, got it. We were leaning towards the time being spent on converting the rows, but it sounds like the more likely culprit is creating the connection?

seancorfield 2021-04-28T20:55:55.128200Z

Easy enough to make the connection first and then just time the `execute!` call 😊

👍 1
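A sketch of that suggestion, reusing `ds`, `query`, and the builder option from the first snippet (aliases as assumed there): open the connection up front, then time only the `execute!` call.

(require '[next.jdbc :as jdbc]
         '[next.jdbc.result-set :as njrs])

;; Connection setup happens in with-open, outside the timed expression,
;; so `time` reports only the query + result-building cost.
(with-open [conn (jdbc/get-connection ds)]
  (time
    (jdbc/execute! conn query
                   {:builder-fn njrs/as-unqualified-lower-maps})))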