sql

All things SQL and JDBC...
practicalli-john 2021-05-24T20:32:24.026600Z

Given I generate a vector of a large number of hash-maps in a Clojure application, with keys and values that map to a record in a database table, if I want to insert/update the whole data structure into a relational database table, is there a preferred next.jdbc function to look at? e.g. insert-multi!, or plan (or is that just for results?). Or would it be more appropriate to iterate over the collection using clojure.core and call insert!? High performance is not an issue at this time, although it shouldn't blow up the heap 😉

practicalli-john 2021-05-25T21:31:32.030Z

I created a simple function with insert! inside a doseq:

;; jdbc is assumed to alias next.jdbc, and sql to alias next.jdbc.sql
(defn persist-all-subscription-results
  [subscription-results]
  (with-open [connection (jdbc/get-connection db-spec)]
    (doseq [result subscription-results]
      (sql/insert! connection :analysis result))))
This does insert the first hash-map into the database. If I read this correctly, it seems Postgres does not like the result and generates an error:
org.postgresql.util.PSQLException
   ERROR: syntax error at or near "-" Position: 30

    QueryExecutorImpl.java: 2553  org.postgresql.core.v3.QueryExecutorImpl/receiveErrorResponse
    QueryExecutorImpl.java: 2285  org.postgresql.core.v3.QueryExecutorImpl/processResults
    QueryExecutorImpl.java:  323  org.postgresql.core.v3.QueryExecutorImpl/execute
          PgStatement.java:  481  org.postgresql.jdbc.PgStatement/executeInternal
          PgStatement.java:  401  org.postgresql.jdbc.PgStatement/execute
Any pointers? Did I do something silly?

dpsutton 2021-05-25T21:33:59.030200Z

Shooting from the hip: you have kebab-case keys and need to convert them to snake_case.

seancorfield 2021-05-25T21:35:50.030400Z

@jr0cket If you’re using the latest next.jdbc, there should be a next.jdbc/snake-kebab-opts that you can pass into insert! as a final argument to do that automatically.
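A sketch of the suggested change to the doseq above (assuming jdbc aliases next.jdbc, and that the camel-snake-kebab library is on the classpath, which snake-kebab-opts relies on):

;; inside the doseq: pass the options map as the final argument so the
;; kebab-case keys in each hash-map become snake_case column names
(sql/insert! connection :analysis result jdbc/snake-kebab-opts)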

practicalli-john 2021-05-25T21:57:02.031Z

Oh yes, I should add the snake-kebab-opts thing as another argument to insert!. I seem to have thought next.jdbc did it automatically, whoops...

practicalli-john 2021-05-25T22:03:22.031200Z

Success. Thank you all.

seancorfield 2021-05-25T22:08:11.031400Z

I think c.j.j did too many automatic adjustments to column names, both going in and out, so next.jdbc defaults to doing nothing and leaves it up to users. If everything you are doing is kebab-case <-> snake_case then you can use with-options to create a connectable that automatically applies those options to operations (caveat: with-transaction and get-connection return unadorned Java objects that need wrapping via with-options again in those contexts).
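A rough sketch of the with-options approach, assuming the db-spec from the earlier snippet; the table and column keys here are made up for illustration:

(require '[next.jdbc :as jdbc]
         '[next.jdbc.sql :as sql])

;; wrap the datasource once; every friendly-SQL call through ds then
;; applies the kebab-case <-> snake_case conversion automatically
(def ds (jdbc/with-options (jdbc/get-datasource db-spec) jdbc/snake-kebab-opts))

(sql/insert! ds :analysis {:user-id 1 :result-code "ok"})

;; per the caveat above, with-transaction hands back a plain Java object,
;; so re-wrap it before calling the friendly SQL functions on it
(jdbc/with-transaction [tx ds]
  (sql/insert! (jdbc/with-options tx jdbc/snake-kebab-opts)
               :analysis {:user-id 2 :result-code "ok"}))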

seancorfield 2021-05-24T20:37:45.026900Z

The simplest way would be to just doseq over the vector and call insert! on each hash map — but it won’t be fast.

seancorfield 2021-05-24T20:39:46.027100Z

If you want a bit more speed, you could use insert-multi! but you’d have to transform the data a little: you can use (keys (first data)) to get the column names and then (mapv #(mapv % cols) data) to get the vector-of-vectors (the rows’ values) in the same order as the column names.
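A short sketch of that transformation, where data is the vector of hash-maps and connection comes from the earlier snippet:

(require '[next.jdbc.sql :as sql])

(let [cols (keys (first data))
      ;; keywords act as lookup functions, so this yields one vector of
      ;; values per hash-map, in the same order as cols
      rows (mapv #(mapv % cols) data)]
  (sql/insert-multi! connection :analysis cols rows))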

practicalli-john 2021-05-24T20:41:04.027400Z

Just seen for-insert-multi, would that be a possible approach too?

seancorfield 2021-05-24T20:41:35.027600Z

That’s just the SQL builder that insert-multi! uses.

practicalli-john 2021-05-24T20:42:18.027800Z

ah yes, that makes sense. Thank you Sean.

seancorfield 2021-05-24T20:42:28.028Z

(some folks asked for the SQL builders to be exposed as public, documented functions so they can customize the output/use it outside the friendly SQL functions)
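For illustration, a hedged example of calling the builder directly; the table and column names are assumptions:

(require '[next.jdbc.sql.builder :as builder])

;; returns the SQL vector that insert-multi! would execute: the
;; "INSERT INTO ... VALUES ..." string followed by all the row values
(builder/for-insert-multi :analysis
                          [:id :score]
                          [[1 0.9] [2 0.4]]
                          {})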

practicalli-john 2021-05-24T20:46:50.028200Z

I see there could be some SQL size limit with a very large number of hash-maps when using insert-multi!, but yes, I see that it would be more efficient. It's most likely a batch-like job, so doseq should be fine. Thanks again.
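If the statement size ever did become an issue, one sketch (with an arbitrary chunk size) would be to partition the vector and call insert-multi! per chunk:

(let [cols (keys (first data))]
  (doseq [chunk (partition-all 500 data)]   ; 500 rows per statement is an arbitrary choice
    (sql/insert-multi! connection :analysis cols
                       (mapv #(mapv % cols) chunk))))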

dpsutton 2021-05-24T20:46:59.028400Z

And a bit of an understandable error to watch out for: optional keys. As indicated by the (keys (first data)), any keys that aren't in the first map will not get added from later maps.
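A small sketch of guarding against that: derive the column list from every map rather than just the first, so keys that only appear in later maps still get a column (maps missing a key contribute nil, inserted as NULL):

(let [cols (vec (into #{} (mapcat keys) data))
      rows (mapv #(mapv % cols) data)]
  (sql/insert-multi! connection :analysis cols rows))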

practicalli-john 2021-05-24T20:48:05.028600Z

The hash-maps "should" all be the same, but a very good point. Thanks.