hugsql

richiardiandrea 2021-02-01T02:51:11.008100Z

I ended up with this monstrosity...I am still wondering if I am missing a better way here:

(defn append-files
  [db-spec-or-tx files]
  (doseq [file files]
    (let [{:keys [name mime-type byte-size content]} file
          generate-sqlvec (-> sqlvecs :append-file-blob-sqlvec :fn)
          ;; For BYTEA, we need to use PreparedStatement.setBinaryStream.
          ;; Note that you *must* pass the (possibly pre-computed) correct stream size. See:
          ;;   <https://jdbc.postgresql.org/documentation/head/binary-data.html>
          ;;
          ;; Additionally, it would be too invasive to extend all InputStreams; next.jdbc might
          ;; have a better solution to this problem.
          query (->> file (generate-sqlvec) first)
          prepared-statement (jdbc/prepare-statement (jdbc/get-connection db-spec-or-tx) query)]
      (jdbc/query db-spec-or-tx
                  ;; the order needs to match query VALUES
                  (doto ^PreparedStatement prepared-statement
                    (.setString (int 1) ^String name)
                    (.setString (int 2) ^String mime-type)
                    (.setInt (int 3) ^int byte-size)
                    (.setBinaryStream (int 4) ^InputStream content ^int byte-size))))))

seancorfield 2021-02-01T03:44:15.008400Z

@richiardiandrea That's with clojure.java.jdbc?

seancorfield 2021-02-01T03:45:30.009300Z

I would expect this to be a lot easier with next.jdbc which has a whole namespace of coercion functions to influence how parameters are set (`next.jdbc.types`).

👍 1
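For context, a minimal sketch of what the next.jdbc version could look like, using the next.jdbc.prepare/SettableParameter protocol so the pre-computed stream size is still passed to .setBinaryStream (the namespace, table, column names, and the SizedBinaryStream wrapper are illustrative, not from the thread):

(ns example.files
  (:require [next.jdbc :as jdbc]
            [next.jdbc.prepare :as p])
  (:import (java.io InputStream)
           (java.sql PreparedStatement)))

;; Hypothetical wrapper carrying the stream together with its pre-computed
;; size, so set-parameter can call .setBinaryStream with the known length.
(defrecord SizedBinaryStream [^InputStream stream ^long size])

(extend-protocol p/SettableParameter
  SizedBinaryStream
  (set-parameter [v ^PreparedStatement ps i]
    (.setBinaryStream ps (int i)
                      ^InputStream (:stream v)
                      (long (:size v)))))

(defn append-file
  [ds {:keys [name mime-type byte-size content]}]
  ;; illustrative table/column names
  (jdbc/execute-one! ds
    ["insert into file_blob (name, mime_type, byte_size, content) values (?, ?, ?, ?)"
     name mime-type byte-size (->SizedBinaryStream content byte-size)]))

With a wrapper like this, the parameters go in an ordinary sql-params vector and next.jdbc sets the BYTEA column via .setBinaryStream with the known length; the as-* helpers in next.jdbc.types cover the simpler case of forcing a particular java.sql.Types value via .setObject.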
richiardiandrea 2021-02-01T16:30:26.009500Z

yeah I was thinking about the move, but I am inheriting a code base and I am not yet ready for a port...I definitely am planning to do that though