Does JDBC sneak in timezone offsets when inserting postgres timestamp (without time zone) fields? I seem to be storing wrong datetime values when I have an app server inserting records into a db server running in a different timezone. FYI I’m using next.jdbc + datagrip
@achan961117 You need your DB, your server, and your JVM to all be UTC time and NTP-sync’d.
The postgres "session" (the connection) has its own timezone. You can set this via issuing some SQL command, I will need to look up the details.
Here's an overview: https://www.postgresqltutorial.com/postgresql-timestamp/
And another https://tapoueh.org/blog/2018/04/postgresql-data-types-date-timestamp-and-time-zones/
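As a sketch of setting the session timezone from Clojure (`ds` is assumed to be an existing next.jdbc datasource; the zone name is just an example):

```clojure
(require '[next.jdbc :as jdbc])

;; Each Postgres connection (session) has its own timezone; you can pin it
;; explicitly so the server interprets timestamps consistently:
(with-open [conn (jdbc/get-connection ds)]
  (jdbc/execute! conn ["SET TIME ZONE 'UTC'"])
  ;; SHOW TimeZone returns the session's current setting:
  (jdbc/execute-one! conn ["SHOW TimeZone"]))
```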
so what happens when they’re not? for instance, if the DB server is in UTC but the JVM client is in UTC+8, does that mean timestamp fields will be written to the DB with a -8 offset? and do queries from the DB apply a +8 offset? I’m asking to try and better understand Postgres x JDBC’s behaviour in these situations
We ran into an issue when trying to execute a JDBC statement with 56000 parameters 🙂 The query was something like delete from foo where id = ANY(?, ?, ?.....)
. I think this is probably a Postgres error, though it's hard to diagnose.
My good colleague played around and realized you could fix this "properly" by creating an SqlArray
and passing that in as a single parameter -- but for some reason creating SqlArrays requires access to the connection, which breaks the abstraction of keeping queries as data until the last moment.
We're using honeysql, and I was wondering if we could create some sentinel SQL array value that honeysql leaves alone, and then just before formatting the query into an SQL vec, walk the whole data structure and replace these sentinel values with the proper array type... Has anyone done anything similar?
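For reference, the SqlArray approach described above looks roughly like this (`ds` and the `foo` table are assumptions):

```clojure
(require '[next.jdbc :as jdbc])

;; Creating a java.sql.Array requires the live Connection, which is what
;; breaks the "queries as data until the last moment" abstraction:
(with-open [conn (jdbc/get-connection ds)]
  (let [arr (.createArrayOf conn "bigint" (into-array Long [1 2 3]))]
    ;; A single array parameter replaces thousands of ? placeholders:
    (jdbc/execute! conn ["delete from foo where id = ANY(?)" arr])))
```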
Hmm, so this issue spans next.jdbc, hikari-cp and integrant. In books.clj, when I try to create my books table it complains about the :dbtype keyword and then also complains about a classname not being present. I'm sure I'm just missing something small but I can't seem to catch it. https://gist.github.com/v3gal0g/59b8dfa98c9795683a19b69ecde08a03
Hi, has anyone managed to call a Postgres stored function with a jsonb argument from next.jdbc? I'm getting function f(unknown) does not exist
I'm using pgjdbc-ng alongside next.jdbc. I've tried it with both jsonb and text arguments:
create or replace function f_json(JSON_data jsonb) ...
create or replace function f_txt(JSON_str text) ...
With associated calls from Clojure:
(jdbc/execute! ds ["select f_json(?)" {:a 1 :b 2}])
(jdbc/execute! ds ["select f_txt(?)" (str (->json {:a 1 :b 2}))])
I get the same result in both cases: the argument type doesn't seem to be recognised and matched to the stored function. You need to look into next.jdbc.prepare/SettableParameter
I need to run but I have some examples lying around
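For the standard PostgreSQL driver (pgjdbc), the usual SettableParameter sketch looks like this; pgjdbc-ng ships different type classes, and cheshire is assumed here for JSON encoding:

```clojure
(require '[next.jdbc.prepare :as prepare]
         '[cheshire.core :as json])
(import '(org.postgresql.util PGobject)
        '(java.sql PreparedStatement))

;; Wrap Clojure maps as jsonb PGobjects before they reach the statement,
;; so Postgres can match the parameter type to the stored function:
(extend-protocol prepare/SettableParameter
  clojure.lang.IPersistentMap
  (set-parameter [m ^PreparedStatement ps ^long i]
    (.setObject ps i (doto (PGobject.)
                       (.setType "jsonb")
                       (.setValue (json/generate-string m))))))
```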
next.jdbc has a small blurb about this in the docs https://cljdoc.org/d/com.github.seancorfield/next.jdbc/1.2.659/doc/getting-started/tips-tricks#postgresql The reason that failed is that PG uses a smallint for the parameter count, which limits you to 32767 parameters.
Try this
https://cljdoc.org/d/seancorfield/next.jdbc/1.2.659/doc/getting-started/tips-tricks#postgresql
using a Java primitive array. In that case only one ?
parameter placeholder is used.
I don't remember exactly, but there was some trick to pass an array as a named parameter {:param ... } to the sql/format
function.
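In HoneySQL v2 that trick is the :param special syntax, resolved at sql/format time (a sketch; the table and key names are made up):

```clojure
(require '[honey.sql :as sql])

;; [:param :ids] is a named placeholder, supplied via :params at format time;
;; the whole array becomes a single ? parameter:
(sql/format {:delete-from :foo
             :where [:= :id [:any [:param :ids]]]}
            {:params {:ids (long-array [1 2 3])}})
;; => ["DELETE FROM foo WHERE id = ANY(?)" <the array>]
```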
@orestis Thanks for the pointer. Managed to extend the protocol and still got the same result -- because the function was in its own schema :man-facepalming: Much muppetry on my part!
Hello, I have a problem. I use the latest next.jdbc
and it looks like (next.jdbc.date-time/read-as...)
doesn't have any effect at all:
ws180.db.gc.common> (next.jdbc.date-time/read-as-default)
nil
ws180.db.gc.common> (:integrations/expiration (first (with-open [conn (connection-pool)] (ws180.db.gc.integrations/integrations-get-all-by-user-id conn 104))))
#object[java.time.LocalDateTime 0x4786fe1c "2021-06-09T16:46:07"]
ws180.db.gc.common> (next.jdbc.date-time/read-as-instant)
nil
ws180.db.gc.common> (:integrations/expiration (first (with-open [conn (connection-pool)] (ws180.db.gc.integrations/integrations-get-all-by-user-id conn 104))))
#object[java.time.LocalDateTime 0x3b3e0f53 "2021-06-09T16:46:07"]
ws180.db.gc.common> (next.jdbc.date-time/read-as-local)
nil
ws180.db.gc.common> (:integrations/expiration (first (with-open [conn (connection-pool)] (ws180.db.gc.integrations/integrations-get-all-by-user-id conn 104))))
#object[java.time.LocalDateTime 0xd4463a9 "2021-06-09T16:46:07"]
ws180.db.gc.common>
Am I doing something wrong?
I use a MySQL database and the column has type datetime
In HoneySQL v2 there’s a :lift annotation that tells HoneySQL to “lift” a data structure out as a parameter (so it doesn’t try to interpret, say, a vector as a function call DSL).
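A sketch of :lift with the ANY(?) pattern from earlier (this shows formatting only; at execution time the JDBC driver still needs a parameter type it understands, e.g. a SQL array):

```clojure
(require '[honey.sql :as sql])

;; Without :lift, HoneySQL would try to read [1 2 3] as DSL syntax
;; (a function call); :lift passes it through as one parameter:
(sql/format {:select [:*]
             :from   [:foo]
             :where  [:= :id [:any [:lift [1 2 3]]]]})
;; => ["SELECT * FROM foo WHERE id = ANY(?)" [1 2 3]]
```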
@decim I suspect you’re trying to pass a (next.jdbc-style) db-spec hash map to the hikari-cp make-datasource
function which is why it isn’t working.
My recommendation would be to stop using hikari-cp and follow the instructions in next.jdbc
for connection pooling with HikariCP (the Java library).
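That approach looks something like this (a sketch; the db-spec values are made up):

```clojure
(require '[next.jdbc.connection :as connection])
(import '(com.zaxxer.hikari HikariDataSource))

;; next.jdbc builds the HikariCP pool directly from a db-spec hash map,
;; no hikari-cp wrapper needed:
(def ds
  (connection/->pool HikariDataSource
                     {:dbtype   "postgresql"
                      :dbname   "mydb"
                      :username "user"     ; note: HikariCP wants :username, not :user
                      :password "secret"}))
```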
A couple of things: first, per the namespace docstring “The expectation is that you will call at most one of these, at application startup, to enable the behavior you want.”; second, those functions only affect java.sql.Date
and java.sql.Timestamp
; depending on the vendor/version of the JDBC driver you’re using, datetime
is going to come back as java.util.Date
or java.time.LocalDateTime
I expect.
I see, so the driver itself returns java.time.LocalDateTime
instead of java.sql.Timestamp
. So, to make it work, I should extend-protocol rs/ReadableColumn
myself, right?
yes, the following worked out for me, thank you!
(require '[next.jdbc.result-set :as rs])

(extend-protocol rs/ReadableColumn
  java.time.LocalDateTime
  ;; Treat the stored wall-clock value as UTC and return an Instant.
  (read-column-by-label [^java.time.LocalDateTime v _]
    (.toInstant v java.time.ZoneOffset/UTC))
  (read-column-by-index [^java.time.LocalDateTime v _2 _3]
    (.toInstant v java.time.ZoneOffset/UTC)))