hi all! When I use sql/insert! and pass a map to it, I have to "rename" my map keys to convert "-" to underscores. What would be the approach to make that conversion happen automatically?
I use honeysql with next.jdbc, and the kebab-case to snake_case conversion happens for me automatically
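For anyone landing here later, a sketch of doing this with plain next.jdbc (no HoneySQL), assuming the camel-snake-kebab library is on your classpath and `ds` is your datasource:

```clojure
;; Sketch, assuming camel-snake-kebab is available and ds is a datasource.
(require '[next.jdbc.sql :as sql]
         '[camel-snake-kebab.core :as csk])

;; Pass :table-fn / :column-fn so kebab-case keywords become snake_case SQL:
(sql/insert! ds :user-profile
             {:first-name "Rich" :last-name "Hickey"}
             {:table-fn  csk/->snake_case_string
              :column-fn csk/->snake_case_string})
```

next.jdbc also ships `next.jdbc/snake-kebab-opts` (and an unqualified variant), which bundles these options together with a kebab-case result-set builder when camel-snake-kebab is on the classpath.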
Hi
Has anyone implemented ReadableColumn for tsrange in PostgreSQL?
Hm, never mind -- that feels wrong to do, tbh. Any implementation would lose the idea of "range".
@seancorfield yes I am using next.jdbc
I lacked that info. I didn't know about this keyword! Thanks
Read "All the Options" in the documentation -- there may be other options you haven't seen yet that would help you
Hello, I use next.jdbc -- is it possible to see the SQL generated from e.g. sql/update!? I'm currently having a hard time debugging.
@bfabry thanks for the tip, but I don't quite understand what you mean. Either way, I'll try @seancorfield's suggestion first 🙂
Reimplement this as a wrapper for your Executable/connectable but with logging? https://github.com/seancorfield/next-jdbc/blob/master/src/next/jdbc/protocols.clj#L33
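A minimal sketch of that idea -- a record that delegates the Executable protocol to the wrapped connectable and prints the SQL first (a real wrapper would also want the other protocols such as Connectable and Transactable):

```clojure
(require '[next.jdbc.protocols :as p])

;; Sketch: delegate Executable to the wrapped connectable `db`,
;; printing the sql-params vector before each call.
(defrecord LoggingConnectable [db]
  p/Executable
  (-execute [_ sql-params opts]
    (println "SQL:" sql-params)
    (p/-execute db sql-params opts))
  (-execute-one [_ sql-params opts]
    (println "SQL:" sql-params)
    (p/-execute-one db sql-params opts))
  (-execute-all [_ sql-params opts]
    (println "SQL:" sql-params)
    (p/-execute-all db sql-params opts)))

;; usage sketch, assuming ds is a datasource:
;; (next.jdbc/execute-one! (->LoggingConnectable ds) ["select 1"])
```

(Later releases of next.jdbc added `next.jdbc/with-logging`, which provides this kind of wrapping out of the box.)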
Aside: @seancorfield why do update! and execute-one! call the first parameter "connectable" but then call Executable protocol functions on it?
@saikyun You can call next.jdbc.sql.builder/for-update with the table, key-map, where-params, and opts arguments you are passing to update! (without the connectable one).
Aha, thanks. I'll try it out! 🙂
That will return the vector of the SQL string and the parameter values that will be passed to JDBC (via the execute-one! function).
@bfabry Because "naming is hard" and it's a bit more descriptive of what's really happening: the Executable implementations need to obtain a Connection first before they execute the SQL. So you can pass a db-spec hash map, a JDBC URL string, a DataSource, a Connection -- or a Statement -- and only the latter is directly "executable".
(But it's a very valid question -- initially the transact/with-transaction functions took a connectable, but that was renamed to transactable via a community PR.)
@seancorfield Hi Sean, using next.jdbc I am querying a table containing a BLOB field and am trying to get a column reader to parse it. I have almost got it working, but I'm a bit perplexed about the object type in the result set for the corresponding column. In the case of the H2 database it does seem to implement java.sql.Blob, but when using the MySQL driver the type appears to be a byte array ([B). Have I screwed up somehow, or is that just the way the MySQL driver works?
@andreas179 Hard to tell without seeing code. I've never tried to use BLOB with MySQL, so it's entirely possible it returns a byte array instead of java.sql.Blob -- have you looked at the MySQL docs for the connector?
Yeah, just tested it locally and it looks like the MySQL connector returns BLOB as byte[]
JDBC is notoriously non-portable once you get away from the basics... ¯\_(ツ)_/¯
I guess I'll update the CLOB & BLOB section of the Tips & Tricks documentation to mention that...
Looking in the docs (which I probably should have done before asking) reveals that getColumnClassName should return byte[], as you confirmed. However, the docs also state that the BLOB SQL type can always be converted to java.sql.Blob, among others. Oh well, now that I know what's happening I can work around it. Thanks for a great library -- the docs are extensive and easy to follow as well! :)
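For anyone who finds this thread later, one possible workaround sketch: extend next.jdbc's ReadableColumn protocol for java.sql.Blob so that drivers which return a Blob (like H2 here) hand you a byte array, matching what MySQL already does (check your driver's Blob lifecycle semantics before relying on this):

```clojure
(require '[next.jdbc.result-set :as rs])

;; Sketch: normalize BLOB columns to byte arrays. Drivers that
;; already return byte[] are untouched; drivers that return a
;; java.sql.Blob get it materialized into bytes here.
;; Note: Blob positions are 1-based, and getBytes takes an int length.
(extend-protocol rs/ReadableColumn
  java.sql.Blob
  (read-column-by-label [^java.sql.Blob v _]
    (.getBytes v 1 (int (.length v))))
  (read-column-by-index [^java.sql.Blob v _ _]
    (.getBytes v 1 (int (.length v)))))
```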
Thanks. The one thing I will say about JDBC is that after maintaining Clojure wrappers around JDBC for nine years, I am still constantly learning "fascinating" new things about JDBC 🙂
https://github.com/seancorfield/next-jdbc/commit/2574e7e37f7807507085b47eb140d1127b4c8129
Awesome! At least we're getting something battle-tested once one has worked around all the quirks 😎
I can tell you that clojure.java.jdbc and next.jdbc are in extremely heavy daily use where I work, with MySQL, and I get a lot of feedback from folks using PostgreSQL, so those are probably the two most battle-tested. But I test against a broad range of databases as a matter of course, including MS SQL Server, and I know there are folks using it with many other databases (including some I'd never heard of...).