Hello all, do you think the charset problem in next.jdbc comes from the JDBC driver?
{:chamado "241740",
:descricao
"LOJA 39 - copia de seguranca do backup do servidor ********** falhou.",
:categoria "Bay.Banco de Dados.Backup com Falha",
:abertura "26/04/2020 08:53",
:fechamento "02/05/2020 22:04",
:local "39 **** FLORIANÃ"POLIS IGUATEMI", <----}]
is there any way to fix this?
it is very unlikely to be something with next.jdbc
but character encoding is tricky
it could even be due to a mismatch of your terminal and the jvm's default character encoding
do you think I have to re-encode it after getting the result?
@fabrao You may need to specify `:characterEncoding "utf8"` in your db spec, or `?characterEncoding=utf8` in your JDBC URL, depending on your database and how you are setting up your data source.
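For instance, a minimal sketch of both approaches (assuming a MySQL-style driver; the property name and whether it is honored vary by database):

```clojure
(require '[next.jdbc :as jdbc])

;; Extra keys in a next.jdbc db spec are passed through to the
;; driver as connection properties:
(def db-spec
  {:dbtype "mysql"
   :dbname "mydb"
   :user "myuser"
   :password "secret"
   :characterEncoding "utf8"})

(def ds (jdbc/get-datasource db-spec))

;; Equivalent JDBC URL form:
;; "jdbc:mysql://localhost:3306/mydb?characterEncoding=utf8"
```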
well, it didn't work
maybe it's caused by the SQL Server collation
I don't know if what I suggested would work with MS SQL Server. You'll need to read the relevant docs for that DB and figure it out.
It could be failing to handle the character encoding at any number of points along the way... It could be the data insertion point, it could be the data retrieval point, it could be how your JVM is configured on your system, it could be your terminal...
Yes, I'm trying to figure out where the problem is.
the same result shows up in the GraphQL output:
"local": "39 FLORIAN�POLIS IGUATEMI"
If you have some native SQL Server "workbench" you should use that to verify the data that is actually in the DB.
in SQL Server Management Studio the data looks OK
Is there an example of how to handle PostgreSQL arrays in result sets?
especially varchar[] and integer[]
How do I pass a "collection" to next.jdbc with PostgreSQL?
(jdbc/execute! conn ["SELECT * FROM app_todo WHERE id IN (?)" [1]])
something like this
so there is no way to do this? how should I "get many values from many inputs" in next.jdbc?
Sure, the docs tell you how to deal with additional data types in general in result sets. There's just no specific example for PostgreSQL array types.
The Getting Started guide provides an outline: https://cljdoc.org/d/seancorfield/next.jdbc/1.0.424/doc/getting-started#working-with-additional-data-types
(and it links to deeper discussions and some examples -- but it would be nice to have specific examples for array types in the PostgreSQL Tips & Tricks section if someone is willing to write them up: I do not use PostgreSQL!)
(if so, the answer is in the Getting Started section I pointed to since that covers both input parameter types and output result set column types)
I thought you were trying to answer his question...
@seancorfield I also don't know SQL well enough to know what I should be searching for. Is it about "input parameter types"?
@souenzzo The two use cases are:
* the `ReadableColumn` protocol (for converting database-specific types to Clojure values, i.e., reading from a JDBC result set back into Clojure data)
* the `SettableParameter` protocol (for converting Clojure values to database-specific types, i.e., passing Clojure data as input parameters into JDBC operations)
Does that help?
`SettableParameter` is the thing. I will search. Thanks 🙂
@maxp is asking about the first case -- turning PostgreSQL array column types into Clojure vectors; I think you're asking about the second case -- passing Clojure collections into JDBC (presumably as PostgreSQL arrays)?
If you extend `SettableParameter` to Clojure vectors, the `= ANY(?)` trick will stop working, I suspect. But you may not need to extend `SettableParameter` for your use case in PostgreSQL: simply using `ANY(?)` instead of just `?` may be sufficient for the PostgreSQL JDBC driver to accept a vector type and convert it to an array type for the column...
e.g., `["INSERT INTO my_table (id, array_data) VALUES (?, ANY(?))" 1 [2 3 4]]`
I've no idea whether that would work (like I say, I don't use PostgreSQL at all) but given how the `IN`/`= ANY(?)` trick works, I suspect it might.
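If you did want to go the `SettableParameter` route, a sketch might look like this (untested; as noted above, extending the protocol to all vectors may interfere with other uses of vector parameters, so a dedicated wrapper type is often safer; the `"integer"` element type is an assumption for this example):

```clojure
(require '[next.jdbc.prepare :as p])
(import '(java.sql PreparedStatement Connection))

;; Convert Clojure vectors to PostgreSQL arrays on input by asking the
;; live Connection to build a java.sql.Array from the vector's elements.
(extend-protocol p/SettableParameter
  clojure.lang.IPersistentVector
  (set-parameter [v ^PreparedStatement ps ^long i]
    (let [^Connection conn (.getConnection ps)
          arr (.createArrayOf conn "integer" (to-array v))]
      (.setArray ps i arr))))
```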
@souenzzo `WHERE id = ANY(?)` should work -- which is the example in the docs. Note: `=` rather than `IN`.
I thought you were asking about passing a collection into a single column for an insert or similar.
I'm surprised the solution needs to be that complicated...
...oh, because you are deliberately deferring the expansion of the array data from JDBC? What's the reasoning behind that @serioga?
It seems like any operation you perform on it will expand the whole array anyway, and it seems like you would run the risk of the call to `.getArray` happening after the connection closes with your code, which would fail in most cases, I think?
I think the minimum version would be something like:
(require '[next.jdbc.result-set :as rs])
(import 'java.sql.Array)

(extend-protocol rs/ReadableColumn
  Array
  (read-column-by-label [^Array v _] (vec (.getArray v)))
  (read-column-by-index [^Array v _ _] (vec (.getArray v))))
the reason is to avoid an expensive `(.getArray a)` call if I don't access the data in that result column.
["SELECT *
  FROM app_todo
  WHERE author = ANY(?)"
 (int-array [1 3])]
This works! But without `int-array` I get:
Execution error (PSQLException) at org.postgresql.jdbc.PgPreparedStatement/setObject (PgPreparedStatement.java:978).
Can't infer the SQL type to use for an instance of clojure.lang.PersistentVector. Use setObject() with an explicit Types value to specify the type to use.
> you would run the risk of the call to .getArray happening after the connection closes
let me check 🙂
Yes, that's expected.
(and that's exactly what the documentation says to do)
> What does this mean for your use of next.jdbc? In `plan`, `execute!`, and `execute-one!`, you can use `col = ANY(?)` in the SQL string and a single primitive array parameter, such as `(int-array [1 2 3 4])`. That means that in `next.jdbc.sql`'s functions that take a where clause (`find-by-keys`, `update!`, and `delete!`) you can specify `["col = ANY(?)" (int-array data)]` for what would be a `col IN (?,?,?,,,?)` where clause for other databases and require multiple values.
I usually work with the result after the connection closes, so I think the data is kept in the `java.sql.Array` instance internally...
If you use `plan` and don't reference the array column, it wouldn't actually be read from the result set anyway. And if you use `execute!`/`execute-one!` the assumption is that you should get a fully-realized Clojure data structure that is completely detached from JDBC (so don't select array columns if you don't need them 🙂).
It feels like you're fighting against the simple path here... but, hey, if it works for your use case.
Yes, I'm just trying to avoid converting all the array values if the code doesn't use them. I feel sick about wasting CPU time on useless conversion 🙂
> If you use plan and don't reference the array column, it wouldn't actually be read from the result set anyway.
@seancorfield is that so even if I fetch the row but ignore the column?
`ReadableColumn` only applies if `plan`'s reduction references that specific column by name (actually by label).
In `execute!`/`execute-one!`, every column mentioned in the result set, i.e., `select`'d in the SQL, is read (by index).
Both of those build a full hash map from each row, with all columns. `plan` doesn't even build a hash map if you only reference columns directly by label and don't try any map-like operations on the row itself. That's why `plan` is so efficient.
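A sketch of that `plan` pattern (assuming a datasource `ds` and a hypothetical `app_todo` table with `title` and `tags` columns; only the referenced column is pulled from each row, so an ignored array column never triggers `.getArray`):

```clojure
(require '[next.jdbc :as jdbc])

;; plan's rows support keyword access by column label without
;; realizing the row as a hash map; :tags is never read here.
(def titles
  (into []
        (map :title)
        (jdbc/plan ds ["SELECT title, tags FROM app_todo"])))
```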