I wonder if user is treated as USER, which is PG's session variable, and "user" is treated as user, which matches the (lowercase) table you created?
(our old DBA had a rule that you never create a plural table -- always singular)
MySQL's information_schema has all CAPITALIZED and nearly all plural table names, just as a data point.
I went with singular names after reading the reasoning here: https://stackoverflow.com/a/5841297/1257775
The "irregular plurals" is a really good reason to stick with singular names.
I am trying my luck at building a variation of find-by-keys... just find, with the keys being optional and with limit and offset support. That would cover the majority of select use cases.
next.jdbc.sql.builder has a bunch of SQL string construction functions that might help you there (and they handle all the quoting stuff too).
limit and offset are MySQL-specific, aren't they? I know SQL Server has a different syntax. I don't think SQL-92 has a standard for those (I should look!)...
Note that :suffix can be provided in the options to the query/find/get functions to supply a SQL string to append, so :suffix "limit ? offset ?" would probably already let you do that...
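A minimal sketch of that idea (ds and the :widget table are assumed names here; literal values are used in the suffix so no extra ? parameters need to be supplied):

```clojure
(require '[next.jdbc.sql :as sql])

;; Query-by-example, with a pagination suffix appended verbatim:
(sql/find-by-keys ds :widget {:status "active"}
                  {:suffix "LIMIT 10 OFFSET 20"})
;; roughly: SELECT * FROM widget WHERE status = ? LIMIT 10 OFFSET 20
```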
They are available in Postgres: https://www.postgresql.org/docs/12/queries-limit.html
It was added originally to allow for FOR UPDATE to be appended to a SELECT.
Oh, good to know. Maybe I should make them core options in the "friendly" functions?
Thanks, will look into it. Maybe I’ll cheat and use Honey SQL in the background. Still not sure :)
They would certainly be helpful. Almost every product has some sort of pagination.
Looks like the more standard approach is offset / fetch -- and although PG supports limit, it is non-standard (and is like MySQL's version).
I see. Always used it via ORMs that’s why I got attached to the friendly functions quickly.
This means that (sql/find-by-keys ds :table :all {:offset (* page page-size) :fetch page-size}) will let you page through all rows in TABLE, and the query will be SELECT * FROM table OFFSET ? ROWS FETCH NEXT ? ROWS ONLY with the offset and fetch values added as query parameters.
Instead of :all you can still pass a hash map (query by example) or a vector (["SQL String" param1 param2 ...]) to do a limited query.
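A hedged sketch of paging on top of that (ds, the :invoice table, and page-size are assumed names):

```clojure
(require '[next.jdbc.sql :as sql])

(def page-size 25)

;; Fetch one page of rows; page is zero-based.
(defn fetch-page [ds page]
  (sql/find-by-keys ds :invoice :all
                    {:offset (* page page-size)
                     :fetch  page-size}))

;; Walk pages until one comes back empty.
(defn all-rows [ds]
  (->> (range)
       (map #(fetch-page ds %))
       (take-while seq)
       (apply concat)))
```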
(I wasn't planning to enhance any of the next.jdbc.sql functions... but...)
Ah just saw the above. I was building this lol:
(defn find
[connectable table {:keys [fields where offset limit] :or {fields :*}}]
(execute! connectable (honey/format (-> {:select fields
:from [table]}
(cond->
where (assoc :where where)
limit (assoc :limit limit)
offset (assoc :offset offset)))
:quoting :ansi)))
Will test the official implementation tomorrow. I didn’t realize how nicely HoneySQL plays with next.jdbc when I wrote the above. I ended up using an extremely thin wrapper to query my DB cleanly:
(db/query ds (db/fmt {:select [:id :name :role]
:from [:user]}))
Just pushed an update that supports :columns in the options passed to find-by-keys. It supports basic aliases and computed expressions (if you provide an alias).
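For example, something along these lines (ds and the :user columns are assumptions; the exact shape accepted for computed expressions is worth double-checking against the find-by-keys docs):

```clojure
(require '[next.jdbc.sql :as sql])

(sql/find-by-keys ds :user {:role "admin"}
                  {:columns [:id
                             [:name :full_name]            ; basic alias: name AS full_name
                             ["COUNT(*) OVER ()" :total]]}) ; computed expression with alias
```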
But, yes, I'd mostly suggest using HoneySQL for anything beyond the basic operations. And I personally prefer using the honeysql.helpers, so I'd write:
(db/query ds (h/fmt (-> (select :id :name :role) (from :user))))
We use HoneySQL very heavily at work (one of my former colleagues gave a talk at Clojure/West about it, years ago).
I took over as maintainer of HoneySQL a while back, and Justin (the original author) recently transferred it to my account, so you can be sure that next.jdbc + HoneySQL will always work well together.
Thanks for the background info. Any technical reason for preferring honey’s helper functions, or just a matter of taste?
I wrote the same thing twice; the data version was 181 chars and the fns version was 186.
With the data version, I tend to forget when things are nested in vectors vs just a vector of things.
And I find the fn-based code easier to read when composing, since it looks more like SQL than a whole bunch of assoc/update calls.
For example:
(-> (cond-> (-> (select [:u.id :id])
(from [:user :u])
(where [:= :u.username username])
(user-status :u.id :has-user? true))
(not= username (u/username self))
(user-gender (:id self))
(seq exclude)
(merge-where [:not-in :q.id exclude]))
(add-order-by descriptor)
(sql/build :limit page-size :offset offset)
(sql/format :quoting :mysql))
That would be a lot uglier with assoc/update calls all over, especially for things like user-status, which is this logic:
(-> query
(cond->
has-user?
(merge-where [:and
[:= :u.id id-col]
[:= :u.suspended false]
[:= :u.deleted false]])
(not has-user?)
(merge-join [:user :u]
[:and
[:= :u.id id-col]
[:= :u.suspended false]
[:= :u.deleted false]]))
(merge-left-join [:approvedtadpole :tp]
[:= :tp.userid id-col])
(merge-where [:or
[:= :u.statusid 1]
[:and
[:= :u.statusid 2]
[:<> :tp.userid nil]]])))
Gotcha
The only thing "missing" from the official version is a list of fields -- it's always SELECT *. I'll have a think about that.
user is a reserved word in PostgreSQL (actually reserved by the SQL standard), so in all cases I use users -- the only occasion, apart from groups, where I use plural table names.
select users.username, groups.name from users join groups on users.group_id = groups.id
There is a better way. Doing a select with limit and offset requires the db to scan (and discard) every row before the offset -- which can be very expensive.
This explains it, with some references to other examples:
https://blog.jooq.org/2016/08/10/why-most-programmers-get-pagination-wrong/
I'm not advocating not doing your #124, simply that there are better ways, more efficient ways 🙂
Perhaps borrowing the concept of a seek function:
https://www.jooq.org/doc/latest/manual/sql-building/sql-statements/select-statement/seek-clause/
@dharrigan If folks care about performance, they shouldn't be using the "friendly SQL functions" in the first place 🙂 and next.jdbc is not the "SQL police": folks who want to write SQL that does fast pagination (such as using a where condition that includes an id or timestamp field based on the "largest" value already seen) can do that via execute! (or better still using plan).
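A hedged sketch of that keyset ("seek") style with execute! (ds, the :event table, and its id column are made-up names):

```clojure
(require '[next.jdbc :as jdbc])

;; Seek past the largest id already seen instead of making the
;; database skip OFFSET rows on every page.
(defn next-page [ds last-seen-id page-size]
  (jdbc/execute! ds
                 ["SELECT * FROM event WHERE id > ? ORDER BY id LIMIT ?"
                  last-seen-id page-size]))

;; first page:     (next-page ds 0 25)
;; following page: (next-page ds (:event/id (last page)) 25)
```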
Indeed 🙂 Just pointing out (for others benefit who may be only aware of offset/limit) that there are alternatives 🙂
A nicely crafted SQL query goes a long way 🙂
@daniel.stephen.lee There are two things at play with clojure.java.jdbc/insert-multi!: 1) if you specify a sequence of hash maps, c.j.j does a separate insert for each map (because they could be different); 2) if you use an array of column names followed by an array of rows (an array of arrays of column values), then c.j.j attempts to use a single batch statement for the insert...
...however, some databases still do multiple round-trips for batch inserts 😞 so you need a database-specific option at connection time (in the JDBC URL string, effectively) to tell the driver to rewrite the batch to a single insert.
The c.j.j docs don't really cover that level of detail but the replacement for c.j.j does: https://cljdoc.org/d/seancorfield/next.jdbc/1.0.409/doc/getting-started/prepared-statements#caveats
"In particular, PostgreSQL requires the :reWriteBatchedInserts true option "
would I put :rewriteBatchedStatement true in the db-spec then?
or :reWriteBatchedInserts true rather, for Postgres
Yes, if you're using the array approach.
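Putting the pieces together as a sketch (credentials and the :invoice table are made up; :reWriteBatchedInserts is the PostgreSQL driver option from the caveats link above):

```clojure
(require '[next.jdbc :as jdbc]
         '[next.jdbc.sql :as sql])

;; Extra db-spec keys are passed through as driver properties, so this
;; asks the PostgreSQL driver to rewrite the batch as one multi-row INSERT.
(def db-spec {:dbtype "postgresql"
              :dbname "mydb"
              :user "me"
              :password "secret"
              :reWriteBatchedInserts true})

(def ds (jdbc/get-datasource db-spec))

;; Column-vector + row-vectors form => a single batched statement:
(sql/insert-multi! ds :invoice
                   [:customer_id :total]
                   [[1 99.95]
                    [2 24.50]
                    [1 12.00]])
```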
See my comment above.
(in the message where I mentioned you to invite you in)