In next.jdbc, is it possible to use :builder-fn globally? Maybe at a datasource level?
Not sure, but what I have done is have an arity in my SQL functions that supplies the builder function
so, if you say (my-sql-ns/insert foo), I have this (defn insert ([statement] (insert statement {:builder-fn foobarbaz})) ([statement opts] (jdbc/execute! ... opts)))
something like that
yeah, I use the same strategy, just create your own db namespace and wrap the jdbc functions with the opts parameter
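For illustration, a minimal sketch of that kind of wrapper namespace -- the my-app.db name, the default builder choice, and the opts-merging behavior are all made up, not anything next.jdbc prescribes:

```clojure
(ns my-app.db
  (:require [next.jdbc :as jdbc]
            [next.jdbc.result-set :as rs]))

;; the "global" options every call in this namespace should get by default
(def default-opts {:builder-fn rs/as-unqualified-lower-maps})

(defn execute!
  ([ds sql-params] (execute! ds sql-params {}))
  ([ds sql-params opts] (jdbc/execute! ds sql-params (merge default-opts opts))))

(defn execute-one!
  ([ds sql-params] (execute-one! ds sql-params {}))
  ([ds sql-params opts] (jdbc/execute-one! ds sql-params (merge default-opts opts))))
```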
:thumbsup:
@danisam17 @vachichng I'm working on a wrapper/middleware approach that will let you augment a DataSource with something that allows global options to be applied. It's complicated because a) the library relies directly on JDBC types for performance, and b) adding this must not decrease the performance of existing usage for other people.
Generally, I work hard to rely on the default behavior -- because it's the fastest option -- and lean into qualified column names because I believe such keywords are better practice. In other words, try to take another look at your problem and solve it without relying on :builder-fn if possible.
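As a concrete example of that default behavior: the standard builder already returns table-qualified keywords, so no :builder-fn is needed to get namespaced keys (the users table here is just an example):

```clojure
;; default builder: column names come back qualified by table name
(jdbc/execute-one! ds ["select id, email from users where id = ?" 1])
;;=> #:users{:id 1, :email "jane@example.com"}
```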
thanks for your work, I really appreciate it, I'm satisfied relying on :builder-fn since it does make the code more idiomatic, haven't measured the impact on performance though
Not sure how using :builder-fn produces "more idiomatic" code -- what transformation are you doing on column names?
well I enjoy having kebab namespaced keywords everywhere
Ah, I see what you mean. So you have _ in the DB and - in your code. Fair enough. Are you using camel-snake-kebab as a library for that?
yeah, exactly. No, just did a trivial transformation function, but not the most performant for sure haha
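A sketch of what such a "trivial transformation" builder could look like, assuming next.jdbc.result-set/as-modified-maps and its :qualifier-fn / :label-fn options; the snake->kebab helper is invented for illustration:

```clojure
(require '[clojure.string :as str]
         '[next.jdbc.result-set :as result-set])

;; turn snake_case table and column names into kebab-case keyword parts
(defn- snake->kebab [s] (str/replace s "_" "-"))

(defn as-kebab-maps [rs opts]
  (result-set/as-modified-maps rs (assoc opts
                                         :qualifier-fn snake->kebab
                                         :label-fn snake->kebab)))

;; then pass it per call, or via a wrapper namespace as discussed above:
;; (jdbc/execute! ds ["select * from user_profile"] {:builder-fn as-kebab-maps})
```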
I'm toying with the idea of detecting if CSK is on the classpath and automatically providing additional builders, built-in, with support for it. Also for :table / :label. Haven't decided yet how best to handle that.
that's an interesting one. I wonder if it's possible to use the navify thing to do some magic like creating models from it for basic CRUD
sort of what Toucan library does
I'm not a fan of anything that smells like ORMs...
...that said, what we've done at work is to write specs for database table formats, and then we have a macro that takes the spec and some additional information, and generates CRUD functions from the spec.
I'm more inclined to go with that approach (but, right now, what we have at work is heavily tied into our environment so I'd probably have to build it from scratch in order to open source it).
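Not the actual work macro described above (that's internal), but a rough sketch of the shape of functions such a macro might generate for one table, using the next.jdbc.sql helpers -- the :address table and function names are just examples:

```clojure
(require '[next.jdbc.sql :as sql])

;; the kind of CRUD functions a spec-driven macro might emit for one table
(defn create-address! [ds row]        (sql/insert! ds :address row))
(defn fetch-address   [ds id]         (sql/get-by-id ds :address id))
(defn update-address! [ds id changes] (sql/update! ds :address changes {:id id}))
(defn delete-address! [ds id]         (sql/delete! ds :address {:id id}))
```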
how do you evolve the database? do you also have a custom migration tool that generates the changes from the spec?
No. We do write migrations as SQL files, but any corresponding changes in code have to be made manually, and we try very hard to ensure that we can run the latest one or two SQL migrations without having to change the code at the same time, in order to make deployments simpler.
So new columns are always added as nullable or with a default value. New tables are easy. Sometimes we need to run a SQL migration on production, then deploy updated code, then run another SQL migration to complete whatever update we need (but that level of coordination is rarely needed).
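For example, a change like the one described -- a new nullable column -- can ship ahead of the code that uses it (the table and column names here are made up):

```clojure
;; backward-compatible migration: old code can keep running against this table
(jdbc/execute! ds ["alter table users add column last_login_at timestamp null"])
```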
cool! I always wondered how to apply new changes to schemas in production without downtime and data loss
so actually you have to do it in two steps
first create the new schema with lax constraints and copy the data, then in another update, add the actual constraints and new logic
thanks for the insights Sean
It depends on the changes needed. Most of the time we just have a single SQL migration that updates a table in place, if we can.
and I guess that you take down the different replicas/nodes before all that, so one single node is serving the whole thing, until the update is applied to all of the nodes?
For larger tables, what we often do is a) create the new table b) update the code to read/write to that table by preference but have reads fall back to the old table c) kick off a background data migration to copy old data across (making sure it's compatible with the on-demand copying in step b), then once the table is fully migrated: d) update the code again to remove the read fallback and finally e) drop the old table at some point.
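A hypothetical sketch of step b) -- read from the new table by preference and fall back to the old one, copying on demand; the orders tables and the row->new-shape helper are invented for illustration:

```clojure
(require '[next.jdbc :as jdbc]
         '[next.jdbc.sql :as sql])

(defn fetch-order [ds id]
  (or (jdbc/execute-one! ds ["select * from orders_v2 where id = ?" id])
      (when-let [old-row (jdbc/execute-one! ds ["select * from orders where id = ?" id])]
        ;; on-demand copy so the next read hits the new table directly;
        ;; row->new-shape is whatever mapping makes old rows fit the new schema
        (sql/insert! ds :orders_v2 (row->new-shape old-row))
        (jdbc/execute-one! ds ["select * from orders_v2 where id = ?" id]))))
```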
Primary/secondary replication of changes is generally fast enough we don't have to worry much about coordinating across the cluster.
We have two huge Percona (MySQL) servers, running on SSD-backed virtual servers. All our reporting runs off the secondary. All our transactional queries (and all updates) run against the primary.
SQL migrations run against the primary as well (obviously) so that's where we are careful about performance impacts of changes.
But the way we have the system set up, we have a migration project that builds a JAR containing all of the SQL migrations, and we can auto-deploy that to an "admin" node in our cluster to kick off any new SQL migrations. We can do that independently of (and ahead of) any code changes to the customer-facing nodes.
and what about the application server? I'm guessing that you have different copies of it as well for load balancing, how do you apply the updates there without downtime and data loss?
Yup, three customer-facing servers behind a load balancer, and staggered auto-deployment. Again, we just take care to make changes that can be rolled out node by node (so any change must be able to run in production on one node while other nodes are still on the old code).
We use feature flags for some things where we need to have updated code deployed everywhere before we turn a feature on.
Occasionally, we need a full down maintenance window but it's pretty rare these days.
Any last Qs before I head out for a while @vachichng?
no, that's all, thanks a lot @seancorfield!
have a good day!