sql

All things SQL and JDBC...
nachos 2020-06-17T09:15:57.250200Z

In next.jdbc is it possible to use builder-fn globally? Maybe at a datasource level?

dharrigan 2020-06-17T10:35:16.250800Z

Not sure, but what I have done is have, in my sql functions, an arity that will supply the builder function

dharrigan 2020-06-17T10:36:49.252500Z

so, if you say (my-sql-ns/insert foo), I have this: (defn insert ([statement] (insert statement {:builder-fn foobarbaz})) ([statement builder-fn] (jdbc/execute! ...)))

dharrigan 2020-06-17T10:36:54.252700Z

something like that

Vachi 2020-06-17T14:33:25.254800Z

yeah, I use the same strategy, just create your own db namespace and wrap the jdbc functions with the opts parameter
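The wrapper strategy described above might look something like this as a namespace — a sketch, not code from the thread; `my-app.db`, `ds`, and the choice of `as-unqualified-lower-maps` are all illustrative:

```clojure
;; Sketch of a db namespace that wraps next.jdbc so every call gets a
;; shared :builder-fn by default. All names here are illustrative.
(ns my-app.db
  (:require [next.jdbc :as jdbc]
            [next.jdbc.result-set :as rs]))

;; options applied to every query unless the caller overrides them
(def default-opts {:builder-fn rs/as-unqualified-lower-maps})

(defn execute!
  "Like jdbc/execute!, but merges in the namespace-wide default options."
  ([ds statement]
   (execute! ds statement {}))
  ([ds statement opts]
   (jdbc/execute! ds statement (merge default-opts opts))))
```

Callers then use `my-app.db/execute!` everywhere, and any call can still override `:builder-fn` (or any other option) by passing it in the opts map.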

dharrigan 2020-06-17T16:34:31.255Z

:thumbsup:

seancorfield 2020-06-17T18:11:15.256900Z

@danisam17 @vachichng I'm working on a wrapper/middleware approach that will let you augment a DataSource with something that allows for global options to then apply. It's complicated because a) the library relies directly on JDBC types for performance b) adding this must not decrease the performance of existing usage for other people.

seancorfield 2020-06-17T18:12:33.258300Z

Generally, I work hard to rely on the default behavior -- because it's the fastest option -- and lean into qualified column names because I believe such keywords are better practice. In other words, try to take another look at your problem and solve it without relying on :builder-fn if possible.
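For context, the default builder returns rows as maps keyed by table-qualified keywords (assuming the JDBC driver reports table names); `ds` here is a placeholder datasource, and the result shown is the general shape, not real output:

```clojure
;; REPL illustration only -- requires a live datasource bound to `ds`.
;; With the default builder, no :builder-fn is needed and keys come
;; back qualified by table name.
(require '[next.jdbc :as jdbc])

(jdbc/execute! ds ["select id, email from users where id = ?" 1])
;; e.g. => [#:users{:id 1, :email "a@example.com"}]
```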

Vachi 2020-06-17T18:16:03.260300Z

thanks for your work, I really appreciate it, I'm satisfied relying on :builder-fn since it does make the code more idiomatic, haven't measured the impact on performance though

seancorfield 2020-06-17T18:17:40.261100Z

Not sure how using :builder-fn produces "more idiomatic" code -- what transformation are you doing on column names?

Vachi 2020-06-17T18:18:15.261700Z

well I enjoy having kebab-case namespaced keywords everywhere

seancorfield 2020-06-17T18:19:19.262600Z

Ah, I see what you mean. So you have _ in the DB and - in your code. Fair enough. Are you using camel-snake-kebab as a library for that?

Vachi 2020-06-17T18:20:05.263200Z

yeah, exactly. No, just did a trivial transformation function, but not the most performant for sure haha
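A trivial snake_case-to-kebab-case builder along these lines can be built on next.jdbc's `as-modified-maps`, which takes `:qualifier-fn` and `:label-fn` options. This is a sketch; the names `snake->kebab` and `as-kebab-maps` are made up here, not the library's:

```clojure
;; Sketch: a result-set builder that converts snake_case column names
;; to kebab-case keywords. Names here are illustrative.
(ns my-app.db.kebab
  (:require [clojure.string :as str]
            [next.jdbc.result-set :as rs]))

(defn snake->kebab
  "e.g. \"USER_ID\" -> \"user-id\""
  [s]
  (str/replace (str/lower-case s) "_" "-"))

(defn as-kebab-maps
  "Builder-fn producing table-qualified, kebab-case map keys."
  [rs opts]
  (rs/as-modified-maps rs (assoc opts
                                 :qualifier-fn snake->kebab
                                 :label-fn snake->kebab)))
```

Used as `(jdbc/execute! ds sql {:builder-fn as-kebab-maps})` — which is exactly the per-call (or wrapped) `:builder-fn` usage discussed above.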

seancorfield 2020-06-17T18:21:33.264700Z

I'm toying with the idea of detecting if CSK is on the classpath and automatically providing additional builders, built-in, with support for it. Also for :table/label qualification. Haven't decided yet how best to handle that.

😟 1
Vachi 2020-06-17T18:23:54.265900Z

that's an interesting one. I wonder if it's possible to use the navify thing to do some magic, like creating models from it for basic CRUD

Vachi 2020-06-17T18:26:15.266800Z

sort of what Toucan library does

seancorfield 2020-06-17T18:26:35.267100Z

I'm not a fan of anything that smells like ORMs...

seancorfield 2020-06-17T18:27:25.268100Z

...that said, what we've done at work is to write specs for database table formats, and then we have a macro that takes the spec and some additional information, and generates CRUD functions from the spec.

seancorfield 2020-06-17T18:28:07.269400Z

I'm more inclined to go with that approach (but, right now, what we have at work is heavily tied into our environment so I'd probably have to build it from scratch in order to open source it).

Vachi 2020-06-17T18:28:24.269700Z

how do you evolve the database? Do you also have a custom migration tool that generates the changes from the spec?

seancorfield 2020-06-17T18:29:52.271300Z

No. We do write migrations as SQL files, but any corresponding changes in code have to be made manually, and we try very hard to ensure that we can run the latest one or two SQL migrations without having to change the code at the same time, in order to make deployments simpler.

seancorfield 2020-06-17T18:31:26.273200Z

So new columns are always added as nullable or with a default value. New tables are easy. Sometimes we need to run a SQL migration on production, then deploy updated code, then run another SQL migration to complete whatever update we need (but that level of coordination is rarely needed).

👍 1
Vachi 2020-06-17T18:32:29.274100Z

cool! I always wondered how to apply new changes to schemas at production without downtime and data loss

Vachi 2020-06-17T18:32:38.274400Z

so actually you have to do like in two steps

Vachi 2020-06-17T18:34:05.275600Z

first create the new schema with lax constraints and copy the data, then in another update, put in the actual constraints and new logic

Vachi 2020-06-17T18:36:21.276300Z

thanks for the insights Sean

seancorfield 2020-06-17T18:36:40.276700Z

It depends on the changes needed. Most of the time we just have a single SQL migration that updates a table in place, if we can.

Vachi 2020-06-17T18:38:28.279500Z

and I guess that you take down the different replicas/nodes before all that, so a single node serves the whole thing until the update is applied on all of the nodes?

seancorfield 2020-06-17T18:39:25.280400Z

For larger tables, what we often do is a) create the new table b) update the code to read/write to that table by preference but have reads fall back to the old table c) kick off a background data migration to copy old data across (making sure it's compatible with the on-demand copying in step b), then once the table is fully migrated: d) update the code again to remove the read fallback and finally e) drop the old table at some point.

👍 1
seancorfield 2020-06-17T18:40:09.281100Z

Primary/secondary replication of changes is generally fast enough we don't have to worry much about coordinating across the cluster.

seancorfield 2020-06-17T18:41:01.282200Z

We have two huge Percona (MySQL) servers, running on SSD-backed virtual servers. All our reporting runs off the secondary. All our transactional queries (and all updates) run against the primary.

seancorfield 2020-06-17T18:41:31.283100Z

SQL migrations run against the primary as well (obviously) so that's where we are careful about performance impacts of changes.

seancorfield 2020-06-17T18:42:47.285700Z

But the way we have the system set up, we have a migration project that builds a JAR containing all of the SQL migrations, and we can auto-deploy that to an "admin" node in our cluster to kick off any new SQL migrations. We can do that independently (and ahead of) any code changes to the customer-facing nodes.

Vachi 2020-06-17T18:42:51.285800Z

and what about the application server? I'm guessing that you have different copies of it as well for load balancing. How do you apply the updates there without downtime and data loss?

seancorfield 2020-06-17T18:44:19.287200Z

Yup, three customer-facing servers behind a load balancer, and staggered auto-deployment. Again, we just take care to make changes that can be rolled out node by node (so any change must be able to run in production on one node while other nodes are still on the old code).

seancorfield 2020-06-17T18:44:57.287900Z

We use feature flags for some things where we need to have updated code deployed everywhere before we turn a feature on.

👍 1
seancorfield 2020-06-17T18:45:40.288400Z

Occasionally, we need a full down maintenance window but it's pretty rare these days.

seancorfield 2020-06-17T18:47:54.289100Z

Any last Qs before I head out for a while @vachichng?

Vachi 2020-06-17T18:48:46.289500Z

no, that's all, thanks a lot @seancorfield!

Vachi 2020-06-17T18:49:18.290Z

have a good day!