polylith

https://polylith.gitbook.io/ and https://github.com/polyfy/polylith
seancorfield 2021-05-30T03:17:16.016900Z

On Friday, we started pulling apart some of our subprojects into components. Definitely going to be a long, slow path but it's going to be interesting watching the info grid grow 🙂

💯 2
tengstrand 2021-05-30T16:55:06.017500Z

That’s really cool!

jumar 2021-05-31T07:18:27.018100Z

Definitely interested in hearing more about your experiences. A blog post, at some point, would be very appreciated 🙂

2021-05-31T15:45:37.018500Z

For us it’s especially the CI / deploys that need a bit of upfront work to adapt to this monorepo structure, and also the non-homogeneous dependency versions used across our different projects.

seancorfield 2021-05-31T17:35:25.018700Z

Being able to just run the subset of tests needed since our last successful CI run is going to be valuable, but that means getting our whole repo restructured — which will be a huge amount of work. Our current monorepo has around 40 subprojects but those are very coarse-grained. Our entire CI pipeline — starting from scratch, building the DB up to current status (about 800 SQL migrations at this point) and running all tests for all projects — takes about 35 minutes and then building the API docs and the dozen+ uberjars for deployment takes about another 10 minutes. So our cycle time from commit to automated deployment on our staging server takes about an hour — we’d love to bring that down to 20-30 minutes.

seancorfield 2021-05-31T17:38:01.018900Z

We have our own build shell script which can calculate subproject dependencies and run tests for a given subproject, a given subproject and everything that depends on it, or a given subproject and everything it depends on. That helps a lot during dev/test locally, but being able to just rely on poly test to run tests from the last stable point would make that better.
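A rough sketch of that kind of dependency-aware test selection, with a made-up subproject map (the names, dependency data, and function names below are illustrative, not the actual build script):

```clojure
;; Illustrative only: given a map of subproject -> direct dependencies,
;; compute "everything X depends on" and "everything that depends on X"
;; so only the relevant test suites need to run.
(def deps
  {:util    #{}
   :db      #{:util}
   :billing #{:db :util}
   :api     #{:billing :util}})

(defn transitive
  "All keys transitively reachable from k (excluding k itself),
   following f, a fn from key to a set of keys."
  [f k]
  (loop [todo #{k} seen #{}]
    (if-let [x (first todo)]
      (recur (into (disj todo x) (remove seen (f x)))
             (conj seen x))
      (disj seen k))))

(defn depends-on
  "Everything subproject k depends on, directly or indirectly."
  [k]
  (transitive deps k))

(defn dependents
  "Everything that depends on subproject k, directly or indirectly."
  [k]
  (transitive (fn [x] (set (for [[p ds] deps :when (ds x)] p))) k))

(depends-on :api)  ;; => #{:billing :db :util}
(dependents :db)   ;; => #{:billing :api}
```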

seancorfield 2021-05-31T17:40:37.019100Z

Also — and this is currently mostly supposition on my part — having finer-grained components should allow for fewer dependencies to be dragged into projects, which should reduce our artifact sizes (currently a few of our subprojects drag in a big pile of 3rd party libs, and other subprojects that depend on them are “forced” to accept all of those deps too, even though they often don’t use the code that actually depends on a large 3rd party lib).
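As an illustration of the finer-grained idea, a hypothetical Polylith project deps.edn: each project lists only the bricks it actually uses via :local/root, so its uberjar only picks up the 3rd-party libs those bricks bring in (the component and base names below are made up):

```clojure
;; Hypothetical projects/invoicing/deps.edn
{:deps {org.clojure/clojure {:mvn/version "1.10.3"}
        ;; only the bricks this project actually uses:
        poly/invoice        {:local/root "../../components/invoice"}
        poly/db             {:local/root "../../components/db"}
        poly/invoicing-cli  {:local/root "../../bases/invoicing-cli"}}}
;; A project that never touches, say, an Elasticsearch component never
;; inherits that component's transitive dependencies.
```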

cyppan 2021-06-01T14:46:33.023600Z

I don’t know your constraints / CI of course, but couldn’t you use a structural dump of your MySQL database instead of starting from zero (with the migration version stored in the dump)?

seancorfield 2021-06-01T15:24:09.025500Z

@cyppan We could but then we would need some process to aggregate SQL migrations into some one-off “dump” as they have been applied across all tiers. With the current approach, we have just one mechanism to “build” the DB in dev/test/CI and update the DB in staging/production.

seancorfield 2021-06-01T15:25:54.025900Z

Since we also have to populate Elasticsearch from scratch in CI, using a process that analyzes the DB contents anyway, we wouldn’t save very much by doing that aggregation, and the complexity vs speed tradeoff isn’t really worthwhile.

cyppan 2021-06-01T15:30:52.028500Z

ok, a pattern I’ve seen is regularly dumping the production database schema into a tagged MySQL Docker container (for instance) and using it as the starting point in the test envs; the migration tooling then runs only the missing migration scripts by comparing against the last migration version stored in a specific database table (a single-row migration_version table or something like that)
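A minimal sketch of that catch-up step, assuming next.jdbc, the single-row migration_version table described above, and a zero-padded numeric prefix on migration file names (the file-naming convention and function names are assumptions):

```clojure
(require '[next.jdbc :as jdbc]
         '[clojure.java.io :as io])

(defn current-version [ds]
  (:migration_version/version
   (jdbc/execute-one! ds ["SELECT version FROM migration_version"])))

(defn pending-migrations
  "Migration files named like 0123-add-foo.sql with a prefix above `version`, in order."
  [dir version]
  (->> (file-seq (io/file dir))
       (filter #(re-matches #"\d{4}-.*\.sql" (.getName %)))
       (sort-by #(.getName %))
       (drop-while #(<= (Long/parseLong (subs (.getName %) 0 4)) version))))

(defn migrate!
  "Apply any migrations the dumped schema doesn't have yet.
   Assumes one SQL statement per file (or allowMultiQueries on the connection)."
  [ds dir]
  (doseq [f (pending-migrations dir (current-version ds))]
    (jdbc/execute! ds [(slurp f)])
    (jdbc/execute! ds ["UPDATE migration_version SET version = ?"
                       (Long/parseLong (subs (.getName f) 0 4))])))

;; e.g. (migrate! (jdbc/get-datasource {:dbtype "mysql" :dbname "app"
;;                                      :user "app" :password "secret"})
;;                "migrations")
```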

seancorfield 2021-06-01T15:34:45.029300Z

Well, you’d still need to load all your test data, which has to be compatible with whatever state your production schema is in, and you’ve introduced a manual step that needs to be performed “periodically” and then integrated back into version control… It’s still a lot of added complexity for a (potentially small) speedup in dev/test/CI…

cyppan 2021-06-01T15:40:42.029500Z

yes I understand, it might not be worth it (I’ve also seen fixture data put into migration scripts to solve part of what you describe).