docker and clojure ..... building uberjars is slow, and every change produces a big new image layer
anybody have any better / more efficient ideas or options?
what about separating uberjar building from image building?
e.g. build the uberjar in CI and just COPY the finished jar into the image?
or if you don’t use CI, you can use multi-stage builds for a similar effect (sketch below).
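a minimal sketch of the multi-stage version, assuming a deps.edn project built with tools.build; the image tags, the `:build` alias, and the `target/app.jar` path are all placeholders for whatever your project actually uses:

```dockerfile
# hypothetical multi-stage build: make the uberjar in one stage,
# copy only the finished jar into a slim runtime image
FROM clojure:temurin-21-tools-deps AS build
WORKDIR /build
# copy deps.edn (and build.clj) first so the dependency-download
# layer stays cached across source-only changes
COPY deps.edn build.clj ./
RUN clojure -P -T:build
COPY src src
# assumes a tools.build :build alias with an `uber` task
# that writes target/app.jar
RUN clojure -T:build uber

FROM eclipse-temurin:21-jre
WORKDIR /app
# only the jar crosses over; none of the build-stage layers ship
COPY --from=build /build/target/app.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```

the win is that the final image is just the JRE plus the jar, and a source-only change rebuilds from the `COPY src src` layer onward instead of re-downloading deps.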
We had this problem at my last company: a complex C++ application (a raytracing engine) that depended on a sizable set of dynamically loaded libraries, also written in-house. Various attempts were made to break the docker build into layers, for example adding the lib*.so files first and the main app last. But a typical changeset for a feature included changes not just to the “main” app but also to one or more of the dynamic libraries, so there was no obvious ordering of the libs that would prevent constant layer invalidation. While we could have done an analysis to come up with some heuristic ordering based on the libraries most likely to change, it just wasn’t worth it. The correct thing to do was to break the application into smaller parts communicating via protocols rather than leaving it all dynamically linked together, and that’s the direction we planned for instead.
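for the curious, the attempt looked roughly like this; the paths are invented for illustration:

```dockerfile
# roughly the layer ordering we tried; paths are made up
FROM ubuntu:22.04
# hoped-for "stable" layer: all the in-house shared libs first
COPY build/lib/*.so /opt/engine/lib/
# "volatile" layer: just the main binary last
COPY build/bin/engine /opt/engine/bin/
ENTRYPOINT ["/opt/engine/bin/engine"]
# in practice a typical feature touched some of the .so's too,
# invalidating the big lib layer and shipping it again in full
```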
that sounds like a smart design actually - even if you weren’t using docker 😄
Too true. Yes, in fact, the architecture was already broken into logical components. But the legacy build system (this is C++, remember) didn’t explicitly track the .so dependencies [edit: to be completely fair, it sort of did, but not granularly enough]; it just sequentially built all the .so’s into a directory, then compiled and linked all the apps. So the first attempt to deploy this code meant copying the entire directory of .so’s, which was huge. Definitely a self-inflicted injury. By the end, though, we had overhauled the parent company’s build system, added the explicit dependency tracking any Java or Clojure dev would automatically expect, and then we had what we needed to clean up the docker builds.
@gonewest818 “Huge” as in 1GB, 100MB, or 10MB?
over 1GB
I know there are ways to force C++ compilers to produce exponentially large template expansions, but those examples are contrived.
counting the parent company’s teams as well as ours, it was a big effort.
it wasn’t really about template expansions or anything like that.
A real gigabyte of binaries. Wow, just wow.
it was the sheer quantity of functionality, in the same class as Maya or other similar commercial and/or open source CGI tools. For example, Blender is about 450MB installed.
Maya is reported to have tens of millions of lines of code.