I’m just looking at our uberjar size, and it seems lacinia (via antlr4) brings in the icu4j library, which clocks in at roughly 35 MB 😞 is that a necessity even for a static schema?
Yeah, we just finally moved to making an uberjar with all our dependencies, and suddenly you see “how the sausage is made” with tons of unknown code in there 🙂
In lacinia, is it possible to insert middleware-like functions, similar to hugsql or ring? Specifically, I would like to apply a function to all field argument maps and resolved value maps before they are passed to the current field’s resolve function. The specific application I’m thinking of is converting the input to kebab-case. In addition, I would like to convert it back to camelCase on the way out of the resolver. This would remove a fair amount of redundancy in my resolver code while also ensuring that resolver code receives and provides consistently formatted keys, without the author of a resolver having to remember the format of the keys being passed in or what lacinia expects back out, eliminating a specific class of errors.
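For reference, the key conversion itself is the easy part. Here is a minimal sketch, assuming the camel-snake-kebab library (my pick for the example, not something mentioned in the thread):

```clojure
;; Minimal sketch of the key conversion helpers, assuming the
;; camel-snake-kebab library is on the classpath.
(ns example.keys
  (:require [camel-snake-kebab.core :as csk]
            [camel-snake-kebab.extras :as cske]))

(defn kebab-keys
  "Recursively convert map keys to kebab-case keywords, e.g. :userId -> :user-id."
  [data]
  (cske/transform-keys csk/->kebab-case-keyword data))

(defn camel-keys
  "Recursively convert map keys to camelCase keywords, e.g. :user-id -> :userId."
  [data]
  (cske/transform-keys csk/->camelCaseKeyword data))
```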
Antlr is used to parse query documents, so I doubt we can get by without Antlr’s dependencies.
Why are you concerned with uberjar size?
Because the schema is a data structure, you can iterate over the nodes and wrap each :resolve. There’s no need for the framework to do so (though a helper function might make it easier).
I’m worried the helper function might be necessary, or perhaps I need some more knowledge of ResolverResult. On the input side, this approach works fine, since the inputs I care about (specifically the arguments map and, possibly, the parent object container) are plain maps. However, on the output side, things are a little more dicey: the result of a resolver is not always a map but is sometimes a ResolverResult, which does not seem to provide a function in the protocol to get at the resulting data structure (which I need in order to transform the keys). Perhaps it would be possible to accomplish this via the interface of the concrete class, but that seems like a bad idea in the long term.
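On the ResolverResult point, one sketch that may cover it: the resolve namespace exposes resolve-promise and on-deliver!, so a wrapper can register a callback and re-deliver a transformed value rather than reaching into the concrete class. This assumes the kebab-keys/camel-keys helpers sketched above, and the on-deliver! callback arity has varied between lacinia versions, so treat it as a starting point rather than the definitive approach:

```clojure
;; Sketch of a higher-order wrapper around a single field resolver.
;; Assumes the kebab-keys / camel-keys helpers above and lacinia's
;; com.walmartlabs.lacinia.resolve namespace (ResolverResult,
;; resolve-promise, on-deliver!, deliver!). Note: the on-deliver!
;; callback arity has changed across lacinia versions, so check this
;; against the version you are actually running.
(ns example.wrap
  (:require [com.walmartlabs.lacinia.resolve :as resolve]
            [example.keys :refer [kebab-keys camel-keys]]))

(defn wrap-resolver
  "Kebab-case the args before calling resolver-fn; camelCase the result
  on the way back out."
  [resolver-fn]
  (fn [context args value]
    (let [result (resolver-fn context (kebab-keys args) value)]
      (if (satisfies? resolve/ResolverResult result)
        ;; Wrapped/async case: transform the value once it is delivered,
        ;; re-delivering it via a fresh promise.
        (let [p (resolve/resolve-promise)]
          (resolve/on-deliver! result
                               (fn [resolved-value]
                                 (resolve/deliver! p (camel-keys resolved-value))))
          p)
        ;; Plain value case: transform immediately.
        (camel-keys result)))))
```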
That is, iterate over the input schema, before passing it to schema/compile.
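A sketch of that walk, assuming the :resolve entries are already functions (if you use keyword placeholders plus util/attach-resolvers, do the wrapping after attaching) and the wrap-resolver sketched above:

```clojure
;; Sketch of walking the uncompiled schema map and wrapping every
;; :resolve that is already a function (keyword placeholders are left
;; alone), using the wrap-resolver sketched above.
(ns example.schema
  (:require [clojure.walk :as walk]
            [example.wrap :refer [wrap-resolver]]))

(defn wrap-all-resolvers
  "Walk the input (uncompiled) schema and wrap each :resolve fn."
  [input-schema]
  (walk/postwalk
    (fn [node]
      (if (and (map? node) (fn? (:resolve node)))
        (update node :resolve wrap-resolver)
        node))
    input-schema))
```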
I was uploading via my local machine on a slow connection, but in reality it’s a non-issue. Sorry for the noise.
I might be confused, but this is more about the input (and parent results) that are passed to each field resolver.
I think the suggestion is to wrap your existing resolvers with a higher-order function that will convert input and output. You can automate this by walking the schema data structure before compiling.
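Putting the pieces together, still as a sketch built on the hypothetical helpers above (util/attach-resolvers is lacinia’s helper for keyword :resolve placeholders; skip it if your schema already holds functions):

```clojure
;; Putting the pieces together (sketch): attach resolver fns to their
;; keyword placeholders, wrap them, then compile as usual.
(ns example.main
  (:require [com.walmartlabs.lacinia.schema :as schema]
            [com.walmartlabs.lacinia.util :as util]
            [example.schema :refer [wrap-all-resolvers]]))

(defn build-schema
  [input-schema resolver-map]
  (-> input-schema
      (util/attach-resolvers resolver-map)
      (wrap-all-resolvers)
      (schema/compile)))
```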
Ah, I see. That makes sense. Thanks
Not noise, though usually dependencies are a concern because of conflicts rather than size.
People used to do amazing things with 128 bytes of RAM. https://mitpress.mit.edu/books/racing-beam