graphql

orestis 2020-10-13T08:43:31.028Z

I’m just looking at our uberjar size, and it seems lacinia (via antlr4) brings in the icu4j library, which clocks in at roughly 35MB 😞 is that a necessity even for a static schema?

orestis 2020-10-14T08:10:28.039Z

Yeah, we just finally moved to making an uberjar with all our dependencies, and suddenly you see “how the sausage is made” with tons of unknown code in there 🙂

mafcocinco 2020-10-13T14:38:57.032300Z

In lacinia, is it possible to insert middleware-like functions, similar to HugSQL or Ring? Specifically, I would like to apply a function to all field argument maps and resolved value maps prior to them being passed to the resolve function of the current field. The specific application I’m thinking of is converting the input to kebab-case; in addition, I would like to convert it back to camelCase on the way out of the resolver. This would reduce a decent amount of redundancy in my resolver code while also ensuring that resolver code receives and provides consistently formatted keys, without the author of the resolver having to remember the format of the keys being passed in or what lacinia expects back out. That would eliminate a specific class of errors.
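
(A rough sketch of the kind of wrapper being asked about, assuming the camel-snake-kebab library for the case conversions; transform-keys and wrap-kebab-camel are illustrative helpers, not lacinia API. It only covers resolvers that return plain data; ResolverResult returns are the wrinkle that comes up later in the thread.)

```clojure
(require '[camel-snake-kebab.core :as csk]
         '[clojure.walk :as walk])

(defn transform-keys
  "Recursively applies `f` to every map key in `data`."
  [f data]
  (walk/postwalk
    (fn [x]
      (if (map? x)
        (into {} (map (fn [[k v]] [(f k) v])) x)
        x))
    data))

(defn wrap-kebab-camel
  "Wraps a field resolver so it sees kebab-case keys in its args and parent
  value, and has its (plain) result converted back to camelCase."
  [resolver]
  (fn [ctx args value]
    (transform-keys csk/->camelCaseKeyword
                    (resolver ctx
                              (transform-keys csk/->kebab-case-keyword args)
                              (transform-keys csk/->kebab-case-keyword value)))))
```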

hlship 2020-10-13T15:02:00.032400Z

Antlr is used to parse query documents, so I doubt we can get by without Antlr’s dependencies.

hlship 2020-10-13T15:02:07.032600Z

Why are you concerned with Uberjar size?

hlship 2020-10-13T15:03:29.033700Z

Because the schema is a data structure, you can iterate over the nodes and wrap each :resolve. There’s no need for the framework to do so (though a helper function might make it easier).

mafcocinco 2020-10-19T20:00:29.040Z

I’m worried the helper function might be necessary, or perhaps I need some more knowledge of ResolverResult. On the input side, this approach works fine, since the inputs I care about (specifically the arguments map and, possibly, the parent object) are plain maps. On the output side, though, things are a little more dicey: the result of the resolver is not always a map but is sometimes a ResolverResult, which does not seem to provide a function in its protocol to get at the resulting data structure (which I need in order to transform the keys). Perhaps it would be possible to accomplish this via the interface of the concrete class, but that seems like a bad idea in the long term.
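
(A hedged sketch of handling that case by re-wrapping the ResolverResult, assuming lacinia’s com.walmartlabs.lacinia.resolve namespace with resolve-promise, deliver!, and on-deliver!; the on-deliver! callback arity has varied across lacinia versions, so treat this as an outline rather than exact API.)

```clojure
(require '[com.walmartlabs.lacinia.resolve :as resolve])

(defn wrap-output
  "Applies `f` to a resolver's resolved value, whether the resolver returns
  a plain value or a ResolverResult."
  [f resolver]
  (fn [ctx args value]
    (let [result (resolver ctx args value)]
      (if (satisfies? resolve/ResolverResult result)
        ;; Async case: deliver the transformed value on a fresh promise
        ;; once the original result is delivered.
        (let [wrapped (resolve/resolve-promise)]
          (resolve/on-deliver! result
                               (fn [resolved-value]
                                 (resolve/deliver! wrapped (f resolved-value))))
          wrapped)
        ;; Plain value: transform directly.
        (f result)))))
```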

hlship 2020-10-13T15:04:13.034200Z

That is, iterate over the input schema, before passing it to schema/compile.
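
(A sketch of that pre-compile walk; wrap-all-resolvers and my-schema-map are illustrative names, not part of lacinia, and the walk assumes resolver fns are attached inline rather than left as keyword placeholders for util/attach-resolvers.)

```clojure
(require '[clojure.walk :as walk]
         '[com.walmartlabs.lacinia.schema :as schema])

(defn wrap-all-resolvers
  "Walks an uncompiled schema map and applies `wrap` to every :resolve fn."
  [schema-map wrap]
  (walk/postwalk
    (fn [node]
      (if (and (map? node) (fn? (:resolve node)))
        (update node :resolve wrap)
        node))
    schema-map))

;; Usage: wrap every resolver (e.g. with the wrap-kebab-camel sketch above),
;; then compile as usual. `my-schema-map` stands in for your uncompiled schema.
(def compiled-schema
  (-> my-schema-map
      (wrap-all-resolvers wrap-kebab-camel)
      schema/compile))
```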

orestis 2020-10-13T16:00:10.035200Z

I was uploading via my local machine on a slow connection, but in reality it’s a non-issue. Sorry for the noise.

mafcocinco 2020-10-13T17:32:40.035800Z

I might be confused, but this is more about the inputs (and parent results) that are passed to each field resolver.

orestis 2020-10-13T18:53:29.037900Z

I think the suggestion is to wrap your existing resolvers with a higher-order function that converts the input and output. You can automate this by walking the schema data structure before compiling.

mafcocinco 2020-10-13T21:09:19.038100Z

Ah, I see. That makes sense. Thanks

hlship 2020-10-13T23:43:40.038500Z

Not noise, though usually dependencies are a concern because of conflicts rather than size.

hlship 2020-10-13T23:44:13.038700Z

People used to do amazing things with 128 bytes of RAM. https://mitpress.mit.edu/books/racing-beam