So after writing a few GraphQL resolvers that do a whole bunch of destructuring of input objects, there seems to be an obvious type-system-shaped hole in how the code looks: if the GraphQL schema changes (e.g. a field is renamed or moved around), the destructuring code doesn’t know.
I wonder if there is a way to write a macro that does GraphQL-aware destructuring — so in addition to having :keys, it also knows the GraphQL input object that it destructures, and can check (at compile time!) that all keys are present and correct.
(if the graphql schema changes, you’ve got bigger problems than destructuring code inside the server, b/c you just broke the api for your clients)
(Facebook’s way of resolving that is a “never remove, just add new things” policy, though that can get unwieldy for a smaller company that doesn’t have to support clients on every old version)
I’m my own client 🙂
convenient 😄
I mean, sure, once you go public you need to be very careful about these things. But during development where the API is malleable and might get renamed as we get better at naming things…
then the problem converges to wanting an ORM, i.e. “if I change my DB schema, my queries stop working”
One step at a time 🙂 (or, we use MongoDB, what is that Schema you’re talking about?)
Hey, we are sleeping in the gutter but we are at least looking at the stars 😛
(fwiw I’m not a fan of multiplying layers, I just like having schema somewhere for reference 😄 )
I’m overstating it; we do have a schema, but of course it relies on a lot of manual work to keep it up to date. So since we’re using GraphQL now, I’d like to at least make sure we don’t add a second source of bugs.
An example of what I mean:
(defn my-resolver [context args value]
  (let [{:keys [foo_id bar_id flag_present]} args]
    ...))
All these keys correspond 1-1 to my graphql schema definition. I’d like to have some warm fuzzy feeling that the computer made sure these keys actually exist in that schema and I didn’t mistype anything.
@orestis I don't know of a macro to do this but it wouldn't surprise me if someone had written one to introspect the keys of a map spec and destructure for that
Alternatively, it kind of sounds like you want a combination of destructure + assert, so as to fail fast if the data doesn’t match up with the destructuring keys. Destructuring syntax in Clojure is now specified with spec (clojure.core.specs.alpha), so you might be able to borrow that spec and implement your own let. Maybe something like assert-let, where it verifies that all destructured keys on the left side exist in the data on the right side.
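A minimal sketch of that assert-let idea, handling plain :keys destructuring of a single map (hypothetical macro, not something from a library):

(defmacro assert-let
  "Like (let [{:keys [...]} expr] body), but asserts at runtime that every
  destructured key is actually present in the map (nil values still pass)."
  [[binding-map expr] & body]
  (let [wanted (mapv keyword (:keys binding-map))]
    `(let [m# ~expr
           missing# (remove (fn [k#] (contains? m# k#)) ~wanted)]
       (assert (empty? missing#) (str "missing keys: " (vec missing#)))
       (let [~binding-map m#] ~@body))))

;; (assert-let [{:keys [foo_id bar_id]} {:foo_id 1}]
;;   foo_id)
;; => AssertionError, complaining that :bar_id is missing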
Good pointer! However I’d like to validate against the GraphQL schema, not the data on the right side (as these might be nil/missing anyway since arguments can be optional).
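As a rough sketch, a schema-driven compile-time version could look something like the following, assuming the Lacinia schema is plain EDN that is readable at macroexpansion time. The macro name graphql-let, the schema path, and resources/schema.edn are made up for illustration, not an existing library:

(require '[clojure.edn :as edn]
         '[clojure.set :as set])

;; Assumption: the same schema EDN that Lacinia compiles is readable here.
(def schema (edn/read-string (slurp "resources/schema.edn")))

(defmacro graphql-let
  "Like (let [{:keys [...]} args] ...), but throws at macroexpansion time
  if any destructured key is not declared under schema-path in the schema."
  [schema-path binding-map args-expr & body]
  (let [declared (set (keys (get-in schema schema-path)))
        wanted   (set (map keyword (:keys binding-map)))
        unknown  (set/difference wanted declared)]
    (when (seq unknown)
      (throw (ex-info (str "Keys not in GraphQL schema: " unknown)
                      {:schema-path schema-path :declared declared})))
    `(let [~binding-map ~args-expr] ~@body)))

;; Usage, checking the destructured keys against a query field's :args:
(defn my-resolver [context args value]
  (graphql-let [:queries :myQuery :args]
               {:keys [foo_id bar_id flag_present]}
               args
    {:foo foo_id :bar bar_id :flag flag_present}))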
D E C E P T I O N 😄
because trust!
(I love how the guy is a new meme, but includes normal people as well as internet weirdos)
I would hope, but don't know, that Lacinia would ensure that every key and value are present in input objects and that missing ones are added with nil values. If that's the case, you could validate against the data directly, because Lacinia would ensure the correct shape based on the schema and you would only have to check for missing keys.
Oh, I wouldn’t expect this from Lacinia, but it’s a good point — if some layer above could do it, then validation becomes more straightforward. I wonder about nested stuff, though — and it’s still going to be runtime validation, whereas I’d prefer to do this at compile time.
Yeah — my wife and I got a new code from him 🙂
I’m working on a couple of prototypes of an internal API where, for the foreseeable future, all clients are going to be clj/cljs. I’m exploring using venia to render the GraphQL queries themselves so I can work with EDN representations pervasively, but I wanted to pause and take stock: have other folks used GraphQL in an all-Clojure environment, and liked or disliked the experience?
@bpiel implementing GraphQL/Lacinia directly may not be as convenient as implementing the Clojure-native alternative, EQL via pathom, and then providing an extra GraphQL endpoint as a bridge: https://github.com/denisidoro/graffiti
By using an EQL endpoint with a ClojureScript client you don't need an extra layer like venia/artemis, while still keeping a GraphQL endpoint for non-cljs clients.
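For anyone who hasn't seen EQL: queries are plain EDN vectors of attributes and joins, so they compose with ordinary Clojure functions (the attribute names below are made up):

;; An EQL query is just data:
[{:employee/all [:employee/name
                 :employee/address
                 {:employee/friends [:employee/name :employee/email]}]}]

;; and being data, it composes with plain Clojure:
(defn with-extra-fields [query extra-fields]
  (update-in query [0 :employee/all] into extra-fields))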
@donaldball GraphQL's cons from a Clojure perspective: 1. static typing, 2. string-based and therefore not very composable. Also, Timothy discussed it here: https://www.reddit.com/r/Clojure/comments/as16f6/transcript_of_timothy_baldridge_and_me_chatting/
@myguidingstar I'll check it out. Thanks
I also wrote a blog post on my experiences: https://juxt.pro/blog/posts/through-the-looking-graph.html
@myguidingstar Thanks for the link to that conversation, it was extremely interesting.
Started a new project in Jan. I've been using this fork of venia so that I can send EDN (as transit) from cljs over websocket to an internal GraphQL API (powered by Hasura, but probably switching to Lacinia soonish). I definitely prefer it over anything else I've done in the past (REST etc): https://github.com/district0x/graphql-query
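For reference, the EDN it takes looks roughly like this (adapted from the graphql-query README; the exact formatting of the generated string may differ):

(require '[graphql-query.core :refer [graphql-query]])

;; EDN in, GraphQL query string out:
(graphql-query {:queries [[:employee {:id 1 :active true}
                           [:name :address [:friends [:name :email]]]]]})
;; => "{employee(id:1,active:true){name,address,friends{name,email}}}"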
The wins over REST endpoints for queries seem obvious on a few fronts. I’m a bit less sure about the mutations though. Do you or anyone else have any good or bad patterns for those to share?
Ah yeah, I'm actually not using graphql for mutations. Not necessarily for any good reason though.
This is my first experience with graphql. I guess I just felt like I wanted to focus on the query/schema aspect first.
I found representing queries in EDN/venia somewhat difficult to work with, so I ended up using strings. I haven't found a great need to dynamically add or remove fields from a query.
I like being able to copy paste a query from graphiql
Haven’t needed so far to make a dynamic query at all
For my team, it’s almost certainly going to be less about the dynamism than the ability to use our editors’ structural manipulation capabilities, but those are good perspectives, thanks for sharing.
We've been using https://github.com/workframers/artemis/. We use transit-json for the serialization part. The biggest PITA is translating back and forth between namespaced keys and non-namespaced ones, IMHO, but adding a translation layer on both sides also lets you shape the data into a convenient form.
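The translation layer itself can stay tiny; a non-recursive sketch with hypothetical helper names (nested maps would need a walk):

;; strip namespaces before crossing the GraphQL boundary
(defn strip-ns [m]
  (into {} (map (fn [[k v]] [(keyword (name k)) v]) m)))

;; re-add a namespace on the way back in
(defn add-ns [ns-str m]
  (into {} (map (fn [[k v]] [(keyword ns-str (name k)) v]) m)))

;; (strip-ns {:employee/name "Ada"})   => {:name "Ada"}
;; (add-ns "employee" {:name "Ada"})   => {:employee/name "Ada"}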
I’m using Venia with re-graph for reads and mutations. I find re-graph much easier to grok than Artemis. Venia providing structural editing is a win for me too
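Roughly how that combination looks, with venia producing the query string and re-graph sending it (treat this as a sketch: the event signatures have shifted a bit between re-graph versions, and ::on-things is a made-up handler):

(require '[venia.core :as v]
         '[re-graph.core :as re-graph]
         '[re-frame.core :as rf])

;; venia: EDN -> GraphQL query string
(def things-query
  (v/graphql-query {:venia/queries [[:things [:id :name]]]}))

;; re-graph (after ::re-graph/init elsewhere): dispatch the query and
;; handle the response in a normal re-frame event
(rf/dispatch [::re-graph/query things-query {} [::on-things]])

(rf/reg-event-db
  ::on-things
  (fn [db [_ {:keys [data errors]}]]
    (if errors
      (assoc db :query-errors errors)
      (assoc db :things (:things data)))))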