pathom

:pathom: https://github.com/wilkerlucio/pathom/ & https://pathom3.wsscode.com & https://roamresearch.com/#/app/wsscode
wilkerlucio 2020-11-30T13:15:57.354500Z

wilkerlucio 2020-11-30T13:18:20.354700Z

I don't think so, at this time. I was thinking about it: users could do a "query subscription" (kind of like Fulcro does), and mutations could have an "affected attributes" declaration. From that, we could use the Pathom index to figure out all the computations affected by those attributes, and if there is a subscription to any of them, refresh it
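A purely hypothetical sketch of the declaration being described. `::pco/affected-attributes` is not a real pathom key, and `save-user!` is a made-up helper; this only illustrates the shape the idea might take:

```clojure
;; HYPOTHETICAL sketch: ::pco/affected-attributes is NOT a real pathom
;; key; it illustrates the proposed "affected attributes" declaration.
(pco/defmutation save-user [{:user/keys [id name]}]
  {::pco/output [:user/id]
   ;; attributes whose subscribed queries should refresh after this runs
   ::pco/affected-attributes [:user/name :user/display-name]}
  (save-user! id name)
  {:user/id id})
```

From such a declaration, the index could map the listed attributes to every derived computation that depends on them, and re-run any live subscription touching one.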

2020-11-30T17:15:32.355500Z

Question… is it possible to dynamically generate input and output properties / resolvers with pathom?

2020-11-30T17:17:54.357100Z

Essentially I need to discover the attributes for my properties at run time. I think I would need to query my system to discover them dynamically before I resolve the queries.

2020-11-30T17:18:36.357900Z

Is there a recommended way to do such a thing with pathom? From what I’ve seen it looks like properties must be coded up front, rather than discovered from your data

2020-11-30T17:32:04.358400Z

ok just seen there is a function resolver

1👍
2020-12-01T10:14:12.376400Z

> hmm, I guess "have to know all the attributes ahead of time" can be a frown for rick

Yeah, it looks like this might be a problem for us 😞

> but generating once at app start should be fine

This won't work for us, as they'd have to be rebuilt on every data update, and that really isn't practical for a number of reasons. It would be far easier to discover them at query time.

Essentially our problem is that our platform hosts data that can be anything, with arbitrary predicates that our users can coin and supply themselves. Obviously a generic platform can't do anything bespoke without knowledge of the data, so we target the meta-level (vocabularies), which provides schemas around the shape of the data. However, the vocabularies are a level removed from the actual properties themselves…

For example, one of the main vocabularies we use is the W3C RDF Data Cube ( https://www.w3.org/TR/vocab-data-cube/ ), which essentially models multidimensional statistical data into dimensions, attributes, measures and observations. An observation is essentially just a cell in an N-dimensional spreadsheet, triangulated by its dimensions. They usually include area and time, but also arbitrary other dimensions specific to the cube; for example, homelessness data might include dimensions on gender/age, but if someone loaded a trade dataset it would have imports, exports and dimension properties like "chained volume measure" etc.

We can't feasibly know all of these when we write the software, but we can rely on them being formally described in that vocabulary, so we can discover what they are for any given cube at query time: either through the dataset's Data Structure Definition (DSD) or via the fact that all dimension predicates are of type DimensionProperty. The DSD part of the cube vocabulary is essentially a meta-schema for describing cube schemas, but those descriptions live in amongst the triples/data itself.

So in order to use pathom I was hoping to be able to dynamically generate a subset of the resolvers at query time.

2020-12-01T10:20:50.376600Z

So essentially I'd need to provide functions for things like ::pco/input and ::pco/output instead of hard-coded vectors

2020-12-01T10:20:59.376800Z

(pco/defresolver fetch-all-observations [{:keys [cube/id]}]
  {::pco/input  [:cube/id]
   ::pco/output #(lookup-cube-dimensions id)}
  (fetch-observations id))
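For what it's worth, pathom's function form of resolvers can get close to this today: discover the dimensions first, then construct the resolver, so the config values are plain vectors by the time pathom sees them. A hedged sketch, reusing the hypothetical `lookup-cube-dimensions` and `fetch-observations` from the snippet above:

```clojure
;; Sketch (assumes pathom3's pco/resolver and the hypothetical helpers
;; from the snippet above): build the resolver at runtime, after the
;; cube's dimensions have been discovered, so ::pco/output is a plain
;; vector rather than a function.
(defn observations-resolver [cube-id]
  (let [dims (lookup-cube-dimensions cube-id)]
    (pco/resolver 'fetch-all-observations
      {::pco/input  [:cube/id]
       ::pco/output (vec dims)}
      (fn [_env {:cube/keys [id]}]
        (fetch-observations id)))))
```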

souenzzo 2020-11-30T17:33:35.358600Z

I already generated resolvers from a weird/custom REST API specification. It worked really well

2020-11-30T17:35:43.358800Z

yeah I think this would be sufficient… I haven’t played with pathom in any depth yet; just trying to assess at a high level whether I could make it work. My underlying data model is already open-world/RDF so in many ways it’s a natural fit; though I’d need to discover applicable attributes through meta-level queries first.

2020-11-30T18:58:09.359200Z

computing the graph indexes is rather expensive, so I guess it'll help if they can be generated in a meta-level phase to minimize the overhead in subsequent requests

wilkerlucio 2020-11-30T18:59:20.359400Z

I don't think they are expensive to generate, but surely you don't want to do it before each query; generating once at app start should be fine

1👍
2020-11-30T19:09:49.359700Z

for an RDF kind of problem I guess the open world is just too big (infinite?) to be indexed at start time; instead it must be "lazily" expanded as the client asks for more. I can imagine keeping track of several (dynamically generated) child resolvers for each session (or even "garbage collecting" them somehow?)... Of course I don't know that much about rick's use case 🙂

2020-11-30T19:11:02.359900Z

also, maybe it'll be an interesting case for pathom's query UI where the whole graph is not known beforehand?

wilkerlucio 2020-11-30T21:05:02.360900Z

you can always cache the index if it gets too heavy

wilkerlucio 2020-11-30T21:05:19.361100Z

and distributed environments (local or remote) could get something like an expanding graph

wilkerlucio 2020-11-30T21:05:37.361300Z

but figuring out the index at the planning stage is not something I see as viable

wilkerlucio 2020-11-30T21:11:16.361500Z

for RDF, same as for GraphQL and Datomic, there will be better support via dynamic resolvers; this is where you can improve things, but you still have to know all the attributes ahead of time, otherwise pathom can't tell whether to start processing or not

Björn Ebbinghaus 2020-11-30T21:26:21.365400Z

Am I right that the viz full graph doesn’t show edges on a to-many relationship?

wilkerlucio 2020-11-30T21:41:24.366Z

not the nested parts, it connects with the attribute that has the list, but not the items (the indirect things)

JAtkins 2020-11-30T21:48:15.371300Z

Maybe this is crazy/impossible, but can dynamic dependencies work? A use case could be something like total-book-cost

pallet-cost, pallet-count, book-cost, book-count

total-book-cost:
  if present (pallet-cost, pallet-count) (* pallet-cost pallet-count)
  if present (book-cost, book-count) (* book-cost, book-count)
  else err

(total-book-cost {:pallet-cost 1 :pallet-count 2}) => 2

(total-book-cost {:book-cost 4 :book-count 4}) => 16
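The pseudocode above, written out as a plain Clojure function (no pathom involved), matches the two examples:

```clojure
;; Plain-function version of the total-book-cost pseudocode above.
(defn total-book-cost
  [{:keys [pallet-cost pallet-count book-cost book-count]}]
  (cond
    (and pallet-cost pallet-count) (* pallet-cost pallet-count)
    (and book-cost book-count)     (* book-cost book-count)
    :else (throw (ex-info "no usable cost/count pair" {}))))

(total-book-cost {:pallet-cost 1 :pallet-count 2}) ; => 2
(total-book-cost {:book-cost 4 :book-count 4})     ; => 16
```

The question is whether pathom's planner can pick between the two input sets the same way the `cond` does.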

JAtkins 2020-11-30T21:49:17.372500Z

It would (maybe?) be slow (having to trace whether book-cost can be calculated from other params), but it could be useful if possible.

2020-11-30T21:59:57.372600Z

hmm, I guess "have to know all the attributes ahead of time" can be a frown for rick, but let's wait for his confirmation. IIRC, usually you connect to one RDF endpoint and may get references to other RDF endpoints, and the number of endpoints is kinda infinite; that's why you can't know beforehand. Anyway, this also needs confirmation

souenzzo 2020-11-30T22:15:53.373700Z

@jatkin It's possible and, AFAIK, the performance is pretty good

JAtkins 2020-11-30T22:17:48.374Z

Oh, duh. I'm an idiot 🙂. I thought there was much more ceremony here.

JAtkins 2020-11-30T22:18:16.374200Z

Thanks very much for taking the time to write this out!

souenzzo 2020-11-30T22:33:23.374600Z

+ pathom3 example on the same gist. Pathom 3 is way simpler 🙂

wilkerlucio 2020-11-30T22:35:31.374800Z

just a warning with this approach: in case the entity has access to all 4 attributes, the result becomes uncertain

wilkerlucio 2020-11-30T22:38:28.375Z

one interesting thing that this case brings to mind is that you can also make a function that returns resolvers, so another way to implement this:

2🦜
wilkerlucio 2020-11-30T22:38:29.375200Z

(defn attr-total-cost-resolver [attr]
  (let [cost-kw       (keyword (str attr "-cost"))
        count-kw      (keyword (str attr "-count"))
        total-cost-kw (keyword (str "total-" attr "-cost"))
        sym           (symbol (str "total-book-cost-from-" attr))]
    [(pc/resolver sym
       {::pc/input  #{cost-kw
                      count-kw}
        ::pc/output [total-cost-kw]}
       (fn [_ input]
         {total-cost-kw (* (cost-kw input) (count-kw input))}))
     (pc/alias-resolver total-cost-kw :total-book-cost)]))

(let [;; Relevant part: the resolvers
      registers [(attr-total-cost-resolver "book")
                 (attr-total-cost-resolver "pallet")]
      ;; pathom2 parser. Pathom3 is simpler
      parser    (p/parser {::p/plugins [(pc/connect-plugin)]})
      env       {::p/reader               [p/map-reader
                                           pc/reader2
                                           pc/open-ident-reader
                                           env-placeholder-reader-v2] ;; I backported pathom3 placeholders to pathom2
                 ::pc/indexes             (pc/register {} registers)
                 ::p/placeholder-prefixes #{">"}}]
  (parser env `[{(:>/pallet {:pallet-cost 1 :pallet-count 2})
                 [:total-book-cost]}]))
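For comparison, a rough sketch of the same registers in pathom 3 (untested; namespaces as in the pathom3 docs — `pco/resolver`, `pco/alias-resolver`, `pci/register`, `p.eql/process`):

```clojure
;; Hedged pathom3 sketch of the example above.
(require '[com.wsscode.pathom3.connect.operation :as pco]
         '[com.wsscode.pathom3.connect.indexes :as pci]
         '[com.wsscode.pathom3.interface.eql :as p.eql])

(defn attr-total-cost-resolver [attr]
  (let [cost-kw       (keyword (str attr "-cost"))
        count-kw      (keyword (str attr "-count"))
        total-cost-kw (keyword (str "total-" attr "-cost"))]
    [(pco/resolver (symbol (str "total-cost-from-" attr))
       {::pco/input  [cost-kw count-kw]
        ::pco/output [total-cost-kw]}
       (fn [_env input]
         {total-cost-kw (* (cost-kw input) (count-kw input))}))
     (pco/alias-resolver total-cost-kw :total-book-cost)]))

(def env
  (pci/register (mapcat attr-total-cost-resolver ["book" "pallet"])))

;; No parser/reader/placeholder setup needed; process takes the entity
;; data directly as the starting context.
(p.eql/process env {:pallet-cost 1 :pallet-count 2} [:total-book-cost])
```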

wilkerlucio 2020-11-30T22:39:46.375400Z

but the same warning I mentioned before still applies to it

wilkerlucio 2020-11-30T22:41:57.375600Z

I think a proper solution to this requires optional inputs; that's something planned for pathom 3. Then you can ask for all the attributes (as optionals) and put your logic inside the resolver, so you have more control
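Pathom 3 did ship optional inputs, marked with `pco/?`. A sketch of how the single-resolver version might look with them (the `cond` logic is adapted from the pseudocode earlier in the thread):

```clojure
;; Sketch using pathom3 optional inputs: pco/? marks an input as
;; optional, so the resolver runs with whichever attributes resolve.
(pco/defresolver total-book-cost
  [{:keys [book-cost book-count pallet-cost pallet-count]}]
  {::pco/input  [(pco/? :book-cost) (pco/? :book-count)
                 (pco/? :pallet-cost) (pco/? :pallet-count)]
   ::pco/output [:total-book-cost]}
  {:total-book-cost
   (cond
     (and pallet-cost pallet-count) (* pallet-cost pallet-count)
     (and book-cost book-count)     (* book-cost book-count))})
```

With all four attributes marked optional, the resolver itself decides which pair wins, avoiding the "uncertain result" problem of having two competing resolvers.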

wilkerlucio 2020-11-30T22:44:04.375800Z

one way to achieve this in pathom 2 is having a resolver that doesn't require any input; then from inside of it you call the parser again, ask for the attributes (all of them), and work with that result, like:

(pc/defresolver total-book-cost [{:keys [parser] :as env} _]
  {::pc/output [:total-book-cost]}
  (let [result (parser env [:book-cost
                            :book-count
                            :pallet-cost
                            :pallet-count])]
    ...))