Podcast discussion about redux that’s interesting: https://overcast.fm/+IoAqtBpJU - hope to finish it soon.
There are a lot of complaints about redux around boilerplate and complexity, so we should aim to avoid those somehow.
I think https://github.com/riverford/compound would solve many of the issues people have with building up various indexes for their state.
I think by keeping state management out of the UI layer, we solve many of the re-xxx problems.
We will need to decide what our http solution is. Probably either promises or core async. While I'm reluctant to go heavy with core.async, libraries like cljs-http will integrate better out of the box that way.
What’s wrong with good old XHR?
I’ve stayed away from core async for some time, esp. on the front end. The go macro seems a bit scary to me; not sure what it would look like in things like React dev tools.
js/fetch is okay, I like it. Cljs http was surprisingly short when I looked. It's mostly about content negotiation I suppose.
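(for reference, js/fetch from CLJS is about this much, promise style; the endpoint and headers here are made up:)
(-> (js/fetch "/api/people/123"
              (clj->js {:headers {"Accept" "application/json"}}))
    (.then (fn [res] (.json res)))
    (.then (fn [data] (js->clj data :keywordize-keys true)))
    (.then (fn [m] (println "got" m))))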
Core async wouldn't show up I don't think, as it operates at a different level.
A callback interface is good too, I think people quite enjoy playing with core async though.
Callback hell is probably less relevant in this context though
Ah, the joy of not having to support IE11 :)
I’ve been meaning to follow up on https://mobile.twitter.com/dan_abramov/status/1246251834324516868 for some time. Cancellable XHR wasn’t on my radar.
In any case, http is async, so handling async events is the general case.
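(side note: cancellable fetch in modern browsers is just AbortController; a minimal sketch, endpoint made up:)
(def controller (js/AbortController.))

(-> (js/fetch "/api/slow" #js {:signal (.-signal controller)})
    (.then (fn [res] (.json res)))
    (.catch (fn [e] (when (= (.-name e) "AbortError")
                      (println "request cancelled")))))

;; later, e.g. on click-away:
(.abort controller)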
I also have to support ie11 at work, it's not joyous at all : ( : D
http://Polyfill.io saves some time. We took up bootstrap 4 for a css baseline but it’s still a pain.
Yeah, core async isn't too natural here... You can close the channels but if you have many that won't be pretty
Cancel friendly async seems like a difficult challenge
I tried polyfill.io but it wasn't reliable. Since then we bundle the necessary polyfills and serve them with our app
I like what someone on twitter said recently... that when thinking about async processes, one should start with cancellation, since it's often ignored until later and really mucks with a lot of seemingly nice abstractions that model things like a sync pipeline
with core.async you basically need success/failure/cancel channels. using an event system like re-frame/redux is also complicated, see https://github.com/davidkpiano/useEffectReducer/issues/1
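concretely, a minimal sketch of the three-channel idea (request! is a stub standing in for a real callback-based HTTP client; all names hypothetical):
(require '[clojure.core.async :as a :refer [chan go alt! put!]])

(defn request! [url {:keys [on-success on-error]}]
  ;; stub standing in for a real HTTP client
  (on-success {:url url :status 200}))

(defn fetch-with-cancel [url]
  (let [success (chan)
        failure (chan)
        cancel  (chan)]
    (request! url {:on-success #(put! success %)
                   :on-error   #(put! failure %)})
    (go
      (alt!
        success ([v] (println "got" v))
        failure ([e] (println "failed" e))
        cancel  (println "cancelled")))
    ;; the caller closes `cancel` to abort; aborting the underlying
    ;; request as well is the ugly part
    cancel))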
there's lots of tradeoffs
https://github.com/davidkpiano/useEffectReducer/pull/8 I think davidkpiano took the route I've been thinking of, where you keep a reference to a process object and use that to control cancellation/cleanup, but it's sort of a side channel to the event-driven system you had before. it doesn't quite feel first class.
a sort of subtle thing that I'm not sure of in useEffectReducer is when the cancellation occurs - or SHOULD occur:
• immediately as part of the event (e.g. click away, receive response from something else) - immediate cancellation, but render in relation to that might not be scheduled until later
• during the scheduled render - second fastest, but might be called multiple times due to renders being retried due to suspensions
• during use(Layout)Effect - safest in CM but maybe too late if you're coordinating cancellation with processes outside of the react tree??
i could see a network request being cancelled immediately on user interaction, but for animations you would want to maybe wait until flushing to the DOM?? I'm not sure
I've decided that the first problem I want to solve for this data fetching malarkey is normalization
it's the one that I can sort of understand the best (also it sounds fun)
fulcro, datascript, apollo, relay, etc. all have this custom machinery they've built to normalize their internal structure so that when you receive an update to an entity, all subscriptions see that update.
I want something more a la carte: a map-alike data type that auto-normalizes data given to it
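roughly this shape: a toy sketch of the auto-normalize idea (not any library's API, and assuming a single id key):
(defn ident [id-key m] [id-key (get m id-key)])

(defn normalize
  "Walks `entity`, stores every map containing `id-key` under its
  ident, and replaces nested entities with refs."
  [id-key entity]
  (let [tables (atom {})
        walk   (fn walk [v]
                 (cond
                   (and (map? v) (contains? v id-key))
                   (let [v' (into {} (map (fn [[k x]] [k (walk x)])) v)]
                     (swap! tables assoc (ident id-key v') v')
                     (ident id-key v'))

                   (vector? v) (mapv walk v)
                   :else v))]
    (walk entity)
    @tables))

(normalize :person/id
           {:person/id 123
            :person/name "Will"
            :friends [{:person/id 456 :person/name "Mallory"}]})
;; => {[:person/id 456] {:person/id 456 :person/name "Mallory"}
;;     [:person/id 123] {:person/id 123
;;                       :person/name "Will"
;;                       :friends [[:person/id 456]]}}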
Isn't something like https://github.com/stuartsierra/mapgraph sufficient?
haha, I figured there's probably prior art here but couldn't find exactly what I wanted. that looks pretty much like what I'm building
https://github.com/den1k/subgraph Extends that to CLJS (cljc), adds recursive join queries and has an optional re-frame layer
there's a couple of features that I would personally like to have that aren't currently present in mapgraph:
1. Entities can have many lookup refs, e.g. [:account/id 123] and [:account.contact/email "foo@bar"] could both be unique ways of referring to the same entity (see the sketch after this list)
2. separate schema from db, have foreign keys and join dbs
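for (1), what I have in mind looks something like this (hypothetical API; `entities` would be an already-populated entity map like the one further down):
(def schema
  {:account/id            {:db/unique :db.unique/identity}
   :account.contact/email {:db/unique :db.unique/identity}})

;; either ident resolves to the same entity:
(get entities [:account/id 123])
;; => {:account/id 123, :account.contact/email "foo@bar", ...}
(get entities [:account.contact/email "foo@bar"])
;; => the same entity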
interesting that I found subgraph but didn't find mapgraph in my search
Yup, I learnt about MapGraph from SubGraph's docs
I also think I could do without the recursion-specific query bits; all I really want out of this is something that I can shove some data into and then get it back out fully hydrated
but having the mapgraph impl as a reference is very helpful
What about this https://github.com/keechma/entitydb
When I evaluated both, I found entitydb to have more ceremony around schemas. subgraph (and mapgraph?) considers every map a possible entity, but only normalizes those containing "marker"/id keys
I looked at entitydb and decided I wanted a more map-alike API (get/assoc/dissoc/etc.) but maybe that's foolish
(More ceremony doesn't mean the schema is not useful - but the team found even subgraph's approach overwhelming)
I like entitydb's idea of declaring schema, but I'd prefer to do something like:
(def schema {:person/id {:db/unique :db.unique/identity}})
;; create a new map with associated schema
(def entities (entity-map/empty schema))
(-> entities
    (assoc {:person/id 123
            :person/name "Will"
            :friends [{:person/id 456
                       :person/name "Mallory"}]}))
;; => {#{[:person/id 123]} {:person/id 123
;;                          :person/name "Will"
;;                          :friends [[:person/id 456]]}
;;     #{[:person/id 456]} {:person/id 456
;;                          :person/name "Mallory"}}
I dunno, I'm having fun. y'all should probably use entitydb, it actually exists and has tests 😂
or mapgraph
What's the reasoning behind the sets as keys to the entities? An issue I had with subgraph (compared to fulcro, for example) is that the store had the full ident as the keys instead of just the values:
{:user/by-id {[:user/id 10] {:user/id 10 :user/name "me"}}}
vs
{:user/by-id {10 {:user/id 10 :user/name "me"}}}
It sounds silly, but inspecting the former in the browser console was a nightmare even with cljs-devtools
assoc looks iffy to me. Suggests a plain map underneath. Stratified design would suggest more of an intentful API.
yeah I think the case I gave above actually would need to use something else (merge, or probably a special add fn like mapgraph's)
the reasoning is to support multiple idents pointing to the same entity
but assoc should still work:
(assoc-in entities [(entity-map/ident :person/id 123) :friends] [{:person/id 456 :person/name "Mallory"}])
;; => returns the same as above
there's probably a better way of printing it, you're right, any big complex map like that gets pretty unwieldy to read
I wonder how fulcro allows the {:person/id {123 ,,,}} structure while allowing multiple idents to refer to the same entity
I think the trick is to shift from a map interface to a db interface. Maps etc. are implementation details; there are probably a few different data structures needed to power this under the hood. (I’m talking about the assoc/assoc-in API you showed above.)
Might be just me, but lots of times Clojure makes it so easy to keep using the “low level” stuff that I never consider a more apt API.
https://github.com/riverford/compound feels very lightweight
90% of my annoyance with just using a normal map is filter/first and jumping between indexes.
If I just had a map-like data structure, I could do (conj db {:name "Fred"}) and (get-in db [:name "Fred"]), creating indexes on demand
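i.e. roughly this, just with the indexes maintained for you (a toy sketch, explicitly not compound's API):
(defn index-by [k coll]
  (into {} (map (juxt k identity)) coll))

(def people [{:name "Fred" :age 42}
             {:name "Wilma" :age 40}])

;; build the :name index on demand, then look up directly
(get (index-by :name people) "Fred")
;; => {:name "Fred" :age 42}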
This is a WIP thing I made more than a year ago in js https://gist.github.com/ashnur/f2fb2cf230d47aea9a63123aedb5926a
it has something like the Elm architecture: two ways to send a message, one for pure stuff and one for side effects. But since I didn't have the Elm kind of type-theoretic tools for testing, I needed more brutish solutions
I don't quite understand why it is a problem that datascript optimizes stuff internally, I quite liked this about it
For graphql data, just a map usually doesn’t cut it. At least, that’s the whole raison d’être for Apollo. Now I’m very suspicious of that library and the actual gains you get from caching that aggressively, but I guess there must be a valid application.
Has anyone here used graphql in practice for anything serious? I tried once with a java/hibernate db backend and it didn't really work out as well as I hoped, and I always expected some way to use it for local data too
we used graphql heavily at my last job
I thought it worked well for the front end using apollo, but it was quite a bit of work on the back-end
My problem with caching is that I'm usually pretty imperative. I, human, know when I want to fetch something for the most part.
the key thing I've realized is the need to separate fetching from reading
typically you start fetching on some user action, but you read while rendering
or rather, you render when you read 🙂
due to CM/Suspense there's not a synchronous flow between completing fetch and rendering, so you need a cache
but this cache for me was datascript. I fetched into it, and then I rendered consistent entities from it
because of the schema, I could just pass in json objects from javascript
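e.g. roughly (the schema attribute here is made up; the API is datascript's):
(require '[datascript.core :as d])

(def conn
  (d/create-conn {:person/id {:db/unique :db.unique/identity}}))

;; on fetch completion, transact the response in:
(d/transact! conn [{:person/id 123 :person/name "Will"}])

;; while rendering, read a consistent entity back out:
(d/pull @conn '[*] [:person/id 123])
;; => {:db/id 1, :person/id 123, :person/name "Will"}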
Caches are the hardest problem though
yeah I'd really like to use datascript but there are tradeoffs. there is a relationship between fetch and query that is hard to create with datascript. that's where GraphQL shines: the request sent to fetch from the back-end is easily relatable to the query on the cache for what you need to render
I would say the two are not exclusionary, if someone wants to use both.
yeah maybe a good canned set of libs would be:
• send queries as EQL, with pathom on the backend
• use datascript as the front-end cache
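for reference, an EQL query for, say, person 123 plus their friends' names would look like this (attributes made up):
[{[:person/id 123]
  [:person/name
   {:person/friends [:person/name]}]}]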
https://github.com/noprompt/meander provides an alternative view on querying in indexed data structures
https://github.com/Lokeh/helix/blob/master/docs/pro-tips.md#dont-use-deep-equals so what we planned a couple of times (and in theory I find it straightforward, but we should see 🙂) is this: every view (e.g. React component) or state user (a React component if it depends on remote state, or a service worker that manages a datascript instance, or whatever you like to put your state in) has some kind of namespace registration (at boot time) over what it wants to read or write. This erases the difference between local and external state: although everything is global, you can trivially encode read/write access between components, so a second registrar would warn/throw/be ignored, and thus you have local state that you can still load together with everything else. This static query/namespace description can be any kind of query language (I of course prefer datalog, but it doesn't matter) as long as you can replicate the queries 1-to-1 on the backend. Because then, using some kind of deterministic hashing, the client can always send the hash of the last result it got, and the backend can just look at the hashes on its own to know what to invalidate. (I am not saying this makes it easier, but it's at least a kind of straightforward way to understand it from the client side.)
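a minimal sketch of just the hashing part (all names hypothetical; run-query stands for whatever executes the shared query on the server):
(defn result-hash [result]
  ;; clojure.core/hash as a stand-in; you'd want something stable
  ;; across client and server, e.g. a hash of a canonical EDN string
  (hash result))

;; the client sends {:query q :last-hash h}; the server re-runs the
;; query and only ships data when the hash changed
(defn maybe-refresh [run-query {:keys [query last-hash]}]
  (let [result (run-query query)]
    (when (not= last-hash (result-hash result))
      result)))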
I don't think there is ever any reason to do a check like in the link, though. Unless you do heavy logic in the render, but why would you put heavy logic in the render? 🙂 If the react component's only work is to create Elements, the diffing of the reconciler will do the optimizing for you. Then all that matters is that when something happens, it gets buffered correctly. So I used channels and basically let messages drop as often as I could, so I wouldn't feed React more than was absolutely necessary. I never had any kind of performance problem, and I used datascript from javascript with the mori-js interface (the two were bundled together). And I wrote my datalog queries in javascript and generated GraphQL from them (in 2015).
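(the message-dropping bit, as a core.async sketch; render! is a stand-in for whatever hands state to React:)
(require '[clojure.core.async :as a])

(defn render! [state]
  ;; stub standing in for the real render entry point
  (println "render" state))

;; a sliding buffer of 1 keeps only the latest value under load,
;; so React is never fed more updates than necessary
(def render-ch (a/chan (a/sliding-buffer 1)))

(a/go-loop []
  (when-some [state (a/<! render-ch)]
    (render! state)
    (recur)))

;; producers just put the latest state; older pending values drop
(a/put! render-ch {:app/state :latest})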