@tony.kay taking a step back, this is consistent with where I eventually want to get: complete asynchrony between the mutations (commands) and the reads, not even returning the tempid mapping.
-- at least not in the response to the mutation command.
We eventually got there too with our stuff. I think we just removed the widget/load-data-stream
action in the example above and rely on push notifications via Sente now. Tempids are still useful there though - having a tempid that auto-resolves to a real id makes the push commands dead simple - we just listen on the channel for updates by id and swap those into place.
@therabidbanana yeah, I still want to have tempids 🙂 Just not returning them on the initial mutate command. Like you describe, passing the tempid maps in a separate push once the real ids have been generated on the server/back end.
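Client-side, that could be as simple as this - a rough sketch, assuming a push event that carries the tempid map (resolve-tempids! and the event shape are made up for illustration):
```clojure
(ns myapp.push
  (:require [clojure.walk :as walk]))

;; Hypothetical handler for a server push like {:tempids {tempid real-id}}.
;; Walk the app state and replace every occurrence of a tempid with the
;; real id the backend eventually generated.
(defn resolve-tempids! [app-state tempids]
  (swap! app-state
         (fn [state]
           (walk/prewalk #(get tempids % %) state))))
```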
Ah, we haven't had to do anything like that yet - all our commands return quickly enough that we're fine with tempids coming back in the initial mutate command
In some scenarios we do assign a randomly generated uuid to a blob of data that takes a background job and a 5-10 second wait to fetch, so that we can return that id immediately
I'm thinking of a situation where you're writing the commands to an event queue (think Kafka), then consumers come along and do the actual work associated with the command.
And one job of the consumers would be to figure out how to update the client.
I'd probably still go with a tempid resolving in that scenario - resolving implies that the event was successfully written to Kafka.
My app is nowhere near requiring that kind of system, but I have worked in environments like that, and I love the decoupling.
Then the real id is what's stored and what the command consumers communicate with. I'd handle it that way mainly because tempids are ephemeral and will disappear on refresh - maybe that's not a concern in your scenario though
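Roughly what I mean, as a sketch (assumes the server generates the real id up front, `producer` is a configured KafkaProducer with string serializers, and blocking on the send's future is what makes "tempid resolved" mean "event durably written"):
```clojure
(ns myapp.commands
  (:import (org.apache.kafka.clients.producer KafkaProducer ProducerRecord)))

;; Hypothetical mutate handler: generate the permanent id immediately,
;; write the command event to Kafka, and only return the tempid mapping
;; once the broker has acked the write.
(defn submit-command! [^KafkaProducer producer tempid command]
  (let [real-id (java.util.UUID/randomUUID)
        event   (pr-str (assoc command :id real-id))]
    @(.send producer (ProducerRecord. "commands" (str real-id) event))
    {:tempids {tempid real-id}}))
```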
That's something to think about, too.
So our background job scenario is one of those kinds of cases - we use the uuid as the primary key for the data stream so we can communicate it back immediately
And we set a "status" on that entity with pending so we know it's not actually there yet
That lets us refresh in the pending state: you pull from the db that the stream is still pending, and you have an id to listen on for further push events.
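The entity shape is basically this (names are illustrative):
```clojure
;; The uuid is the real primary key from the start, so nothing ever needs
;; remapping - a refresh re-reads the stream from the db by the same id
;; and keeps listening on it for push events.
(defn new-data-stream []
  {:data-stream/id     (random-uuid)   ; client-generated, permanent
   :data-stream/status :pending        ; flipped to :ready by a push event
   :data-stream/rows   []})
```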
Interesting. What does your application/system do?
It's a cross-network advertising reporting dashboard (https://www.adstage.io/reporting/)
Wow! Have you guys been happy with the stack?
So far, yes, very happy. There have been a few rough edges but overall it feels simpler to work with than what our other products have used (frontend Ember / backend Ruby on Rails).
We had the advantage of being able to use our existing platform and pull reports from there. I'm not sure I'd have been as happy building out the integrations with 5 separate networks in Clojure land - there are a few good Ruby gems for working with those APIs
I've historically been a backend/data guy. Nearly all of my front-end stuff has been nodejs/express. And while that's been really easy to get started with, I've found it's generally a mess going forward.
I like how Om helps me think about the overall structure of the application
I generally lean more backend too - but with Om/Clojurescript I don't want to pull my hair out debugging javascript
So I consider that another plus
Right. So your Ruby backend API handles Om-style reads and mutations?
We use Untangled's networking queue on top of that to force synchronous network requests where it makes sense (which is most of the time) - it helps a lot with the kind of weird edge cases we've seen in Ember too.
Our server for this product takes in the Om reads/writes, and when we need a data blob from the Ruby platform, that's where we fetch the data stream in a background job
So you can build an entire report without talking to our Ruby app - it'll just show all the widgets as pending until those background jobs clear.
Nifty!
It let us iterate very quickly at the beginning, because we could just make a dumb worker that returned the same data every time
Picking the right boundaries/interfaces is clutch.
Then we got it talking and just pointed it to our production app, since it's read-only
(As far as the Ruby API is concerned)
Agreed - having the right boundaries helps a lot.
w00t! Just added a support viewer to my app 🙂
Nice! We got one set up a week or so ago - we're still waiting to catch our first session in the wild though. 😄
I dunno ... Musa with rabies sounds pretty wild to me...
Heh - yes indeed - but I don't use our production app much. Also I was not aware of the term Musa until this moment.
I looked it up 🙂
@grzm You're biting off quite a bit of complexity there. I like the event stream model, too, but reasoning about tempids becomes really difficult if you don't at least do some kind of id-resolution step ASAP.
It is most of the reason that Untangled does a sequential network queue for you.
(and includes tempid remapping ON the network pending requests)
Optimistically add an item. You have a tempid. Now the user can, at any time, delete it. Do you want the tempid going over the network as the id for that request?
With Untangled/Om (and tempid reassign on return and sequential processing) the "right thing" happens.
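In Om terms the pair looks something like this (mutation names are hypothetical) - the delete queues behind the add, and the pending-request remap rewrites the tempid before the delete ever hits the wire:
```clojure
(ns myapp.mutations
  (:require [om.next :as om]))

(defmulti mutate om/dispatch)

;; Optimistic add: `id` is an om tempid, so the item shows up immediately.
(defmethod mutate 'item/add
  [{:keys [state]} _ {:keys [id] :as item}]
  {:action #(swap! state assoc-in [:item/by-id id] item)
   :remote true})

;; If the user deletes right away, this sits behind the add in the queue;
;; the tempid remap on pending requests rewrites `id` to the real id, so
;; the server never sees the tempid as the thing to delete.
(defmethod mutate 'item/delete
  [{:keys [state]} _ {:keys [id]}]
  {:action #(swap! state update :item/by-id dissoc id)
   :remote true})
```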
If you defer remapping, then you have to add extra logic on the server side. Of course, that could be as simple as using the tempid as a permanent natural key...but that bloats your data a bit
Also remember that with Untangled you get optimistic UI updates, so the user gets immediate feedback even if your backend takes a while to produce the result. So nothing says you can't do your event stream model and just not respond until the remap is ready...assuming you can get the response back in less than the network timeout (e.g. 30 seconds)
I think there is room for expansion in Untangled's network stack as well. The sequential thing isn't always appropriate (e.g. blocking future reads because some sequence is pending in the queue). The :parallel thing helps, but then you don't have sequencing on those reads (which is ok most of the time).
Om supports multiple remotes (e.g. :remote true could instead be :remote :A). When we add support for multiple remotes, this could give you a way to use "alternate queues" based on which remote...and the remotes could technically all point to one specific remote (you just get multiple queues)
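Sketch of how that could look (transmit! is a placeholder for whatever transport you use):
```clojure
(ns myapp.core
  (:require [om.next :as om]))

(defmulti read om/dispatch)
(defmulti mutate om/dispatch)

(defn transmit! [url query cb]
  ;; placeholder - POST the query to `url` and call `cb` with the response
  )

;; Two named remotes acting as independent queues. Anything that returns
;; {:remote :A} batches under the :A key of the send fn; :B travels
;; separately, so neither blocks the other - and both can point at the
;; same server endpoint if you just want multiple queues.
(def reconciler
  (om/reconciler
    {:state   (atom {})
     :parser  (om/parser {:read read :mutate mutate})
     :remotes [:A :B]
     :send    (fn [{:keys [A B]} cb]
                (when A (transmit! "/api" A cb))
                (when B (transmit! "/api" B cb)))}))
```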
lots of possibilities to explore 🙂
You guys have been so awesome and generous today. Thank you 🙂
The CQRS stuff I'm not looking at doing any time soon, but something I want to keep in mind as I consider different architectural decisions.
welcome. The stack is new enough that some of these things are still being explored.
Some have not been added because no one has yet shown the actual need, but that doesn't mean they are not needed for a fully general solution.
supporting multiple remotes for queries (via a parameter to load-data) would also give these same benefits.
I'll look into that, actually.
Also for commands.
On the sequencing thing @tony.kay - what we've seen is that when we use :parallel true, what we'd really want it to do is go into the queue, and then run without blocking once it comes up.
If that makes any sense
Often we'd want to do a slow read on something after a mutation that creates that thing (like data streams), and we'd make that read parallel
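i.e. something like this (a sketch - the component, ident, and mutation name are made up, and this uses the load flavor of the data-fetch API):
```clojure
(ns myapp.streams
  (:require [om.next :as om :refer-macros [defui]]
            [untangled.client.data-fetch :as df]))

(defui DataStream
  static om/Ident
  (ident [this props] [:data-stream/by-id (:data-stream/id props)])
  static om/IQuery
  (query [this] [:data-stream/id :data-stream/status :data-stream/rows]))

;; The create mutation goes through the normal sequential queue; the slow
;; follow-up read is marked :parallel so it doesn't block everything
;; queued up behind it.
(defn create-and-watch-stream! [this stream-id]
  (om/transact! this `[(data-stream/create {:id ~stream-id})])
  (df/load this [:data-stream/by-id stream-id] DataStream {:parallel true}))
```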
@therabidbanana Yeah, the "correct" behavior is supported by what we have, but it is obvious that (usually for optimization) there are other queue cases we should support.
It's all in front of an easy abstraction, so it should be pretty easy to add whatever is needed. Just keeping the API clean and simple is the main concern.
Yeah, it's not really a problem for us now because we just use websockets instead, but I was at one point considering trying to add some sort of support for that into the networking layer