untangled

NEW CHANNEL: #fulcro
grzm 2016-08-25T12:28:34.000869Z

@tony.kay taking a step back, this is consistent with where I eventually want to get: complete asynchrony between the mutations (commands) and the reads, not even returning the tempid mapping.

grzm 2016-08-25T12:29:12.000870Z

-- at least not in the response to the mutation command.

therabidbanana 2016-08-25T15:41:22.000871Z

We eventually got there too with our stuff. I think we just removed the widget/load-data-stream action in the example above and rely on push notifications via Sente now. Tempids are still useful there though - having a tempid that auto-resolves to a real id makes the push commands dead simple - we just listen on the channel for updates by id and swap those into place.
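[The push-by-id pattern described above can be sketched roughly as follows. This is a hypothetical Python sketch, not the actual Untangled/Sente API; the table name, message shape, and `handle_push` function are all assumptions for illustration.]

```python
# Sketch of the push-update pattern: the client keeps a normalized table
# of entities keyed by their real (resolved) id, and every pushed update
# is simply swapped into place by that id.

db = {"widget/by-id": {}}

def handle_push(db, message):
    """Merge a pushed entity into the client db, keyed by its id."""
    entity = message["entity"]
    db["widget/by-id"][entity["id"]] = entity

# Once tempids have auto-resolved to real ids, listening on the push
# channel for updates by id is all that's needed:
handle_push(db, {"entity": {"id": 42, "status": "ready"}})
```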

grzm 2016-08-25T15:43:28.000872Z

@therabidbanana yeah, I still want to have tempids 🙂 Just not returning them on the initial mutate command. Like you describe, passing the tempid maps in a separate push once the real ids have been generated on the server/back end.

therabidbanana 2016-08-25T15:44:29.000874Z

Ah, we haven't had to do anything like that yet - all our commands return quickly enough that we're fine with tempids coming back in the initial mutate command

therabidbanana 2016-08-25T15:45:17.000875Z

In some scenarios we do assign a randomly generated uuid to a blob of data that takes a background job and a 5-10 second wait to fetch, so that we can return that id immediately

grzm 2016-08-25T15:46:20.000877Z

I'm thinking of a situation where you're writing the commands to an event queue (think Kafka), then consumers coming along and doing the actual work associated with the command.

grzm 2016-08-25T15:46:43.000878Z

And one job of the consumers would be to figure out how to update the client.

therabidbanana 2016-08-25T15:47:25.000879Z

I'd probably still go with a tempid resolving in that scenario - resolving implies that the event was successfully written to Kafka.

grzm 2016-08-25T15:47:29.000880Z

My app is nowhere near requiring that kind of system, but I have worked in environments like that, and I love the decoupling.

therabidbanana 2016-08-25T15:48:39.000881Z

Then the real id is what's stored and communicated by the command consumers. I'd handle it that way mainly because tempids are ephemeral and will disappear on refresh - maybe that's not a concern in your scenario though

grzm 2016-08-25T15:49:30.000883Z

That's something to think about, too.

therabidbanana 2016-08-25T15:50:08.000884Z

So our background job scenario is one of those kinds of cases - we use the uuid as the primary key for the data stream so we can communicate it back immediately

therabidbanana 2016-08-25T15:50:55.000885Z

And we set a "status" of pending on that entity so we know it's not actually there yet

therabidbanana 2016-08-25T15:51:49.000886Z

That lets us refresh while in the pending state: you pull from the db that the stream is still pending, and you have an id to listen on for further push events.
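[The pending-status pattern described in the last few messages might look something like this. A Python sketch with made-up names (`start_stream_fetch`, `on_job_complete`, the entity shape); the real implementation is Om/ClojureScript plus a Ruby background job.]

```python
import uuid

db = {"stream/by-id": {}}

def start_stream_fetch(db):
    """Create the data-stream entity immediately, keyed by a
    client-generated uuid, with status "pending"; a background job
    fills in the real data later."""
    stream_id = str(uuid.uuid4())
    db["stream/by-id"][stream_id] = {"id": stream_id,
                                     "status": "pending",
                                     "data": None}
    return stream_id

def on_job_complete(db, stream_id, data):
    """Push handler: flip the entity to "ready" once the job clears."""
    db["stream/by-id"][stream_id].update(status="ready", data=data)

sid = start_stream_fetch(db)
# On refresh, the db still says the stream is pending, and sid is a
# stable id to listen on for further push events.
on_job_complete(db, sid, [1, 2, 3])
```

Because the uuid is the primary key from the start, there is no tempid to remap and the id survives a page refresh.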

grzm 2016-08-25T15:57:16.000887Z

Interesting. What does your application/system do?

therabidbanana 2016-08-25T16:01:30.000888Z

It's a cross-network advertising reporting dashboard (https://www.adstage.io/reporting/)

grzm 2016-08-25T16:02:26.000890Z

Wow! Have you guys been happy with the stack?

therabidbanana 2016-08-25T16:04:12.000891Z

So far, yes, very happy. There have been a few rough edges but overall it feels simpler to work with than what our other products have used (frontend Ember / backend Ruby on Rails).

therabidbanana 2016-08-25T16:06:14.000892Z

We had the advantage of being able to use our existing platform and pull reports from there. I'm not sure I'd have been as happy building out the integrations with 5 separate networks in Clojure land - there are a few good Ruby gems for working with those APIs

grzm 2016-08-25T16:06:33.000893Z

I've historically been a backend/data guy. Nearly all of my front-end stuff has been Node.js/Express. And while that's been really easy to get started with, I've found it's generally a mess going forward.

grzm 2016-08-25T16:07:10.000894Z

I like how Om helps me think about the overall structure of the application

therabidbanana 2016-08-25T16:07:35.000895Z

I generally lean more backend too - but with Om/ClojureScript I don't have to pull my hair out debugging JavaScript

therabidbanana 2016-08-25T16:07:44.000896Z

So I consider that another plus

grzm 2016-08-25T16:08:37.000897Z

Right. So your Ruby backend API handles Om-style reads and mutations?

therabidbanana 2016-08-25T16:08:39.000898Z

Untangled's networking queue on top of that forces synchronous network requests where it makes sense (most of the time, generally), which helps a lot with the weird edge cases we've seen in Ember too.

therabidbanana 2016-08-25T16:09:53.000899Z

Our server for this product takes in the Om reads/writes, and when we need a data blob from the Ruby platform, that's where we fetch the data stream in a background job

therabidbanana 2016-08-25T16:10:30.000900Z

So you can build an entire report without talking to our Ruby app, it'll just all show with widgets in pending until those background jobs clear.

grzm 2016-08-25T16:10:51.000901Z

Nifty!

therabidbanana 2016-08-25T16:11:27.000902Z

It let us iterate very quickly at the beginning, because we could just make a dumb worker that returned the same data every time

grzm 2016-08-25T16:12:03.000903Z

Picking the right boundaries/interfaces is clutch.

therabidbanana 2016-08-25T16:12:14.000904Z

Then we got it talking and just pointed it to our production app, since it's read-only

therabidbanana 2016-08-25T16:12:29.000905Z

(As far as the Ruby API is concerned)

therabidbanana 2016-08-25T16:13:24.000906Z

Agreed - having the right boundaries helps a lot.

grzm 2016-08-25T16:15:19.000907Z

w00t! Just added a support viewer to my app 🙂

therabidbanana 2016-08-25T16:16:16.000908Z

Nice! We got one set up a week or so ago - we're still waiting to catch our first session in the wild though. 😄

grzm 2016-08-25T16:17:44.000909Z

I dunno ... Musa with rabies sounds pretty wild to me...

therabidbanana 2016-08-25T16:21:09.000910Z

Heh - yes indeed - but I don't use our production app much. Also I was not aware of the term Musa until this moment.

grzm 2016-08-25T18:04:07.000911Z

I looked it up 🙂

tony.kay 2016-08-25T19:22:15.000912Z

@grzm You're biting off quite a bit of complexity there. I like the event stream model, too, but reasoning about tempids becomes really difficult if you don't at least do some kind of id-resolution step ASAP.

tony.kay 2016-08-25T19:22:36.000913Z

It is most of the reason that Untangled does a sequential network queue for you.

tony.kay 2016-08-25T19:22:50.000914Z

(and includes tempid remapping ON the network pending requests)

tony.kay 2016-08-25T19:23:25.000915Z

Optimistically add an item. You have a tempid. Now the user can, at any time, delete it. Do you want the tempid going over the network as the id for that request?

tony.kay 2016-08-25T19:23:56.000916Z

With Untangled/Om (and tempid reassign on return and sequential processing) the "right thing" happens.
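[The "right thing" tony.kay describes (sequential queue processing plus tempid remapping applied to requests still sitting in the queue) can be sketched like this. The request shapes and function names are illustrative, not Untangled's internals.]

```python
from collections import deque

send_queue = deque()

def enqueue(request):
    """Requests go out strictly in order from this queue."""
    send_queue.append(request)

def remap_tempids(queue, tempid_map):
    """When a mutation response arrives with its tempid -> real-id map,
    rewrite any still-pending requests that reference a tempid, so a
    queued delete never goes over the network with a temporary id."""
    for request in queue:
        if request.get("id") in tempid_map:
            request["id"] = tempid_map[request["id"]]

# Optimistically add an item; the user deletes it before the create's
# response has come back, so the delete is queued with the tempid:
enqueue({"mutation": "delete-item", "id": "tempid-1"})
# The create's response delivers the real id; the pending delete is
# fixed up in place before it is ever sent:
remap_tempids(send_queue, {"tempid-1": 1001})
```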

tony.kay 2016-08-25T19:24:33.000917Z

If you defer remapping, then you have to add extra logic on the server-side. Of course, that could be as simple as using the tempid as a permanent natural key...but that bloats your data a bit

tony.kay 2016-08-25T19:47:44.000918Z

Also remember that with Untangled you get optimistic UI updates, so the user gets immediate feedback, even if your backend takes a while to produce the result. So nothing says you can't do your event stream model and just not respond until the remap is ready...assuming you can get the response in less than the network timeouts (e.g. 30 seconds)

tony.kay 2016-08-25T19:51:10.000920Z

I think there is room for expansion in Untangled's network stack as well. The sequential thing isn't always appropriate (e.g. blocking future reads because some sequence is pending in the queue). The :parallel thing helps, but then you don't have sequencing on those reads (which is ok most of the time). Om supports multiple remotes (e.g. :remote true could instead be :remote :A). When we add support for multiple remotes, this could give you a way to use "alternate queues" based on which remote....and the remotes could technically all point to one specific remote (you just get multiple queues)
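[The "alternate queues per remote" idea above could be sketched as follows. This is hypothetical (multiple-remote support was still being discussed at the time); the remote keys and request shapes are made up.]

```python
from collections import deque

queues = {}

def enqueue(remote, request):
    """Route each request to an ordered queue chosen by its remote key.
    Requests on the same remote stay sequenced relative to each other,
    while different remotes' queues can drain independently - even if
    every remote ultimately points at the same physical server."""
    queues.setdefault(remote, deque()).append(request)

enqueue("remote-a", {"mutation": "create-report"})
enqueue("remote-b", {"load": "slow-data-stream"})
enqueue("remote-a", {"mutation": "delete-report"})
```

Here the slow load on "remote-b" can't block the two sequenced mutations on "remote-a".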

tony.kay 2016-08-25T19:51:24.000921Z

lots of possibilities to explore 🙂

grzm 2016-08-25T19:51:25.000922Z

You guys have been so awesome and generous today. Thank you 🙂

grzm 2016-08-25T19:52:03.000923Z

The CQRS stuff I'm not looking at doing any time soon, but something I want to keep in mind as I consider different architectural decisions.

tony.kay 2016-08-25T19:52:32.000924Z

welcome. The stack is new enough that some of these things are still being explored.

tony.kay 2016-08-25T19:53:05.000925Z

Some have not been added because no one has yet shown the actual need, but that doesn't mean they are not needed for a fully general solution.

tony.kay 2016-08-25T19:54:16.000927Z

supporting multiple remotes for queries (via a parameter to load-data) would also give these same benefits.

grzm 2016-08-25T19:57:40.000928Z

I'll look into that, actually.

grzm 2016-08-25T19:57:49.000929Z

Also for commands.

therabidbanana 2016-08-25T20:18:08.000930Z

On the sequencing thing @tony.kay - what we've seen is that what we'd want :parallel true to do is go into the queue, and then when it comes up, run without blocking the requests behind it.

therabidbanana 2016-08-25T20:22:25.000931Z

If that makes any sense

therabidbanana 2016-08-25T20:56:02.000932Z

Often we'd want to do a slow read on something after a mutation that creates that thing (like data streams), and we'd make that read parallel

tony.kay 2016-08-25T21:03:21.000933Z

@therabidbanana Yeah, the "correct" behavior is supported by what we have, but it is obvious that (usually for optimization) there are other queue cases we should support.

tony.kay 2016-08-25T21:04:05.000934Z

It's all in front of an easy abstraction, so it should be pretty easy to add whatever is needed. Just keeping the API clean and simple is the main concern.

therabidbanana 2016-08-25T21:05:28.000935Z

Yeah, it's not really a problem for us now because we just used websockets instead, but I was at one point considering trying to add some sort of support for that into the networking layer