I was thinking about the discussion here: https://github.com/cljfx/cljfx/issues/67
There are probably things I didn’t fully appreciate because I’m still learning so much, but something that came to mind is the so-called synchrony hypothesis:
> External events are ordered in time and the program responds to each event as if instantaneously.
I first heard of it when learning about statecharts some years ago. The following is just for a bit more context, not trying to focus on statecharts per se:
If you have a state machine M that is waiting on an event to happen, i.e. it’s “idle” at present, then it can be said to have previously completed a macro step (or to be in its initial state). So then an event E1 happens, and that kicks off a macro step consisting of a series of micro steps (a.k.a. internal events and transitions); one way to think about it is that it results in a sequence of state changes S1 -> S1' -> S1'' -> … -> S2. Until the micro steps have completed and M has settled into state S2, the machine will not respond to additional external events (will not start another macro step); typically such events go into a queue. When E2 arrives, or if it was already waiting in the queue, another macro step is initiated.
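As a rough sketch of that shape in plain Clojure (no statechart library; `macro-step`, `run-machine`, and `micro-steps-for` are made-up names for illustration), a macro step can be modeled as a reduce over micro steps, with external events taken strictly in order from a queue:

```clojure
;; Hypothetical sketch: a "machine" is a state map plus a function that,
;; given a state and an event, returns a sequence of micro steps
;; (each micro step is a function from state to state).

(defn macro-step
  "Run one macro step: apply every micro step produced for `event`,
  settling the machine into its next stable state."
  [state micro-steps-for event]
  (reduce (fn [s micro-step] (micro-step s))
          state
          (micro-steps-for state event)))

(defn run-machine
  "Process queued external events strictly in order; each event's micro
  steps complete before the next external event is even looked at."
  [initial-state micro-steps-for event-queue]
  (reduce #(macro-step %1 micro-steps-for %2) initial-state event-queue))

;; Example: a toggle machine where :toggle produces two micro steps.
(def micro-steps-for
  (fn [_state event]
    (case event
      :toggle [#(update % :clicks inc)
               #(update % :on not)]
      [])))

(run-machine {:on false :clicks 0} micro-steps-for [:toggle :toggle :toggle])
;; => {:on true, :clicks 3}
```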
For the synchrony hypothesis to hold, the micro steps should not add up to run time beyond some threshold that would cause unacceptable lag or stutter. What that threshold is depends on the use case, of course; for a desktop UI it might be somewhere in the milliseconds range, just guessing based on e.g. “60 fps” (about 16 ms per frame).
If you need to do some long-running computation or an asynchronous action, there is a notion of service invocation. One or more services may be invoked in one or more micro steps in a single macro step, but those service invocations should not block. When a service invocation has completed and needs to communicate back to M, it should fire an external event that will be processed in a future macro step.
Making sure one’s micro steps don’t exceed run-time thresholds is a bit of trial and error; you might have to do some profiling too. I’m guessing that `assoc`/`dissoc` kinds of things and `swap!` on a context, even if you have a chain of several effects, shouldn’t eat up too much time (I could be very wrong about that, of course).
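For a ballpark you can time this sort of thing directly. A throwaway sketch, not a real benchmark (the first run includes JIT warm-up, so it overstates; a library like criterium is the usual tool for serious measurement):

```clojure
;; Hypothetical context atom standing in for an app context.
(def context (atom {:items {}}))

;; Time a chain of assoc/dissoc-style updates applied in one swap!,
;; as a stand-in for several effects within a single macro step.
(let [start (System/nanoTime)]
  (swap! context
         (fn [ctx]
           (-> ctx
               (assoc-in [:items :a] 1)
               (assoc-in [:items :b] 2)
               (update :items dissoc :a)
               (assoc :last-event :e1))))
  (println "elapsed µs:" (/ (- (System/nanoTime) start) 1000.0)))
```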
Okay, so what I’m wondering, with respect to the discussion in #67 linked above, is whether, when async event handling happens off the rendering thread, the events could go into a queue, and effects could essentially be like micro steps: events are processed in sequence, an event’s effects happen in sequence, and when the effects are done the combined effect gets `swap!`’d back into the context. At that point the next event in the queue starts getting processed (next macro step) while the renderer updates. Also, if an effect (micro step) needs to “invoke a service”, the invocation could be wrapped in something like `clojure.core.async/thread` (so it returns right away and you go on to the next micro step, if any). If that service invocation has a result to give back when it’s done, it would do so by dispatching an event.
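A sketch of that “invoke a service” shape. The names `event-queue`, `dispatch!`, and `invoke-service!` are all made up here, not cljfx API, and I’m using clojure.core’s built-in `future` instead of `clojure.core.async/thread` to keep the sketch dependency-free; the shape is the same:

```clojure
;; Hypothetical external-event queue; in cljfx this role would be played
;; by whatever queues events for the event handler.
(def event-queue (atom clojure.lang.PersistentQueue/EMPTY))

(defn dispatch!
  "Hypothetical: enqueue an external event for a future macro step."
  [event]
  (swap! event-queue conj event))

(defn invoke-service!
  "Start `service-fn` off-thread and return immediately, so the calling
  micro step doesn't block. When the service finishes, its only way to
  talk back to the machine is dispatching an event."
  [service-fn on-result-event]
  (future
    (dispatch! [on-result-event (service-fn)])))

;; Usage: a micro step might do
;;   (invoke-service! #(slurp "https://example.com") :http-response)
;; and a later macro step handles the [:http-response body] event.
```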
Sounds about right 🙂
Yes, I run longer-running computations (or network requests etc.) outside of event handlers, and just create new events when necessary.
`fx/wrap-async` runs events asynchronously and sequentially, with a queue.