core-async

2019-12-02T14:05:50.214700Z

Hi, I’m writing a channel producer which continuously gets collections with hundreds of items each and puts the items one by one in the channel. I’d like to avoid these one-by-one puts by using cat as xf and have a single put expand an entire collection in the buffer, but there’s an open bug where a single take will execute the next pending put, so for each single take the buffer will grow by hundreds and there will be no backpressure: https://clojure.atlassian.net/browse/ASYNC-210. Any advice? This breaks the backpressure semantics entirely.
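For reference, a minimal sketch of the pattern being described (channel size and values are illustrative): `cat` as the transducer expands each collection put into individual items in the buffer:

```clojure
(require '[clojure.core.async :as a])

;; A channel whose transducer expands each collection that is put
;; into individual buffered items -- the pattern described above:
(def ch (a/chan 10 cat))

(a/>!! ch [1 2 3])   ;; one put, three items land in the buffer
(a/<!! ch)           ;; => 1
(a/<!! ch)           ;; => 2
```

The concern raised above (ASYNC-210) is about what happens once the buffer is full and puts are pending: completing a single take can trigger a pending put, whose expansion is not throttled by buffer fullness.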

vemv 2019-12-02T15:55:09.216900Z

Can alts!! be worse than (<!! (go (alts! ...)))? I could imagine that only the latter makes use of the IOC macros, so the former cannot block on N "alts" with the same fairness, etc.

alexmiller 2019-12-02T16:05:31.217100Z

I don't understand the question

alexmiller 2019-12-02T16:06:01.217300Z

alts!! can block on N alts

vemv 2019-12-02T16:10:16.220300Z

> alts!! can block on N alts

Yes, I have always perceived it that way, but now I'm encountering some weird issue. My thinking is: how does alts!! achieve that multi-blocking? Generally, regular threads cannot do that, so I assume alts!! does some magic. Now, is that magic strictly equivalent to doing the work in a go, or somehow worse?

alexmiller 2019-12-02T16:19:13.220700Z

I think it's the exact same code iirc

👍 1
alexmiller 2019-12-02T16:19:40.220900Z

so equivalent

alexmiller 2019-12-02T16:19:58.221300Z

I do not have all that context loaded in my head to answer definitively though

👍 1
alexmiller 2019-12-02T16:24:12.222100Z

yeah, both cases route to the impl function do-alts

vemv 2019-12-02T16:35:11.222200Z

ace, thank you!
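For concreteness, the two forms under discussion (channel names are illustrative). Per the above, both bottom out in the same do-alts implementation:

```clojure
(require '[clojure.core.async :as a])

(def c1 (a/chan 1))
(def c2 (a/chan 1))

;; Thread-blocking form:
(a/>!! c1 :from-c1)
(a/alts!! [c1 c2])                  ;; => [:from-c1 c1]

;; Parking form inside a go block, read out with <!!:
(a/>!! c1 :from-c1)
(a/<!! (a/go (a/alts! [c1 c2])))    ;; => [:from-c1 c1]
```

Both return a `[value port]` pair; the difference is only whether the calling thread blocks or the go block parks while waiting.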

ghadi 2019-12-02T17:15:15.223200Z

@vemv each op in an alts expr competes to flip a shared flag

ghadi 2019-12-02T17:15:52.224300Z

Exactly one op will win and resume the thread/go

💯 1
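A hedged sketch of that "shared flag" idea using a plain AtomicBoolean; core.async's actual internals differ in detail, but the winner-takes-the-resume shape is the same:

```clojure
(import 'java.util.concurrent.atomic.AtomicBoolean)

;; One flag shared by every op in the alts. The first op to become
;; ready atomically flips it, and only that op gets to resume the
;; waiting thread / go block; the rest see the flag already claimed.
(def flag (AtomicBoolean. false))

(.compareAndSet flag false true)  ;; => true  (this op wins)
(.compareAndSet flag false true)  ;; => false (already claimed)
```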
2019-12-02T17:43:36.229Z

https://wingolog.org/archives/2017/06/29/a-new-concurrent-ml describes a select operation that is roughly equivalent to what alts does. It can be read as a high-level abstract overview of core.async internals, even though it isn't about Clojure or core.async.

alexmiller 2019-12-02T17:44:54.229500Z

and incidentally, alts is imo one of the killer features of core.async

💯 2
2019-12-02T22:00:44.241400Z

What's everyone's general feeling on using core.async as a central message dispatch system, either via pub or mult? For example, having server-sent events all come into one async channel on a ClojureScript client, making one or more `pub`s/`mult`s on it, and passing those as a system component to UI components which can sub/`tap` as needed? My thought was that this might be able to replace something like re-frame's dispatch system. But I tried this out and found I end up getting tripped up a lot, and the bugs have seemed particularly hard to track down. I'm curious whether I'm "doing it wrong", or whether this is just me having holes in my core.async knowledge/experience/understanding that lead to these types of errors?

My thinking now is that core.async is better used as locally as possible, at the point where coordination between multiple streams is needed. In the alternative, tapping locally into a central mult, you make your local tap dependent on all other subscribers, because if any of them blocks, so will it. Would anyone agree that exposing a mult or pub as a central system component is a bad idea? Or is this on me for not being diligent enough about my channels not blocking?
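A sketch of the coupling being described (names are illustrative): a mult distributes each value to every tap before moving on, so an unbuffered slow tap stalls the other subscribers unless the taps use windowed buffers:

```clojure
(require '[clojure.core.async :as a])

(def events (a/chan 16))          ;; central channel of server events
(def events-mult (a/mult events))

;; Each tap must accept a value before the mult distributes the next
;; one, so an unbuffered slow tap would stall every other subscriber.
;; Sliding/dropping buffers decouple the taps, at the cost of loss:
(def ui-ch  (a/chan (a/sliding-buffer 16)))
(def log-ch (a/chan (a/dropping-buffer 64)))
(a/tap events-mult ui-ch)
(a/tap events-mult log-ch)

(a/>!! events {:type :server-event})
(a/<!! ui-ch)  ;; => {:type :server-event}
```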

bertofer 2019-12-03T08:47:39.241600Z

In the UI, I prefer the re-frame dispatch system. It's a layer on top of a queue system (which could be implemented on top of core.async, although I think it has its own queue implementation), and re-frame subscriptions allow derived data to be described more easily than manually tapping channels. In re-frame you don't need to worry about blocking. When having a channel that receives all the events from the server (e.g. a websocket), I would dispatch to re-frame from a consumer of that channel, and keep re-frame for UI updates. If the server sends huge amounts of events and you might want to discard or buffer some of them, it could make sense to have some core.async design with buffers for that, but I would still keep the UI updates in re-frame. On the backend I usually have more need for fine-grained control over the channels and buffers, partly because of multi-threading and partly because the number of events to handle is bigger, and I end up with designs similar to what you described.

Jan K 2019-12-03T14:06:08.241900Z

I'm using core.async pub/sub more on the server side. I did have trouble with subtle races and deadlocks for some time, but much less after I made a simplified facade/API on top of core.async pub/sub that is closely tailored to my use cases (rather than exposing the raw pub). I'm also using custom non-blocking buffers that log warnings when they start filling up, which helps debugging and prevents subs from blocking the publisher.
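A hedged sketch of such a warning buffer (all names are made up for illustration). It implements core.async's internal Buffer protocol from clojure.core.async.impl.protocols, which is not a public API and may change between releases; it never reports full, so puts don't block, and it logs once it passes a threshold:

```clojure
(require '[clojure.core.async :as a]
         '[clojure.core.async.impl.protocols :as impl])

(deftype WarningBuffer [^java.util.LinkedList buf ^long warn-at]
  impl/Buffer
  (full? [_] false)            ;; never reports full => puts don't block
  (remove! [_] (.removeLast buf))
  (add!* [this item]
    (when (>= (.size buf) warn-at)
      (println "WARN: buffer at" (.size buf) "items"))
    (.addFirst buf item)
    this)
  (close-buf! [_])
  clojure.lang.Counted
  (count [_] (.size buf)))

(defn warning-buffer [warn-at]
  (WarningBuffer. (java.util.LinkedList.) warn-at))

(def ch (a/chan (warning-buffer 100)))
(a/>!! ch :event)   ;; succeeds immediately; warns only past the threshold
(a/<!! ch)          ;; => :event
```

An unbounded buffer like this trades backpressure for visibility: the publisher never blocks, and the warnings tell you which subscriber is falling behind.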

2019-12-03T17:31:57.242200Z

I like the idea of warning buffers.

2019-12-03T17:34:49.242400Z

And I feel the same about core.async feeling more natural server-side, though I can't quite put my finger on why, and it doesn't seem fully explained by the obvious platform differences (i.e. threads).

bertofer 2019-12-03T21:17:54.243100Z

It might also be that the pattern of having a store as a single source of truth for your app, as re-frame does, is more widespread in the frontend and goes along with what Redux does in JS land, while in the backend there are more moving pieces with their own "state", decoupled from each other.