core-async

2020-01-24T00:01:30.056400Z

Again, you must treat cljs and clj as different

2020-01-24T00:02:37.057700Z

Do vs each by itself won't make a difference in clj for a number of reasons; in cljs it will

2020-01-24T00:08:15.065700Z

The difference in cljs is that with a do, the single thread isn't yielded to run anything on the queue until the entire do is evaluated, but when you do it form by form in the repl the thread can run stuff from the queue between evals

2020-01-24T00:09:48.067900Z

Cljs will also, if I recall, run tasks without putting them on the queue up till the first channel operation

2020-01-24T00:10:18.068900Z

So keep-going is already false in the do case before anything is pulled from the queue and run
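A minimal reconstruction might look like this (the `keep-going` flag and the loop shape are my assumptions; the original example isn't shown in this thread):

```clojure
(require '[clojure.core.async :as a])

;; Hypothetical reconstruction, not the original code. In cljs the
;; go body runs inline up to the first channel op; inside a `do`,
;; the (reset! keep-going false) runs before any queued continuation,
;; so the loop sees false on its next check and stops. Evaluated
;; form by form at a REPL, queued work can run between the two evals.
(def keep-going (atom true))

(do
  (a/go
    (while @keep-going
      (a/<! (a/timeout 0))))   ; parks here; continuation is queued
  (reset! keep-going false))
```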

2020-01-24T00:10:46.069600Z

The queue in the cljs case may not even be a fifo queue

2020-01-24T00:10:54.069900Z

Cljs is terrible

2020-01-24T00:12:58.070800Z

(it is a fifo, it doesn't just call next tick naively)

2020-01-24T00:18:05.071700Z

Right I see. That all makes sense. But it does mean process in cljs can starve

2020-01-24T00:18:39.072400Z

Once a ping/pong loop is established, no other process gets a chance to run

2020-01-24T00:19:18.072700Z

Strictly speaking this is only pinging, no ponging. For ping-ponging to occur the second go loop needs to send a response to the first, and the first will have to wait for the response, and that will need another channel

2020-01-24T00:19:51.074300Z

I guess you can say that's up to the program to write cooperative processes :p, but still

2020-01-24T00:20:02.074900Z

I think you could disprove that by putting two ping/pong loops in one do block

2020-01-24T00:20:31.075600Z

Well, fair enough.

2020-01-24T00:20:39.075800Z

I dunno, so far he has failed to write one pingpong loop

2020-01-24T00:20:48.075900Z

But it still demonstrates my idea where something could starve

2020-01-24T00:22:16.078200Z

And cljs is a wild west where people do all kinds of ill-conceived things to improve benchmarks

☝️ 1
2020-01-24T00:23:47.080700Z

Well, I'm also not hearing any rationale why it wouldn't be the case at a design level as well

2020-01-24T00:24:30.082800Z

1. Let there be two tasks P1 and P2, and channels C1 and C2
2. P1 writes to C1 and reads from C2, then loops
3. P2 reads from C1 and writes to C2, then loops
4. When P1 reads from C2 it gives up the thread, because P2 hasn't run yet so there is nothing to do
5. When P2 reads from C1 it gives up the thread, because P1 hasn't run yet so there is nothing to do
6. When the thread is given up it pulls a task from the front of the queue and begins executing it
7. When C1 is written to, P2 is put on the end of the queue
8. When C2 is written to, P1 is put on the end of the queue
9. If some other task T is introduced to the system, the task is either running somewhere or waiting to be run
10. A task running somewhere is not starved.
11. A task waiting to run is either outside of core.async's purview, or waiting on a channel, or waiting in the queue
12. A task waiting on a channel has nothing to do and cannot be starved.
13. A task waiting in the queue will eventually be at the front of the queue.
14. T will eventually run regardless of P1 and P2.
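Steps 1-3 and T can be rendered directly (names mirror the proof; this is an illustrative sketch, not the code under discussion):

```clojure
(require '[clojure.core.async :as a])

(def c1 (a/chan))
(def c2 (a/chan))
(def done (a/chan))

(a/go-loop []           ; P1: writes to C1, reads from C2, loops
  (a/>! c1 :ping)
  (a/<! c2)
  (recur))

(a/go-loop []           ; P2: reads from C1, writes to C2, loops
  (a/<! c1)
  (a/>! c2 :pong)
  (recur))

;; T: some other task. Because P1 and P2 park at every channel op
;; instead of spinning, T still reaches the front of the queue.
(a/go (a/>! done :t))
```

Taking from `done` succeeds even while P1 and P2 keep exchanging messages.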

➕ 1
2020-01-24T00:24:50.083600Z

It's yielding back and forth, and the third piece of code isn't being run. I get that it didn't have a chance to register itself yet, and now the single thread is being cannibalized.

2020-01-24T00:24:57.084Z

And the do solves that

2020-01-24T00:25:22.084800Z

if cljs core.async implements csp correctly, two processes sending messages to each other can't lock out all the other processes

2020-01-24T00:26:25.085900Z

cljs does a number of things to avoid going to the queue, all of which make starvation possible

2020-01-24T00:26:35.086300Z

OK

2020-01-24T00:27:07.087Z

I keep saying "cljs is terrible and bad" and you keep pointing to cljs behavior and saying "see, it proves bad things can happen"

2020-01-24T00:28:27.087100Z

Yea, assuming there's another thread that can perform #9. And that there's an OS-level pre-emptive scheduler to give that thread a chance at pushing a task to the queue

2020-01-24T00:29:00.087400Z

Which is missing in ClojureScript I believe

2020-01-24T00:29:39.088Z

No, I keep talking about a single threaded scenario

2020-01-24T00:29:54.088200Z

#9 holds for cljs as well

2020-01-24T00:30:25.088600Z

it has to, otherwise you can argue code that you haven't loaded is being starved because it isn't run, which is absurd

2020-01-24T00:30:49.089700Z

I am talking about a single thread scenario

2020-01-24T00:31:05.090300Z

you can run clj core.async with a single thread by setting that system property
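That property, for reference; it has to be set before core.async is loaded (e.g. `-Dclojure.core.async.pool-size=1` on the JVM command line, or programmatically first):

```clojure
;; Shrink the go-block dispatch pool to a single thread.
;; Must run before clojure.core.async is first required.
(System/setProperty "clojure.core.async.pool-size" "1")
(require '[clojure.core.async :as a])
```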

2020-01-24T00:31:25.091400Z

but the multithreaded analysis reduces to the single threaded analysis above

2020-01-24T00:31:40.091900Z

I guess it all works if all processes make themselves known before the only thread starts executing any of them

2020-01-24T00:32:23.092800Z

it may be broken in cljs but not because of #9

2020-01-24T00:33:17.093700Z

cljs has historically had issues with #7 and #8

2020-01-24T00:33:19.093900Z

Well, #11 then. If the task is waiting outside the purview of core.async

2020-01-24T00:33:53.094700Z

then it doesn't belong in #core-async 🙂

2020-01-24T00:35:29.097400Z

if you type (+ 1 2) into the clojure repl and it is evaluated and run on the main thread, it has nothing to do with the analysis of ping-ponging core.async processes

2020-01-24T00:35:36.097700Z

Well, I guess that part needs some thought as well. How would you introduce the tasks to core.async? Do you get a chance to first push all processes to it and then start it?

2020-01-24T00:36:38.098500Z

tasks are either started by something outside of core.async (another thread) or they are started by a core.async task (which, in order to start a task, is by definition running)

2020-01-24T00:37:01.099500Z

Seems in my case, yes, you can use a do block around them

2020-01-24T00:37:30.100100Z

you need to be clear about if you are running things in cljs or in clj

2020-01-24T00:38:19.101600Z

I guess neither. Right now I'm thinking single thread hypothetical good implementation of core.async

2020-01-24T00:38:41.102400Z

the do block makes no difference in the clj cases

2020-01-24T00:39:17.104Z

because in the clj case go blocks go on the queue and start executing immediately either way

2020-01-24T00:39:50.104900Z

Maybe I'm missing a detail then. What is evaluating the code?

2020-01-24T00:40:02.105200Z

in the clj case?

2020-01-24T00:40:49.106400Z

In a single threaded environment

2020-01-24T00:40:55.106800Z

How would you bootstrap core.async

2020-01-24T00:41:13.107200Z

It assumes all the macros have to run first right?

2020-01-24T00:42:12.109100Z

ah, I see, when I say single threaded environment I have just meant a single thread servicing the core.async queue

2020-01-24T00:42:45.110500Z

My scenario is: a block of code is evaled, starts a go process, runs a task that waits for a value, now it puts itself on the task queue and yields... core.async grabs the first thing off the queue, which is the same process that just yielded...

2020-01-24T00:42:53.110800Z

you mean, what if you completely changed clojure's internals

2020-01-24T00:43:33.111400Z

no

2020-01-24T00:43:41.111800Z

yielding does not put a process on the task queue

2020-01-24T00:43:47.112100Z

I guess, but it's more that I don't think I was aware of that initial stage, so my reasoning just has a gap which I think prevents me from understanding

2020-01-24T00:44:17.112900Z

when a process is waiting for a value it is waiting for a value from a channel, and it adds itself as a callback on that channel

2020-01-24T00:44:24.113200Z

it doesn't put itself on the queue

2020-01-24T00:44:54.113800Z

only when something is put on the channel does the callback run and put the task on the queue

2020-01-24T00:45:16.114100Z

Hum... okay I have to think about this part

2020-01-24T00:45:37.114500Z

the only things that go on the queue are tasks that can actually be run

2020-01-24T00:45:49.114900Z

tasks waiting to read or write values are not on the queue
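The shape being described can be sketched with a toy, deliberately single-threaded "channel" (nothing like core.async's real implementation, just the bookkeeping):

```clojure
;; Toy channel: parked takers are callbacks stored on the channel
;; itself, never on a run queue. Only a matching put makes a parked
;; taker runnable. Not thread-safe; illustration only.
(defn toy-chan [] (atom {:takers [] :puts []}))

(defn toy-take! [ch cb]
  (if-let [v (first (:puts @ch))]
    (do (swap! ch update :puts subvec 1)
        (cb v))                          ; value ready: run now
    (swap! ch update :takers conj cb)))  ; otherwise park as callback

(defn toy-put! [ch v]
  (if-let [cb (first (:takers @ch))]
    (do (swap! ch update :takers subvec 1)
        (cb v))                          ; wake the parked taker
    (swap! ch update :puts conj v)))     ; otherwise the put pends
```

A parked take does nothing until `toy-put!` hands it a value, which is the point: a waiting task isn't occupying the queue at all.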

2020-01-24T00:47:19.115700Z

Hum, ok. Ya that changes my understanding. So I have to think about this. That might address the concern I was having.

2020-01-24T00:47:32.116Z

And explain why it all works.

2020-01-24T00:47:35.116200Z

😄

2020-01-24T00:48:06.116500Z

Thanks for all the info

2020-01-24T00:49:08.117Z

Right, I think that makes sense.

2020-01-24T00:51:51.119900Z

So, if I'm back to assuming there's only one thread for everything (not just core.async): the first go block would run until it takes, at which point it registers itself as a callback to the chan, and the thread is returned to continue evaluating the namespace. Thus the second go block will be evaluated; if it puts something, it will trigger the callback, going back to the first block, which, say it did loop, would register itself as a callback again and the thread would go back to evaluation, where the second go block, if it also loops, would execute the loop and put again. Rinse and repeat

2020-01-24T00:52:47.120500Z

but only if you violate CSP by failing to yield at each channel op?

2020-01-24T00:53:13.121Z

I don't see how CSP on a single threaded vm would work without every channel op doing a yield

2020-01-24T00:53:16.121100Z

Absurd... but could be what's happening in my example code? Could be a bug on cljs's part as well, or I'm just doing something wrong in my test

2020-01-24T00:54:54.122600Z

Well, they do yield; the problem is nothing knows that after these loops I will be evaluating a third go block. So in such a case, my third go block is prevented from ever being evaluated. Again, this assumes evaluation and go processes all run in the same single thread

2020-01-24T00:55:25.123400Z

no that can't be right, because the repl process needs to be registered to run when it has input

2020-01-24T00:55:39.124Z

you can't even do async with a repl otherwise

2020-01-24T00:55:53.124500Z

But I guess, if my second go block also just did a take on a chan, evaluation would proceed, hoping the third go block is the one to do a put, and then they'd all be properly yielding to each other.

2020-01-24T00:56:21.125100Z

by "yield" I mean coroutine yield, the way cooperative multithreading is done, it's the only way to do async in a single thread that I can recall

2020-01-24T00:57:01.126100Z

if core.async has a code path that can do multiple channel ops in a row without yielding to the parent async scheduler, I'd consider that a bug

2020-01-24T00:57:04.126300Z

If I understood hiredman, yield here would mean register a callback on the chan and return.

2020-01-24T00:57:13.126600Z

no, that's a core.async op

2020-01-24T00:57:25.127Z

I'm talking about interaction with the js vm (or whatever single threaded vm)

2020-01-24T00:58:13.128300Z

I'm speculating that the core.async cljs bug is that chan ops are being chained with no yield to the vm

2020-01-24T00:58:31.128800Z

The callback is the coroutine yield no? Everything that follows the take is re-written into a continuation function, and that function is registered with the chan, so when something is put on the chan the put will call that function when it is done?

2020-01-24T00:58:50.129200Z

you're talking about core.async machinery, I'm talking about vm machinery

2020-01-24T00:59:08.129800Z

Hum... I didn't think core.async was leveraging any JS machinery

2020-01-24T00:59:40.130500Z

the vm doesn't have chans - it has a yield call (it lets some other periodic / waiting thing that previously called yield run)

2020-01-24T00:59:57.130900Z

Right, but I didn't think it was using that, could be very wrong here

2020-01-24T01:00:14.131200Z

then it's up to the developer to yield?

2020-01-24T01:00:43.132100Z

nb yield is generic coroutine terminology, probably not the term js uses

2020-01-24T01:00:53.132300Z

Well, it would just return, but the cljs code has put the rest of the function into an anonymous function and registered it on the chan object.

2020-01-24T01:01:29.132700Z

I might be very wrong here, that's just what I thought was happening

2020-01-24T01:02:01.133400Z

maybe instead of a "yield" function you register your continuation to be called after a short delay?

2020-01-24T01:02:18.133500Z

which isn't starving anything, because you haven't described some other thing that is being starved?

2020-01-24T01:02:40.133800Z

I guess I don't really understand js

2020-01-24T01:03:03.133900Z

assuming someone rewrote the clojure internals to work on a single threaded environment like this

2020-01-24T01:03:29.134100Z

So, this is where I might be assuming wrong, but I thought ClojureScript did run on such an environment

2020-01-24T01:03:32.134300Z

the reader would be a callback waiting on some nio channel, when you typed input into the repl it would fire, and schedule itself in the queue

2020-01-24T01:03:37.134500Z

not really

2020-01-24T01:04:07.134700Z

again the core.async cljs implementation has issues, and the cljs runtime and compiler don't depend on core.async

2020-01-24T01:04:22.134900Z

Also I guess this could be mitigated by the fact go is a macro running at compile time, so it could rewrite things possibly as well so this scenario doesn't happen

2020-01-24T01:04:43.135100Z

and were written before core.async, and generally assume full control of wherever they are running and don't assume they are sharing it

2020-01-24T01:05:25.135300Z

True, it could very well be just issues with cljs. But now at least I think I have a better grasp of the core.async machinery... a little better 😛

2020-01-24T01:05:42.135600Z

Me neither 😛

2020-01-24T01:06:16.136200Z

it is absolutely what is happening in your example, but your example isn't a ping/pong loop as I keep telling you

2020-01-24T01:06:28.136600Z

It might not even really be fully single threaded like I'm talking about. For example, I know IO is run in a separate thread, but the user has no access to threads of their own.

2020-01-24T01:07:13.137500Z

when I say yield I don't literally mean some instruction yield or something, I mean like you get to the end of a function, and there is nothing more to execute, done

2020-01-24T01:07:35.137700Z

Ya, I agree there. In a ping pong, it would work I think, because of what you said. Since they both read, eventually we'll get to the third go block

2020-01-24T01:08:29.138800Z

like, if I do (future (println "whatever")) there is no yield instruction or function or method or whatever run, but the println runs, and when it is done the thread yields back to the threadpool

2020-01-24T01:09:13.138900Z

Or not, but it's gotten too complex for me to think about in my head 😛 In any case, yea, I see I wasn't ping/ponging. Is ping/pong like a proper term here? Cause I was really just using it to mean going back and forth between the two processes

2020-01-24T01:09:40.139100Z

this is correct

2020-01-24T01:10:13.139900Z

but you might say "core-async doesn't always yield" and that is correct, but in the case of ping-ponging processes it always does

👍 1
2020-01-24T01:10:22.140200Z

That's what I was using as well. I think noisesmith was asking if core.async on cljs uses an actual JS yield (aka coroutine) of some sort, under the hood, to achieve its behavior

2020-01-24T01:10:41.140400Z

https://gist.github.com/hiredman/2271c48e1f036253ce37913abd3a680a cml (which is like csp but more so) using js coroutines

2020-01-24T01:11:12.141Z

Alright, well I need to head out. But really appreciate all the discussion here. Learning a lot. Have a nice one

2020-01-24T01:15:46.142600Z

I guess the normal thing in js is to exit the body by returning, and instead of directly recurring you can set a timeout or whatever for your body to execute again

2020-01-24T01:16:20.143100Z

yeah, with a timeout of 0

2020-01-24T01:17:01.144300Z

some (maybe all now?) browsers have some kind of nextTick thing you can register to execute functions on

2020-01-24T01:17:14.144700Z

which is an optimization of the timeout case

2020-01-24T01:17:19.144900Z

timeout 0 case

2020-01-24T01:17:38.145600Z

and the google closure js libraries provide a polyfill for it, which is what cljs core.async uses

2020-01-24T01:17:47.146Z

timeout 0 almost looks like using a trampoline to recur without growing stack

2020-01-24T01:17:54.146200Z

cool

2020-01-24T01:18:08.146600Z

sure, or using future to recur without growing the stack
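The trampoline analogy can be made concrete in plain Clojure:

```clojure
;; Instead of recurring directly (which grows the stack), return a
;; thunk and let the trampoline driver loop invoke it, much like
;; rescheduling yourself with timeout 0 instead of calling yourself.
(defn countdown [n]
  (when (pos? n)
    #(countdown (dec n))))  ; return the continuation, don't call it

(trampoline countdown 100000)  ; => nil, with no stack overflow
```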

2020-01-24T01:18:57.147600Z

core.async actually keeps a fifo queue of tasks to run, and queues on nextTick a task to run tasks from that queue
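That dispatcher can be modeled in a few lines (an assumed shape, not the actual core.async source; in cljs the `drain!` call would itself be scheduled via the nextTick polyfill):

```clojure
;; FIFO of runnable tasks plus a drain function. Tasks enqueued
;; while draining run in the same pass, preserving FIFO order.
(def task-queue (atom clojure.lang.PersistentQueue/EMPTY))

(defn enqueue! [task]
  (swap! task-queue conj task))

(defn drain! []
  (loop []
    (when-let [task (peek @task-queue)]
      (swap! task-queue pop)
      (task)
      (recur))))
```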

2020-01-24T08:30:12.153100Z

This discussion makes me wonder why the decision was made to make core.async rely on a shared global threadpool at all? An alternative design could be to always run go blocks synchronously in the thread resolving the parking operation. Starvation would be impossible by definition, it would save a lot of context switching overhead, and the application code could still introduce its own threadpool to improve parallelism when needed.

2020-01-24T08:32:01.153800Z

Wouldn't that thread possibly be gone?

2020-01-24T08:32:47.154400Z

Since it doesn't actually block the thread, everything will return, and the thread will stop and be garbage collected

2020-01-24T08:33:07.154800Z

You'd need a way to keep the thread around, and attach the callback to it

2020-01-24T08:33:48.155100Z

callbacks are attached to channels

2020-01-24T08:35:05.155900Z

I mean say I have a thread whose run method just does: (go (println (<! some-chan)))

2020-01-24T08:35:45.156600Z

the go block will yield, and the run method will return, killing the thread
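This is observable in clj (a sketch; `some-chan` here is a fresh channel defined for the example):

```clojure
(require '[clojure.core.async :as a])

(def some-chan (a/chan))
(def result (a/chan))

;; The spawning thread dies immediately after the go parks,
;; but the continuation lives on as a callback on some-chan.
(.start (Thread. (fn [] (a/go (a/>! result (a/<! some-chan))))))

(a/>!! some-chan 42)    ; resolves the park; continuation runs on the pool
(def v (a/<!! result))  ; => 42
```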

2020-01-24T08:36:11.157200Z

not necessarily, it depends which thread it is

2020-01-24T08:36:31.157400Z

What do you mean?

2020-01-24T08:37:23.158100Z

if you run that in the repl, the repl thread won't stop, it will just wait for the next form to evaluate

2020-01-24T08:38:00.158800Z

Hum, right, but I mean, what if my code was in a custom thread, or a future ?

2020-01-24T08:38:09.159Z

How would you guard that?

2020-01-24T08:38:15.159300Z

that's fine

2020-01-24T08:38:39.160100Z

the thread will terminate, and the continuation will still be registered to some-chan

2020-01-24T08:39:02.160600Z

Also, I don't think you can send something to be run on a particular thread, can you? So you'd need to make the main thread into an event loop so it could pick up the callbacks and run them, no?

2020-01-24T08:39:14.160800Z

But then where do you run it?

2020-01-24T08:39:37.161100Z

it doesn't matter

2020-01-24T08:42:13.162800Z

if you're doing anything async, you necessarily have some other thread waiting for an event elsewhere

2020-01-24T08:43:07.163600Z

when that thread resolves the parking operation, it can resume the parked go block

2020-01-24T08:44:10.164700Z

Right, you mean always running on the thread doing the put

2020-01-24T08:45:01.165100Z

put or take, depending which happens first

2020-01-24T08:45:13.165400Z

Ya, I think I see what you mean... hum

2020-01-24T08:46:29.165700Z

My best guess was copying Go 😛

2020-01-24T08:47:06.166400Z

So we can claim doing n:m and have awesome multi-core leverage for the CPUs of the future that haven't happened yet

2020-01-24T08:47:39.167Z

I feel when Clojure came out, and CSP, and Go, everyone thought by now we'd have like 60-core CPUs

2020-01-24T08:48:03.167200Z

maybe

2020-01-24T08:49:22.168100Z

What does your lib do?

2020-01-24T08:49:44.168300Z

exactly that

2020-01-24T08:50:04.168500Z

So we have both!

2020-01-24T08:50:05.168700Z

😄

2020-01-24T08:51:30.169500Z

Okay, but can't my starvation scenario still happen? To be fair, I'm still not sure if either I'm confused and wrong, or I'm not explaining it properly

2020-01-24T08:52:33.170300Z

I'm thinking of starvation in the sense that there could be code waiting to enqueue a task that never gets a chance to do so

2020-01-24T08:56:48.171900Z

in a synchronous model, there's no task queue, everything is run as soon as possible

2020-01-24T08:57:27.172600Z

so starvation is impossible

2020-01-24T09:01:13.174200Z

Ok, let me think... So for example, say we start and we do X which needs Y, parking. Now because we parked, you say we'd run the next task, which we get from the channel, is that it?

2020-01-24T09:02:39.174800Z

Or no, you mean it registers a callback on the channel it's waiting on, and the thread will just continue.

2020-01-24T09:02:56.175100Z

yes, that

2020-01-24T09:04:18.176200Z

Ok, so now say the current fn returns, and we run the next fn, which will put Y on the channel, on that put, it would park again?

2020-01-24T09:05:52.176900Z

I'm guessing ya, since it waits for the taker, so now would it call the callback?

2020-01-24T09:06:40.177600Z

it calls the callback if and only if the transfer is possible

2020-01-24T09:07:41.178600Z

in fact it would be the exact same behavior, but instead of scheduling the callback on threadpool, the callback is run right now

2020-01-24T09:13:21.179800Z

Ok, but, here's the thing. Now it runs the callback, say the callback parks waiting for Y again, so now it runs the callback, which puts another Y and parks, running the callback, etc.

2020-01-24T09:13:37.180100Z

So now we're going back and forth

2020-01-24T09:14:15.180900Z

But say there is another piece of code after, which would put something on the channel that makes the loops in the others stop.

2020-01-24T09:14:22.181200Z

That one never gets a chance to run

2020-01-24T09:18:08.183600Z

I get what you mean but I wonder how contrived this example is

2020-01-24T09:18:46.184800Z

The threads in Clojure avoid that situation, because the main thread is its own thread, so it'll eventually go to run the third code which will add itself to the queue, and eventually get its turn.

2020-01-24T09:18:54.184900Z

in practice you always end up waiting for an IO event of some kind

2020-01-24T09:19:06.185200Z

Ya, I admit it is super contrived

2020-01-24T09:20:46.186600Z

and if your workload is CPU-bound, you will introduce another threadpool, which will leave room for another event to happen

2020-01-24T09:22:05.187300Z

I'm not really worried about this scenario, but I wanted more to see that I understood the machinery.

2020-01-24T09:24:33.189100Z

ok, going to sleep, but enjoying these convos quite a bit, good night

😴 1
2020-01-24T20:30:15.191Z

@leonoel looking at scrollback - you can easily make every op block the current thread by only using blocking calls and never using go blocks; the disadvantage is the overhead of a full thread for every parked blocked operation, and a full thread context switch between operations
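The blocking style in code, for contrast (a sketch):

```clojure
(require '[clojure.core.async :as a])

;; a/thread + >!!/<!!: every parked operation is a real blocked JVM
;; thread; no go machinery, no translated continuations.
(def c (a/chan))

(a/thread (a/>!! c :hello))  ; blocks its pool thread until taken
(def msg (a/<!! c))          ; blocks the caller; => :hello
```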

2020-01-24T20:31:11.191800Z

which means more resource usage, slower performance, and you are basically just using queues; channel callbacks don't get made anymore

2020-01-24T20:31:56.192400Z

but it's totally compatible with current core.async, just don't use go blocks and you're golden

2020-01-24T20:32:03.192600Z

my question was not about that

2020-01-24T20:33:25.194Z

so you mean instead of "block until put resolves" you'd have "put callback on chan on put, then the taker promises to run your callback"?

2020-01-24T20:33:47.194700Z

I think that's just a more convoluted way to do block on put, in terms of resource usage

2020-01-24T20:35:12.196300Z

also that seems like very weird scheduling behavior - maybe I'm misunderstanding though

2020-01-24T20:35:37.197600Z

I think the question was only focused on the async case, so the non blocking case

2020-01-24T20:36:04.198900Z

but how do you do non blocking, and not use a thread pool, and have a callback run? whose thread runs it?

2020-01-24T20:36:12.199300Z

the hypothetical model I described is still non-blocking, it just runs callbacks synchronously instead of scheduling it on a thread pool

2020-01-24T20:36:24.200Z

The active thread runs it

2020-01-24T20:36:35.200600Z

the thread running the callback is the thread making the transfer possible

2020-01-24T20:36:45.201200Z

right, but then you force the consumer of the message to run your continuation before it can use your message - that seems pathological

2020-01-24T20:37:06.202200Z

Concurrent but not parallel, unless you parallelize it further yourself

2020-01-24T20:37:07.202300Z

unless you piggy back the continuation and run it next time the consumer parks?

2020-01-24T20:37:44.202800Z

why would it be pathological?

2020-01-24T20:38:08.203800Z

a puts data on c, b consumes from c, resulting in resuming a

2020-01-24T20:38:18.204300Z

That's pretty much what you want. You want to interweave operations so they are concurrently worked on

2020-01-24T20:38:44.205Z

so you don't want channels or CSP, you want coroutines with yield / resume

2020-01-24T20:39:12.206Z

technically, with the current implementation, a can still resume before b uses the value

2020-01-24T20:39:20.206400Z

in fact, you don't know

2020-01-24T20:39:25.206800Z

😉, well given leonoel wrote a coroutine library I'd say there's some truth to that

2020-01-24T20:40:10.207700Z

I'm not a CSP expert

2020-01-24T20:40:35.208900Z

I'm just wondering if the underlying threadpool is necessary to have the proper CSP semantics

2020-01-24T20:40:56.209900Z

But it comes down to throughput and latency in the end. It's very possible in a lot of scenarios what leonoel suggests would end up having better throughput and maybe even latency

2020-01-24T20:40:58.210Z

it isn't, since you can do CSP with only one thread

2020-01-24T20:41:14.210300Z

or equivalently a threadpool of size 1

2020-01-24T20:41:30.210900Z

it's just far from ideal regarding resources / performance

2020-01-24T20:42:37.213100Z

@leonoel the detail I'm still thinking about is that in CSP if a writes to c, then b reads c, that wakes up a

2020-01-24T20:42:52.214100Z

I think that's the point of difference between CSP and coroutines

2020-01-24T20:42:52.214200Z

Say I have 100 tasks to perform; scheduling them on n threads could be slower than doing all 100 back to back in the current thread

2020-01-24T20:43:42.216300Z

So having a "run immediately" scheduler could make sense

2020-01-24T20:43:50.216600Z

@leonoel the thing I was considering "pathological" was that to be strictly CSP, you need to resume a when you read from c, which means a's continuation runs before b can run, but that's just a scheduling question - you can safely wait and run a after b parks

2020-01-24T20:44:03.217100Z

but the problem is one of those orderings is a deadlock

2020-01-24T20:44:21.217800Z

the advantage of CSP is that if you follow its rules, no order of operations deadlocks or livelocks

2020-01-24T20:44:32.218300Z

I think

2020-01-24T20:46:07.221600Z

You could do: a fails to read, thus parks; now the channel has no callback so continue normal execution; you run a put on the chan; after the put, you check the channel for callbacks, thus you execute it, which continues a, thus succeeding the read, and keep going

2020-01-24T20:46:17.221900Z

and this goes back to @didibus question from yesterday - the scheduling rules can cause CSP violation (your monopolization of cljs.async via two blocks acting like mutual coroutines)

2020-01-24T20:46:58.222600Z

@didibus right as long as you have an infinite queue buffer that works

2020-01-24T20:47:00.222900Z

eventually

2020-01-24T20:47:20.223700Z

(I think - async stuff still hurts my brain, so easy to miss cases that feel like they should be obvious)

👍 1
2020-01-24T20:48:21.225400Z

Yup, that's possible. I think the bigger risk is that you are more at risk of an infinite continuation loop

2020-01-24T20:49:47.227400Z

In cljs, my understanding is the main thread would need to become idle and only then you run a queued task. Since it uses setTimeout

2020-01-24T20:50:15.228200Z

Without an event loop like that I don't know if all the guarantees are withheld

2020-01-24T20:51:46.230Z

That's why my example didn't cause an infinite loop in cljs when wrapped in a do. Because the whole do block needs to finish executing before any other task gets scheduled

2020-01-24T20:52:17.231Z

But if sending forms at the repl, the main event loop never gives the repl a chance to run the third go block

2020-01-24T20:52:49.231900Z

Whereas in Clojure, the repl has its own thread, which allows the third block to be sent

2020-01-24T20:54:51.233400Z

So I feel if you use the thread evaluating the go block to run the continuations as well, then you can basically starve evaluation.

2020-01-24T20:55:28.234200Z

Unless evaluation itself is part of the pre-emptive machinery

2020-01-24T20:55:55.234900Z

Like if "load" itself was done in a go block

2020-01-24T20:58:02.238100Z

Same way I think, if the repl listener of the js repl was in a go block, I think then my example would work as well, since there would be a queued event in the JS event loop to read the socket again

2020-01-24T20:59:57.239200Z

But, the whole thing does hurt my brain. So this is all speculation from me

2020-01-24T21:03:36.239600Z

Hum..

2020-01-24T21:06:10.242800Z

But I feel it could work if, when you park, you run the next callback, but if inside that callback you park as well, you don't run that immediately; you register another callback and yield. So only the top level would park+execute-task

2020-01-24T21:07:52.244100Z

Bah, I don't know 😂, gonna stop and go back to work haha.

2020-01-24T21:31:30.248Z

I have this (incomplete) delimited continuation library in the style of core.async's go macro that I was playing with for a while, and the test case for it is a simplified single threaded core.async where it does the "run it on the same thread without the threadpool" thing https://gist.github.com/74e1b1d88f2938f5cdddbf1eea4dfcf9

👀 1
2020-01-25T18:01:25.248200Z

that channel implementation doesn't look right to me. if a read and a write happen concurrently on the same fresh channel, both reader and writer can decide to yield without seeing each other

2020-01-25T18:09:47.248400Z

It isn't a full implementation, and as I said it is single threaded

2020-01-25T18:14:44.248700Z

ok

2020-01-25T18:18:11.248900Z

anyways thank you for sharing that, I'm still grokking the part about continuations and exploring the references