funcool

A channel for discussing and asking questions about Funcool libraries https://github.com/funcool/
Ben Sless 2020-04-13T07:21:49.011200Z

I have a question regarding propagating backpressure with promesa. I have created this event function:

(fn [_ rec]
  (->> (p/promise rec)
       (p/map exec deserialize)
       (p/mapcat exec http-request)
       (p/map exec serialize)
       (p/map exec produce)))
And I invoke it in a loop:
(loop [recs (poll)]
  (reduce event-fn nil recs)
  (recur (poll)))
(using reduce because recs are iterable and I want to go fast) This implementation fails to propagate backpressure to the poll function, which just returns an iterable of data. What did I miss here, and how do I fix it? Thanks, promesa is cool

niwinz 2020-04-13T07:32:28.012300Z

If you execute event-fn in a loop without waiting for its result, event-fn can start again before the previous execution ends.

niwinz 2020-04-13T07:32:47.012800Z

You have two options: use deref at the end to block and wait until the chain terminates,

niwinz 2020-04-13T07:33:14.013400Z

or use promesa.core/loop and promesa.core/recur

niwinz 2020-04-13T07:33:18.013600Z

(undocumented)
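A sketch of both options, reusing `event-fn` and `poll` from the question above. The `p/loop`/`p/recur` form is hedged: it assumes the (undocumented) macro awaits a promise-returning body before rebinding, as described in promesa's source.

```clojure
(require '[promesa.core :as p])

;; Option 1: deref (@) blocks the polling thread until every record's
;; chain has completed, so poll is only called again afterwards.
(loop [recs (poll)]
  (doseq [rec recs]
    @(event-fn nil rec))
  (recur (poll)))

;; Option 2: a non-blocking loop. p/recur starts the next iteration
;; only once the promise returned by the body has resolved.
(p/loop [recs (poll)]
  (p/then (p/all (mapv #(event-fn nil %) recs))
          (fn [_] (p/recur (poll)))))
```

Option 1 ties up one thread on waiting; option 2 keeps the polling loop itself asynchronous.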

Ben Sless 2020-04-13T07:35:50.015Z

Makes sense. But I want to get to 100% CPU usage, so I'd want to start an execution before the previous one ends, and only block on execution start when all the threads in the exec pool are busy.

Ben Sless 2020-04-13T07:37:07.015900Z

What I ended up doing was

(loop [recs (poll)]
  @(p/all (reduce event-fn [] recs))
  (recur (poll)))
But it didn't come close to 100% utilization.
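One thing worth checking in the snippet above: since the two-argument `event-fn` ignores its accumulator, `(reduce event-fn [] recs)` yields only the promise for the last record, while `p/all` expects a collection of promises. A sketch that collects one promise per record:

```clojure
;; mapv builds a vector of promises, one per record; p/all waits for
;; the whole batch before the next poll.
(loop [recs (poll)]
  @(p/all (mapv #(event-fn nil %) recs))
  (recur (poll)))
```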

Ben Sless 2020-04-13T07:40:55.016500Z

What would happen if I provided the executor at the beginning? (p/promise rec exec)

Ben Sless 2020-04-13T07:43:22.016900Z

Or perhaps I should use p/create?

niwinz 2020-04-13T21:45:33.020200Z

It depends; not using 100% CPU can have different causes. In any case, in a normal situation you don't need to pass a custom executor service, and in your case it can have this negative effect. CompletableFuture, by default, resolves all chained computations synchronously in the same thread where the first promise is resolved. Providing a specific executor service causes additional task spawning, adding more latency and being less efficient.
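A sketch of that difference. The fixed pool here is a plain `java.util.concurrent` executor, introduced only for illustration:

```clojure
(require '[promesa.core :as p])
(import '[java.util.concurrent Executors])

(def exec (Executors/newFixedThreadPool 4))

;; Default: each chained step runs synchronously in whatever thread
;; resolved the previous promise -- no extra task scheduling.
(-> (p/resolved 1)
    (p/then inc)
    (p/then inc))

;; With an explicit executor, each step is submitted to the pool as a
;; separate task, adding scheduling latency per step.
(->> (p/resolved 1)
     (p/map exec inc)
     (p/map exec inc))
```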

niwinz 2020-04-13T21:46:34.021500Z

Spawning a task for each small "partial" computation is useful when you have 100000+ small tasks and you don't want to block a thread with a single chain of computation.

niwinz 2020-04-13T21:46:52.021900Z

In other words, small tasks enable cooperativeness at the cost of latency.

niwinz 2020-04-13T21:48:24.023200Z

In any case, promesa here is just syntactic sugar on top of CompletableFuture; if you have problems with backpressure or aren't reaching 100% CPU, it's probably a design problem rather than a promesa problem.

niwinz 2020-04-13T21:50:42.025500Z

About your problem: I think using the default thread pool will be more efficient; you just need to use p/all without providing any specific thread pool. But without knowing all the parts involved (such as how the HTTP request is made, etc.) I can't help more...
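Applied to the pipeline from the question, that advice amounts to dropping the executor argument so promesa uses its defaults (a sketch; `deserialize`, `http-request`, `serialize`, `produce`, and `poll` are the names from the thread):

```clojure
(require '[promesa.core :as p])

;; Same pipeline, using the 2-arity of p/map and p/mapcat so chained
;; steps resolve synchronously instead of being resubmitted to a pool.
(defn event-fn [_ rec]
  (->> (p/promise rec)
       (p/map deserialize)
       (p/mapcat http-request)
       (p/map serialize)
       (p/map produce)))

(loop [recs (poll)]
  @(p/all (mapv #(event-fn nil %) recs))
  (recur (poll)))
```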