core-async

kwladyka 2020-01-15T13:45:42.203900Z

How can I read the pool size and see how many threads from the pool are currently used?

kwladyka 2020-01-15T13:46:09.204500Z

I want to check if the app stops processing because of a deadlock caused by too many threads being used

kwladyka 2020-01-15T14:02:12.204900Z

well actually the pool size is easy to determine:

(delay (or (when-let [prop (System/getProperty "clojure.core.async.pool-size")]
             (Long/parseLong prop))
           8))
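
(For reference, this mirrors how core.async itself derives the value; a minimal usage sketch, binding the delay to an illustrative var and dereffing it:)

(def pool-size ;; illustrative var name, not part of the core.async API
  (delay (or (when-let [prop (System/getProperty "clojure.core.async.pool-size")]
               (Long/parseLong prop))
             8)))

@pool-size ;; => 8 unless -Dclojure.core.async.pool-size is set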

alexmiller 2020-01-15T14:03:25.205500Z

there isn't an easy way to see currently used, other than by getting all threads and inspecting their names

kwladyka 2020-01-15T14:03:52.205800Z

thank you, any hints on how I can do it?

kwladyka 2020-01-15T14:04:06.206Z

example of code?

alexmiller 2020-01-15T14:05:32.206700Z

https://docs.oracle.com/javase/8/docs/api/java/lang/Thread.html is the main api, unfortunately it dates way back in Java and is pretty creaky

kwladyka 2020-01-15T14:05:33.206800Z

I found this https://gist.github.com/DayoOliyide/f353b15563675120e408b6b9f504628a

alexmiller 2020-01-15T14:05:57.207Z

yeah, you can go that route too

kwladyka 2020-01-15T14:06:02.207200Z

👍

kwladyka 2020-01-15T14:11:14.207800Z

hmm so I have 22 rows while pool-size is 8… How can I draw any conclusions from that? :)

alexmiller 2020-01-15T14:11:24.208Z

rows?

kwladyka 2020-01-15T14:11:34.208200Z

yes

alexmiller 2020-01-15T14:11:41.208400Z

I don't know what you mean

alexmiller 2020-01-15T14:11:48.208600Z

what is a row?

kwladyka 2020-01-15T14:12:22.208900Z

alexmiller 2020-01-15T14:12:51.209300Z

all of the async pool threads have names like async-dispatch-N

alexmiller 2020-01-15T14:13:01.209600Z

so I think there are 2 there?

kwladyka 2020-01-15T14:13:39.210500Z

oh ok thx, now I have to try it with a real issue case

kwladyka 2020-01-15T14:29:15.211500Z

If somebody is interested:

(defn how-many-async []
  ;; count the live threads whose names mark them as core.async dispatch threads
  (let [thread-set (keys (Thread/getAllStackTraces))
        thread-data (mapv bean thread-set)]
    (->> (map :name thread-data)
         (filter #(.contains % "async-dispatch-"))
         (count))))

a quick solution but it works, I didn't try to optimize it, because I only need it for debugging now
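
(Quick usage sketch; the number reflects how many async-dispatch threads exist at that point, up to the pool size:)

(how-many-async) ;; => 2, for example, if only two dispatch threads have been created so far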

alexmiller 2020-01-15T14:33:32.212Z

you could put that all in one big ->>

alexmiller 2020-01-15T14:34:10.212700Z

(->> (Thread/getAllStackTraces) keys (mapv bean) (map :name) (filter #(.contains % "async-dispatch-")) count)

kwladyka 2020-01-15T14:39:37.214100Z

๐Ÿ‘ true, but I was enough satisfy with whatever code which return me right value in this case ๐Ÿ™‚ Just sharing for others ๐Ÿ™‚

kwladyka 2020-01-15T14:40:37.215200Z

but still thinking about how I can be sure the deadlock is because of the pool size. The project is too complex to determine it. Everywhere I checked, the number of threads for async is 8

kwladyka 2020-01-15T14:40:47.215500Z

so I still can't be sure, but that is my guess

alexmiller 2020-01-15T14:42:18.216100Z

are you using the new system property that will check for use of blocking async calls in go blocks?

kwladyka 2020-01-15T14:42:37.216600Z

not sure what you mean?

alexmiller 2020-01-15T14:43:10.217400Z

core.async recently added a system property that will throw if you use a blocking core.async call, >!!, <!!, etc in a go block

alexmiller 2020-01-15T14:43:30.217600Z

-Dclojure.core.async.go-checking=true

alexmiller 2020-01-15T14:43:54.218100Z

that's just one subset of possible blocking calls of course
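
(A hedged sketch of what that property does, assuming it is set at JVM startup, e.g. via :jvm-opts; the exact exception message may differ:)

(require '[clojure.core.async :refer [go chan <! <!!]])

(def c (chan))

(go (<! c))  ;; fine: parking take
(go (<!! c)) ;; with -Dclojure.core.async.go-checking=true this throws
             ;; (an IllegalStateException) instead of silently tying up
             ;; a dispatch thread; without the property it just blocks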

alexmiller 2020-01-15T14:44:27.219Z

if you look at the thread dump for those async-dispatch threads, it's usually pretty obvious if it's locked up in this way

kwladyka 2020-01-15T14:44:51.219500Z

I don't see too many >!! or <!! (if any) in the code, but I see many alt! and <!, >!

alexmiller 2020-01-15T14:45:06.219800Z

well those are fine - those are parking ops

alexmiller 2020-01-15T14:45:42.220700Z

if they can't be satisfied, the go block is parked and not consuming a thread
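
(A minimal sketch of the parking vs. blocking distinction being described here, assuming the default pool size of 8:)

(require '[clojure.core.async :refer [go chan <! <!!]])

(def c (chan))

;; parking take: each go block parks on the unsatisfied take and hands its
;; dispatch thread back to the pool, so even 1000 of these consume no
;; threads while they wait
(dotimes [_ 1000]
  (go (<! c)))

;; blocking take inside go: each one really occupies a dispatch thread;
;; eight of these exhaust the default pool and any further go blocks
;; never get a chance to run (the starvation discussed below)
(dotimes [_ 8]
  (go (<!! c)))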

kwladyka 2020-01-15T14:45:59.221100Z

but processing freezes for some reason, and I think not always in the same place. At least that's my conclusion from println

alexmiller 2020-01-15T14:46:17.221500Z

if you can just do a thread dump and post it here, I'm happy to look at it

kwladyka 2020-01-15T14:46:22.221700Z

but mostly in the same place

kwladyka 2020-01-15T14:47:07.222600Z

yeah the worst thing is I can't run it in the REPL easily, because it needs to run cucumber (lein cucumber), but I will try to get something useful out of this

kwladyka 2020-01-15T14:48:02.223200Z

> if they can't be satisfied, the go block is parked and not consuming a thread
exactly, so it can be parked forever

alexmiller 2020-01-15T14:57:03.223600Z

yes, but that doesn't match what you're describing

alexmiller 2020-01-15T14:57:22.224100Z

in that case, you'll see 0 threads typically

kwladyka 2020-01-15T14:59:23.224900Z

0? I see 8/8 used in all places, so it could be the issue or not. Not sure why I should see 0?

alexmiller 2020-01-15T15:00:29.225200Z

if everything is parked, then there are no threads doing work

alexmiller 2020-01-15T15:00:57.225800Z

if everything is blocked (on IO or an async blocking operation), then you'll see 8 blocked threads

alexmiller 2020-01-15T15:01:24.226700Z

ie starvation or deadlock

kwladyka 2020-01-15T15:01:50.227500Z

I mean other threads can block the 8 threads and wait for >! in alt!

alexmiller 2020-01-15T15:01:51.227600Z

yours sounds like the latter to me (but of course, both can be true simultaneously too)

alexmiller 2020-01-15T15:02:01.227800Z

no, they can't

kwladyka 2020-01-15T15:02:13.228Z

hmm

alexmiller 2020-01-15T15:02:37.228700Z

>! and alt! can only occur in go blocks, which only exist in the dispatch threads

alexmiller 2020-01-15T15:02:50.228900Z

and they don't block

2020-01-15T19:19:59.230100Z

@kwladyka you don't need a repl to get a stack dump - Ctrl-\ in the terminal running the process or jstack pointed at the jvm PID will also work

๐Ÿ‘ 1
alexmiller 2020-01-15T19:25:36.230400Z

or kill -3 on the pid

2020-01-15T19:27:39.231700Z

and to reiterate what @alexmiller already said above, your situation where all 8 threads are blocked can't be caused by >! or alt!, you are doing some blocking operation in a go block (io, something extremely CPU intensive, a blocking channel op...)

๐Ÿ‘ 1