Morning! I'm trying to properly understand ring's async support in combination with Jetty. We're using it with ring-jetty-adapter, and are starting the server with :async? true, so all our handlers have arity [request respond raise], where respond and raise are functions. We're doing this because some of our requests can be quite long-running, and we need to make a blocking I/O call to get the response data. The thing is, Jetty's JMX metrics are showing us that a long-running request is still holding onto one of the threads from Jetty's threadpool while in progress, so during a load test we hit in-progress requests = threadpool max, the thread queue size spikes, and we see healthcheck requests start timing out (we think due to being stuck in the queue), even though they have no blocking calls etc. to slow things down. I thought the whole point of async here was that that didn't happen.
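For context, our setup looks roughly like this (fetch-data-blocking is a stand-in for our blocking I/O call, and the other names are placeholders too):

(require '[ring.adapter.jetty :refer [run-jetty]])

(defn slow-handler
  ;; async three-arity handler: respond and raise are callbacks
  [request respond raise]
  ;; the blocking call currently happens right here, on Jetty's thread,
  ;; before respond is ever invoked
  (respond {:status 200
            :headers {"Content-Type" "text/plain"}
            :body (fetch-data-blocking request)}))

(run-jetty slow-handler {:port 3000 :async? true :join? false})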
I've found various overly-simplistic examples detailing Jetty async (e.g. https://www.baeldung.com/jetty-embedded and https://webtide.com/servlet-3-1-async-io-and-jetty/), which show setting up a WriteListener with an onWritePossible method, but they just use 'content' to write as a string buffer or similar, not a blocking I/O call like we have. I can't find any equivalent to that within ring. I was expecting something that (for example) spawned a future to run handler within, and used the result of that future to write the output response, but from https://github.com/ring-clojure/ring/blob/1.6/ring-jetty-adapter/src/ring/adapter/jetty.clj#L29-L41 it looks like any work we do in handler to get the data and build the response-map will just block the thread. Are we supposed to be doing the work within our handler to create a future and build the response within it, and then servlet/update-servlet-response will make sure that is correctly matched up with the request? If so, is core.async/thread a good thing to look at for this? (I know go is not, because of its use of a small threadpool, so blocking I/O is a no-no.)
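To illustrate, the kind of thing I was expecting to write is something like this (fetch-data-blocking again being a stand-in for our blocking call):

(defn slow-handler [request respond raise]
  ;; hand the blocking work off to another thread so Jetty's thread
  ;; can return immediately
  (future
    (try
      (respond {:status 200
                :headers {"Content-Type" "text/plain"}
                :body (fetch-data-blocking request)})
      (catch Throwable t
        (raise t)))))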
Your handler should be handing off the work (along with the provided respond/raise functions) and then returning immediately. You can use core.async for the coordination of this, e.g. something along the lines of:
;; assumes (require '[clojure.core.async :as a])
(defn my-handler [req respond _raise]
  (a/go
    ;; parks (rather than blocks) until handle-work's channel delivers a value
    (respond (a/<! (handle-work req)))))
handle-work would do something like putting the work onto a worker queue, and you can then have some control over the number of threads you have processing the work. Meanwhile the piece of code that is waiting for the work to finish is efficiently parked on the async threadpool.
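For illustration, handle-work might be sketched like this (the worker-queue shape, the pool size, and do-blocking-work are all assumptions rather than anything Ring or core.async prescribes, and real code would also want error handling in the workers):

(def work-queue (a/chan 1024))

;; a fixed number of worker threads pulls jobs off the queue, so concurrent
;; blocking I/O is bounded by the worker count, not by request volume
(defonce workers
  (doall
    (for [_ (range 8)]
      (a/thread
        (loop []
          (when-let [{:keys [req result-ch]} (a/<!! work-queue)]
            (a/>!! result-ch (do-blocking-work req))
            (a/close! result-ch)
            (recur)))))))

(defn handle-work
  "Enqueue the request; returns a channel that will receive the response map."
  [req]
  (let [result-ch (a/chan 1)]
    (a/put! work-queue {:req req :result-ch result-ch})
    result-ch))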
Cool, ta. If I'm going to have a constrained thread pool anyway, which I set the size of, is there any disadvantage to a/go as opposed to a/thread? My reading was that blocking I/O was bad in go specifically, because by default it uses a small thread pool (8?) and you could easily end up with all threads blocked and waiting on response data (so new requests would... what? Just sit waiting for one of the go threads to become available for handling, while the main Jetty thread would be free to process another request?), but I don't see the difference between using a/thread and limiting the number of concurrent calls vs a/go with a thread pool set to the same size.
I would suggest asking in #core-async for advice from practitioners more experienced than I am, but my initial thought is that you can probably get away with just using a/thread in this case. I'm not sure off the top of my head what will happen if you lock up the async threadpool; my suspicion is that you would not be able to park new requests there, though I could be wrong.
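E.g. a minimal a/thread version of the handler might look something like this (do-blocking-work again standing in for your blocking call):

(defn my-handler [req respond raise]
  (a/thread
    (try
      ;; a/thread runs its body on an unbounded cached pool, so blocking
      ;; here doesn't tie up the go-block threadpool or Jetty's threads
      (respond (do-blocking-work req))
      (catch Throwable t
        (raise t)))))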
@carr0t ^^
Thanks :)