architecture

mbjarland 2020-11-12T16:14:30.050800Z

In the Java world there seems to be a trend with a lot of buzz around reactive extensions and implementations like RxJava/Reactor. I have a proprietary and fairly large Java API which uses this "reactive pattern" and I would like to write a Clojure version of it. For context, the Java API is really just a wrapper around a web service API and all actual actions result in HTTP calls over the wire. As an example, we could have code of this shape (in this case using the JVM's CompletableFuture class, but the shape applies to Rx as well):

var pipe = userService.getUserByKey("some-user-key")
          .thenCombineAsync(
            userService.getUserGroupByKey("some-group-key"),
            (user, group) ->
              userService.assignUserToGroup(user, group))
          .thenComposeAsync(this::checkUserForFraud)
          .exceptionally(throwable -> {
            logger.info("log some error", throwable);
            return null;
          });

pipe.get();
where the last get would trigger the execution of the "asynchronous chain of operations" defined above (the get is contrived, just here to indicate that the pipe is only executed if somebody asks for the result). How would one go about translating this pattern into an idiomatic clojure api? Use core.async pipelines? Use plain core.async? Transducers? Plain old fp? Use something else entirely? Java interop with the completable future (yeah I'm not going there but figured I'd throw it into the competition)? Pros, cons? In a perfect world I would like to retain the ability to declaratively define chains of operations as in the above, but if the downsides outweigh the upsides, living without it might be an option.

mbjarland 2020-11-13T08:59:16.125100Z

@ben.sless thanks for the pointer, this is useful. Will add it to my list of things to consider. Though I am starting to lean towards the downsides with the reactive pattern weighing more than the upsides, i.e. perhaps a clojure api would be best served staying away from this pattern.

Ben Sless 2020-11-13T09:30:11.125800Z

I share the feeling, however, until we get Project Loom (unless you want to run an experimental JDK in production) your options are:
- synchronous code
- buying into the async world, which colors all your code
- buying into a streaming abstraction (such as core.async pipelines), which will dictate your entire architecture

If all of this happens in a web handler perhaps you could try interceptors as well. I also saw you mentioned synchronous db access due to JDBC. You can try to square that circle with core.async, or alternatively give the vertx client a try. I saw Metosin used it in porsas but haven't tried it myself. (http://github.com/metosin/porsas)

mbjarland 2020-11-13T10:31:27.126800Z

The sync db comment was not from me. In my particular case for this particular api it would be making calls to rest apis to assemble a response to a client http request

mbjarland 2020-11-12T16:17:07.051500Z

We can for the sake of this example assume only single valued things in the pipe, i.e. not reactive streams like Flux et al

seancorfield 2020-11-12T16:18:15.052600Z

Looking at the code above, I'm struggling to understand how you benefit from async operations, if you're calling get straight after?

mbjarland 2020-11-12T16:18:28.052900Z

get is contrived

seancorfield 2020-11-12T16:19:43.055500Z

OK, so I'd probably just reach for future and deref at first, but core.async for coordinating results might be useful in larger examples.

mbjarland 2020-11-12T16:19:51.055800Z

so for example in micronaut, which is a web service framework, you can return a reactive type (Flux or Mono, but the shapes look similar to the above) from a controller method, and this lets the web server use the request thread for something else, run the lengthy operation on some thread pool, and get notified when the long-running operation is done

mbjarland 2020-11-12T16:21:22.056900Z

ok

mbjarland 2020-11-12T16:23:20.057600Z

I get the sneaking feeling that we would somehow drop to a lower level of abstraction with future and friends vs let’s say project reactor

1☝️
seancorfield 2020-11-12T16:23:50.058400Z

We have some macros that wrap CompletableFuture https://github.com/worldsingles/commons/blob/master/src/ws/clojure/extensions.clj#L124-L150 but I don't think we're using them at work right now.

seancorfield 2020-11-12T16:24:15.059200Z

(those exist because the callback version of future never got added to the language)

mbjarland 2020-11-12T16:25:13.061100Z

my somewhat uninformed read is that the whole point of CompletableFuture and later the reactive extensions of which reactor and rxjava are implementations is the composability, i.e. we're not just saying here's a future value but defining a chain of operations which can be passed around
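For what it's worth, that composable chain is reachable from Clojure with plain interop. A rough sketch follows: the service functions are hypothetical stand-ins for the real calls, and the reify helpers bridge Clojure fns to the java.util.function interfaces that CompletableFuture expects.

```clojure
(import '(java.util.concurrent CompletableFuture)
        '(java.util.function Function BiFunction))

;; Bridge Clojure fns to the SAM interfaces CompletableFuture wants
(defn jfn [f]
  (reify Function (apply [_ x] (f x))))

(defn jbifn [f]
  (reify BiFunction (apply [_ a b] (f a b))))

;; Hypothetical stand-ins for the service calls in the Java example
(defn get-user-by-key [k] (CompletableFuture/completedFuture {:user k}))
(defn get-group-by-key [k] (CompletableFuture/completedFuture {:group k}))
(defn assign-user-to-group [u g] (merge u g))

;; The same declarative chain, built via interop and ->
(def pipe
  (-> (get-user-by-key "some-user-key")
      (.thenCombineAsync (get-group-by-key "some-group-key")
                         (jbifn assign-user-to-group))
      (.thenApplyAsync (jfn #(assoc % :fraud-checked? true)))))

(.get pipe)
;; => {:user "some-user-key", :group "some-group-key", :fraud-checked? true}
```

The chain is still a value you can pass around; nothing runs to completion for the caller until someone asks for the result.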

seancorfield 2020-11-12T16:25:35.061600Z

I think that unless you're genuinely getting performance benefits from async code, you're just making stuff more complex.

2πŸ‘
mbjarland 2020-11-12T16:25:54.062Z

yeah I'm leaning in that direction as well

emccue 2020-11-12T16:26:32.063900Z

The whole point is to juggle N tasks between M threads where N >> M

mbjarland 2020-11-12T16:26:37.064200Z

my exposure to the api is not large enough for me to say with certainty but the code looks horrible and what it is doing is really not rocket science

emccue 2020-11-12T16:26:49.064700Z

If you don't need that performance, don't do it

mbjarland 2020-11-12T16:27:32.066100Z

well this would be living in a cloud micro service environment so not tying a thread to each request might be relevant

seancorfield 2020-11-12T16:27:57.066600Z

If you mapped the above example onto futures and derefs, you'd get something like:

(check-for-fraud
  @(assign-user-to-group
     @(get-user-by-key "some-user-key")
     @(get-user-group-by-key "some-group-key")))
(assuming all those functions returned future values)

seancorfield 2020-11-12T16:28:17.067200Z

I find it hard to believe it's really worth running such low-level tasks on threads...

mbjarland 2020-11-12T16:28:19.067300Z

and error handling?

seancorfield 2020-11-12T16:28:38.067700Z

Just wrap it in `try`/`catch` 🙂
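Sketching that out: the whole chain from the original Java example collapses to plain futures, derefs, and one try/catch. The service functions below are hypothetical stand-ins for the real HTTP calls.

```clojure
;; Hypothetical stand-ins for the service calls
(defn get-user-by-key [k] (future {:user k}))
(defn get-user-group-by-key [k] (future {:group k}))
(defn assign-user-to-group [u g] (future (merge u g)))
(defn check-for-fraud [u] (assoc u :fraud-checked? true))

(defn fetch-user-with-group [user-key group-key]
  (try
    (check-for-fraud
      @(assign-user-to-group @(get-user-by-key user-key)
                             @(get-user-group-by-key group-key)))
    (catch Exception t
      ;; an exception thrown on a future's thread is rethrown
      ;; (wrapped in ExecutionException) at the deref site
      (println "log some error" (.getMessage t))
      nil)))

(fetch-user-with-group "some-user-key" "some-group-key")
;; => {:user "some-user-key", :group "some-group-key", :fraud-checked? true}
```

Note that the two "get" calls start as soon as their futures are created, so they run concurrently even though the derefs are sequential.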

seancorfield 2020-11-12T16:29:30.068600Z

When you have a reactive framework, the temptation is to write everything in that style which often doesn't make sense, IMO.

seancorfield 2020-11-12T16:30:09.069700Z

Some things definitely benefit from async execution but the granularity of that is important.

emccue 2020-11-12T16:30:28.070100Z

There are kinda two things going on here:
1 - Doubt at you needing to do the reactive way
2 - How to do the reactive way anyways

mbjarland 2020-11-12T16:31:03.071200Z

right and I totally hear you with the doubt part, I have a healthy dose of it myself so could be it's where this ends up.

mbjarland 2020-11-12T16:31:47.073Z

But to get the options clear, going with 2 you would look at future as a building block, with possibly some async channels and coordination thrown in?

seancorfield 2020-11-12T16:32:00.073700Z

Given that I/O is going to be synchronous, you're going to either be blocked on that or use up threads to try to get them async. But are two simple "get" calls (presumably to a DB? or maybe an API, I guess) going to be so slow that using two threads to run them and then joining the results is faster than just running them sequentially?

emccue 2020-11-12T16:32:11.073900Z

well, that is one way

emccue 2020-11-12T16:32:31.074400Z

you can also just use the CompletableFuture api like the code already does

emccue 2020-11-12T16:32:40.074700Z

and just interop

mbjarland 2020-11-12T16:32:55.075200Z

right, yeah, that is an option...perhaps one I was hoping to avoid

seancorfield 2020-11-12T16:33:03.075600Z

(or some syntactic sugar to make it easier to read)

emccue 2020-11-12T16:33:09.075800Z

and there are two issues with doing that

emccue 2020-11-12T16:33:17.076300Z

1. The unavoidable async-ness of the code

emccue 2020-11-12T16:33:27.076600Z

2. The syntax

emccue 2020-11-12T16:33:33.076800Z

2 is eminently solvable

emccue 2020-11-12T16:33:40.077Z

1 isn't

emccue 2020-11-12T16:34:15.077800Z

what is really important to note regardless is that your database access is going to be synchronous

emccue 2020-11-12T16:34:21.078Z

no matter what you do

emccue 2020-11-12T16:34:32.078300Z

(thanks jdbc)

emccue 2020-11-12T16:35:10.079800Z

so if most of what your app does is talk to a db, there is reason to believe that the actual performance benefits of "async-ifying" small tasks like that are gonna be pretty small

mbjarland 2020-11-12T16:35:14.079900Z

in this case the api-to-be wraps a rest service and the java side decided to build it based on CompletableFuture

mbjarland 2020-11-12T16:35:54.081Z

so the scenario would be that there is a user request to a cloud server, the code we are talking about sits on that server and is used to talk to another web service over http. Scale could potentially be quite large so performance might come into the picture.

mbjarland 2020-11-12T16:36:45.081900Z

I'm simplifying but that's essentially the scenario

mbjarland 2020-11-12T16:38:17.083200Z

so most of what the app would do is talk to other web services and then serve a http result

seancorfield 2020-11-12T16:39:39.084800Z

Hitting a REST API for every "low-level" operation could be slow enough to warrant async/futures but that's what our obsession with microservices has led us to 😞

emccue 2020-11-12T16:39:46.085Z

and it is now that we all say a prayer for the swift arrival of project loom amen

1πŸ™
mbjarland 2020-11-12T16:40:32.085500Z

fibers?

seancorfield 2020-11-12T16:40:42.086100Z

Even with Loom, you're still going to have code that is full of threads and derefs. They're just cheaper.

emccue 2020-11-12T16:41:27.087200Z

I mean, yeah but in the context of a web service you can just thread-per-request and then the core logic wont need it

mbjarland 2020-11-12T16:41:39.087700Z

guess loom talks about tco as well but alas they seem to want to do most everything else first, tco last (totally beside the point here, I would just love to see tco)

emccue 2020-11-12T16:42:05.088200Z

anywho, that's probably not going to come before economic collapse from exponential global warming

emccue 2020-11-12T16:42:42.089Z

so you are stuck with futures/completable futures

emccue 2020-11-12T16:44:10.090200Z

slash core.async channels

mbjarland 2020-11-12T16:44:56.090900Z

so let's assume we are, how does the web server situation look in clojure? can you tell say ring to handle io things on a separate pool of threads somehow? How would you actually do that part or would you have to code that yourself?

mbjarland 2020-11-12T16:45:19.091300Z

(and apologies, I have little production exposure with clojure)

mbjarland 2020-11-12T16:45:55.092300Z

would you even run a clojure web server at scale?

emccue 2020-11-12T16:46:27.093300Z

oh i have no real world experience with clojure, i'm just a vocal idiot - but the "simplest" way is just to run a Jetty server

emccue 2020-11-12T16:47:12.094300Z

and your core logic works with the ring request map

emccue 2020-11-12T16:47:19.094600Z

and returns a ring response map

lukasz 2020-11-12T16:47:35.095100Z

FWIW Ring supports async handlers
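For reference, Ring's async support is just a second, 3-arity shape of the handler function. A minimal sketch (slow-lookup is a hypothetical stand-in for an outbound call):

```clojure
(defn slow-lookup [id] {:id id :name "someone"}) ; pretend HTTP/db call

(defn handler
  ;; sync arity: used when the adapter runs in normal mode
  ([request]
   {:status 200
    :body (pr-str (slow-lookup (get-in request [:params :id])))})
  ;; async arity: return immediately, deliver the response via respond,
  ;; report failures via raise
  ([request respond raise]
   (future
     (try
       (respond {:status 200
                 :body (pr-str (slow-lookup (get-in request [:params :id])))})
       (catch Exception e
         (raise e))))))
```

With the default Jetty adapter you opt in with something like `(run-jetty handler {:port 3000 :async? true})`, at which point the adapter calls the 3-arity version and frees the request thread while the work runs elsewhere.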

mbjarland 2020-11-12T16:47:46.095500Z

right I was wondering about the "return a reactive type" or "return a future" and have the server realize what to do with it

mbjarland 2020-11-12T16:48:00.096100Z

@lukaszkorecki ah ok, yeah that is what I was fishing for

emccue 2020-11-12T16:48:20.096800Z

I know in pedestal (which is a few more layers of kool aid deep) you can have handlers return a core.async channel
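Roughly, the core.async version of the earlier fan-out would look like this (a sketch assuming core.async is on the classpath; the service functions are hypothetical stand-ins that each return a channel):

```clojure
(require '[clojure.core.async :as a :refer [go <! <!!]])

(defn get-user-by-key [k] (go {:user k}))        ; pretend async call
(defn get-user-group-by-key [k] (go {:group k})) ; pretend async call

(defn user-with-group
  "Returns a channel that will eventually hold the combined result;
   a Pedestal handler could return this channel directly."
  [user-key group-key]
  ;; start both calls before parking on either, so they run concurrently
  (let [user-ch  (get-user-by-key user-key)
        group-ch (get-user-group-by-key group-key)]
    (go (merge (<! user-ch) (<! group-ch)))))

;; blocking take, just to show the result at the REPL
(<!! (user-with-group "some-user-key" "some-group-key"))
;; => {:user "some-user-key", :group "some-group-key"}
```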

mbjarland 2020-11-12T16:49:34.098900Z

and I guess in this context a core.async channel would be analogous (Rich will kick me in the nuts) to say a reactor Flux

emccue 2020-11-12T16:49:41.099200Z

the ring system seems slightly different, but fundamentally similar - you call a callback with the value at some point in the computation

lukasz 2020-11-12T16:49:58.099700Z

but! We run a service doing some DB calls and processing every single user request at 15ms 95p at 130 req/s. Not sure if that's a scale

seancorfield 2020-11-12T16:50:29.100600Z

The async model in Ring is kind of hard to use and is sort of "fake". But running Jetty in production -- via the default Ring adapter -- scales pretty well without needing fancy async stuff.

lukasz 2020-11-12T16:50:39.100900Z

☝️ this

mbjarland 2020-11-12T16:51:04.101400Z

well that is nice news, I have some mileage with jetty from previous lives

lukasz 2020-11-12T16:52:12.102500Z

Personally, I'm still dubious about core.async in general - maybe the code we write just doesn't need it or something, but our IO heavy services just use good old thread pools (with sugar provided by claypoole).
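To make the thread-pool approach concrete, a claypoole sketch might look like this (assumes the claypoole dependency; fetch! is a hypothetical stand-in for an IO-bound call):

```clojure
(require '[com.climate.claypoole :as cp])

(defn fetch! [id] {:id id}) ; pretend outbound HTTP call

;; a bounded pool, parallel map over IO-bound calls,
;; pool shut down automatically on exit
(def results
  (cp/with-shutdown! [pool (cp/threadpool 8)]
    (doall (cp/pmap pool fetch! [1 2 3]))))
```

The appeal is that the code stays synchronous in shape: no colored functions, just an explicit, bounded pool where parallelism is actually wanted.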

mbjarland 2020-11-12T16:53:35.103400Z

assuming your user requests can either be partitioned predictably or do not have any node affinity, you can always scale horizontally; not sure about the parameters here yet

emccue 2020-11-12T16:54:18.104500Z

in that case it really is a "where do you place value" problem

emccue 2020-11-12T16:54:44.105800Z

lets say you can horizontally scale, but it costs the company 5x as much as if you wrote it high performance

mbjarland 2020-11-12T16:54:48.106Z

yeah, like I said, at this point not sure if horizontal scaling will be simple or not

emccue 2020-11-12T16:54:59.106400Z

is living with worse code worth your life energy

mbjarland 2020-11-12T16:55:05.106900Z

: )

seancorfield 2020-11-12T16:55:05.107100Z

@mbjarland At work, almost all our services and apps are stateless and we run three instances of most of them. I wouldn't say we are "high scale" but we have thousands of concurrent users 24x7. And that's on Jetty, without much multi-threaded assistance.

emccue 2020-11-12T16:55:22.107300Z

when in a year or 2 you get loom for free

emccue 2020-11-12T16:55:31.107700Z

(or more, eternal pessimism)

mbjarland 2020-11-12T16:55:36.108Z

I'm old enough to value my life energy

emccue 2020-11-12T16:55:50.108400Z

and all your async work is now useless

mbjarland 2020-11-12T16:55:59.108900Z

@seancorfield very useful information, thank you

emccue 2020-11-12T16:56:19.110Z

(assuming requests do just delegate to other servers, that would probably be the case)

mbjarland 2020-11-12T16:57:06.111300Z

also that model seems alluringly simple and we would love to not complicate things if not necessary

emccue 2020-11-12T16:58:02.113100Z

I remember a relevant talk, uno momento

lukasz 2020-11-12T16:59:01.115100Z

Old ops mantra is to have two of everything, no matter how performant ;-) Your super-scalable service is not so scalable if it has to go down when deploying new code

mbjarland 2020-11-12T16:59:02.115200Z

I was thinking of the cognitect aws api as an example of an idiomatic api

mbjarland 2020-11-12T16:59:24.116200Z

have not looked at it much but I have to say I love the data only approach...and no completable future in sight

mbjarland 2020-11-12T17:00:01.117200Z

also very repl-self-documenting which is nice

seancorfield 2020-11-12T17:00:16.117600Z

An anecdote about concurrency: years ago, when we were early on our Clojure path, we built a process that scanned our DB for updates, ran searches against a (proprietary) search engine, and then produced HTML emails. It worked nicely but we were curious about how much volume we could run through the system so I turned a few map calls into pmap calls... and crashed the search engine because it couldn't handle the volume. We ended up standing up two more search engine instances, just for this process. When we started to analyze the effectiveness of sending (by that point) millions of emails a day, we figured out that we got better "bang for our buck" by sending about an order of magnitude fewer emails and being more targeted about the audience (and the content) -- and at that level, we didn't need the pmap's (and, ultimately, we didn't need those extra search engine instances). Sometimes, "scale" is not what you really need πŸ™‚

1πŸ’―5πŸ‘
Ivan 2020-11-16T10:54:14.167900Z

exactly; from a technical perspective it is cool and interesting, but from a business perspective it is not always what you're after - it is key to get your engineer to think towards the business focus

emccue 2020-11-12T17:00:28.117800Z

https://www.youtube.com/watch?v=5TJiTSWktLU

2020-11-13T09:34:35.126Z

I don't buy the "boring code" argument. This problem is not essential to reactive/functional programming, it's purely about syntax, and it's only an issue in languages with poor metaprogramming support. The infamous snippet shown around 9:33 could be written like this:

(require '[missionary.core :as m])
(m/ap
  (let [user (m/? (find-user-by-name ws name))
        cart (m/? (m/aggregate conj (load-cart user)))
        uuid (m/? (pay (transduce (map get-price) + cart)))]
    (m/? (send-email (m/?= (m/enumerate cart)) uuid))))
It's arguably as readable and maintainable as the "boring" version, concurrent email sending is made explicit, and it's still fully non-blocking.

1☝️
emccue 2020-11-12T17:00:49.118100Z

yeah this is the one^

mbjarland 2020-11-12T17:01:41.119Z

ok I have an interrupt, thank you everybody, this was enlightening. The clojure community is really one of the many reasons to love this language

mbjarland 2020-11-12T17:01:56.119300Z

Thanks!

seancorfield 2020-11-12T17:02:12.119700Z

@emccue That talk looks interesting. Added to my "Talks to Watch" collection πŸ™‚

mbjarland 2020-11-12T17:02:35.120400Z

oh and I will watch the above, thanks @emccue

emccue 2020-11-12T17:02:58.120800Z

The only reason I pass as halfway competent is that I watch a bunch of talks

emccue 2020-11-12T17:05:10.121500Z

This one is by the person who wrote Reactive programming with RxJava

emccue 2020-11-12T17:05:21.121900Z

(I had to skim to find the reason I thought it had some authority)

Ben Sless 2020-11-12T17:43:53.122400Z

https://github.com/funcool/promesa aims to work with completable futures idiomatically in Clojure. I think it does the job rather well
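For a flavor of promesa (a sketch from memory, so treat the exact API shape as an assumption): `p/let` awaits each binding in turn, and the result stays a CompletableFuture under the hood, so plain deref works on it.

```clojure
(require '[promesa.core :as p])

;; hypothetical stand-ins for the service calls
(defn get-user-by-key [k] (p/resolved {:user k}))
(defn get-user-group-by-key [k] (p/resolved {:group k}))

(def pipe
  (-> (p/let [user  (get-user-by-key "some-user-key")
              group (get-user-group-by-key "some-group-key")]
        (merge user group))
      (p/catch (fn [err] (println "log some error" err) nil))))

@pipe ;; CompletableFuture implements java.util.concurrent.Future,
      ;; so Clojure's deref can block on it
```

One caveat: unlike thenCombineAsync in the Java example, the `p/let` bindings here resolve sequentially rather than concurrently.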

Ben Sless 2020-11-12T17:48:04.122900Z

doesn't solve the code being async-colored

seancorfield 2020-11-12T18:00:35.123400Z

That is a great talk -- thanks for the pointer @emccue!

greglook 2020-11-12T18:52:41.124300Z

Late to the party, but we use manifold for async and it works well. We use yada/aleph for our service APIs, so it was a natural fit. https://github.com/aleph-io/manifold/

mbjarland 2020-11-13T10:29:08.126600Z

Any chance you can share the container model? I.e. are you running this inside of say jetty as well or are you running an aleph server which is world facing?

greglook 2020-11-13T22:04:01.127100Z

For our public APIs, the aleph endpoints are fronted by a pretty standard loadbalancer setup (with some machinery to handle dynamically-allocated instances). Aleph is built on Netty, so I’m not sure how you’d use it with Jetty anyway. Our internal services also use Aleph, but those endpoints get routed to directly by the service mesh.