Yet another abstract style-decision question from me: I'm wrapping an API. I want to offer users the ability to extend my wrapper. The basic entrypoint is:
(req connection request)
I need the ability for users to extend my library so that they can:
1. Modify the request before it is sent (an example extension function would add missing default values to a request)
2. Modify the results of the request based on the request. (Examples would be taking a list of response messages and extracting only the relevant keyword from each message, or reducing them to a single value )
Would it make sense for `req` to be part of a protocol, with multiple other protocol functions `handle-request` and `handle-result` which are used internally?
Would it be better to offer the ability for handler functions (for both pre- and post-processing) to be passed to `req`?
Or in general, is it best to just require users to wrap this type of thing and do the pre/post processing in their own functions?
middleware is popular for this sort of thing
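To make the middleware suggestion concrete, here is a minimal ring-style sketch around the wrapper's entrypoint. All the names (`req*`, `wrap-defaults`, `wrap-extract`, the `:messages` key) are hypothetical illustrations, not part of the actual library:

```clojure
(require '[clojure.string :as str])

;; `req*` stands in for the library's raw request function.
(defn req* [connection request]
  ;; ... actually talk to the API; fake response for the sketch ...
  {:messages ["hello" "world"]})

;; A middleware takes a handler and returns a new handler.
;; Pre-processing: fill in missing default values on the request.
(defn wrap-defaults [handler defaults]
  (fn [connection request]
    (handler connection (merge defaults request))))

;; Post-processing: transform each response message.
(defn wrap-extract [handler f]
  (fn [connection request]
    (update (handler connection request) :messages #(mapv f %))))

;; Users compose their own stack:
(def req (-> req*
             (wrap-defaults {:timeout 5000})
             (wrap-extract str/upper-case)))

(req :a-connection {:url "/things"})
;; => {:messages ["HELLO" "WORLD"]}
```

This covers both extension points from the question: `wrap-defaults` modifies the request before it is sent, and `wrap-extract` modifies the result afterwards.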
yeah i've dabbled extensively with both middleware and interceptors... never could get things to feel quite right for reasons I can't quite articulate
now that I mention it, middleware is probably the best of the options, yet for some reason I don't like it personally, I just end up getting tripped up by the pattern all the time (forgetting to call the next handler, reversing order of things, etc)
my experience was that to use middleware well and reliably write new middleware, I had to spend a few hours just writing weird middlewares and experimenting in the REPL
it's an idiom that you need to learn if you want to use it properly IMHO
gotcha, thanks for the input 👍
I recently worked on a project doing exactly what you describe - we needed a way to extend and vary api consuming code
it started as copy/paste first of course, then I tried a strategy-pattern style protocol version
what ended up sticking and being reliable was middleware
cool
Back to:
(req connection request)
If you need to access the connection inside the middleware, would you put the connection inside the request map? Or have your internal middleware functions all take two arguments (the connection and the request)?
the most common thing is definitely to wrap everything in one map (ring style) when using middleware, but two args also works - all the middleware would just have to agree on the convention
I feel like in my previous experiments the "put things into one big map" approach led to sloppiness - just using that map as a way to hold everything, leading to complex dependencies between handlers. The two-arg version seems like it might do. Thanks!
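The two conventions can be sketched side by side; both middlewares below are hypothetical examples, and the only real difference is whether the connection travels inside the map or as a separate argument:

```clojure
;; Ring style: everything, including the connection, in one map.
(defn wrap-log-ring [handler]
  (fn [{:keys [connection] :as request}]
    (println "sending via" connection)
    (handler request)))

;; Two-arg style: the connection stays a separate, explicit argument,
;; so middleware can't quietly stash unrelated things on it.
(defn wrap-log-2arg [handler]
  (fn [connection request]
    (println "sending via" connection)
    (handler connection request)))
```

The two-arg style trades a little ceremony (every middleware must thread both arguments) for keeping the connection out of the "bag of everything" map.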
spec might also help - eg. each middleware defining a spec for the data it uses and adds
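A rough sketch of that spec idea, assuming clojure.spec.alpha; the key names (`::url`, `::timeout`) and `wrap-validated` are made up for illustration:

```clojure
(require '[clojure.spec.alpha :as s])

;; Each middleware declares a spec for the request shape it expects
;; (keys it reads) so violations surface at the boundary, not deep
;; inside a handler.
(s/def ::url string?)
(s/def ::timeout pos-int?)
(s/def ::request (s/keys :req-un [::url] :opt-un [::timeout]))

(defn wrap-validated [handler]
  (fn [connection request]
    ;; s/assert is a no-op unless (s/check-asserts true) is set,
    ;; so the check can be turned off in production.
    (s/assert ::request request)
    (handler connection request)))
```

This gives each middleware a documented contract for what it uses and adds, which directly counters the "ad-hoc map of everything" problem discussed below.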
good point
and yeah, agreed that when used in an undisciplined way the "ad-hoc map of everything" argument leads to messy and brittle results
Curious: have you tried anything to combat this?
I'm currently investigating alternatives
I have some suspicions / guesses about when and why it happens (and what went right in places where it doesn't happen), but nothing really formal atm
That's perfect
:) we are still rooting the domain out; we feel stuck between a rock and a hard place. The only ideas we have, we've seen lead to this same end place.
my knee-jerk response is that an undisciplined mix of random parameters is a sign that the shape of the current design doesn't intuitively match up with the domain, and the map keys are being forced to act as an adaptor between the actual data flow and the one the domain needs
but that's just a hunch, not having seen your code
I think putting everything in a map that gets passed everywhere starts to become similar to mutable state. It actually kind of is mutable state; you're just using the state held by the environment (i.e., stack frames) instead of being explicit about it (i.e., using `atom`s)
except you have the guarantee that nothing "mutates" from any call unless you access it from the return value
but in a big picture it does introduce the same sorts of problems
The anti-pattern I'm particularly struggling with is the introduction of a big bag of state made up of 90% of your system. Then somewhere along the pipeline that map gets enhanced. And in different parts of the codebase it has different keys but the same name.
oh yeah... I remember that problem from a ball of mud product I worked on a few years back
the system map version avoids @dominicm’s issue where the same function gets different variants of the map based on the code path that led to the call
at least the system map has a single point of generation / truth
good point
That depends on how granular you go. Even with two entry points (web server, event handler) taking different dependency sets and calling them "deps", it's very confusing. Especially as various parts add derivations to that map (db snapshots, removing the record wrapper), the actual function that performs the behaviour may not be taking something that looks anything like your source of truth. You're left baffled for hours wondering how it's calling MongoDB directly like that.
If you expand that to each endpoint in your system (an attempt at avoiding the massive deps map and making each endpoint easier to follow by minimizing dependencies), you can end up with maps that are passed around with things injected into them along the way. "Map of dependencies" is synonymous with "recipe for disaster" in my mind right now.
I've been considering making a non-assocable map to prevent it happening 😂 I don't want to go that far. I'd rather find a new abstraction.
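The joked-about "non-assocable map" is actually a few lines of `deftype`: a wrapper that supports lookup but not `assoc`, so nothing downstream can grow it. Names here (`ReadOnlyDeps`, the `:db` key) are invented for the sketch:

```clojure
;; A read-only dependency map: implements ILookup (so get and keyword
;; lookup work) but not Associative (so assoc throws).
(deftype ReadOnlyDeps [m]
  clojure.lang.ILookup
  (valAt [_ k] (get m k))
  (valAt [_ k not-found] (get m k not-found)))

(def deps (->ReadOnlyDeps {:db :a-db-conn}))

(:db deps)            ;=> :a-db-conn
;; (assoc deps :x 1)  ;; throws ClassCastException - not Associative
```

Whether this is a good idea is another question; it enforces the "single point of generation" property mechanically rather than by convention.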
dynamically bound symbols instead of map keys? lol
We joke, but maybe that's not a terrible idea?
My only hang up is async code
async (`go` blocks, `future`s, and `async/thread`) all propagate dynamic bindings
now this would be the binding from their call chain and wouldn't reflect later changes under other call chains...
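A quick REPL check of that propagation; `*conn*` is an invented var for the example:

```clojure
;; Futures (and core.async go blocks) convey the dynamic bindings of
;; the spawning thread into the spawned work:
(def ^:dynamic *conn* nil)

(binding [*conn* :prod-db]
  @(future *conn*))
;; => :prod-db

;; But it's a snapshot of the spawning call chain: a later (binding ...)
;; on some other thread won't be seen by work already submitted.
```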
NIO is way more important in HTTP, which means something like Netty, which I don't think does dynamic bindings.
you could pass it a function that is created via binding-conveyor-fn
(ins)user=> (source future)
(defmacro future
  "Takes a body of expressions and yields a future object that will
  invoke the body in another thread, and will cache the result and
  return it on all subsequent calls to deref/@. If the computation has
  not yet finished, calls to deref/@ will block, unless the variant of
  deref with timeout is used. See also - realized?."
  {:added "1.1"}
  [& body] `(future-call (^{:once true} fn* [] ~@body)))
nil
(ins)user=> (source future-call)
(defn future-call
  "Takes a function of no args and yields a future object that will
  invoke the function in another thread, and will cache the result and
  return it on all subsequent calls to deref/@. If the computation has
  not yet finished, calls to deref/@ will block, unless the variant
  of deref with timeout is used. See also - realized?."
  {:added "1.1"
   :static true}
  [f]
  (let [f (binding-conveyor-fn f)
        fut (.submit clojure.lang.Agent/soloExecutor ^Callable f)]
    (reify
      clojure.lang.IDeref
      (deref [_] (deref-future fut))
      clojure.lang.IBlockingDeref
      (deref
        [_ timeout-ms timeout-val]
        (deref-future fut timeout-ms timeout-val))
      clojure.lang.IPending
      (isRealized [_] (.isDone fut))
      java.util.concurrent.Future
      (get [_] (.get fut))
      (get [_ timeout unit] (.get fut timeout unit))
      (isCancelled [_] (.isCancelled fut))
      (isDone [_] (.isDone fut))
      (cancel [_ interrupt?] (.cancel fut interrupt?)))))
nil
Then you have to remember to do that :p
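Note that `binding-conveyor-fn` is private to clojure.core; the public equivalent is `bound-fn` / `bound-fn*`, which capture the current bindings into a function you can hand to a foreign executor (a Netty event loop, say). A sketch, with `*ctx*` and the executor invented for illustration:

```clojure
(import '[java.util.concurrent Executor Executors])

(def ^:dynamic *ctx* nil)

;; bound-fn* wraps f so that, when run on any thread, it installs the
;; bindings that were in effect when bound-fn* was called.
(defn run-on-foreign-pool [^Executor executor f]
  (.execute executor (bound-fn* f)))

(binding [*ctx* :request-42]
  (run-on-foreign-pool
    (Executors/newSingleThreadExecutor)  ; stand-in for e.g. a Netty pool
    #(println "ctx =" *ctx*)))
;; prints: ctx = :request-42
```

Which is exactly the "you have to remember to do that" problem: the conveyance is opt-in once you leave Clojure's own `future`/`go` machinery.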
one of the main things I miss about compile-time type checking is the compiler telling me where my refactor is incomplete; hiding a gumbo of objects in a map makes this problem even worse
Yes, I've definitely been wrestling with this type of thing a lot lately
I've been getting some pushback over the use of `or` to choose between values - e.g., picking one or another value that might be in a map, like `(or (:hopefully-is-present my-map) (:fallback-if-not my-map))`. The complaint being that `or` should be used for strictly logical/boolean operations. Does anyone have opinions on whether that use is idiomatic, and if not, what should be used instead?
`or` is idiomatic, but also keywords take a second arg that acts like `or`
so in this case, just do `(:hopefully-is-present my-map (:fallback-if-not my-map))`
but really, `or` isn't boolean or logical in lisps
yeah that's a poor example in that case - it's used elsewhere in other ways though. thanks!
I'm currently the sole developer on this clojure project - we have a legacy clojure project from some years back but as is, I'm the only full time clojure dev. I get reviews from our architect who has some clojure experience but otherwise I'm left as the sole defender of some of these practices, so it's been useful having this community. thanks!
(this is my first big clojure project after some very basic personal dabbling)
I personally don't mind `or` used like this generally either
literally the only objections I've seen are from people not familiar with lisps
it's a standard lisp idiom
I will head into my next code review armed with the phrase "standard lisp idiom" 😉 Thanks folks
another idiom for "first key present"
user=> (some {:b 2 :c 3} [:a :b :c])
2
since maps act as functions over keys
oh neat. I haven't seen that one before
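One caveat with the `some`-over-map idiom above: `some` keeps the first truthy result, so a key that is present but mapped to `nil` or `false` is treated as if it were absent:

```clojure
(some {:b 2 :c 3} [:a :b :c])   ;=> 2
(some {:a nil :b 2} [:a :b])    ;=> 2    (nil value skipped)
(some {:a false} [:a])          ;=> nil  (false value skipped too)
```

So it's a fine idiom for "first non-nil value among these keys", but not for detecting key presence when `nil`/`false` are legitimate values.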