it's sad because it puts the chained exception within the ex-data in addition to chaining it as the ex-cause
I end up doing everything Alex mentioned minus setStackTrace
preserve the chained cause & ex message, but dissoc stuff from the data
for that case it doesn't matter that the outermost exception's stacktrace differs, the real meat of the stacktrace is on the chained cause
aaah, I was trying to set! the data field (which obviously didn't work), but .setStackTrace might do it. Especially if it's as simple as the example @hiredman pasted.
Thank you all very much!
Within a REPL session, is it possible to add a new dependency? Using Deps and CLI.
there is an experimental add-libs3 branch in tools.deps.alpha that provides this functionality. we expect it to eventually be included but there are some key integration questions we still have. Sean Corfield's deps.edn has some setup info for this https://github.com/seancorfield/dot-clojure
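For reference, a minimal sketch of what calling add-libs from that experimental branch looks like (namespace and signature as on the branch at the time of writing; they may well change before release):

```clojure
;; REPL-only sketch: requires a REPL started with the experimental
;; branch of tools.deps.alpha on the classpath (e.g. via an alias).
(require '[clojure.tools.deps.alpha.repl :refer [add-libs]])

;; Load a new dependency into the running REPL without restarting.
(add-libs '{medley/medley {:mvn/version "1.3.0"}})
```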
@michaellan202 There’s an example of how I use add-libs on that branch here https://github.com/seancorfield/next-jdbc/blob/develop/test/next/jdbc/test_fixtures.clj#L229-L244
That lets me add all of next.jdbc’s test dependencies into a running REPL that was started from another project (that may well depend on next.jdbc, but without those deps). This lets me run next.jdbc’s tests from my editor, even when working on another project.
Got it, so it’s not a permanent addition, just temporary
It just loads libs into the REPL. If you want them present the next time you start the REPL, you need to add them to deps.edn.
In one of my RDD talks I show how you can edit deps.edn to add dependencies and then call add-libs on that same hash map from within deps.edn (by having some code in deps.edn that is normally commented out).
(I am tempted to automate that so I can just edit deps.edn, put my cursor just inside the :deps map or :extra-deps map, and hit a hot key and have add-libs called on that…)
haha, that sounds really convenient
Just added it to https://github.com/seancorfield/vscode-clover-setup — you need the :add-libs alias from my dot-clojure repo active when starting a REPL and then you can just edit the :deps or :extra-deps in your deps.edn file and with cursor inside the lib spec hash map, ctrl-; shift+a and it sends it to add-libs and loads those dependencies!
When I have a namespace with a couple of defmethod declarations, what’s the best way to make sure the namespace is loaded (I mean required)?
For instance, let’s say I have a library whose main ns is my-lib.core and the defmethod declarations are in my-lib.foo
Should I write something like this?
(ns my-lib.core
  (:require [my-lib.foo]))
The problem is that when someone looks at the ns form, my-lib.foo seems superfluous.
And in fact cider-refactor removes it, as it considers it an unused libspec vector
I'd like to write code that behaves differently depending on the user's Java version. More specifically, I'd like to write a function that uses java.lang.module.ModuleFinder if it's available, and just returns nil if not. Any general pointers on how to approach this? If you can point me to any prior art on this topic, I'd appreciate it.
https://clojure.github.io/clojure/clojure.reflect-api.html You can use resolve-class from that api
Thanks! I'll look into that.
There's also when-class - it's used for conditional loading of java.sql.Timestamp: https://github.com/clojure/clojure/blob/3c9307e27479d450e3f1cdf7903458b83ebf52d7/src/clj/clojure/core.clj#L6796-L6803
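For context, the macro at those lines was essentially this (paraphrased from the linked source; the ^String hint avoids reflection on Class/forName):

```clojure
;; Paraphrase of the old clojure.core when-class macro:
;; evaluate body only if the named class can be loaded.
(defmacro when-class [class-name & body]
  `(try
     (Class/forName ^String ~class-name)
     ~@body
     (catch ClassNotFoundException _#)))

;; Returns the body's value when the class exists, nil otherwise.
(when-class "java.lang.module.ModuleFinder"
  (println "module system is available"))
```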
Thanks! That looks nice and straightforward.
Note that that code doesn’t exist in core anymore
But feel free to borrow it
Sure. I borrowed the gist of it, that got me there.
https://github.com/eerohele/Tutkain/blob/30578b438d6a57be25c83d24f79de281b79ffbd7/clojure/src/tutkain/repl/runtime/completions.clj#L137-L150
The mapcat fn still reflects, and I'm not sure how to hint it when java.lang.module.ModuleReference is not guaranteed to exist, but it's not really a big deal.
you could use a fn and type hint the argument
oh, you can't do that
Yeah, that's what I'd like to do.
I figured I could maybe use the same kind of type hint as with string arrays etc., but it didn't seem to pan out, or then I just didn't get it right. The "syntax" is a bit fiddly.
I think the compiler is going to lead you into trouble in that case instead
Yeah.
What is the cleanest/most idiomatic way of taking an existing map with plain keywords and turning it into a map with namespace qualified keywords?
{:a 1 :b 2} => {:foo/a 1, :foo/b 2}
I am using rename-keys which works, but I am positive there’s a better mechanism for this
There was a reduce-kv based solution on stackoverflow which is similar to my rename-keys approach…
@srijayanth creating a map-vals or map-keys is like an initiation ritual for a Clojure developer since they don’t come with the standard library… 😛 I like to use for to iterate through the key-value pairs and then into to create the new map
(into {} (for [[k v] {:a 1 :b 2}]
           [(keyword "foo" (name k)) v]))
thx @simongray
np. I wrote that piece of code so damn many times
fyi, as noob as my question sounds, I’ve been using Clojure since 1.2 😄
haha
I still don’t get why rename-keys is in clojure.set. I remember somebody providing an explanation that seemed adequate at that time
don’t worry, I’ve never even used a transducer… 😛
I love transducers. I wrote this simple version of 2048 that uses transducers
yeah I know they’re great, I just don’t like how it makes my code look and I never had any real performance issues, so I never bothered to use them 😛
probably should get over it and use transducers everywhere applicable
Even without performance issues, they come in handy.
It is a lot easier to compose transducers and pass them around
right… do you have a recommended article or something for me to get into the right mindset?
I’ve seen them around in codebases I’ve been working on, so I’ve edited them… just never ran into a situation that made me think “that calls for a transducer!”
Btw. for map-vals et al I really like to use the medley library.
It also has map-keys:
(map-keys #(keyword "foo" (name %)) {:a 1})
;;=> #:foo{:a 1}
Thanks @jumar
@jumar it’s such a tiny bit of code that pulling in a library just seems overkill 😉 but I guess if you use medley for other things, it makes sense
https://ask.clojure.org/index.php/2813/move-rename-keys-from-clojure-set-to-clojure-core
@simongray - when transducers came out, I was a little befuddled about how they work. There’s a Rich Hickey talk where he exposes the insides of a transducer (it isn’t the StrangeLoop one I think). After that, every time I see code that has a series of 2 or more map/filter/reduce/iterative constructs, I eagerly convert them to transducers
We once had a discussion here about any scenario where a transducer might not be preferred. While someone pointed out something around eagerness/laziness etc, I came away feeling that there’s very few downsides to using transducers at all
For instance, here’s the transducer chain for that 2048 bit
(def xform (comp (remove zero?)
(partition-by identity)
(mapcat #(partition-all 2 %))
(map (partial apply +))))
And I end up using it as follows
(defn move-left [row]
(take 4 (concat (sequence xform row) (repeat 0))))
If you see the xform, it reads incredibly clearly from top to bottom
remove zeroes, partition equal numbers together, chunk them in 2s and sum them up
I eventually wrote my own stateful transducer that pads the ends of collection with zeroes. I stuck that transducer to the end of the chain
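A hypothetical sketch of such a padding transducer (the names and details here are my own, not the actual code from that project; it ignores reduced results for brevity):

```clojure
;; The 2048 transform from above, repeated so the example is self-contained:
(def xform (comp (remove zero?)
                 (partition-by identity)
                 (mapcat #(partition-all 2 %))
                 (map (partial apply +))))

;; Hypothetical stateful transducer: on completion, emit `pad`
;; until at least `n` items have passed through in total.
(defn pad-end [n pad]
  (fn [rf]
    (let [seen (volatile! 0)]
      (fn
        ([] (rf))
        ([result]
         (let [remaining (max 0 (- n @seen))]
           ;; emit the missing padding, then finish
           (rf (reduce rf result (repeat remaining pad)))))
        ([result input]
         (vswap! seen inc)
         (rf result input))))))

(sequence (comp xform (pad-end 4 0)) [2 2 0 4])
;; => (4 4 0 0)
```

Sticking this at the end of the chain would replace the `(concat … (repeat 0))` padding in move-left.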
thanks a lot
yeah, I use map-vals, find-first, index-by and assoc-some most frequently.
What I like about map-vals in particular is that it has a much more meaningful name than just using the idiom (although when you get used to it it's also quite readable) - another aspect could be performance but I don't care about that one in most cases.
I guess I just need to get into the habit of converting most functions that contain a threading macro into composed transducers
Yeah, my personal limit is more than 2. if there’s more than 2 maps/filters in sequence, then that’s an ideal candidate for transducers
It really is a shame that not enough people use it. I find it awesome
I transduced the hell out of those advent of code problems
I like this discussion about when to use transducers: https://groups.google.com/forum/#!topic/clojure/JjiYPEMQK4s I think it's very helpful to understand the use cases even without knowing how exactly they work. Especially Alex Miller's points:
> I would say transducers are preferable when:
> 1) you have reducible collections
> 2) you have a lot of pipelined transformations (transducers handle these in one pass with no intermediate data)
> 3) the number of elements is "large" (this amplifies the memory and perf savings from #2)
> 4) you want to produce a concrete output collection (seqs need an extra step to pour the seq into a collection; transducers can create it directly)
> 5) you want a reusable transformation that can be used in multiple contexts (reduction, sequence, core.async, etc)
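To illustrate the reusability point, a small sketch of one transformation value used in several contexts (the name inc-odds is made up for this example):

```clojure
;; One reusable transformation: keep odd numbers, increment them.
(def inc-odds (comp (filter odd?) (map inc)))

(into [] inc-odds (range 10))        ;; => [2 4 6 8 10]
(sequence inc-odds (range 10))       ;; lazy seq of the same values
(transduce inc-odds + 0 (range 10))  ;; => 30
```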
thank you!
I rarely need reusable transformations, but maybe I just haven’t looked hard enough.
that’s what I mean about the mindset. Gonna scan that thread for some rules of thumb.
but obviously memory savings is advantageous too
and @srijayanth is right in that lazy collections are rarely called for in practice
They are useful as hell though 🙂
I can think of maybe 4-5 cases where I actually use lazy (usually infinite) collections
and I’m pretty sure I only remember them because they are noteworthy for being the rare infinite colls
I end up using cycle a fair amount. It is sometimes surprising how often that pattern shows up
You can make pretty cool spinners with a simple cycle
yeah, in one case I use cycle to take N items from a randomised colour selection to colour some tabs in a CLJS lib I make
and then iterate is another common case of infinite colls
(cycle "|/-\\")
A nice classic spinner 🙂
hah, nice one
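A toy version of such a spinner might look like this (purely illustrative; frame count and delay are arbitrary):

```clojure
;; Print a rotating spinner frame in place every 100 ms, 20 frames total.
;; \r returns the cursor to the start of the line each time.
(doseq [c (take 20 (cycle "|/-\\"))]
  (print (str "\r" c))
  (flush)
  (Thread/sleep 100))
```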
the other way is to have different css styles/classes cycled
You can also make those annoying prehistoric marquees from the geocities days
my tabs use case for cycle: https://github.com/kuhumcst/stucco/blob/master/src/dk/cst/stucco/plastic.cljs#L28-L37
It’s because rename-keys is part of relational algebra and the natural way to represent a relational in Clojure is as a set of maps.
Transducer chains aren't much harder to read than ->> chains so I would also prefer transducers even for not so performance sensitive code
Premature optimization is one thing but wasting cycles for no good reason is another
Like after Java 8 people realized that it makes no sense to go converting most loops to Streams because you probably lose 20-30% on performance and simple loops read just fine
@nilern you’re saying premature optimisation is a good thing?? 😛
anyway, you’re probably right
I actually thought that Java 8 streams were kinda transducer-like, i.e. they were more performant than regular loops
but obviously it’s not something I ever researched very deeply 😛
like I seem to remember something something parallelism, laziness, blablabla
Transducers also usually get you to stop and think for a few seconds, which is always good when programming.
How many times did you find in code review someone used flatten when mapcat would have been fine? I often see it pop up when there's some confusion.
While worrying about optimization prematurely is silly, so is shooting oneself in the foot from the onset
Yeah, flatten is often a red flag, not because it is slow but because you have to ask: is a deep flatten even correct...
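A small illustration of why mapcat is usually the better fit (the example data here is made up):

```clojure
(def users [{:tags [:a :b]} {:tags [:c]}])

;; mapcat: exactly one level of concatenation, explicit about the shape
(mapcat :tags users)         ;; => (:a :b :c)

;; flatten gives the same result here, but it recurses into *any*
;; nesting, which can silently mask a wrong return shape:
(flatten (map :tags users))  ;; => (:a :b :c)
(flatten [[1 [2 [3]]]])      ;; => (1 2 3)
```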
so how do you deal with dependency cycles? I have this stateful "core" of what i'm writing at the moment, and an external service living two namespaces away, each depending on the previous. the service then needs to update the state. enter dependency cycle.
If there is a lot of data you can utilize more cores with .parallel()
But in the single-threaded case transducer/Stream pipelines only beat handwritten code if it is too gnarly to do everything in a single handwritten loop so intermediate collections are added
you can extract everything related to state, including functions to update, into separate namespace and then require it from every service and the core.
yup, like i said, it's usually a place where mapcat should be used, but somewhere the dev lost control of the return type - I saw it recently with a function which returned a sequence of maps, simplified some things, switched to mapcat and broke a test, because the test mocked the function to return a map instead of a sequence of maps. That function was then mapped on the input sequence, which made flatten work and mapcat not. It always feels good to fix wrong assumptions in tests on top of offensive functions
whenever I see flatten I read "I'm not sure what's going on here, yolo"
What’s better:
(vec
(map-indexed
(fn [idx x] [idx x])
[:a :b :c]))
;or
((comp vec map-indexed)
(fn [idx x] [idx x])
[:a :b :c])
(assuming I’m not using transducers)
beauty is in the eye of the beholder 🙂
Goal is to have a vector at the end.
Personally, if there’s more than one transformation happening, I switch to transducers.
In that case comp is just pointless obfuscation
Imo probably the first one, but there's also
(into []
(map-indexed (fn [idx x] [idx x]))
[:a :b :c])
(I'd still probably default to just wrapping the map-indexed call with vec)
user=> (reduce-kv (fn [acc i v] (conj acc [i v])) [] [:a :b :c])
[[0 :a] [1 :b] [2 :c]]
Vectors are associative collections from an index to a value.
Do you prefer that over all other options, in terms of readability or even shortness? Or you’re just showing other options 🙂
(into [] (map-indexed vector) [:a :b :c])
Or is this faster? (I haven’t done benchmarks on reduce-kv)
In terms of readability, I would not think about it at all and just extract it into a well-named function. :)
A map result could also be handy (zipmap (range) [:a :b :c])
This gives a wrong result though. Ah, the edit. :)
Sorry about that
I’m looking for the neatest code for an indexed vector for React components; map is no-go there.
Needs order.
If you really want to optimize (persistent! (reduce-kv (fn [acc i v] (conj! acc [i v])) (transient []) [:a :b :c]))
And idx accessible on each one.
In that case, not handy
into does exactly that. :)
It uses normal reduce but yeah
In any case, thanks for the input!
If the goal was to produce a map then reduce-kv can avoid allocating those kv pairs
(mapv vector (range) [:a :b :c])
But beware that mapv is eager
How has this not been posted? https://clojure.org/news/2021/04/06/state-of-clojure-2021
It was in #announcements yesterday
Fogus wrote this one. I am going to read tea leaves and say that Alex was busy getting spec 2 ready. Sometime this week? 😉
Thanks for publishing the survey and the write-up @fogus!
Urgh:
user=> (keyword "emp.id" (str 100))
:emp.id/100
user=> :emp.id/100
Syntax error reading source at (REPL:2:0).
Invalid token: :emp.id/100
I can obviously write a fn to normalise this, but is there any real way to deal with numeric keys?
I remember someone from the Clojure team was saying that keywords should not start with a digit (like symbols), but the ability to construct them has been retained for backward compatibility.
The problem I have is that I am receiving a json that I am then namespacing
As I said, I can always use a fn to prefix something
just don't
You can use non-keyword keys (string or integer keys) just fine, if that works for you @srijayanth
yes, I’ll just add a prefix
Thanks
Ah!
I seem to recall Rich saying that he might've used transducers as the basic abstraction for Clojure's collection APIs instead of seqs, if he'd thought of them back then. Am I remembering correctly? I'm very curious what that would look like, if so.
Sounds strange. Sometimes you really need first and rest -- https://www.metosin.fi/blog/malli-regex-schemas/ comes to mind.
Parsing style usage is an extreme case but stopping processing early or processing multiple collections gets awkward. There are solutions like reduced or various flatmapping and zipping operations but even if the little extra allocations go away after inlining, the pipeline abstraction stops being a friend at some point. I have used a lot of https://odis-labs.github.io/streaming/streaming/Streaming/Stream/index.html in OCaml but Java 8+ Streams seem similar.
@michael348 Yes I clearly recall this from a talk; I can’t remember which one but that sounds about right;
@nilern I am not so sure about rest; perhaps it’s a coding style, but I don’t really use it often in day-to-day code; It does feel pretty low level to me (as in, not something I’d default to);
another option is to write code in terms of a protocol, have a shared protocol namespace, and a separate implementation namespace that most consumers never need to care about
From the History of Clojure paper: > I think transducers are a fundamental primitive that decouples critical logic from list/sequence processing and construction, and if I had Clojure to do all over I would put them at the bottom.
Already in my PHP days I noticed that web apps are mostly a bunch of map, filter and doseq; even reduce feels quite unusual. Libraries and compilers are not so straightforward.
https://github.com/pixie-lang/pixie put transducers more at the bottom; but it still has seqs
I don’t recall him saying he wouldn’t have seqs; I remember it was a short comment/sentence rather than a fully articulated argument; Don’t want to speculate on what the exact idea of transducers vs. seqs would be;
@nilern My language before Clojure was also PHP!
The 2nd best! 😝
> I was taught assembler in my second year of school.
> It's kinda like construction work — with a toothpick for a tool.
> So when I made my senior year, I threw my code away,
> And learned the way to program that I still prefer today.
yeah, that History of Clojure quote is likely what I was thinking of. I'll take a look at pixie
I think he said something along the lines of "I wouldn't have made sequences lazy by default if I'd thought of transducers first"
maybe in this? https://www.youtube.com/watch?v=6mTbuzafcII
dedupe is a good example of how things would have looked differently had transducers been around from the start (probably)
The arity which accepts a collection is just:
([coll] (sequence (dedupe) coll))
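For reference, the transducer arity of dedupe keeps its one bit of state in a volatile; roughly like this (paraphrasing clojure.core, with the name changed):

```clojure
;; Dedupe-style stateful transducer: drop consecutive duplicates.
;; The previously seen value lives in a volatile local to each use.
(defn my-dedupe []
  (fn [rf]
    (let [pv (volatile! ::none)]
      (fn
        ([] (rf))
        ([result] (rf result))
        ([result input]
         (let [prior @pv]
           (vreset! pv input)
           (if (= prior input)
             result                    ;; consecutive duplicate: skip it
             (rf result input))))))))  ;; new value: pass it downstream

(sequence (my-dedupe) [1 1 2 2 3 1])
;; => (1 2 3 1)
```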
Sometimes the transducer solution forces you to hide mutable state in them, where with reduce you could just use the state argument, which feels like "cleaner" FP. I wonder if it would be possible to reinvent transducers so they would get a state argument passed in (like reduce) to somehow avoid mutation inside.
The state is not as encapsulated (in the sense of the ST monad) as, say, the transients inside into, but I think transduce, into et al. do encapsulate it, so it is only a problem when calling transducers directly, which basically only happens in library code
I do think that state in transducers does kind of make it harder to have a parallel transduce no? similar to the reducers fold?
I guess so although things like drop are not parallelizable anyway
Might as well slap an arbitrary monad in there while you are at it 😜
You could replicate something like that with a transducer like the xforms library's reductions: https://github.com/cgrand/xforms/blob/62375212a8604daad631c9024e9dbe1db4ec276b/src/net/cgrand/xforms.cljc#L491
you would need the reducing machinery to be aware of the state attached to the pipeline of transducers
I thought it might've been in this talk, not sure https://www.youtube.com/watch?v=4KqUvG8HPYo