Hi. Is arity checking a runtime or compile-time thing? For example, with (defn f [a b c]), calling (apply f '(1 2)) incurs Execution error (ArityException)
it is checked at run time. Some linters like Eastwood, and probably also clj-kondo, can warn about arity mismatches that can be detected from the source code at compile time, but probably not for apply calls where a variable number of args is supplied in the last sequence.
Many such cases involving apply are in general not possible to detect at compile time (as in, as hard to solve as the halting problem)
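To make the run-time part concrete, here is a minimal REPL sketch of the example above (eval/line numbers elided):
user=> (defn f [a b c] (+ a b c))
#'user/f
user=> (apply f '(1 2))
Execution error (ArityException) at user/eval... (REPL:...).
Wrong number of args (2) passed to: user/f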
The check is against the function itself, not the :arglists meta info on the related var, right?
I am pretty sure it checks against the function itself. It is pretty easy to alter the arglist meta info of a function to mismatch what the function takes, and find out, in a short REPL session. See sample REPL session that I added in a thread to this message, if you are curious.
it might be helpful to describe it as a lookup failure, you aren't validating that a given arg count is accepted, you are looking up the invoke arity, and blowing up when it isn't there
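A rough sketch of that lookup view, using interop on IFn (this assumes, as I understand it, that a compiled fn extends clojure.lang.AFn, which supplies throwing defaults for the invoke arities you did not define):
(def two-arg (fn [x y] (+ x y)))
(.invoke ^clojure.lang.IFn two-arg 1 2) ;; => 3, a two-arg invoke override exists
(.invoke ^clojure.lang.IFn two-arg 1)   ;; ArityException: no one-arg override, so the default throws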
user=> (defn foo
         ([x y] (+ x y))
         ([x y z w] (+ x y z w)))
#'user/foo
user=> (meta #'foo)
{:arglists ([x y] [x y z w]), :line 7, :column 1, :file "NO_SOURCE_PATH", :name foo, :ns #object[clojure.lang.Namespace 0x55424fb0 "user"]}
user=> (foo 1)
Execution error (ArityException) at user/eval145 (REPL:11).
Wrong number of args (1) passed to: user/foo
user=> (alter-meta! #'foo #(assoc % :arglists '([x])))
{:arglists ([x]), :line 7, :column 1, :file "NO_SOURCE_PATH", :name foo, :ns #object[clojure.lang.Namespace 0x55424fb0 "user"]}
user=> (meta #'foo)
{:arglists ([x]), :line 7, :column 1, :file "NO_SOURCE_PATH", :name foo, :ns #object[clojure.lang.Namespace 0x55424fb0 "user"]}
user=> (foo 1)
Execution error (ArityException) at user/eval154 (REPL:14).
Wrong number of args (1) passed to: user/foo
user=> (foo 1 2)
3
user=> (foo 1 2 3)
Execution error (ArityException) at user/eval158 (REPL:16).
Wrong number of args (3) passed to: user/foo
user=> (foo 1 2 3 4)
10
yes. that’s what I checked. Thanks for that.
Hello, when we do (future & body), we run the body on a separate thread, as far as I understand. Will all of the body be executed by the same thread? Will (prn (.getId (Thread/currentThread))) at any place in the body always give the same ID?
I have a function that I want to behave differently depending on the future it executes in.
futures are multiplexed over a thread pool
so it could be many threads (they will die off if not used)
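A quick REPL sketch of what that means in practice (thread IDs are just examples; this assumes the default thread pool that future uses):
(defn thread-id [] (.getId (Thread/currentThread)))

@(future [(thread-id) (thread-id)])
;; => e.g. [32 32] -- a single future's body runs on one thread
[@(future (thread-id)) @(future (thread-id))]
;; => two separate futures may or may not share a thread, depending on pool reuse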
thank you, I will read more of those and think of alternatives
Hello folks! I often see a problem where in a long-running REPL the (RT/baseLoader) becomes this enormous chain of DynamicClassLoaders. Basically, every form compile operation adds another loader onto the stack. As a result, certain class-resolving calls become woefully slow, as they have to walk the whole chain before they can report an error, for example.
This classloader chain growth – is this something expected, or is there something wrong with my setup?
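One way to see how deep the chain has grown is to walk getParent from the REPL (a small sketch; loader-chain is a name made up for illustration):
(defn loader-chain [^ClassLoader cl]
  (when cl
    (cons cl (lazy-seq (loader-chain (.getParent cl))))))

(count (loader-chain (clojure.lang.RT/baseLoader)))
;; => a handful in a fresh REPL, potentially thousands in a long-running one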
There was recently a discussion about it, you can still find it. tl;dr: one possible reason is if you use an old version of nREPL: https://github.com/nrepl/nrepl/issues/8
Thank you! I think I've seen this ticket before, but couldn't find it again. It looks like the fix that was implemented is not a fix at all, since it only resets the context classloader of the thread, while (RT/baseLoader) looks at the clojure.lang.Compiler/LOADER dynamic variable first.
In the docstring for the reify macro there is this sentence:
Methods should be supplied for all methods of the desired protocol(s) and interface(s).
However, right now it is possible to implement a subset of methods:
(defprotocol Foo
  (method-1 [this] [this a])
  (method-2 [this] [this a] [this a b]))

(method-2
  (reify Foo
    (method-2 [this a b] (format "(method-2 %s %s)" a b)))
  1 2)
Is it ok to rely on that behavior? Probably it is unlikely to change in the future, because changing it could potentially break compatibility with code written for older versions of Clojure. Maybe the docstring should be changed?
We could be nitpicky and say that the docstring may be technically correct already, because it says "should" and not "must". 😄
right ) but at the same time the word "all" brings confusion )
I think it's more of a good practice to prevent run-time errors than a necessity.
they should be supplied, but it's ok to be lazy if there are methods you know won't be called or whatever
and it would be a breaking change at this point to require all methods to be impl'ed
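For completeness, a sketch of the run-time error you would hit if an unimplemented method does get called (reusing the Foo protocol from above; I believe it surfaces as an AbstractMethodError, since the generated class implements the protocol interface without that method):
(def partial-impl
  (reify Foo
    (method-2 [this a b] [a b])))

(method-2 partial-impl 1 2) ;; => [1 2]
(method-1 partial-impl)     ;; run-time error (AbstractMethodError)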
thanks for clarification @alexmiller
The fix seemed to be effective (I cherry-picked it over clojure.tools.nrepl because I don't use the more modern nrepl). On top of it, changing SoftReference to WeakReference here https://github.com/clojure/clojure/blob/c1ede7899d8d7dd4dcd6072d5c0f072b9b2be391/src/jvm/clojure/lang/DynamicClassLoader.java#L47 had a tremendous effect on my dev experience - now GC kicks in more frequently, so overall code loading time is faster and more stable. (https://github.com/clojure/clojure/commit/85e99ee9 went in the exact opposite direction, so I doubt an upstream proposal would succeed. Nonetheless, building Clojure with my change passes and the result is stable.)
> This classloader chain growth – is this something expected, or is there something wrong with my setup?
My understanding is that it's somewhat normal to have even thousands of DynamicClassLoaders around. A given DCL can only load a class once, so often on eval, load, etc. a new DCL must be created.
They do get GCed, which is what the Soft/Weak references control (with Soft it happens, but e.g. an hour later, once it's unavoidable)
(of course take my analysis with a grain of salt - I'm just a guy and his YourKit)
@vemv Thank you for the response! I also observed that the SoftReferences used for the cache become problematic in my use case. Those references are freed only when an OOM error is triggered, so I went as far as artificially producing an OutOfMemoryError by creating a large array during the reload cycle. But it still doesn't work very reliably.
If you find something interesting don't hesitate to share over DM :) I only started using my (slight) clojure fork this week, am open to other solutions/insights
I never experienced an actual OOM btw. I use a lot of JVM flags though b/c of unrelated OOMs in the past
Can't remember exactly, but if you use recur, that is somewhat optimized, but it's not equivalent to tail-call optimization, right? The downside of that being that the recursion is not as performant as a tail-call-optimized implementation, which Java doesn't currently support?
recur is a self tail call and is optimized. Tail-call optimization usually covers both self calls and calls to other functions. Clojure only offers explicit self-tail-call optimization.
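A tiny sketch of what that buys you: recur compiles to a jump back to the loop head rather than a new stack frame per call, so this runs in constant stack space:
(defn count-up [n]
  (loop [i 0]
    (if (< i n)
      (recur (inc i))
      i)))

(count-up 10000000) ;; => 10000000, no StackOverflowError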
There's also trampoline for mutual recursion.
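For example, a classic mutual-recursion sketch with trampoline (the my-even?/my-odd? pair here is purely illustrative):
(declare my-odd?)

(defn my-even? [n]
  (if (zero? n) true #(my-odd? (dec n))))

(defn my-odd? [n]
  (if (zero? n) false #(my-even? (dec n))))

(trampoline my-even? 1000000) ;; => true, without blowing the stack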
I think I've only used trampoline one time in 10 years of Clojure work
How do you even implement a mutually recursive algorithm with an iterative approach?
Don't you need a GOTO?
that's what trampoline is for.
Ya, but I was thinking like in Java what would you do? You'd just implement a form of trampoline?
Or there's something you can do with like named nested loops or using break/continue ?
function process(startOp, startData) {
  var nextOp = startOp;
  var nextData = startData;
  while (true) {
    // doOp is assumed to return [nextOp, nextData]
    [nextOp, nextData] = doOp(nextOp, nextData);
    if (nextOp == QUIT_OP) {
      break;
    }
  }
}
it's kinda the same idea
Ya I mean that's just a trampoline.
also, that's pseudo-JS rather than pseudo-Java
But I think actually you can use named loops for it
I'm not familiar with named loops, but that sounds plausible
bar: for (...) {
    foo: for (...) {
        do_whatever();
        continue bar;  // a labeled continue can jump back to any enclosing loop
    }
    do_some_other_thing();
}
Something like that
Which algorithm in particular :thinking_face:?
I don't know, can't think of anything that requires mutual recursion 😛
I'm just wondering, if there was some algo that is simple to implement with mutual recursion, what would the iterative approach look like?
You can implement your own replacement for the call stack.
Hum, I need to be more precise. I'm talking about an algorithm that is mutually recursive but does not need the stack either. So like a case where not having indirect recursion TCO in Clojure is a "problem"
Then do a loop with a variable saying what "function" to call next, which is implemented as a case statement with one branch per "function". Each branch pushes something onto the hand-implemented "call stack" if you want to "call a function", or pops something off of it if it wants to "return".
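A minimal Clojure sketch of that idea for the tail-call-only case (one keyword per "function" and a loop as the dispatcher; a version with non-tail calls would also push and pop frames on an explicit stack):
(defn dispatch-even? [n]
  (loop [op :even? n n]
    (case op
      :even? (if (zero? n) true  (recur :odd?  (dec n)))
      :odd?  (if (zero? n) false (recur :even? (dec n))))))

(dispatch-even? 1000001) ;; => false, with no stack growth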
You mean: there is mutual recursion, but every call is a tail call?
So trampoline IS the iterative approach to indirect and mutual recursion ?
Ya, like I'm thinking a case where someone goes: "Dang it, this would be so much easier to implement if only Clojure had TCO for indirect recursion"
And then, well Java doesn't have that either, so what would Java do? Would it just also create a sort of Trampoline, or is there some other imperative feature like GOTO, or labeled loops or other that can be used for it as well
TCO is about perf, not about easy 😛
I feel TCO is about making performance easy
Some algos are easier to implement recursively (also, that's sometimes the only way to implement them immutably), but if you didn't have TCO, you'd be forced to take the harder route of implementing them with the iterative algo
I guess readability will depend on the algo too
public static void visitNode(File file) {
    if (file.isDirectory()) {
        visitChildren(file.listFiles());
    } else {
        process(file);
    }
}

public static void visitChildren(File[] files) {
    for (File file : files) {
        visitNode(file);
    }
}
;; can be written like so:
(loop [[file & files' :as files] files]
  (cond
    (empty? files)    nil
    (directory? file) (recur (concat (children file)
                                     files'))
    :else             (do (process file)
                          (recur files'))))
That's a good example
So what would be the iterative approach?