clojure

New to Clojure? Try the #beginners channel. Official docs: https://clojure.org/ Searchable message archives: https://clojurians-log.clojureverse.org/
Yehonathan Sharvit 2020-12-27T08:38:15.494500Z

A question about functions and their metadata: Why is the metadata of a function (e.g. docstring) attached to the var that refers to the function and not to the function itself?

Timur Latypoff 2020-12-27T08:50:20.497100Z

I’d go philosophical and say that it’s not metadata of a function, but metadata of the var. You can attach metadata to an anonymous function, but what is the practical use of it?
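For illustration, a function object can indeed carry metadata directly (attached with with-meta rather than read from a docstring):

(def f (with-meta (fn [x] x) {:purpose "demo"}))
(meta f) ;; => {:purpose "demo"}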

Yehonathan Sharvit 2020-12-27T09:10:50.497300Z

Yeah, but with metadata on vars, if you need to create a var that refers to an existing function, you lose the metadata.

(defn foo "A func" [])
(def bar foo)
(:doc (meta #'bar)) ;; => nil

Timur Latypoff 2020-12-27T09:16:49.001900Z

You’re right. Looks like it was just the simpler way to do it initially and simpler to understand (no need to remember which properties belong to the var, like :dynamic, and which belong to the fn). def’ing the functions under another name also breaks many editors’ intellisense, so I guess it is just assumed that if you’re doing it, you know the implications and know how to deal with them :)

teodorlu 2020-12-27T10:50:46.002100Z

I'd say that if you (def foo bar) you explicitly state that you want a different name. If you wanted the same thing, you could have used the same thing. It makes sense to me that different names have different metadata.

teodorlu 2020-12-27T10:51:31.002400Z

A reason to create a different name might be to add docs, to add metadata.

teodorlu 2020-12-27T10:52:01.002600Z

(not entirely confident about this line of thinking, I'm not sure I'm right)

vemv 2020-12-27T11:22:22.002800Z

> Why is the metadata of a function (e.g. docstring) attached to the var that refers to the function and not to the function itself?
Because vars can contain any kind of Object, not limited to functions. Not all Objects are IMetas, so a different approach wouldn't be nearly as homogeneous.
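A quick REPL illustration of that point:

(instance? clojure.lang.IMeta (fn []))    ;; => true, functions support metadata
(instance? clojure.lang.IMeta "a string") ;; => false
(with-meta "a string" {:doc "nope"})      ;; throws ClassCastException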

vemv 2020-12-27T11:24:20.003Z

(def bar #'foo) is a bit better, although the :doc and :arglists metadata won't be copied. It's possible to create a thin macro that does so (the Potemkin lib does that, but also a lot more, which is why it fell out of fashion)
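A minimal sketch of such a macro (not Potemkin's implementation; the name defalias is made up):

(defmacro defalias
  "Defines alias-sym as the value of src-sym and copies over its :doc and :arglists."
  [alias-sym src-sym]
  `(do
     (def ~alias-sym ~src-sym)
     (alter-meta! (var ~alias-sym)
                  merge
                  (select-keys (meta (var ~src-sym)) [:doc :arglists]))
     (var ~alias-sym)))

;; (defalias bar foo)
;; (:doc (meta #'bar)) ;; => "A func"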

RollACaster 2020-12-27T13:53:55.003400Z

I am currently working on this library https://github.com/rollacaster/org-parser-tree and I allow customizing it using multimethods: https://github.com/rollacaster/org-parser-tree#customizations I haven't seen any other libraries adding customizations with multimethods, and I was wondering if there is a reason for that, e.g. some better approach?
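Roughly the pattern in question (made-up names, not org-parser-tree's actual API): the library defines a defmulti and users extend it with defmethod.

;; library side
(defmulti parse-headline
  "Turns a raw headline into a tree node; dispatches on its :type."
  :type)

(defmethod parse-headline :default [headline]
  headline)

;; user side: a customization is just another defmethod
(defmethod parse-headline :todo [headline]
  (assoc headline :todo? true))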

p-himik 2020-12-27T14:26:36.003800Z

At least HoneySQL does this. But in v2 @seancorfield has made a switch from multimethods to atoms. I don't know why though. I know of two potential reasons not to use multimethods, or at least not to use them directly:
- They're slower (which is a cost with no gain if you don't need their flexibility)
- There might be a need to def more than one method to extend something in a meaningful way. In this case, you either have to put the onus on the users to make sure that they call defmethod for all the required multimethods, or you have to create an interface that lets them do it with one call, thus removing the potential to leave the system in a partially customized state; but that hides the multimethods from users, so you might as well use something else
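A hedged sketch of the atom-based alternative (not HoneySQL v2's actual internals): handlers live in an atom and a single registration call configures everything at once.

(defonce ^:private handlers (atom {}))

(defn register-handler!
  "Registers f as the handler for kind, replacing any existing one."
  [kind f]
  (swap! handlers assoc kind f))

(defn handle [kind data]
  (if-let [f (get @handlers kind)]
    (f data)
    (throw (ex-info "No handler registered" {:kind kind}))))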

alexmiller 2020-12-27T14:44:57.005200Z

Multimethods are not significantly slower

p-himik 2020-12-27T14:45:57.006500Z

Ah, another tiny reason - users will have to make sure that the namespaces with all the relevant defmethod calls are loaded before the very first usage of your library.
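That usually means something along these lines in user code (namespace names are made up):

(ns my.app
  ;; required only for its defmethod side effects
  (:require [my.app.custom-parsers]))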

alexmiller 2020-12-27T14:46:16.007100Z

They used to be a lot slower due to a gap in default-path caching, which gave them that reputation, but that has long since been fixed

👀 1
p-himik 2020-12-27T14:47:42.007900Z

That's good to hear, thanks!

RollACaster 2020-12-27T15:26:36.008100Z

Thank you for adding all this context to my question, it helped me a lot!

GGfpc 2020-12-27T18:58:56.013100Z

I have this question that is kinda bugging me, it's not strictly related to clojure but it's part of a clojure project. Currently I'm running a hobby project on docker and I have a frontend server that pushes messages to a backend node (via rabbitMQ) that computes stuff and the UI will poll the frontend server for the result. At some point the backend runs some tasks that use up all of the CPUs. I was hoping to scale horizontally on the number of backend nodes, but now that I think about it, since all containers could do the same heavy task at once this could kill my CPU, right? Would it be a better idea to just have multiple consumer threads in one clojure process and schedule around a fixed thread pool?

2020-12-27T19:30:23.014100Z

java threads plus immutable data already give you a lot of isolation; the difference is downtime and redundancy scaling. I'd split things based on their coupling when scaling infrastructure

2020-12-27T19:30:54.014800Z

it's definitely a win to put more things in one clojure process, since the jvm and clojure itself have significant overhead

2020-12-27T19:31:25.015400Z

of course, for something that will only ever be a hobby where downtime doesn't matter, you could put it all in one jvm

👆 1
GGfpc 2020-12-27T19:54:33.016300Z

Thanks for the insight. It's a hobby project but I don't really want it to go down as soon as it gets the slightest traction. I think I'll split the CPUs into two JVMs for at least some redundancy and then create 1 consumer per thread and use a shared claypoole pool for the heavy tasks
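A minimal sketch of that setup (assumes the com.climate/claypoole dependency; do-heavy-computation stands in for the real work):

(require '[com.climate.claypoole :as cp])

;; shared fixed-size pool, sized to the CPUs reserved for heavy tasks
(def heavy-pool (cp/threadpool (.availableProcessors (Runtime/getRuntime))))

(defn do-heavy-computation [msg]
  ;; placeholder for the real CPU-bound work
  (reduce + (range 10000000)))

(defn handle-message [msg]
  ;; each RabbitMQ consumer thread hands heavy work off to the shared pool
  (cp/future heavy-pool (do-heavy-computation msg)))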

2020-12-27T20:03:27.016600Z

one approach is to have a generic "worker" which can do tasks for various parts of the system, with a shared input (via a queue service); it's like the normalized version of the program
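A rough sketch of that shape (task names are made up): each queue message carries a :task key and the worker dispatches on it.

(defn- handle-task [{:keys [task payload]}]
  (case task
    :resize (println "resizing" payload)
    :report (println "reporting on" payload)
    (println "unknown task" task)))

(defn worker-loop
  "Pulls messages with take-message! (e.g. wrapping a queue consumer) until it returns nil."
  [take-message!]
  (loop []
    (when-let [msg (take-message!)]
      (handle-task msg)
      (recur))))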

Andrew Byala 2020-12-27T22:39:54.018600Z

Ok, this will sound a little silly, but could someone please verify something for me? I think the last example on https://clojuredocs.org/clojure.core/juxt is incorrect, in that it claims to return a list of vectors instead of a vector of lists. I would appreciate a sanity check before I change things! What it says:

(def split-by (juxt filter remove))

(split-by pos? [-1 -2 4 5 3 -9])
=> ([4 5 3] [-1 -2 -9])
What I believe happens:
(def split-by (juxt filter remove))

(split-by pos? [-1 -2 4 5 3 -9])
=> [(4 5 3) (-1 -2 -9)]
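A quick REPL check of the types involved (juxt returns a vector of the results; filter and remove return lazy seqs):

(vector? ((juxt filter remove) pos? [-1 -2 4 5 3 -9]))      ;; => true
(seq? (first ((juxt filter remove) pos? [-1 -2 4 5 3 -9]))) ;; => true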

James Kozianski 2020-12-27T22:44:50.018700Z

I get the same result as you, and that seems to be aligned with what the docs for juxt say

Andrew Byala 2020-12-27T22:47:11.018900Z

Thanks! Doc updated.