Hi @dmaltbie
It sounds like a combinatorics problem to me, at least for a brute force approach, so I'd reach for clojure.math.combinatorics and do something like this:
> clj -Sdeps '{:deps {org.clojure/math.combinatorics {:mvn/version "RELEASE"}}}'
user=> (require '[clojure.math.combinatorics :as c])
nil
user=> (defn small-pair [coll n]
         (->> (mapcat #(c/combinations coll (inc %)) (range (count coll)))
              (filter #(= n (reduce + %)))
              (sort-by count)
              (first)
              (map #(get (zipmap coll (range)) %))))
#'user/small-pair
user=> (small-pair [1,2,6,3,17,82,23,234] 26)
(3 6)
user=> (small-pair [1,2,6,3,17,82,23,234] 40)
(4 6)
user=> (small-pair [1,2,6,3,17,82,23,234] 23)
(6)
user=>
(it would be more efficient to calculate the zipmap just once in a let at the top, but I was just playing in a REPL to get to this point).
@zackteo ^
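For reference, a quick sketch of that tweak (same logic as small-pair above, just computing the zipmap once in a let):
(defn small-pair [coll n]
  ;; value -> index lookup, built once instead of once per result element
  (let [index-of (zipmap coll (range))]
    (->> (mapcat #(c/combinations coll (inc %)) (range (count coll)))
         (filter #(= n (reduce + %)))
         (sort-by count)
         (first)
         (map index-of))))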
Oh wow :o @seancorfield yeah I was thinking I needed to implement something like this https://www.geeksforgeeks.org/print-all-possible-combinations-of-r-elements-in-a-given-array-of-size-n/ but in Clojure
Was gonna attempt to foolhardily reference the Python code and make a conversion (if I had time)
The combinatorics library is great for stuff like this. Took me a few minutes to get that exact code above, but it really was just a dozen lines in the REPL looking at the shape of things as I worked from that top line down.
There's actually a function in there called subset that already does the mapcat.
I see I see. Thanks for your help!! :)
But in my opinion, for a programming challenge like that, using the combinatorics lib seems like cheating 😛, the hard part is figuring out how to generate the possible combinations. I don't know the rules of your event, so maybe you want to check that
But in idiomatic Clojure you'd just use that library, yes
Yeah I was thinking about that too :P but technically it is part of Clojure, similar to math in Python, hmmm.
I see I see!
It's not though, you need to add an extra dependency on it. So similarly, it won't work if they have some automated runner or something like that
@didibus Oh, cool, I hadn't read far enough through the docs. I thought there should be an easier way!
(and it's subsets)
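An untested sketch of what that simplification might look like with c/subsets (it includes the empty subset, hence the rest):
(defn small-pair [coll n]
  (let [index-of (zipmap coll (range))]
    (->> (rest (c/subsets coll))        ; drop the leading empty subset
         (filter #(= n (reduce + %)))
         (sort-by count)
         (first)
         (map index-of))))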
Seems weird that in order to register for an event, you're supposed to solve a programming puzzle... I wonder how they "validate" responses?
I unfortunately don't have a lot of time or energy to work on it. So I'll probably just go with combinatorics. It isn't a test runner but we need to explain the solution for this one in a txt doc 😅
Oh, it's like an algo competition I guess. I was curious because I wouldn't ordinarily think they'd allow Clojure submissions
I consider all the Contrib libraries "fair game" as a sort of "standard library" for Clojure but not everyone shares that view.
(but then I maintain four of them and have helped with several others so I might be a bit biased 🙂 )
This http://creditsuisse.recsolu.com/external/events/O4d5CSpiL25wEbLEAwcPkQ
hahahaha
Ya that's neat they allow Clojure
I suppose it doesn't matter too much to me if they don't accept it as a submission - I wanted to use it to motivate me to learn a bit more Clojure as I try to tackle problems
Well, I think Rich Hickey does not consider them "core" 😛 Not sure, but that's the impression I get. I personally don't care if it's "official" or not as long as it works and does what I need. My perspective is just that I need to add an extra dependency, and that's always an "extra step", so I feel they are not "standard" because of that
It's been decades since I've had to write stuff like combinations from scratch. It's like the tired old stuff with search algorithms in interviews. No one outside the FAANG companies implements this stuff themselves anymore.
Good luck with the application @zackteo and, as you say, it's a good motivation to learn more about Clojure even if you don't get accepted.
Ya, generating combinations is not an easy algo even. I'd consider this a hard question
Mmmmm, unfortunately @seancorfield I think I still need to mindlessly drill such problems depending on what companies I apply for ><
But anyhow, thanks guys 😄
@zackteo This is what I got
(def data [1 2 6 3 17 82 23 234])

(defn recursive-comb
  [kset-index nset-index nset kset func]
  (if (neg? kset-index)
    (func kset)
    (doseq [i (range nset-index (- (dec (count nset)) kset-index))]
      (recursive-comb (dec kset-index) (inc i) nset
                      (assoc kset kset-index (get nset i)) func))))

(defn combinations
  [n k]
  (let [nset (vec (range (inc n)))
        kset (vec (repeat k 0))
        res  (atom [])
        func #(swap! res conj %)]
    (recursive-comb (dec k) 0 nset kset func)
    @res))

(defn all-combinations
  [n]
  (mapcat #(combinations n %)
          (range (inc n))))

(defn smallest-pair
  [ints sum]
  (let [indices (all-combinations (count ints))]
    (reduce
      (fn [acc e]
        (if (= sum (reduce + (map #(get ints %) e)))
          (reduced e)
          nil))
      0 indices)))

(smallest-pair data 26)
(smallest-pair data 40)
(smallest-pair data 23)
Credit mostly goes to this MATLAB solution I adapted from: https://stackoverflow.com/a/35689122/172272
@didibus Thanks 🙂 Let me try to take a look at it!
So I have a laziness-related issue. Basically, I have this recursive function that builds a lazy seq of all possible solutions to a problem, by recursively calling itself with a smaller problem (to get all solutions to the smaller problem) and then appending all solutions to the ‘bigger’ problem to each solution of the smaller problem with mapcat (think dynamic programming).
Because generating solutions is somewhat costly, I would like the generated seq to be completely lazy (as in, if I ask for one solution, it just computes one solution to each sub-problem). However, this doesn’t seem to be the case — something’s forcing at least a significant number of solutions to be generated. Now, I think that mapcat is probably the problem here, because it seems to do some sort of chunking. I’d like to write an equivalent that never realizes more elements of the colls than what is needed.
So: is there a way to write such a ‘completely’ lazy mapcat?
Here’s an example of how mapcat seems to realize the first coll eagerly:
> (def a (mapcat #(for [x %] (do (println x) x)) (for [_ (range 10)] (range 10))))
#'my.repl/a
> (first a)
0
1
2
3
4
5
6
7
8
9
0
[cli.repl]> (second a)
1
You can try this:
(defn unchunk
  [s]
  (lazy-seq
    (when (seq s)
      (cons (first s) (unchunk (rest s))))))

(def a (mapcat #(for [x %] (do (println x) x))
               (for [_ (range 10)] (unchunk (range 10)))))
> (first a)
0
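Another angle, just as a rough sketch: mapcat is built on apply + concat, which ends up calling f on the first few inputs up front, so a hand-rolled fully lazy version can also help (though it won't fix chunking inside the seqs f itself returns):
(defn lazy-mapcat
  "Like mapcat, but walks the outer collection one element at a time."
  [f coll]
  (lazy-seq
    (when-let [s (seq coll)]
      (concat (f (first s))
              (lazy-mapcat f (rest s))))))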
Thanks @didibus! Alas, it seems the problem actually lies elsewhere… I’ll keep looking 🙂
…I’m thinking (for [x xs y (unchunk (f x))] y) should do the trick, so maybe the problem is that f is being eager for whatever reason…
turns out the problem was that I was using schemas for the function outputs :face_palm:
I'm betting there's a nice trick to do {:a ["foo" "bar"] :b ["baz" "quux"] :c ["I'm" "out"]} => {"foo" :a "bar" :a "baz" :b "quux" :b "I'm" :c "out" :c}. My current solution involves a nested reduce. It's expected that the items in the vectors are distinct.
I would do (into {} (for [[k vs] input, v vs] [v k]))
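A quick REPL check of that, using the example map above:
(into {} (for [[k vs] {:a ["foo" "bar"] :b ["baz" "quux"] :c ["I'm" "out"]}, v vs] [v k]))
;;=> {"foo" :a, "bar" :a, "baz" :b, "quux" :b, "I'm" :c, "out" :c}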
I like that. I don't use that part of for nearly enough. Thanks!
had a similar problem (generating solutions by recursively generating and returning a lazy sequence of the solutions) - my implementation is here: https://github.com/ekoontz/menard/blob/master/src/menard/generate.cljc#L77
I guess I’m doing the unchunk in that function generate-all
Hi all, I don't know whether this is the right place for this, but over the weekend I was watching some conference presentation that talked about the limitations of the 'text-editing/file' paradigm for source code. That got me thinking about what a different approach could be, and I put down some thoughts about a product https://github.com/RedPenguin101/Formative. There is no code for this (and frankly I'm not capable of writing it), but I would be interested in people's thoughts around this approach, which I think Clojure is uniquely suited for and plays to Clojure's strengths. Also interested if something like this already exists! Thanks much
A little while ago I recall seeing a clojure editor like this
“the limitations of the ‘text-editing/file’ paradigm”? Are we sure about that? As some famous language inventor once said, “Programming is not about typing.” Beware also the unreasoned embrace of decomposition; its prima facie appeal may ignore the way developers actually work with text. I can thrash out a hundred lines that would decompose into a fifteen node tree, and then decide, Naw, that won’t hunt. With a hundred lines of text I just slice and dice a refactoring in seconds, with a structural editor I am now fighting my IDE just to say what I mean. A good counterpoint here is that I have played with structural editors like Paredit and they are nifty, but I keep them turned off: fighting them at certain times is not justified by the keystrokes they generally save. Cue again the “not about typing” gem.
Yeah, I think Aurora was more in the right direction than the early LightTable demos -- as intriguing as they were -- because Aurora operates at an entirely higher level of abstraction. But even then, I think the visual/symbolic development systems have a very long way to go before they're going to make any inroads into mainstream programming. Our "muscle memory" for traversing files and directories and viewing deliberately arranged groups of functions will take a huge amount of "better" before we shift...
@hiskennyness If I'm understanding the example you give about writing 100 lines of code (presumably in a function?) and then refactoring into several functions, it wouldn't be precluded by either structural editing or the 'form editor' approach. You could extract forms into new functions, or conversely, inline them, exactly as you would in a text editor. Both structural editing and form editors constrain the programmer, but the goal would be to constrain in a way that encourages an opinionated view on the best way to do things - a very Clojure-ish thing, I feel. Structural editing, I think, is about moving 'up a level' in how we interact with forms, away from editing characters, which is a legacy of how things are stored on the machine. What I would be interested in is trying to do the same thing with the file, another legacy of how things are stored on the machine. Obviously with both you introduce constraints. So I guess, thinking about it more, 'limitation of the text-editing/file paradigm' is the opposite of what I meant - it's about introducing limitations to those things in the name of creating a (subjectively) better experience.
@seancorfield interesting you mention the 'muscle memory'. The talk that prompted this was https://www.youtube.com/watch?v=8pTEmbeENF4, which is essentially about just that.
No, @allaboutthatmace1789. Not refactoring into functions. Just slicing/dicing text, during which process the text is not valid Clojure. The syntax highlighter is having a cow. Then I clean it up. Anything preventing me from mangling text is in my way. ie, You missed what I said about Paredit. Have you looked at visual programming tools from way back when? Definitely a tempting idea, but like I said, I keep Paredit turned off for a reason.
Oh, I see what you mean - yeah I can see how that would stop structural editing being useful.
@hiskennyness You'd need to provide details, I feel all the slicing/dicing you're doing is probably covered by some structural edit which you don't know about. That said, I won't deny that we can figure our way to any edits more easily with only knowing a few text editing commands and that can be a friendlier experience. I recommend you watch the talk I linked to Unison. Because I don't think you're seeing exactly what I'm talking about. Text editing can be your interface of choice if you want, and it could be visual, or structural, or something else. When the program source is no longer textual, those things are just various UX you can build on top. What happens in unison is that your text files are not your source code. They are just an interface to them. And you cannot add a function which isn't valid to the source. Think a kind of per-function git commit, where you have a hook that asserts the function compiles to allow the commit
In fact in Unison, you can delete the textual code, and then Unison can regenerate it for you, though it won't be exactly the text you typed, it will be the same function
That video is a little rough. The win I saw was great, but fundamentally about being immune to bit rot, dependencies, versioning, and of course deployment. I will take a look at the doc and see if I can find anything about structural editing. But Unison looks like a nice solution for the bits I grokked. thx for the lead.
Well, you're right, there's no structural editing per se, you can use the editor of your choice. But I'm not so much talking about the editing being structural as that the source code is edited in well-behaved structures.
Basically, until you add the function in the source, the function is not in the source. So I consider the add function command a structural edit of the source. And you cannot add a function that doesn't compile. And another example is renaming, you rename not by changing the name on text all over the place, renaming is also a structural command edit you run on the source, etc.
Maybe I shouldn't say structural, maybe it's better to call it semantic editing?
Folks interested in this stuff, come check out the https://futureofcoding.org/ community! There's lots of people in the slack working on similar projects
@didibus Natural language is tough. 🙂 I did realize we were talking about different things after a while. I was too hasty in thinking I knew to what you referred. Unison is intriguing!
@tomisme Super interesting, thanks for the link!
The old VisualAge editor for Java had a paradigm similar to this for single-method focused editing (I believe due to its roots as a Smalltalk environment). It was not a great fit for Java as you typically needed more context than a single method but would be better for Clojure. The graph stuff always seems like a great idea on small examples - in practice, there are practical limits to how much of a graph you can fit on a page and still have nodes be readable and edges understandable, particularly in cases of high fan-out/fan-in.
I shared this in #off-topic earlier this week, but somewhat relevant to your topic: here's a demo of the CEDAR environment from Xerox PARC showing off the state-of-the-art from ~1984 https://youtu.be/z_dt7NG38V4?t=2305 around here he starts demonstrating some of the general editor functionality and then gets into more of the developer functionality with navigating/editing/building code. The way he describes mouse navigation/selection granularity working may be of interest to you.
The original vision for LightTable was very much around functions but early trials indicated people found that hard to work with and it evolved into a more traditional editor. I remember back in the '90s, HP were working on a C++ development environment that was sort of along these lines, with editing focused on classes and drill down into methods. Very experimental stuff that ended up going nowhere I think, but it fed into HP's feedback about templates on the ANSI Standards Committee, and I got to see a prototype of it running at one of the committee meetings.
Hmm, it does sound like it falls in the category of ‘if it worked, someone would’ve got it working by now’. Especially if lighttable already tried and abandoned it.
Chris Granger talked about how LightTable had to pivot at one of the Conj's I think. Here's an indication of his original ideas about it https://www.chris-granger.com/2012/04/12/light-table-a-new-ide-concept/ and how much it drew on Smalltalk for that https://www.chris-granger.com/2012/10/05/all-ideas-are-old-ideas/ and this shows how it turned into a much more traditional editor https://www.chris-granger.com/2013/02/27/light-table-030-experience/
Cross posting this question since I think there is a decent chance any such library would be compatible with both… https://clojurians.slack.com/archives/C03S1L9DN/p1599928442460000
question about the clojure implementation / java concurrency model
have you tried instaparse? https://github.com/engelberg/instaparse . probably wouldn't be too hard to find/coerce a json grammar to work with it
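If it helps, here's a rough, untested sketch of the shape that could take with instaparse (a toy grammar for flat objects of strings and integers only, nowhere near full JSON):
(require '[instaparse.core :as insta])

(def json-ish
  (insta/parser
    "object  = <'{'> (pair (<','> pair)*)? <'}'>
     pair    = string <':'> value
     <value> = string | number
     string  = <'\"'> #'[^\"]*' <'\"'>
     number  = #'-?[0-9]+'"))

(json-ish "{\"a\":1,\"b\":\"two\"}")
;; => a hiccup-style tree like [:object [:pair [:string "a"] [:number "1"]] ...]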
since I am going through and trying to copy-paste the core vector/map/set data structures in a more "java friendly" face
persistent vector has a field for caching its hash code
and on the first time through it will calculate it
but...
this._hash = hash
is this assignment atomic?
how can we know that two writes won't clobber each other in weird or undefined ways
if it's pure, does it matter? worst case is that some extra work is done.
well, the extra work isn't an issue - that hash == 0 call can be false and it won't matter
but lets say the hash code is 4 bytes (so we can reason about it)
and it's 7
so 0111
I don't know if that assignment could be read halfway through writing bytes
so maybe it starts at 0000
and only 0100
has been written so far
https://stackoverflow.com/questions/4756536/what-operations-in-java-are-considered-atomic
assignments of ints are atomic
> another thread is guaranteed not to see a partially written int
hadn’t heard of this. thanks, I will check into it
Assignments of all primitive types are atomic in Java (except 64-bit longs and doubles - commonly known as “long tearing”)
This pattern for setting the hash is racy, but safe as written - either you observe the computed hash or compute and store it, possibly writing over another thread's value atomically
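For what it's worth, a rough Clojure-side sketch of that racy-but-benign caching idea (a made-up deftype with a nil-or-Integer cache field, rather than the primitive int field discussed above):
(deftype CachedHash [coll ^:unsynchronized-mutable cache]
  clojure.lang.IHashEq
  (hasheq [_]
    (if cache
      (int cache)
      (let [h (hash coll)]
        ;; racy write, but every thread that races computes the same h,
        ;; so a lost or duplicated write is harmless
        (set! cache h)
        h))))

(def v (->CachedHash [1 2 3] nil))
(hash v) ; computes and caches
(hash v) ; served from the cache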
Java Concurrency in Practice is a great book for stuff like this, well worth reading even if you’re not using Java
Hmm, a big chunk of what is in APersistentVector I can get for free with AbstractList
so...
yep
a bit of hoopla for mutable lists that is wasteful, so Clojure not using it makes perfect sense
Are pointers (object references) in a 64-bit JVM also susceptible to tearing? Maybe I have it backwards, as in a long on a 32-bit JVM needs 2 separate machine ops to be assigned. So maybe a better question is: is tearing possible at all on a 64-bit JVM, since the machine should be able to copy 64 bits in one shot?
Whether or not it is possible on a 64-bit JVM, I doubt anyone would recommend writing code that works on a 64-bit JVM but fails on a 32-bit JVM, since there are so few extra things needing care on a 32-bit JVM.
I will second the recommendation for the Java Concurrency in Practice book. Lots of info about what is safe in the JVM with multiple threads, and what is not.
This Stack Overflow question quotes a portion of the JLS (Java Language specification) that says that however a JVM implements object references, reads and writes of them must be implemented atomically.
https://stackoverflow.com/questions/2576513/is-writing-a-reference-atomic-on-64bit-vms
If you put objects inside of a Clojure collection whose hash values can change after they are added to the collection, then there can be a data race there. Thus one should not do so.
Clojure's immutable values, including collections that contain only immutable values, never change their hash value once created, so they are all safe to put inside of such Clojure collections.
Putting mutable Java objects inside of such collections, and then mutating them later so their hash value changes, is a recipe for bugs.
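A quick hypothetical REPL illustration of that failure mode:
(def al (java.util.ArrayList. [1 2]))
(def s #{al})
(contains? s al)  ;=> true
(.add al 3)       ; mutating al changes its hash code
(contains? s al)  ;=> almost certainly false now - it was stored under the old hash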