If you can change op to a hash-map then it makes the code far simpler
(def operands {"+" + "-" - "*" * "/" /})
((operands "+") 4 3)
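For instance, a hypothetical `calculate` function (the name is just illustrative) built on that map, instead of a `cond`/`case` over each operator:

```clojure
;; Map from operator strings to the corresponding Clojure functions.
(def operands {"+" + "-" - "*" * "/" /})

(defn calculate
  "Applies the operator named by op-str to the given arguments.
  Throws if op-str is not a known operator."
  [op-str & args]
  (if-let [f (operands op-str)]
    (apply f args)
    (throw (ex-info "Unknown operator" {:op op-str}))))

(calculate "+" 4 3)  ;; => 7
(calculate "*" 2 5)  ;; => 10
```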
Thanks. I hadn’t thought of that!
Wow. That looks really interesting. Thanks for the pointer. Looks like a better way to both specify AND document your structures.
i've never made myself learn the looping options of format
but I love it when i see it in action
When I look at the :post function, maybe I don't understand the semantics of :post. What I'd like to do is assert that the value returned from `subtype?` is explicitly `true`, `false`, or `:dont-know`. I think I may be confused about using sets as membership tests. I'll change the post function to:
(fn [v] (member v '(true false :dont-know)))
I already have a `member` function in my utils library, defined as follows. Perhaps I should replace the final call to `some` with a `(loop ... recur)` which checks equivalence until it finds a match? I suspect that would be faster than rebuilding a singleton set and then checking set membership many times. I suspect a small `(loop ... recur)` would compile very efficiently, right?
(defn member
  "Determines whether the given target is an element of the given sequence."
  [target items]
  (boolean (cond (nil? target)   (some nil? items)
                 (false? target) (some false? items)
                 :else           (some #{target} items))))
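A sketch of that idea: the same `member`, with the final `some` plus singleton-set call replaced by an explicit `(loop ... recur)` over the items.

```clojure
;; Sketch: member via loop/recur instead of some + a singleton set.
;; Because = compares nil and false correctly, the nil?/false? special
;; cases collapse into the single loop.
(defn member
  "Determines whether the given target is an element of the given sequence."
  [target items]
  (loop [items (seq items)]
    (cond (nil? items)             false
          (= target (first items)) true
          :else                    (recur (next items)))))
```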
@andy.fingerhut You commented: It calls `subtype?` twice with the same parameters, once with `delay` wrapped around it, once without, and then compares the return values of the two.
Thanks for finding that. I believe that is a bug. It's great to have a second set of eyes look at the code. It should call `subtype?` within the delay with the arguments reversed, i.e., two types are equivalent if each is a subtype of the other. But don't check the second inclusion if the first is known to be false, because such a call may be compute-intensive and unnecessary. Looks like I'm missing something in my unit tests. :thinking_face:
The semantics of `type-equivalent?` are: if either `s1` or `s2` is `false`, then return `false` (the types are not equivalent). If both `s1` and `s2` are `true`, then return `true`. Otherwise call the given `default` function and return its return value if it returns.
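A minimal sketch of that logic (the function shape and the use of delays for `s1`/`s2` are assumptions pieced together from the discussion above, not the actual implementation):

```clojure
;; Sketch following the stated semantics. s1 and s2 are delays, so the
;; second (possibly expensive) subtype? call is never forced when the
;; first is already known to be false.
(defn type-equivalent?
  [s1 s2 default]
  (cond (false? @s1) false                     ; first inclusion fails; s2 never forced
        (false? @s2) false                     ; second inclusion fails
        (and (true? @s1) (true? @s2)) true     ; both inclusions hold
        :else (default)))                      ; at least one :dont-know

;; Plain delays standing in for the subtype? calls:
(type-equivalent? (delay true) (delay true) (fn [] :dont-know))        ;; => true
(type-equivalent? (delay false) (delay true) (fn [] :dont-know))       ;; => false
(type-equivalent? (delay true) (delay :dont-know) (fn [] :dont-know))  ;; => :dont-know
```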
I have a question about spec. I'd like to check whether an infinite lazy seq is valid, but this code loops forever. What should I do?
(s/valid? (s/coll-of int?) (range))
how could you check an infinite sequence? you’d have to check every element, which by definition would take an infinite amount of time
I just want to check 100 of them. So, I've used it like this so far, but it's too messy.
(s/def ::coll #(s/valid? (s/coll-of int?) (take 100 %)))
(s/fdef func
  :args (s/cat :coll ::coll)
  :ret  ::coll)
@andy.fingerhut WRT your comment: >>> `(doall <expr>)` is one general-purpose way to force any `<expr>` that returns a lazy sequence to realize all of its elements eagerly, without having to define separate eager versions of functions like `filter`, `map`, etc.
I don't completely follow. My eager versions of `filter`, `map`, etc. simply call `doall` as you suggest. Are you suggesting that it's better just to inline the call to `doall`? As a second point, I don't think `doall` really forces all lazy sequences, rather only the top-level one. For example, if I have a lazy sequence of lazy sequences, then as I understand it, `doall` will give me a non-lazy sequence of lazy sequences. Unless I misunderstand, if I want to use dynamic variables, then I have to go everywhere in my code that is producing a lazy sequence and somehow force it with `doall`.
s/every and s/every-kv do this already
They will check a bounded sample (up to 100 by default)
So just (s/def ::coll (s/every int?)) is exactly the same as above (actually better for gen etc)
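A quick check of the bounded-sampling behavior (the `::coll` name mirrors the snippet above):

```clojure
(require '[clojure.spec.alpha :as s])

;; s/every checks only a bounded sample of the collection (controlled
;; by s/*coll-check-limit*, 101 by default), so validating an infinite
;; lazy seq terminates.
(s/def ::coll (s/every int?))

(s/valid? ::coll (range))           ;; => true  (sampled prefix is all ints)
(s/valid? ::coll (map str (range))) ;; => false (sampled elements are strings)
```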
I once did a mildly hairy cl-format
for production code but before making the PR thought I should rewrite for the benefit of the Klojure Kids, one of whom had already said he did not know it existed (but seemed enthusiastic). The rewrite was monstrous, so I left it in: better they should learn cl-format
and be empowered forever.
You are correct that `doall` forces the top-level sequence, not nested ones.
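To illustrate, and as a sketch of a generic deep-forcing helper (`doall-deep` is a hypothetical name, not a core function):

```clojure
;; doall realizes only the outer sequence; the inner lazy seqs stay
;; unrealized until something walks them.
(def xs (doall (map (fn [i] (map inc (range i))) (range 3))))
(realized? (second xs))  ;; => false

;; Hypothetical helper that forces nested lazy seqs recursively:
(defn doall-deep
  "Recursively realizes a lazy sequence and any nested sequences."
  [x]
  (if (seq? x)
    (doall (map doall-deep x))
    x))

(realized? (second (doall-deep xs)))  ;; => true
```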
Set literals are constructed only once by the compiler's generated code, if they contain only constants, I believe. I would expect set containment to be faster than either `member` or `some` or an explicit loop, since sets use hash maps to check for membership and thus do not iterate over all elements, but for a 3-element set I doubt you will notice much difference in the context of your application.
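One wrinkle (and presumably why `member` above special-cases `nil` and `false`): calling the set as a function returns the member itself, which is falsy for `nil` and `false`, while `contains?` does a real membership test:

```clojure
;; Set-as-function lookup returns the matching element, so false and
;; nil members look "absent" under a truthiness check:
(#{true false :dont-know} false)            ;; => false
;; contains? reports membership directly:
(contains? #{true false :dont-know} false)  ;; => true
(contains? #{true false :dont-know} :maybe) ;; => false
```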
I've done a few experiments putting debug prints in a few places here and there trying to determine why the code sometimes creates lists that get twice as long, but I don't have any good clues yet. I doubt I will spend much more time on it. I suspect there is some kind of mutable data structure being used somewhere, but that is just a guess without evidence.
Hello, how do I flatten a vector of vectors of maps to get a vector of those maps?
This sample REPL session has integers instead of maps, but the code works regardless of whether the bottom thing is maps or any other type:
user=> (mapcat identity [[1 2] [3 4] [5 6]])
(1 2 3 4 5 6)
user=> (vec (mapcat identity [[1 2] [3 4] [5 6]]))
[1 2 3 4 5 6]
`apply concat` does that
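With actual maps at the bottom, a quick check (the data here is made up):

```clojure
;; Flattening one level of a vector of vectors of maps:
(vec (apply concat [[{:a 1} {:b 2}] [{:c 3}]]))
;; => [{:a 1} {:b 2} {:c 3}]

;; Equivalent with mapcat, as in the REPL session above:
(vec (mapcat identity [[{:a 1} {:b 2}] [{:c 3}]]))
;; => [{:a 1} {:b 2} {:c 3}]
```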
cljs::formula.events=> (int (char 97)) ==> 0
I'm having issues converting char to int on my cljs repl. This should not result in 0 right?
(int "a")
WARNING: cljs.core/bit-or, all arguments must be numbers, got [string number] instead at line 1 <cljs repl>
0
> The doc of `int` says: Coerce to int by stripping decimal places. (yet another clj and cljs difference)
This is more a host language difference or vm difference to me
There is no char type in js. And on the jvm chars exist and are ints
they are not ints
> An int value represents all Unicode code points, including supplementary code points.
Is what I’m going from
The JVM type system has distinct types for `char` and `short`, even though the values of those two types have a one-to-one correspondence to each other.
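You can see the distinction from a Clojure REPL on the JVM:

```clojure
;; A char is its own type, not an int, even though it converts to one:
(type \a)  ;; => java.lang.Character
(int \a)   ;; => 97
(= \a 97)  ;; => false (a Character is never equal to a number)
```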
on that note, how does it store chars with code points >64k? I know that Java uses UTF-16 encoding, but I guess you can't store an emoji, for example, in a `char`?
(more just wonderment than a real problem I'm struggling with :p)
it uses multiple chars
there are apis that understand this (and some older ones that do not, so some care is required)
I haven't read this full wikipedia page on utf-16 to see how good of a job it does explaining this, but there is a range of 16-bit values called "surrogates" in UTF-16 encoding of Unicode, such that a pair of surrogates can represent all Unicode code points that do not fit in a single 16-bit value: https://en.wikipedia.org/wiki/UTF-16
Yeah right. Reading up on it (and what Alex says), if you charAt a char requiring 4 bytes to represent, you get half of it. Thanks 🙂!
the codepoint apis understand that stuff
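For example, with an emoji outside the BMP, from a JVM REPL:

```clojure
;; U+1F600 (grinning face) is outside the BMP, so UTF-16 stores it as a
;; surrogate pair: two chars, but one code point.
(def s "\uD83D\uDE00")

(count s)                                 ;; => 2 (UTF-16 chars)
(.codePointAt s 0)                        ;; => 128512 (0x1F600)
(Character/charCount (.codePointAt s 0))  ;; => 2 (chars needed for this code point)
(.codePointCount s 0 (count s))           ;; => 1 (one real code point)
```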
Wondering how that came to be, but apparently surrogate pairs were not a thing when Java was first released. The more you know!
yeah, wasn't introduced until much later
My understanding is that when Unicode first started, they thought 16 bits would be enough for all code points. Java and a few other software systems started with UTF-16 with no need for surrogate pairs, then later added them when 16 bits was no longer enough.
The "History" section of the Wikipedia article on UTF-16 summarizes a few main points of who did what, although not necessarily exactly when (although the cited references might)
The history of computing is often fascinating 🙂
E.g.: we tried `x`, that turned out to be an oversimplification, then `y` and `z`, also an oversimplification, so now we started over with `a`, with little bits of legacy `x`, `y` and `z` in there.
That's interesting. I thought it was the same thing because s/coll-of called s/every. I didn't know there was a ::conform-all in coll-of. Thanks
@jr0cket i did have that option, but i wanted to get a deeper understanding of symbols and evaluation in clojure. thanks.