If you can change op to a hash-map then it makes the code far simpler
(def operands {"+" + "-" - "*" * "/" /})
((operands "+") 4 3)
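A slightly fuller sketch of that idea (the apply-op helper name is my invention, just for illustration):

```clojure
(def operands {"+" + "-" - "*" * "/" /})

;; hypothetical helper: look the operator up in the map, fail loudly if unknown
(defn apply-op [op & args]
  (if-let [f (operands op)]
    (apply f args)
    (throw (ex-info "unknown operator" {:op op}))))

(apply-op "+" 4 3)  ;; => 7
(apply-op "*" 4 3)  ;; => 12
```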
I have a question about spec. I'd like to check whether an infinite lazy seq is valid, but this code loops forever. What should I do?
(s/valid? (s/coll-of int?) (range))
how could you check an infinite sequence? you’d have to check every element, which by definition would take an infinite amount of time
I just want to check 100 of them. So, I've used it like this so far, but it's too messy.
(s/def ::coll #(s/valid? (s/coll-of int?) (take 100 %)))
(s/fdef func
:args (s/cat :coll ::coll)
:ret ::coll)
s/every and s/every-kv do this already
They will check a bounded sample (up to s/*coll-check-limit* elements, 101 by default)
So just (s/def ::coll (s/every int?)) is effectively the same as the above (and actually better for gen etc)
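A minimal sketch of that, assuming clojure.spec.alpha is required as s:

```clojure
(require '[clojure.spec.alpha :as s])

;; s/every validates only a bounded sample of the collection
;; (s/*coll-check-limit*, default 101), so it terminates on infinite seqs
(s/def ::coll (s/every int?))

(s/valid? ::coll (range))    ;; => true  (checks a bounded prefix)
(s/valid? ::coll [1 2 :x])   ;; => false
```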
Hello, how do I flatten a vector of vectors of maps into a vector of those maps?
This sample REPL session has integers instead of maps, but the code works regardless of whether the innermost elements are maps or any other type:
user=> (mapcat identity [[1 2] [3 4] [5 6]])
(1 2 3 4 5 6)
user=> (vec (mapcat identity [[1 2] [3 4] [5 6]]))
[1 2 3 4 5 6]
apply concat
does that
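Both approaches on the original maps case, as a quick sketch:

```clojure
(def vvm [[{:a 1}] [{:b 2} {:c 3}]])

;; concat the inner vectors, then pour back into a vector
(vec (apply concat vvm))     ;; => [{:a 1} {:b 2} {:c 3}]
(vec (mapcat identity vvm))  ;; same result
```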
cljs:formula.events=> (int (char 97))
0
I'm having issues converting char to int on my cljs repl. This should not result in 0 right?
(int "a")
WARNING: cljs.core/bit-or, all arguments must be numbers, got [string number] instead at line 1 <cljs repl>
0
> (doc int): "Coerce to int by stripping decimal places." (yet another clj and cljs difference)
This is more a host language difference or vm difference to me
There is no char type in js. And on the jvm chars exist and are ints
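In ClojureScript a "char" is just a one-character string, so the usual workaround is the host string API rather than int (a sketch for a cljs REPL):

```clojure
;; ClojureScript: use JS interop's charCodeAt instead of int
(.charCodeAt "a" 0)  ;; => 97

;; and back again
(char 97)            ;; => "a" (a one-character string in cljs)
```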
they are not ints
> An int value represents all Unicode code points, including supplementary code points.
Is what I’m going from
The JVM type system has distinct types for char and short, even though the values of those two types have a one-to-one correspondence to each other.
on that note, how does it store chars with code points >64k? I know that Java uses UTF-16 encoding, but I guess you can't store an (e.g.) emoji in a char?
(more wonderment than a real problem I'm struggling with :p)
it uses multiple chars
there are apis that understand this (and some older ones that do not, so some care is required)
I haven't read this full Wikipedia page on UTF-16 to see how good a job it does of explaining this, but there is a range of 16-bit values called "surrogates" in the UTF-16 encoding of Unicode, such that a pair of surrogates can represent all Unicode code points that do not fit in a single 16-bit value: https://en.wikipedia.org/wiki/UTF-16
Yeah, right. Reading up on it (and what Alex says): if you charAt a character that needs 4 bytes to represent, you get half of it. Thanks 🙂!
the codepoint apis understand that stuff
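A small JVM Clojure sketch of that (😀 is U+1F600, outside the 16-bit range, so it needs a surrogate pair):

```clojure
(def smiley "😀")  ;; one code point, U+1F600

(count smiley)                                  ;; => 2, two UTF-16 chars
(Character/isHighSurrogate (.charAt smiley 0))  ;; => true, charAt gives half of it
(.codePointAt smiley 0)                         ;; => 128512 (0x1F600)
(.codePointCount smiley 0 (count smiley))       ;; => 1, the code-point-aware view
```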
Wondering how that came to be, but apparently surrogate pairs were not a thing when Java was first released. The more you know!
yeah, wasn't introduced until much later
My understanding is that when Unicode first started, they thought 16 bits would be enough for all code points. Java and a few other software systems started with UTF-16 with no need for surrogate pairs, then later added them when 16 bits was no longer enough.
The "History" section of the Wikipedia article on UTF-16 summarizes a few main points of who did what, although not necessarily exactly when (although the cited references might)
The history of computing is often fascinating 🙂
E.g.: we tried x, that turned out to be an oversimplification, then y and z, also an oversimplification, so now we started over with a, with little bits of legacy x, y, and z in there.
That's interesting. I thought they were the same thing because s/coll-of calls s/every. I didn't know there was a ::conform-all in coll-of. Thanks