@rauh, your snippet is interesting. I've had problems with using transit/read with regular JSON in the past. When values (not just keys) start with ^, the result is corrupted. Does the snippet fix that?
we really need a fast "JSON-string => clj-data-structure" implementation
js->clj is a real perf killer in my experience
@pesterhazy Yeah, it would probably also corrupt it. But if you delete these handlers, then it should be safe: https://gist.github.com/rauhs/301f59e0e2f94db4f22a4724fe50bd5f#file-parse-json-with-transit-cljs-L27-L36
IMO it would be pretty easy to write a super fast js->clj implementation. I just personally don't use it much, so I never bothered with it
cool, thanks for the pointer
All you need to do is manually write a for loop and create PAMs/PVs from it.
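A minimal sketch of that manual-loop approach, in plain JavaScript rather than actual CLJS internals (the `Map`/`Array` below are stand-ins for the PersistentArrayMap/PersistentVector a real implementation would build; names are made up for illustration):

```javascript
// Recursive converter that walks parsed JSON with hand-rolled loops,
// the shape a fast js->clj would take: no lazy seqs, no intermediate
// sequence allocations. In real CLJS you'd construct PersistentArrayMap
// and PersistentVector instances instead of the stand-in Map/Array here.
function convert(x) {
  if (Array.isArray(x)) {
    const out = new Array(x.length); // stand-in for a PersistentVector
    for (let i = 0; i < x.length; i++) {
      out[i] = convert(x[i]);
    }
    return out;
  }
  if (x !== null && typeof x === "object") {
    const keys = Object.keys(x);
    const out = new Map(); // stand-in for a PersistentArrayMap
    for (let i = 0; i < keys.length; i++) {
      out.set(keys[i], convert(x[keys[i]]));
    }
    return out;
  }
  return x; // primitives pass through unchanged
}
```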
right
might be a good weekend project
some people dislike js->clj but IMO there's no alternative when you're consuming JSON data from a server as clj data structures
One start would be to speed up keyword, though it's probably not the main cost for js->clj: https://dev.clojure.org/jira/browse/CLJS-2120
I'm sure it's part of it
@pesterhazy Benchmark this: https://gist.github.com/rauhs/c2f84c91ef311c1c080a93b852c50daa
@rauh, what do you mean by "not for node"?
@pesterhazy It caches any key it sees, so somebody can attack your server and just send nonsense JSON with lots of random keys.
ah because of security considerations
In CLJ they're weak refs and will be GC'd. In JS that doesn't exist
That's also the reason core doesn't do it. IIRC the first implementation actually did cache them
the cache needs to have a limit of N elements
Yeah, but that's pretty complicated to do, and might negate any perf benefits
shouldn't be too hard
you could have n objects with m keys max; once you reach n*m keys you can clear the first one again
that might work for a lot of situations
I don't think it's that easy. Now you have linear search. You'd need to store the access time every time you access it -> overhead. And then walk it every time, since JS doesn't give you a way to count the keys of an object (fast).
really?
And now I'm already faster with a simple new keyword creation.
Object.keys(o).length is not constant time?
that's nuts
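One way to sidestep both objections (linear key counts via Object.keys and per-entry access-time bookkeeping) is a two-generation rotating cache that tracks its own size. This is a hedged sketch of the "clear a generation when full" idea floated above, with invented names, not anything from core:

```javascript
// Bounded two-generation cache. Memory is capped at roughly
// 2 * maxPerGen entries; no per-entry timestamps and no linear
// scans, since Map.size is O(1) (unlike Object.keys(o).length,
// which allocates an array of all keys).
class RotatingCache {
  constructor(maxPerGen) {
    this.maxPerGen = maxPerGen;
    this.fresh = new Map(); // current generation
    this.stale = new Map(); // previous generation, dropped on rotation
  }
  get(key, make) {
    let v = this.fresh.get(key);
    if (v !== undefined) return v;
    v = this.stale.get(key);
    if (v === undefined) v = make(key); // e.g. allocate a new keyword
    if (this.fresh.size >= this.maxPerGen) { // rotate: drop oldest generation
      this.stale = this.fresh;
      this.fresh = new Map();
    }
    this.fresh.set(key, v);
    return v;
  }
}
```

Even so, as noted above, the extra bookkeeping per lookup may well negate the win over simply allocating a fresh keyword each time; only a benchmark would settle it.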
@rauh FWIW, when I wrote http://blog.fikesfarm.com/posts/2017-11-09-avoid-converting-javascript-objects.html, I checked whether :keywordize-keys true was affecting js->clj perf. It was not, really (only a 1.1× speedup by disabling it), compared to the overall speedup ranging between 26× and 68× from ditching js->clj as a whole and using goog.object/getValueByKeys. (See the post for details.)
Well, it was affecting perf, but the effect was small compared to the gross perf stuff going on in js->clj
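For readers outside Closure-land, the getValueByKeys approach from that post can be sketched in plain JavaScript (this is an illustrative equivalent of goog.object/getValueByKeys, not the Closure Library source):

```javascript
// Reach into the parsed JSON directly instead of converting the whole
// tree with js->clj first: only the values actually read are touched.
function getValueByKeys(obj, ...keys) {
  let cur = obj;
  for (const k of keys) {
    if (cur == null) return undefined; // missing path short-circuits
    cur = cur[k];
  }
  return cur;
}

const payload = JSON.parse('{"user":{"address":{"city":"Berlin"}}}');
getValueByKeys(payload, "user", "address", "city"); // "Berlin"
```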
@mfikes Yeah, my guess is that for and (vec (map ..)) are the main culprits. Just getting rid of those would likely be much faster
FWIW https://dev.clojure.org/jira/browse/CLJS-2341 is an improvement to js->clj. It gets rid of for, as you suggest
Perhaps the (vec (map ...)) could be another fruitful perf avenue to pursue
@mfikes I wish we had a js-for to generate native for loops 😕
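The kind of output such a hypothetical js-for macro would emit is just a bare index loop. A small JavaScript illustration of why that's attractive (function names invented for the example):

```javascript
// Native loop: no closures, no intermediate array allocations.
function sumOfSquares(arr) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) {
    total += arr[i] * arr[i];
  }
  return total;
}

// Equivalent higher-order version for comparison: allocates one
// intermediate array plus a closure per call.
function sumOfSquaresHOF(arr) {
  return arr.map((x) => x * x).reduce((a, b) => a + b, 0);
}
```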
@richiardiandrea FWIW, we have this kind of error message in our cljs build regularly: > Attempting to call unbound fn: #'clojure.spec.alpha/macroexpand-check at line 2529, column 11 in file /Users/Borkdude/.boot/cache/tmp/Users/Borkdude/dre/…/app/r0g/-tnd3jd/public/js/app.out/cljs/pprint.cljs I think I saw you talking about something similar here.
It’s not reproducible. When we touch a source file, it recompiles and it’s fine.
@borkdude there is something going on there; it's been hard to track down and I haven't managed to yet
@richiardiandrea Do you perhaps have parallel build on, like we do?
I wonder if it’s a race condition of some sort
talking about lumo only, where no parallel builds are done... so there might be some other race condition (maybe in common with cljs)
The error I posted here didn’t come from lumo, but I saw you posting something similar, so just sharing this fwiw
that's good to know, thanks a lot. And btw, I think you're not the first to notice that