Is there a way to measure how much memory a CLJS data structure takes up?
In Chrome DevTools this can be measured by comparing two heap snapshots: the first taken before the value exists, the second after the value of interest has been assigned to a global variable so it doesn't get GCed
Comparing the two snapshots shows the allocation diff
It can also be measured in Node: https://github.com/roman01la/js-memory-usage
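The Node approach can be sketched roughly like this (a minimal sketch, not the linked library's actual code; ideally run with `node --expose-gc` so `global.gc` is available and the numbers are less noisy):

```javascript
// Rough sketch: estimate a structure's retained heap in Node by
// comparing heapUsed before/after building it, keeping a reference
// so the value isn't collected. Assumption: running with
// `node --expose-gc` makes global.gc() available; without it the
// measurement is noisier but the idea is the same.
function measureHeap(build) {
  if (global.gc) global.gc();
  const before = process.memoryUsage().heapUsed;
  const value = build(); // keep a reference so the value isn't GCed
  if (global.gc) global.gc();
  const after = process.memoryUsage().heapUsed;
  return { bytes: after - before, value };
}

const result = measureHeap(() => new Array(100000).fill(0));
console.log(typeof result.bytes); // "number"
```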
I have a toJS function exposed here to make working with CLJS values easier from JS. It also converts MetaFns to normal functions so they are easier to work with:
https://github.com/borkdude/sci/blob/e895c4524a1a43568e4919315364978ea61df607/src/sci/impl/js.cljs#L36
Does it make sense to add that part to clj->js proper?
Maybe the use case is too niche.
you can already do that via the protocols clj->js uses
(defprotocol IEncodeJS
  (-clj->js [x] "Recursively transforms clj values to JavaScript")
  (-key->js [x] "Transforms map keys to valid JavaScript keys. Arbitrary keys are
     encoded to their string representation via (pr-str x)"))
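For instance, the MetaFn conversion could be hooked in from the outside along these lines (a sketch, not tested; it assumes cljs.core/MetaFn stores its wrapped function in an afn field):

```clojure
;; Sketch: plugging custom conversion into clj->js via IEncodeJS
;; instead of changing clj->js itself. Assumption: MetaFn's
;; underlying plain function lives in its afn field.
(extend-protocol IEncodeJS
  cljs.core/MetaFn
  (-clj->js [f]
    ;; unwrap the MetaFn into the plain function it wraps
    (.-afn f)))
```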
@thheller I'm interested in the bundle size visualisation, to see how much space several parts of sci occupy in the final bundle. I vaguely remember you had something for this in shadow-cljs. Any tutorial on how to set this up just for the visualization? I have almost 0 experience with shadow.
Thanks for the protocol pointer.
do you build for the browser? the build reports currently only work for browser builds
it's a library that can be used in both environments, so also for the browser
basically you just create a simple build config and run the build report normally
:builds {:sci {:target :browser :modules {:sci {:entries [sci.core]}}}}
https://shadow-cljs.github.io/docs/UsersGuide.html#_build_report
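Per the linked docs, generating the report then looks roughly like this (assuming a shadow-cljs.edn containing that build config; report.html is an arbitrary output name):

```shell
# Generate a standalone HTML build report for the :sci build
# defined in shadow-cljs.edn (output filename is arbitrary)
npx shadow-cljs run shadow.cljs.build-report sci report.html
```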
thanks!
you may need to setup an actual example use though
if the library is written "correctly" all the code will be removed as dead code otherwise
since nothing is actually used if you just import the ns
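An example entry namespace for such a report could look like this (a sketch: the ns and function names are made up; sci.core/eval-string is sci's actual entry point):

```clojure
;; Hypothetical entry ns for the build report: actually calling into
;; sci so the Closure compiler can't eliminate it as dead code.
(ns report.main
  (:require [sci.core :as sci]))

(defn -main []
  (println (sci/eval-string "(inc 41)")))
```

The build config's :entries would then point at report.main instead of sci.core.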
Right. Also a lot of the core functions are pulled in by the lib, but they might already be used by the app otherwise, so it wouldn't be 100% on the lib
Just to get going for now, I'm getting:
Can't find 'shadow.cljs.devtools.cli' as .class or .clj for lein run: please check the spelling.
I've set :lein true in shadow-cljs.edn. Should I add a dep to project.clj?
yes. include the dep
the build report shows how much your code contributed
so cljs.core code is shown separately
right
that's likely the (def clojure-core ...) etc. which can never be removed
acceptable size though. so probably best to set up an actual example with the common uses
yeah, that was the point of def clojure-core: to hold on to all those functions, because you don't know up front which ones the user is going to use in their program string.
When I compile sci to an npm library (advanced compiled JS) I end up with ~350KB unzipped
there's some caveats to this approach though
depending on the data structure and the way it is operated upon, it can be subject to either optimizations or deoptimizations
like this
You mean different runtimes, right? That's true. I imagine the numbers can differ between browser JS VMs as well
is it meant for npm consumption by js clients?
in that case, using the same runtime but changing a for loop slightly reduced memory usage by 30 to 70%
it didn't affect node 10 but did affect node 12
this was the change
for (let prop in obj) {
-> for (let prop of Object.keys(obj)) {
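In isolation the two loop forms visit the same keys for a plain object; the difference was engine-dependent memory behavior (node 12 vs node 10 above). A minimal sketch of the rewrite:

```javascript
// The two loop forms from the change above, side by side.
// For a plain object both visit the same own enumerable keys;
// the observed difference was memory usage, which varies by engine.
const obj = { a: 1, b: 2, c: 3 };

const viaForIn = [];
for (const prop in obj) viaForIn.push(prop);

const viaForOf = [];
for (const prop of Object.keys(obj)) viaForOf.push(prop);

console.log(viaForIn.join(",")); // "a,b,c"
console.log(viaForOf.join(",")); // "a,b,c"
```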
hehe, reminded me about this talk https://www.youtube.com/watch?v=r-TLSBdHe1A
I haven't seen it, but this sounds like it's just up my alley 🙂
there's a funny moment where he describes how having a shorter user name on his machine caused GCC to produce 100x faster code 🙂
you know what, that doesn't surprise me at all
webpack had a similar problem around two years ago
@filipematossilva That's certainly one of the goals: https://www.npmjs.com/package/@borkdude/sci
it's similar insofar as names and paths would be concatenated indefinitely
so the longer your file paths were, the quicker you'd hit the memory limit
You can see an example of it here: https://github.com/borkdude/sci-birch
350kb is a fair bit but for nodejs consumers it's not the worst thing in the world
especially if it's a single lib
typescript, for instance, is 7mb in a single file
and TS ships several versions of the compiler, to cater for different consumers
yeah. I guess TS is just a dev dependency though
not quite
in your case you want to interpret cljs
but there are plenty of libs that want to interpret ts
like ts-node
oh, then I'm fine I guess 🙂
or that require TS to be compiled with, like Angular
I was more worried about front-end bundle size
TS is a total of 47mb on a clean install
it's about 80kb zipped
which is fine maybe, trade-off
@alexmiller hey I built a Google Closure Library artifact today - could we get a release?
Sure, do I just need to press the button on the build box?
no you have to do those sonatype steps
we talked about this last time - this could be automated
the build has the instructions at the top
@borkdude you can generate source maps and inspect with any bundler inspector from npm
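For example, with source-map-explorer (one of several such npm inspectors; the file names are placeholders for the compiled output and its source map):

```shell
# Visualize what occupies an advanced-compiled bundle, given the
# compiled JS plus the source map generated for it
npx source-map-explorer out.js out.js.map
```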
thanks! @thheller already guided me to a solution with shadow-cljs which works quite nicely
that dimly rings a bell
so you did the build, I just need to do the sonatype part?
I see two staging repos, one for google-closure-library-third-party and one for google-closure-library. I assume these both need to be released?
released both, I assume will take a bit to show up
google-closure-library 0.0-20191016-6ae1f72f and org/clojure/google-closure-library-third-party 0.0-20191016-6ae1f72f are out there now
the third party jar is ~50% the size of the last release. not sure if that's weird
@alexmiller yeah both need to be released
thanks!
oh nice, thanks for the new releases!
This is awesome. Thank you both! What I need this for is to measure the relative size as a data structure grows, so I think Roman's suggestions should suffice.