is the ctx effectively immutable? what if it could be created in advance?
this kind of api would be sweet:
(def my-eval (sci.api/evaluator {:preset ..., :bindings ..., :namespaces ...}))
(my-eval "(+ 1 1)")
; => 2
@ikitommi This exists. It works like this:
(def ctx (sci/init {:preset ...}))
(sci/eval-string* ctx "(+ 1 1)")
However, the ctx is not immutable: it remembers all effects (defs, etc). This is used for REPLs, etc.
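For instance (a minimal sketch; repl-ctx is just an illustrative name):
(require '[sci.core :as sci])
(def repl-ctx (sci/init {}))
(sci/eval-string* repl-ctx "(def answer 42)")
(sci/eval-string* repl-ctx "answer")
;; => 42, the def made by the first eval is visible in the second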
I think I'm missing the point, since you already have eval-string* in your benchmark as well.
if the ctx is not immutable, should eval-string* not be used?
… with reused ctx.
if sci could create an immutable prototype of the ctx ahead of time out of the options, e.g. with sci.api/evaluator, then calling it at runtime would just add the mutable part in to make a full mutable ctx. might save some micros?
@ikitommi Yes, it should be used with re-used ctx in a lot of cases, that's exactly the point, e.g. in a REPL
hmm. but my case is non-repl, would it benefit from not re-creating the full ctx on every eval? looking at the code, most of the ctx is immutable and doesn’t need to be re-created on every call?
what could go wrong in your case if you did re-use the stateful ctx?
;; (scio = sci.impl.opts, scii = sci.impl.interpreter; see the requires further down)
(def ctx (scio/init {:preset :termination-safe}))
;; customer 1 writes using a web-ui
(scii/eval-string* ctx "(def company1-secret 123)")
;; customer 2 reads it
(scii/eval-string* ctx "{:hacked company1-secret}")
; => {:hacked 123}
that is, if you evaluate sci strings in a shared environment, e.g. on the backend, and persist things into a db.
So a new API could save you 9µs at most. Not sure if that's worth it?
I have another use case, but I agree with @ikitommi that for safety reasons it would be worth it. Although if you give up performance (and in my case it would be more), you could fix that
I’m currently thinking of working around this by not allowing def and other mutating operations, but an immutable snapshot of the ctx would be nice for me
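For instance, a sketch of that workaround using sci's :deny option (the denied list and exact error message are illustrative):
(def no-def-ctx (sci/init {:preset :termination-safe
                           :deny ['def 'defn 'intern 'alter-var-root]}))
(sci/eval-string* no-def-ctx "(def x 1)")
;; => throws ExceptionInfo, e.g. "def is not allowed!"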
the perf is not that important here, just testing it. Couldn’t find the issue about “lean api”, was pretty sure there was one. The idea: you could have an api that would NOT include the default namespace mappings and would return an optimized eval function. One could give all the var-bindings as options to the evaluator.
currently, malli.core loads in ~500ms; sci.core with defaults adds 2000ms on top and makes the js bundle quite big. trying to integrate dynaload, so that one could control whether sci should be bundled in or not. But still, it’s all (big) or nothing.
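A rough sketch of the dynaload idea (names taken from the dynaload README, not from malli's actual code):
(require '[borkdude.dynaload :refer [dynaload]])
;; resolves sci.core/eval-string lazily, so sci stays an optional
;; dependency and out of the bundle unless something requires it
(def sci-eval (dynaload 'sci.core/eval-string))
(sci-eval "(+ 1 2 3)") ;; => 6 when sci is on the classpath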
@ikitommi This is that issue: https://github.com/borkdude/sci/issues/357
having a custom api would allow me to write malli.sci, which would have a relevant subset of sci, that would:
• be useful enough
• load faster
• create small (enough) bundles on js
• be a bit faster at runtime (the ~9µs per eval)
I think issue 357 would help in getting leaner builds; not sure about initial load time though, since the parser based on tools.reader, etc., still has to be loaded
@dominicm released a tool yesterday which shows load time per namespace. It could be used to get a sense of where the load time currently goes: https://sr.ht/~severeoverfl0w/slow-namespace-clj/
(time (require '[sci.impl.namespaces]))
"Elapsed time: 2018.835938 msecs"
🙂 There is a suggestion in the issue for how you can experiment with leaner builds right now, by just overriding the sci.impl.namespaces file in your project. Might be worth taking a stab at that. When this approach works, maybe we can make a nice API that will generate this file for you.
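To make the override idea concrete, a very rough sketch of a project-local sci.impl.namespaces (the real file is much larger and wraps vars with sci metadata; treat this as a shape sketch under those assumptions, not a drop-in replacement):
(ns sci.impl.namespaces)

;; expose only the vars your embedded language actually needs
(def clojure-core
  {'+ +, '- -, '* *, 'inc inc, 'dec dec, 'map map, 'filter filter})

(def namespaces
  {'clojure.core clojure-core})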
Am I right that there are two things being mixed here: immutable ctx and lean builds?
Yes. It's a bit of a fragmented discussion 🙂
ok just checking 🙂 Maybe I was missing something
Btw, I have a quick workaround for the immutable ctx here:
(def mutable-ctx (sci/init {:preset :termination-safe}))
;; snapshot: deref the mutable :env atom into a plain immutable map
(def immutable-ctx (update mutable-ctx :env deref))
;; re-wrap the snapshot in a fresh atom to get a full, independent ctx
(defn init-ctx [immutable-ctx] (update immutable-ctx :env atom))
(sci/eval-string* (init-ctx immutable-ctx) "(+ 1 2 3)")
;;=> 6
(sci/eval-string* (init-ctx immutable-ctx) "(def x 1) x")
;;=> 1
(sci/eval-string* (init-ctx immutable-ctx) "x")
Execution error (ExceptionInfo) at sci.impl.utils/throw-error-with-location (utils.cljc:54).
Could not resolve symbol: x [at line 1, column 1]
This can be used for benchmarking, to check how much this actually saves in performance.
Ah smart. I’ll use this 🙂 For me it’s about caching between requests, so a small saving could make a big difference with a lot of traffic. But I think this example would fix that
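For illustration, a hypothetical ring-style handler built on that snippet (the handler and request shape are assumptions):
;; every request evaluates against a fresh mutable copy of the shared
;; immutable prototype, so one customer's defs can't leak to another
(defn handler [request]
  {:status 200
   :body (pr-str (sci/eval-string* (init-ctx immutable-ctx)
                                   (slurp (:body request))))})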
Made an issue here: https://github.com/borkdude/sci/issues/369
Thanks!
(ns user
(:require [criterium.core :as cc]
[sci.core :as sci]
[sci.impl.interpreter :as scii]
[sci.impl.opts :as scio]))
;; 25µs
(cc/quick-bench (sci/eval-string "(+ 1 1)" nil))
(def ctx (update (scio/init nil) :env deref))
;; 13µs
(cc/quick-bench (scii/eval-string* (update ctx :env atom) "(+ 1 1)"))
and thanks to @dominicm for slow, here’s the output for malli:
clj-time : 435.44744099999997 msecs
clojure : 1200.8413420000002 msecs
clojure.core : 146.148892 msecs
clojure.core.specs : 48.750516 msecs
clojure.java : 49.91768999999999 msecs
clojure.spec : 28.553624 msecs
clojure.spec.gen : 0.386083 msecs
clojure.spec.test : 27.847193 msecs
clojure.test : 446.680308 msecs
clojure.test.check : 402.867701 msecs
clojure.test.check.clojure-test : 12.399544 msecs
clojure.tools : 331.46736000000004 msecs
clojure.tools.namespace : 276.347247 msecs
clojure.tools.reader : 24.264338 msecs
clojure.tools.reader.impl : 1.6841169999999999 msecs
com.gfredericks.test.chuck : 510.495037 msecs
com.gfredericks.test.chuck.regexes : 45.83087 msecs
edamame : 47.460241 msecs
edamame.impl : 43.228698 msecs
malli : 870.768437 msecs
sci : 1624.3018359999999 msecs
You're more than welcome, it was a fun little tool to write. I've had it sitting in my snippets for ages from doing debug. It's a fun game to play.
What about more specifics? sci consists of multiple namespaces
I'm surprised that clojure.test takes more time to load than clojure.core. Is that maybe because clojure.core is AOT-ed and clojure.test is not?
Or maybe the benchmark isn't that reliable?
Things like AOT are not factored by the tool.
I believe the tool works, I'm just wondering about the results I'm seeing
It's definitely curious.
Oh
Just realized, so these are the groups
So it's clojure.test and clojure.test.check, etc all bundled up
sci.addons : 6.825647 msecs
sci.impl.unrestrict : 9.062802 msecs
sci.impl.main : 12.817414 msecs
sci.impl.parser : 16.701379 msecs
sci.impl.opts : 19.185607 msecs
sci.impl.records : 19.598105 msecs
sci.impl.multimethods : 20.508318 msecs
sci.impl.hierarchies : 20.85216 msecs
sci.impl.doseq-macro : 21.41064 msecs
sci.impl.macros : 22.040018 msecs
sci.addons.future : 25.766132 msecs
sci.impl.interop : 29.472987 msecs
sci.impl.fns : 31.424244 msecs
sci.core : 33.194666 msecs
sci.impl.max-or-throw : 33.917608 msecs
sci.impl.destructure : 37.038555 msecs
sci.impl.for-macro : 38.409595 msecs
sci.impl.io : 48.15481 msecs
sci.impl.protocols : 49.136158 msecs
sci.impl.types : 50.514564 msecs
sci.impl.utils : 52.204186 msecs
sci.impl.interpreter : 152.375188 msecs
sci.impl.analyzer : 155.308065 msecs
sci.impl.vars : 236.363401 msecs
sci.impl.namespaces : 694.504691 msecs
tested with empty sci.impl.namespaces too:
sci.addons : 8.37424 msecs
sci.impl.main : 13.100328 msecs
sci.impl.namespaces : 14.493458 msecs
sci.impl.unrestrict : 19.593454 msecs
sci.impl.doseq-macro : 28.334558 msecs
sci.impl.macros : 35.316999 msecs
sci.impl.multimethods : 36.71492 msecs
sci.impl.parser : 37.466406 msecs
sci.addons.future : 39.400002 msecs
sci.impl.records : 39.564858 msecs
sci.impl.destructure : 45.046122 msecs
sci.impl.hierarchies : 47.945577 msecs
sci.impl.interop : 50.139426 msecs
sci.impl.opts : 53.413407 msecs
sci.impl.for-macro : 58.779449 msecs
sci.impl.fns : 65.738822 msecs
sci.impl.max-or-throw : 69.958636 msecs
sci.impl.io : 73.913664 msecs
sci.impl.types : 88.955042 msecs
sci.impl.protocols : 89.97283 msecs
sci.impl.utils : 111.602038 msecs
sci.core : 123.166932 msecs
sci.impl.analyzer : 329.678756 msecs
sci.impl.vars : 348.128747 msecs
sci.impl.interpreter : 409.58255 msecs
actually it totals to more, I guess there is variance, but as the namespaces load other ns’es, the order might matter here?
so what does this tell us about sci.impl.namespaces?
maybe run this in criterium as well? 😉
you can use :reload-all to force reloading of namespaces
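e.g.:
;; :reload-all reloads the namespace and everything it requires,
;; so repeated timings aren't skewed by already-loaded dependencies
(time (require '[sci.impl.namespaces] :reload-all))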
another round with just totals:
1. empty namespaces:
sci : 1244.0422230000001 msecs
sci.addons : 15.879876 msecs
sci.impl : 1161.9842830000005 msecs
2. default namespaces:
sci : 1836.7869400000002 msecs
sci.addons : 25.766132 msecs
sci.impl : 1771.0004950000002 msecs
yes, the test is flawed at least 🙂
but, with the new clojure client compile thing, I think the JVM load time is not a big problem, that will go away with 3rd party tooling (compiling all non-project files ahead-of-time).
new clojure client compile thing?
running the compile manually and adding the compilation results to the classpath: https://clojure.org/guides/dev_startup_time
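Roughly, per the guide (assumes a classes/ directory that exists and is on the classpath; sci.core is just an example namespace):
;; AOT-compile a namespace from the REPL; everything it loads is
;; compiled along with it and written to *compile-path* (classes/)
(compile 'sci.core)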
right
looking for some tooling on top of this that would only compile external dependencies.
tried it in a big project, dropped compilation from 30 sec to 4 sec, but it fails if the project files are compiled too, as changes are not captured after that. for immutable / external files it should be good
sci bundle with default namespaces vs. with empty (no vars bound) namespaces:
I see 3 different numbers here. Gzipped vs not gzipped, but also optimized. What does optimized mean?
What you said about changes is not correct. Clojure uses the last modified time to decide.
Order is handled by my tool; it builds a dependency graph. But only using the ns macro, so it will not work with dynamic requires.
The core team is working on automating compile, as clj gets involved. I'd started working on a tool, but Alex said he already was, and that there were core problems that needed solving before it could work reliably.
I believe tools.namespace cannot handle AOT artefacts, which may explain your problems.
I think that's the impact of Google Closure
might be that. Things broke on integrant reset
That would be it then. I haven't investigated why it has issues. It touches on internals a little, so I expect it will be frustrating to discover.