(isn't container support the default now in jdk11?)
@pmonks "I don’t know why that’s happening (this is my first time using depstar, so I may have messed something up there)" -- no, that's just how AOT works. In order to compile your main namespace, it has to load (compile) all the namespaces it depends on so you get pretty much your whole program compiled to .class
files. That's just normal. What (:gen-class)
does is flag that particular namespace to be generated as a class with methods -- functions beginning with -
(by default) will become named methods on that class, instead of (what Clojure usually does which is) to compile each function to its own class with an .invoke()
method. So your -main
function becomes a public static
method of the class (which is what Java expects).
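(For anyone following along, a minimal sketch of that shape; the namespace name matches the mybot.main used later in this thread, and the body is made up:)
(ns mybot.main
  (:gen-class))

;; Because of (:gen-class), AOT compilation turns -main into a public
;; static main method on the mybot.main class, which is what
;; `java -cp the.jar mybot.main` expects to find at startup.
(defn -main [& args]
  (println "bot starting with" args))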
I didn’t realise AOT-compiled-Clojure couldn’t “thunk” into on-demand-compiled-Clojure.
Leiningen tries to do some fancy post-AOT clean up, I think, to delete a lot of .class files that aren't directly part of your project so you don't get "everything" compiled? Can't remember whether that is the default or an option.
(I almost never use AOT, so have minimal experience with it)
Yeah and all of my previous experience has been with leiningen.
If you don't want other nses compiled, don't :require them in your main ns -- instead use runtime require/resolve (or the nice new requiring-resolve in 1.10).
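A rough sketch of that pattern, with a hypothetical mybot.commands namespace standing in for whatever you'd otherwise :require:
;; No (:require [mybot.commands ...]) in the ns form, so it doesn't get
;; AOT'd along with the main namespace; it's loaded on first use instead.
(defn handle [msg]
  ;; Clojure 1.10+: loads the namespace (if needed) and returns the var.
  ((requiring-resolve 'mybot.commands/handle-command) msg))

;; Pre-1.10 equivalent:
;; (require 'mybot.commands)
;; ((resolve 'mybot.commands/handle-command) msg)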
Thanks - looking now…
I don’t particularly care either way, tbh. Have just heard folks here recommend against AOTing much, if any, of a codebase (if possible).
One good test to run here would be to build your uberjar without -C and -m on depstar -- which would not AOT anything -- and then run it with java -cp path/to/the.jar clojure.main -m mybot.main. That will do compile-on-demand, and if it shows the same memory usage as the original clj run, then compile-on-demand might be responsible for the larger memory usage.
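Concretely, the comparison would presumably look something like this (the depstar build lines are just a sketch of the usual hf.depstar.uberjar invocation with a hypothetical :depstar alias; the java lines are the part that matters):
# no -C / -m: nothing AOT'd, Clojure source compiled on demand at startup
clojure -A:depstar -m hf.depstar.uberjar path/to/the.jar
java -cp path/to/the.jar clojure.main -m mybot.main

# with -C / -m: mybot.main (and everything it requires) AOT'd into the jar
clojure -A:depstar -m hf.depstar.uberjar path/to/the.jar -C -m mybot.main
java -cp path/to/the.jar mybot.main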
Ok now that my 5pm US-PT (actually midnight UTC) batch job is done, I’ll give that a try. 😉
So locally, removing AOT seems to have resumed the “higher memory usage” scenario.
Next up, heroku env.
So it sounds like there's some overhead from loading source and compiling on demand that doesn't go away? I'm a bit surprised it doesn't all get GC'd eventually.
Yeah me too. Though I only have up to 24 hours of continuous operation available to look at, since Heroku forcibly restarts the dyno (VM) at least once per day. (As mentioned above, that includes at least 24 calls to (System/gc).)
Deployed. Now we wait for a bit for it to show up on the Heroku dashboard.
I can't say I noticed a difference at work when we switched from compile-on-demand to AOT, but then our smallest process runs with 1G heap so a few 10s of MBs wouldn't really be noticeable...
Makes sense. I can definitely see the CPU cost of on-demand compilation at startup showing up. 😉 Memory graph still a bit inconclusive…
Calling System/gc is silly. Predicting what effect (if any) calling that method has is hard, so calling it and then trying to reason on the basis of having called it is not helpful.
Sure, but it can’t hurt in a non-interactive app.
You should look at the GC logs, and look at the sizes of the generations.
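(On JDK 11 that's the unified logging switch, e.g. something along the lines of:
java -Xlog:gc*:file=gc.log -cp path/to/the.jar clojure.main -m mybot.main
which records each collection along with the resulting heap/region and metaspace sizes, so you can see which area is actually growing.)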
And even a multi-second pause in this bot is preferable to exceeding memory (which puts me in Heroku “R14” jail…).
how will you be in jail if the memory is bounded?
Calling System/gc will never stop you from running out of memory
That is how automatic garbage collection works
By exceeding the 512MB quota for everything inside the VM (JVM heap + JVM metaspace + memory required by any other processes + OS memory requirements (caches, etc.) + …).
It can only monkey with the heuristics in the gc
Regardless, I tested it first (obvs) and it improved memory utilisation over longer periods of time.
You should set a lower max heap for the jvm then
I don’t set the heap - the container does.
Remember I’m running with -XX:+UseContainerSupport.
Then have the container do a better job
Not to mention that controlling the heap doesn’t help with metaspace anyway.
If you are exceeding the limits
(which is where on-demand compiled classes etc. go)
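(If metaspace is what's blowing the dyno's quota, one blunt hedge would be to cap it explicitly alongside the container-derived heap limit; the numbers here are invented, not tuned for a 512MB dyno:
java -XX:+UseContainerSupport -XX:MaxRAMPercentage=60 -XX:MaxMetaspaceSize=128m -cp path/to/the.jar mybot.main
)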
Know anyone who works at Heroku to improve their VM impl.? Cause I don’t. 😉
It’s not a container.
It’s a “dyno”.
I don’t provide anything but the stuff that gets fed to the JVM.
(and while Heroku also supports Docker, I have negative interest in switching to that model of app construction)
So it’s still a bit early to be sure, but it looks like it was AOT that was saving much of the memory:
(that second “hump” is when I deployed with AOT disabled)
You should check the different generations; compilation may result in extra old-generation garbage that is never collected because it never needs to be.
From what I’m seeing most of the savings are in the metaspace, not the heap.
(recalling that old generation is on-heap)
Which makes sense if on-demand compiled Clojure code is somehow more off-heap-expensive (e.g. metaspace, code-cache) than AOT compiled Clojure code.
In your original numbers, you only showed 11MB difference in metaspace, but 44MB in heap.
Yeah that’s local.
if you can AOT, AOT
Aye, AOT as the last step of building an application, just prior to production deployment, seems like a reasonable step to me.
@ghadi I probably will, given the memory restrictions in this environment. That doesn’t help me understand why this is happening though.
To date I’ve never used AOT, and if there is a substantial (~40% memory saving, for this one bot) saving to be had, that could be a discriminating factor in choosing AOT vs on-demand compilation for other apps in future.
You'll need VisualVM or something similar to see what's actually going on inside the heap/metaspace...
@seancorfield no easy way to hook that up to Heroku, though it runs an agent that does break down on-heap vs off-heap memory usage for their dashboard.
Right, I meant locally, with a low heap size to vaguely mirror what's happening on Heroku.
Yeah - I’m doing that, but the absolute numbers are pretty different than what I see on Heroku. Possibly because it’s hard to limit the JVM’s off-heap memory usage (-Xmx only affects the heap), so it’s happily slurping up the (copious, compared to Heroku) memory on my laptop.
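(One local option that's closer to what Heroku's agent measures is the JVM's native memory tracking, roughly:
java -XX:NativeMemoryTracking=summary -Xmx256m -cp path/to/the.jar clojure.main -m mybot.main
jcmd <pid> VM.native_memory summary
That breaks the process's committed memory down into heap, class/metaspace, code cache, thread stacks, etc., so the AOT and on-demand runs can be compared column by column.)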
BTW, here’s a more complete view of heap vs off-heap usage reported by heroku. The two deployments (the first to AOT uberjar, then back to non-AOT uberjar) are pretty obvious:
Stand by - I accidentally chopped off “total memory usage”.
we're in the wrong channel, btw
We are now, yes. Originally this looked like tools.deps issue.
I concur with @hiredman’s suggestion to get better details about heap regions.
there are several interesting metrics depending on the GC used (G1 by default on 11)
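(For a quick look at those without attaching VisualVM, jstat against the running process works, e.g.:
jstat -gcutil <pid> 5000
which prints survivor/eden/old/metaspace utilisation plus GC counts and times every 5 seconds.)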
Here’s a better picture:
Even at this granularity, it’s pretty clear that the delta in the non-heap memory usage is greater than that on-heap.
i.e. looking at the heap regions likely won’t help
(and yes, the default Heroku dashboard sucks - it's deliberately set up to encourage one to pay for a better one)
Sadly I have to step away from this now, but it looks like AOT is causing the difference, even if it’s unclear why. Thanks for indulging me on this!
not a mystery to me. compiling is more work than not compiling
More work == CPU load, sure. But that work, once complete, shouldn’t leave long-lived garbage hanging around.
Well, unless the JVM does tricks for AOT-compiled classes that it can’t use for dynamically generated (non-disk-resident) classes (e.g. mmap’ing pre-compiled .class files rather than keeping them resident in memory at all times, unloading disk-resident .class files once they’ve been JITted, etc.).
Or if the Clojure compiler has caches that don’t get primed if it’s not called upon to dynamically compile some Clojure code.
A new clj release candidate (what we were formerly calling "dev" releases) is now available - 1.10.1.619 (see https://github.com/clojure/homebrew-tools#version-archive-tool-releases for installation info)
• Fixes -Spom regression in overwriting groupId in existing pom.xml files
• Improvements in error handling for -X
• New: -F execution specifier to invoke an arbitrary function that takes a map at the command line: clj -Fclojure.core/pr :a 1 :b 2
=> {:a 1 :b 2}
Awesome, I already wondered why one had to specify a function in deps.edn instead of just being able to call any function
(! 1038)-> clojure -Sdeps '{:aliases {:foo {:fn clojure.core/prn :args "test"}}}' -X:foo
Invalid :args for exec, must be map or alias keyword: "test"
(! 1039)-> clojure -Sdeps '{:aliases {:foo {:fn clojure.core/prn :args :bar} :bar "test"}}' -X:foo
"test"
🙂 Thanks for the quick fix to pom.xml groupId!
Why not just pass the entire map as one cmd line arg?
$ clojure -Fclojure.core/pr '{:a 1 :b 2}'
Key is missing value: {:a 1 :b 2}
Or support both? Not a big deal, but if the map is coming from a file, this could get a bit weird.
because quoting sucks and it's more of a match to the -X
quoting sucks, but I bet you will soon be quoting the individual args as well, there is no escape 😉
but I implemented both at different points :)
no pun intended :)
PowerShell's -X:alias still broken 😞
have you missed my suggested fix that handles splits to -X: and alias?
if ($arg -eq "-X:") {
$sym, $params = $params
$ExecAlias += "$arg$sym", $params
} else {
$ExecAlias += $arg, $params
}
yeah, I haven't had a chance to look at that yet
it's in the queue!
@alexmiller I'm a bit surprised this first version fails (in 1.10.1.619):
seanc@DESKTOP-QU2UJ1N:~/clojure$ clojure -Sdeps "{:aliases {:foo {:fn clojure.core/prn}}}" -X:foo :a 1
Invalid :args for exec, must be map or alias keyword: nil
seanc@DESKTOP-QU2UJ1N:~/clojure$ clojure -Sdeps "{:aliases {:foo {:fn clojure.core/prn :args {}}}}" -X:foo :a 1
{:a 1}
It's not a big deal but it worked before and I wasn't sure if it was a deliberate change or just an artifact of the implementation.
Yeah, that should probably work