Is there some issue with (name nil) => NPE?
In the REPL it doesn't occur, but in my app, when I do (name nil), it throws with an empty stacktrace.
it happens in my REPL
(ins)user=> (name nil)
Execution error (NullPointerException) at user/eval144 (REPL:1).
null
it should always NPE
But (.printStackTrace *e)
should show a stack for you
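e.g. right after that error in the REPL (frames elided, output shape from memory):

user=> (.printStackTrace *e)
java.lang.NullPointerException
	(...full stack of frames printed to *err*...)
nil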
From my CloudWatch logs:
:cause nil
:via
[{:type clojure.lang.ExceptionInfo
  :message "java.lang.NullPointerException in Interceptor :my-cool-route - "
  :data {:execution-id 123, :stage :enter, :interceptor :my-cool-route, :exception-type :java.lang.NullPointerException, :exception #error {
         :cause nil
         :via
         [{:type java.lang.NullPointerException
           :message nil}]
         :trace
         []}}
  :at [io.pedestal.interceptor.chain$throwable__GT_ex_info invokeStatic "chain.clj" 35]}
 {:type java.lang.NullPointerException
  :message nil}]
:trace
Not sure if it's something with pedestal error-handling
if the problem is the empty stack traces, it's because of
OmitStackTraceInFastThrow
which is enabled by default
right - is that the objection, that it doesn't print the whole trace?
you can set
-XX:-OmitStackTraceInFastThrow
to disable it
:trace []
???
should I use -XX:-OmitStackTraceInFastThrow
in prod?
well, it will cost you a very marginal performance hit as you're disabling an optimisation
but depending on what you're deploying it may make no noticeable difference
I've definitely lost a lot of time from hard to reproduce prod errors with elided stack traces
many people do set this in prod
seems like it should only be a cost if you are throwing a lot of exceptions (in which case you might want to know why that is :)
maybe using core.match in a hot loop :P
If some code decided to use JVM exceptions for normal, expected-case control flow, catching them internally, that could see a performance degradation, but hopefully there are very few libraries that use exceptions that way
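e.g. the kind of pattern meant here - a contrived sketch, not from any particular library:

;; an exception used for an expected case: non-numeric input is
;; "normal" here, but still pays the throw/catch cost internally
(defn parse-long-or-nil [s]
  (try
    (Long/parseLong s)
    (catch NumberFormatException _ nil)))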
It seems I’ve only ever seen NPE exceptions affected by the OmitStackTraceInFastThrow
default
Interesting to know. thanks!
which is interesting. I’d think it’d apply to more cases
In practice though, somehow I’ve never seen it
at one point there were common jvm internals that worked that way, but I would be surprised if that wasn't optimized
I too recall finding some potentially hotter paths I thought were going to use try/catch logic for control flow in the core. I can't remember specifics now. I actually assumed some of that likely hasn't changed.
Some class loader logic is based on catching ClassNotFoundException
I believe - but perhaps that isn’t considered a “hot path”
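something along these lines, roughly (a sketch, the helper name is mine):

;; probe for a class by catching ClassNotFoundException -
;; the pattern some classloader-adjacent code relies on
(defn class-exists? [^String class-name]
  (try
    (Class/forName class-name)
    true
    (catch ClassNotFoundException _ false)))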
I'm pretty sure a lot of this was optimized
they're nowhere near as expensive as they once were.
@mikerod I'm trying to think of a case where I'd load new classes in a loop as the most perf critical part of my program
😄
I guess if my program is meant to be short lived, that would in fact be the case, but then why am I even using clojure on the jvm in that case...
If your process is very long-lived, you can hit the stacktrace optimization fairly quickly even if you're only dealing with occasional exceptions. We have that optimization disabled in production for that reason -- and I disable it in dev too because my REPL tends to run for days so I hit that optimization in the REPL too (my main work REPL has "only" been running since last Thursday, but my current open source project REPL has been running for over two weeks at this point!).
Yeah @noisesmith I think the general advice of it’s probably fine to disable OmitStackTraceInFastThrow
makes sense to me too. I think Alex basically said it, but it's probably one of those things that would make more sense left un-optimized until/unless you actually hit some unavoidable issue. I find it weird that it's on by default.
That said, all production apps I've run have always left the default on - and we've had a bit of pain before with these empty stacks in our logs. 💀
it might make sense for Clojure CLI to set this when starting a repl
If Clojure CLI doesn't do that by default, then this certainly seems like a strong example motivating some kind of "alias that is enabled by default" in deps.edn, or something similar.
well, could just be hardcoded, doesn't necessarily need that
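in the meantime you can do it yourself with an alias, something like this (the alias name is arbitrary):

;; deps.edn
{:aliases
 {:full-traces {:jvm-opts ["-XX:-OmitStackTraceInFastThrow"]}}}

and then start your REPL with clj -A:full-traces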
Quote from release notes about this flag: "The compiler in the server VM now provides correct stack backtraces for all "cold" built-in exceptions. For performance purposes, when such an exception is thrown a few times, the method may be recompiled. After recompilation, the compiler may choose a faster tactic using preallocated exceptions that do not provide a stack trace. To disable completely the use of preallocated exceptions, use this new flag: -XX:-OmitStackTraceInFastThrow
." which is kind of interesting.
so first, they "bake in" prebuilt stacktraces for built in exceptions! and second, once a particular exception has been thrown enough, it's only on recompilation that the compiler chooses the faster omitted stack throw.
which implies a lot more nuanced implementation than what I pictured in my head
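you can actually watch it kick in from a REPL with the default flags - a rough sketch, and how many throws it takes (or whether it triggers at all) depends on the JIT:

;; keep throwing NPEs until the JVM starts handing back the
;; preallocated, trace-less exception (or we give up)
(loop [i 0]
  (let [frames (try
                 (name nil)
                 (catch NullPointerException e
                   (alength (.getStackTrace e))))]
    (if (and (pos? frames) (< i 100000))
      (recur (inc i))
      [i frames])))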
Coming from @alexmiller’s comment in a thread on the 1.10.3-rc1 announcement: when I talk to tooling maintainers, they pretty much all say that prepl
is too limiting for them to use -- because you can't interrupt evaluation and you can't "partially consume" a potentially infinite sequence.
I'd love to see more tooling based on just plain socket REPL or prepl but it seems that without those features, tooling maintainers are mostly going to stick with nREPL. Even in Chlorine/Clover -- which can connect to a plain socket REPL -- it side-loads https://github.com/Unrepl/unrepl so that it can provide both of those features, but that also adds a lot of complexity that I know Mauricio finds frustrating.
I know Rich favors a simple, streaming REPL -- he's mentioned that several times, both in person and on the Clojure mailing list -- but it seems that controlling long-running evaluation/printing is an important feature for developers, which seems somewhat at odds with socket REPL/prepl. Has any thought been given to addressing that in a future release of Clojure? I think Alex has talked about making it easier to get to an editor-connected-REPL state, so it sounds like something is in the hammock, at least?
I know you are just relaying stuff others have said, but isn't it wild to refer to a tool that bottoms out at calling eval as too limiting?
🙂 Well, only insofar as you lose control of things if the evaluation/printing "hangs" (runs too long, never completes). Personally, I find unrepl's "helpful" attempts to limit rendering of values more of a nuisance since it doesn't expand enough of a large returned result -- but at least it protects me from accidental infinite sequences or hung evaluation.
I think I had a discussion, maybe with ghadi, in this channel a while back about interruptible eval. It is kind of a rock and a hard place. Users really want it, but all the underlying JVM methods to support it are marked with things like "deprecated, don't use, very bad"
it is possible to interrupt a prepl eval though, just open another prepl, find the thread, call that hideous .stop method on it
and run away before anything finds out it was you
so the usage pattern becomes: open a prepl, ask it for its thread id, then send it stuff to eval, and if you need to interrupt it you have the id
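roughly like this - the names are mine, and Thread.stop is exactly as deprecated/unsafe as advertised:

;; first form sent on the "eval" prepl connection: remember its thread
(def eval-thread (Thread/currentThread))

;; later, from a second "control" prepl into the same process,
;; if an evaluation on the first connection is stuck:
(.stop eval-thread)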
That's an interesting idea... and I guess handling infinite printing just through the usual print length/depth stuff?
I dunno, there are some nrepl tickets that discussed that and showed issues with the length and depth settings, I think the solution there was just printing to a fixed-length buffer
which like, you can use a repl to define a function for that, and then launch a new repl server (in the same process) that serves repls with your new printing function
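if I remember right io-prepl takes a :valf for exactly this, so a sketch (port and fn names are mine) would be:

(require '[clojure.core.server :as server])

;; a value-printing fn that truncates long/deep output
(defn bounded-pr-str [v]
  (binding [*print-length* 100
            *print-level*  10]
    (pr-str v)))

;; serve prepls in this same process that print values with it
(server/start-server {:name   "bounded-prepl"
                      :port   5557
                      :accept 'clojure.core.server/io-prepl
                      :args   [:valf bounded-pr-str]})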
I think to some degree this is tool writers seeing an existing fixed pattern for how to do this in nrepl and, not wanting to change anything, asking for the same pattern to be available in prepl
I, of course, am not the man in the arena, so I may be missing a lot
I guess you could even set up a new connection for every evaluation as long as you closed out old connections as results came in or got killed...
There are about 5 exception types that are optimised this way. Another common one is ClassCastException
Is this really something that needs to be in "a future release of Clojure"? I mean this can be done perfectly fine in a library. IMHO prepl should have been a library too but oh well
@thheller It was more a question of "is this primitive considered sufficient/complete and might it get future enhancements?" -- I only know of two community projects that used prepl: Reveal and Conjure, and the latter has migrated away from it because of these limitations. My understanding of it appearing in core was that there was an expectation more tooling would be built on top of it, but since that isn't happening I'm wondering if it needs to be enhanced in core for it to gain adoption.
the benefit of both prepl and the socket server being "in the box" means you can rely on always having them available
which means you can add some Java properties to your clojure program, without touching the program itself, and get access to it externally
at any rate, that's why they are not libraries
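e.g. just a system property on the JVM, no code changes - something like (the port and the name after clojure.server. are arbitrary):

-Dclojure.server.prepl="{:port 5557 :accept clojure.core.server/io-prepl}"

and the runtime starts that socket server for you at init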
makes sense given how small they are I guess
I think it's a good long range bet that prepl will continue to receive attention. hard to say if it's a good short to medium range bet.
Thanks @alexmiller -- I hope that long range attention includes thinking about interruptible execution 🙂
https://gist.github.com/1789849d21be38310694dbf214d60d34 is an example, it sends @(promise)
to a prepl, and then kills that after a second
That's not too bad. I can see building something in front of that which always keeps an execution socket process open and a control socket process that can be used to kill/restart the current execution process as a prepl proxy for it...
it is basically shifting the machinery from the server into the client