Are there important drawbacks to `slurp`ing a url? All slurped urls would be http1, unauthenticated.
I'd prefer slurp to a lib since under my constraints, the dep tree should be unaltered and there should be JDK < 11 compat
It comes with some weird constraints that are platform-specific to the way the default java http client works
But besides that, if it works for your usecase, I don't see why not.
If you need redirects you might also want to use HttpURLConnection
@ben.sless I would like to discourage the normalization of this approach. Yes, I know several tools that do it. As a developer who works in security, it's horrifying
@quoll oh, I did not mean to endorse this approach by way of reference. I find it pretty horrifying as well, which is why I always read the scripts I download before running them.
It was more in the vein of analogizing it to (eval (slurp url))
which I find equally horrifying
I had forgotten about io/copy
but I think that's ideal. (I always used the `byte-array`/`loop`/`.read`/`.write` approach taught to me by C, and never addressed in Java). I love @borkdude's tweet and would not have realized that it could be written along with a comment in 240 characters!
I've used this in several tools to avoid a dependency on an http library
It's a pattern that keeps coming up!
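For anyone searching later, a minimal sketch of that pattern (the URL and file name are placeholders): `io/input-stream` treats a well-formed URL string as a URL, and `io/copy` streams it to disk without loading the whole body into memory.
(require '[clojure.java.io :as io])
;; placeholder URL and file name; no extra deps, works on JDK 8+
(with-open [in (io/input-stream "https://example.com/some.jar")]
  (io/copy in (io/file "some.jar")))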
is it me or are you an early bird?
or a night owl :)
Can't sleep once the sun comes up 😢
Lying in bed trying to convince myself that I want to go out for a run
:)
I made it! 5km!
> I've used this in several tools to avoid a dependency on an http library
Thanks! Looks like a nice option. I see you've contributed to http://github.com/martinklepsch/clj-http-lite/ . I'd almost just go and choose it, but slingshot/throw+ seems a bit of an odd thing to pull in
Slingshot's pretty cool. A little bit odd sometimes and it wasn't adequate for what I wanted it for, but it's small and not a lot to depend on.
@vemv I have a fork of clj-http-lite without slingshot, because slingshot could not run with babashka at the time (and it also loads faster with less deps)
Hmm, I should think about bb support for farolero. Is there a way to either throw arbitrary objects or extend exception in it?
you can throw ex-info exceptions?
Right, one of the key features of farolero in the JVM is that the throwable that's used doesn't interact with blanket exception or ex-info catch blocks, because it extends java.lang.Error. I guess a bb script is likely to be small enough and with few enough libraries as to not make this a huge problem, but ideally I'd be able to throw something that wouldn't interact with catching ex-info.
@suskeyhose Let's discuss elsewhere to not distract from the http discussion
👍
(btw, I'm adding java.util.Arrays/copyOfRange to bb now and this was the only remaining blocker to run slingshot with it)
$ ./bb -e "(require '[slingshot.slingshot :as s]) (s/try+ (s/throw+ {:type ::foo}) (catch [:type ::foo] [] 1))"
1
(I'd have to run its tests to see if it fully works)
thanks! any specific constraint comes to mind?
Don't send it to read and eval
🙂
Does anyone who's more familiar with Clojure's bug backlog than I am know if there are any bugs related to the Compiler interpreting vararg sequences provided to macros incorrectly? this is intended behaviour 🙂
@suskeyhose I guess the thing that's a little "surprising" to me here is that wrapping the varargs seq in a `list` or a `seq` (e.g. `~(list enums)`) also throws this exception, whereas wrapping with `vec` doesn't. I know that's because it's still getting expanded to a list which is then interpreted as a function call, and the `[]` syntax doesn't have that ambiguity, but it just seems like the compiler could do something better about this…
So putting the quote there won't make it work quite like you expect, and the reason for this is that the contents of the sequence will never be evaluated, it'll just be the symbols and expressions you passed as arguments to the macro. If you want the arguments to be evaluated but included in the resulting source as a sequence, calling `list` is the way to do it.
So if you use the quote, you just get a sequence of symbols, not a sequence of HttpMethods
Also the reason that wrapping it in a `list` or `seq` inside the unquote instead of outside is that it's the same thing, just a list of elements, that gets included in the source code, which means that it'll be evaluated as a call.
Lists and sequences have no ambiguity, they are always function calls.
> and the reason for this is that the contents of the sequence will never be evaluated, it'll just be the symbols and expressions you passed as arguments to the macro.
Right, haha, I realized this after sending that message 🙂
> Also the reason that wrapping it in a `list` or `seq` inside the unquote instead of outside is that it's the same thing, just a list of elements, that gets included in the source code, which means that it'll be evaluated as a call.
Right, it's just surprising to have to treat this completely differently. `~(vec enums)` works the closest to how I would expect it to. But if I want to actually use a list (for some reason), I need to do `(list ~@enums)` (obviously these two functions take different arguments, you'd have to do the same thing for `vector`)
> Lists and sequences have no ambiguity, they are always function calls.
Definitely, to the compiler. But if you read code like this without knowing this background about macros, you might interpret the intent of the programmer in either manner. But yes, to the compiler it's unambiguous
I'm not sure how you'd actually be able to get this to work with `seq`. Maybe you can't
anyway - thanks for shepherding me on this journey, haha
Well `seq` is just a coercion function to turn collections into sequences, so I'm not sure how else you'd like it to work. lists are collections that already implement the sequence abstraction, so there's no difference between `seq` and `list` really in this case.
Except that if you had no arguments you'd get nil vs an empty list.
And in that case you could just use (seq (list ~@enums))
> so there's no difference between `seq` and `list` really in this case.
Right, the only difference is the argument shape they accept.
> And in that case you could just use `(seq (list ~@enums))`
Yeah, this just seems like something you shouldn't have to do, but maybe there's just no way around that given the syntax of the language
🤷 Usually functions aren't too picky about their arguments, so it probably won't often matter anyway.
yeah exactly. Just filing that tidbit away in my head to prefer vectors in macros for function arguments
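To condense the working shapes into code (a sketch; `process-enums` is a made-up stand-in for whatever the expansion calls):
;; sketch; process-enums is hypothetical
(defmacro with-enums-vec [& enums]
  `(process-enums ~(vec enums)))    ; vector literal in the expansion, evaluates to itself

(defmacro with-enums-list [& enums]
  `(process-enums (list ~@enums)))  ; splice the args into an explicit (list ...) call

;; what does NOT work: `(process-enums ~enums)
;; the embedded seq (HttpMethod/GET HttpMethod/PATCH) is itself compiled as a call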
Specifically, this is the behaviour I'm seeing -
this is as minimal of a reproducing case as I can find so far. also, if there's a better channel for stuff like this please let me know
coercing the sequence to a vector also works -
No matching method GET found taking 1 args for class com.amazonaws.HttpMethod
seems telling. Are you sure this is not an interop problem (instead of a defmacro one)?
Seems to be that something in `analyzeSeq` in the Compiler is attempting to macroexpand the argument sequence
@vemv if you try this with only 1 argument to the macro it's fine as well
(defmacro my-macro
[& enums]
`(println ~enums))
=> #'user/my-macro
(my-macro HttpMethod/GET)
#object[com.amazonaws.HttpMethod 0x2253f919 GET]
=> nil
Where there is more than 1 argument, the compiler is interpreting the first reference in that sequence as a StaticMethodExpr
`(println ~enums)` seems off. `~enums` will expand to a list, which at runtime means a call
right. I ran into this because I was trying to pass the list to a function in my macro body
This ends up happening it seems -
(macroexpand-1 '(HttpMethod/GET HttpMethod/PATCH))
=> (. HttpMethod GET HttpMethod/PATCH)
The `println` is just the first function I thought of for this case, it could be anything
the real-life scenario is more like -
`(let [...
opts# (process-opts ~enums)
...
]
...)
I have to `~enums` in order to capture the sequence `& enums` in the input to the macro
Perhaps at this point you'd want to formulate the question again describing the end goal
From intuition, `(list 'quote x)` may help but it's hard to tell without a more specific problem to solve
well, my end question is "does anyone know of a bug filed for this already"?
afaik `& args` is perfectly fine to pass to a macro, but it breaks when you pass multiple Enum references
there are multiple ways to work around it it would seem
> well, my end question is "does anyone know of a bug filed for this already"?
sounds like an "xy problem" to me, sorry
One thing worth pointing out is that HttpMethod/GET is not an object you can pass around, so it's not surprising that you can't embed it arbitrarily in macroexpansions
Your problem is that you are unquoting a sequence, which is printed as a list, which is evaluated as a function call. Most likely you wanted unquote splicing.
(defmacro my-macro
[& enums]
`(println ~@enums))
I don't want unquote splicing here - I want to pass the sequence to the function I'm calling
unquote splicing results in the incorrect generated code
I see. Then you would need to do this: (println (list ~@enums))
Because again, you cannot print a sequence into code except as a function call.
let's ignore the `println` for a moment
one second
Sure, what I'm saying isn't connected to the println. Any time you would like to produce a sequence of items based on varargs to a macro you cannot use it directly in the resulting code because it will be treated as a function call. You have to wrap it in a call to `list`.
This is why coercing it to a vector works as intended. Vectors when evaluated return themselves, they are not evaluated as a function call.
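A quick REPL illustration of that difference, independent of macros:
(eval '(+ 1 2)) ;=> 3                 ; a list is compiled as a call
(eval '[+ 1 2]) ;=> [<the + fn> 1 2]  ; a vector evaluates to itself, element by element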
ah I see
> because it will be treated as a function call
Yeah, this is the behaviour I'm seeing, but it's inconsistent
well, it appears to be
In what way does it appear inconsistent?
when I pass 1 item it expands OK. It probably will blow up at runtime is what you're saying though?
Yes.
The item won't implement IFn, or worse, it will implement IFn and it'll give you some result you don't expect.
Causing an exception somewhere that is hard to trace back to the macro.
right, that makes sense
> Any time you would like to produce a sequence of items based on varargs to a macro you cannot use it directly in the resulting code because it will be treated as a function call
Why is this the case?
Well any time you evaluate something of the form `(a b c)` it'll be treated as a function call.
All sequences are printed as lists.
right - yeah I'm playing around with this a bit more right now
And while the printing isn't actually used in macroexpansion, it's a good analogy.
All sequences are treated as function calls when evaluated.
Well, they're treated as calls. Macro, builtin, function, or special operator. But a call nonetheless.
yeah I see. I don't really play around with macros too often is what I'm learning here 🙂
Lol, it's all good. Macros take a bit to really grok.
`'~enums` also behaves as I'd expect it to, presumably because the wrapping `quote` preserves the "listiness" (data type?) vs. writing out the `(GET PATCH)` directly to be interpreted as an IFn
not sure how to accurately describe exactly what `'` is doing there that makes it work
hmm I may be incorrect about "behaves as I'd expect" there as well - it "looks right" printed in the REPL, but isn't actually what I'd want
Ye olde curl URL | sh
As long as you don't care about performance it's okay.
If you want to use it to download files, open and copy might be more convenient than slurping
Why isn't add-libs et al. part of the official (master branch) tools.deps.alpha distribution? https://github.com/clojure/tools.deps.alpha/blob/add-lib3/src/main/clojure/clojure/tools/deps/alpha/repl.clj#L75
@jumar Because the exact design and integration with Clojure is still being worked on.
Alex has said that "something like `add-libs`" may find its way directly into Clojure or t.d.a in a different form at some point.
I periodically ask him to bring the `add-libs` branch (which I use heavily, see my dot-clojure repo on GitHub) up-to-date w.r.t master.
Great, thanks for the info!
hi, any idea how I can change the clojure crash report path?
Full report at:
/tmp/clojure-8030624471491730958.edn
I am running inside a container and it restarts but there is not enough information to figure out why.
Also the files are not preserved in tmp.
I would like to write the file to a specific location
try `--report stderr`
or the Java property `-Dclojure.main.report=stderr`
see https://clojure.org/reference/repl_and_main#_as_launcher
This causes a memory leak. To be precise, memory usage grows faster and it ends with the exception Error class: java.lang.OutOfMemoryError
(defmethod ig/init-key ::cli-planner [_ {:keys [supplier cache-key-prefix] :as opts}]
  (declare-queue opts supplier cache-key-prefix)
  (let [p (promise)
        t (doto (Thread.
                  ^Runnable (fn [] (shop-supplier! opts supplier cache-key-prefix p))
                  "cli-planner thread")
            (.start))]
    {:thread t :promise p}))
This does not:
(defmethod ig/init-key ::cli-planner [_ {:keys [supplier cache-key-prefix] :as opts}]
  (declare-queue opts supplier cache-key-prefix)
  (shop-supplier! opts supplier cache-key-prefix (promise)))
Why?
ha, I found everything was misleading. I am doing a last check, but I found bound-fn inside and outside this thread. It looks like this is the source of the out of memory. I don't know exactly why it causes a memory leak, but I don't want to have these functions in the code anyway.
Thank you for solving the puzzle together 🙂
thanks, works like a charm
My first idea is that because the promise is outside the thread, the garbage collector is not able to free memory. But is that a correct assumption, and why, in detail? Or maybe it is because the Thread is returned? :thinking_face:
One thread should not be an issue, unless you are already right on the boundary of memory usage.
I'm a bit surprised at this. You're throwing away the promise, so the JVM knows that it's not going to be needed, but it will be needed for what `shop-supplier!` does to it. I think it may depend on how aggressively the JVM can optimize the code.
May I presume that using a `let` wrapper to return the promise (without using a thread) also runs out of memory?
That is what I am trying to figure out, but the tests take hours 🙂 I can check all possible options, but I hope someone here knows the answer 🙂
But memory behaves differently from the beginning. You can clearly see on the right part of the chart how memory behaves without the memory leak, with the fix.
*lines are different k8s pods (instances)
that code is being called by integrant, right? ... does that mean that the results of calling `ig/init-key` are getting put into a system map in ram or something similar? Because the one that runs out of ram returns the promise, while the one that doesn't, doesn't ... so the content of the promise might be being held onto by the surrounding code ... unless `shop-supplier!` returns the promise it's given??
> that code is being called by integrant, right? right
how does the promise get deref'd in the second example?
The `promise` is only to stop the thread. It has the value `::done` for example. The memory consumption by the promise is not an issue.
ah ... fair enough
but it affects the garbage collector somehow. I mean I don't know if it is the promise, but the difference is in the code.
And honestly I don't see the reason why. It is deeper dark magic 🙂
Well, if I figure this out I will add more info here.
What's the best way to write code that runs only locally but won't run in production?
A macro with a compile time check on some condition (e.g. environment variable, dynamic, var, whatever). This allows you to completely elide the non-production code in production.
That's what I'm looking for.
Is there a var that works like that in Clojure (e.g. `*assert*`) or should I set it manually in my build tool?
I would just make a custom one
why?
more fine grained control. turning off ALL assertions in production is maybe something you wouldn't want to do
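For example, a minimal sketch of that custom flag (the DEV_CODE env var and the dev-only name are made up); because the check runs while the macro expands, the body is elided entirely from compiled production code:
;; sketch: DEV_CODE and dev-only are assumptions, not an existing API
(defmacro dev-only
  "Emit body only when the DEV_CODE env var is set at compile time."
  [& body]
  (when (System/getenv "DEV_CODE")
    `(do ~@body)))

;; usage
(dev-only (println "this form does not exist in the production build"))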
I do a new class path root that won't be on the prod class path. dev.nocommit.logging or whatever
I keep it gitignored so each person can have what they want there. Precommit hooks can look for nocommit and no forms will compile in CI with any of those namespaces
Try opening it in visualvm. I think that since threads are GC roots if you hold a reference to a thread it won't be collected
This is too complex a system. I can run it only in a specific environment to observe this memory usage.
in the version returning the promise, who consumes it and when do they stop referencing it?
Suppose my application has two standalone tasks (for example, two http endpoints). Is it normal to call task.start() inside ig/init-key method?
It is just integrant. The `promise` is used to:
(defmethod ig/halt-key! ::cli-planner [_ opts]
(log/info "stopping cli-planner thread:" (:thread opts))
(deliver (:promise opts) :done))
so not really in production.
well anything dereferencing that promise will never return if that deliver is skipped
I am not sure, because I have to wait for results, but it looks like when the `promise` is outside the `thread` it affects the garbage collector inside the thread
the puzzle is only memory consumption, the code processes things like it should
Are there any examples of applying transducers to callback APIs? I know I could wrap the callback API with core.async; but I'd prefer to do something lower level.
is it just my functional brain hallucinating or does this sound like a monadic operation - lifting the procedural logic into an object that can be composed into the file processing pipeline?
in my opinion using a collection abstraction in the middle is still usually the most intuitive thing to work with. my hunch is this would go a lot smoother if you use a collection or queue as your glue, and once the whole pipeline works, a transducer can replace the collection conversion as a performance optimization (if that is in fact needed)
because otherwise my senior engineer side think this smells like it would become the kind of code one ambitious dev will make in a fugue of caffeine and long nights, and nobody will be able to maintain or understand afterward
Well reducers/transducers are similar to monads so it's probably not surprising.
I was really asking about how to turn a callback based java parser into a `clojure.lang.IReduceInit` such that it can take xforms and work with `reduce` / `transduce` etc. So I was asking about how to apply those existing abstractions to a new thing, rather than create something new that smells like a monad.
Anyway I've worked it out and have it pretty much working now.
right, and to be more concrete, my suggestion is that instead of implementing `IReduceInit` and making a transducing function as your first draft, it would go more smoothly if you do it with a collection or queue in the middle as a first draft, and only reach for that interface and transducers as a performance optimization if needed. acknowledged that this is opinionated advice and if what you have works and is maintainable then cheers 🍻
This already is the performance optimization work, after that queue implementation 🙂
though it's not just about performance
oh I misunderstand then
what's the "not just about performance" aspect here?
all the other trade offs of using reducers/transducers vs seqs
eagerness, resource control etc
oh to me those aren't issues with seqs, they are issues with laziness (which is already a no go when talking to the outside world)
I think I understand what you are getting at
well seqs complect laziness, caching and sequences, so they are issues with seqs 🙂 Though to be fair you said "collection" and I said "seq" 🙂
Anyway the code is pretty simple
implementing IReduceInit isn't hard
there's no garbage collector inside a thread, a value can be garbage collected if it doesn't have a gc root
the two ways a promise will affect gc:
• a thread is a gc root and one that is waiting for a promise deref won't exit if the promise is not delivered
• a promise can hold arbitrary data, and if the promise is held by a gc root, the data in the promise will be as well
I'm picking on the promise here because the handling of the promise seems to be the only interesting difference between your code snippets
In the first example would it be correct to say the promise is referenced by two threads? The initializing thread and the created thread
right
Not sure if I understand. It looks like the issue is that when the `promise` is outside the `thread`, all data processing in the `thread` has a memory leak. This is not about the value in the `promise` at all. The `promise` can have for example the value `::done`, so it is very small if it even appears.
if I take this question literally it sounds like you want to turn the callback API into a transducing context, so you could attach a transducer to it the way you would attach one to `into` or `chan` - is this actually what you are talking about?
Essentially I have a callback based parser for a data format, and I'd like to "transduce" (load) it into a connection object with an optional xform
yes precisely
Was thinking I'd just deconstruct the xforms
i.e. something like this:
(((map inc) conj) [] 1) ;;=> [2]
BUT I am not 100% sure. I have to wait half a day 🙂
is the promise always delivered to?
You can take a heap dump then analyze it with visualvm to see who holds references to the promise and the thread
where `conj` would be the reducing function for adding something to my connection.
because skipping delivery can make the thread hang
and `[]` would be the connection
no, it is used only to stop processing in rare cases, so in general it is almost never delivered
@ben.sless holding a reference to a thread is irrelevant, you can literally ask the jvm to list all existing threads, they are gc roots and are only collected if they exit
@kwladyka at this point I don't think the information you have provided is sufficient to narrow down your problem
What do you need to know?
Right, and it isn't daemonized. Still, a heap dump should be a good place to start when hunting down memory leaks and runaway references, no?
could you put your callback into your completing arity?
no, the callback is equivalent to the 2-arity… i.e. it'd need to be essentially for reducing an individual item
I think this is not at all about delivering the promise. For me it looks like a construct with a `promise` outside the thread which is never delivered affects memory usage in the `thread`.
The thread is about processing data and this is what consumes memory.
there are other callbacks for completing etc though which would need to call the completing arity
and this processing has nothing to do with this promise
Pretty sure I can knock this together quite easily actually… but that's me finishing for today 😩 One for tomorrow!
Just wondered if there were existing examples of this sort of thing
where someone has augmented a java callback api and java aggregating object, with an arbitrary `xform`.
Can you recommend a good video / article about the garbage collector and how it works?
Even better with clojure examples?
callbacks are one-shot, but a transducer transforms an operation that is probably not one-shot
I'm wondering what advantage transducers would have when attached to an API over comp
they have the disadvantage of being a lot more complex than comp
there's a mismatch here
an api callback is a one time thing
Well, let's wait for the results of my tests to figure out exactly which code causes the issue.
it depends on the api, a lot of callbacks(listeners) are not a one shot thing
I'm getting a vector of maps with numbers from an API. I am filtering the maps for the numbers above a certain value. When it returns an empty sequence (ie no numbers above the value), I want to receive the number closest to the value. How can I achieve this kind of fuzzy filtering?
this seems too simple for what you are asking, but I'd just do (or (seq (filter high-enough result)) [(closest result)])
I put the second conditional in a vector so that the result is always sequential, even though closest should clearly return a single item
in practice this sort of logic is a common reason that you have to change a `->>` body into a let block where bindings refer back to previous bindings
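A sketch of that shape (the :price key and the threshold are invented; when nothing passes the filter, the closest-from-below value is just the maximum):
;; hypothetical :price key; the fallback is wrapped in a vector so the result is always sequential
(defn above-or-closest [threshold maps]
  (or (seq (filter #(> (:price %) threshold) maps))
      [(apply max-key :price maps)]))

(above-or-closest 10 [{:price 3} {:price 12}]) ;=> ({:price 12})
(above-or-closest 10 [{:price 3} {:price 7}])  ;=> [{:price 7}]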
hey, I think you have to ask more precisely. It is hard to understand your needs and probably this is why nobody answered.
i have used a priority queue to reduce over a collection keeping the top N items. You could do something like this. Then you're left with either the collection you want, or the largest items if they haven't hit the threshold.
You could try posting a (redacted) stacktrace here. There are various types of OOMs; perhaps by posting the specific type/cause something can come to mind
@noisesmith I understand you are suggesting to solve it algorithmically: if filter result is empty? then run closest. This I can do. The thing is, I have this kind of logic in many places for various vectors of maps. I was thinking about some cool functional pattern inside the mapping function : ), or wrapping in the middleware that will always return the closest result
how do you write and work with a priority queue?
it's only useful if you know you only need at most N items
but you just keep the largest N items you've seen so far.
Thanks, will have a look
that's a lot more complicated and error prone than just using seq and or
something inside a mapping function as you suggest requires state management and rarely (never?) performs better than seq / or, and is always more complex / error prone
Hi everyone, with `gen-class` why does the following `with-meta` not create the (java) annotations on the generated class? (The commented reader-macro `^` does work, but I cannot create that from a macro. I'm trying to create `gen-class` expressions from a macro...)
(gen-class :name
           ;; ^{JsonAutoDetect {:getterVisibility JsonAutoDetect$Visibility/ANY
           ;;                   :isGetterVisibility JsonAutoDetect$Visibility/ANY
           ;;                   :fieldVisibility JsonAutoDetect$Visibility/NONE}}
           (with-meta kafka_testdrive.messages.position
             {com.fasterxml.jackson.annotation.JsonAutoDetect
              {:isGetterVisibility com.fasterxml.jackson.annotation.JsonAutoDetect$Visibility/ANY,
               :getterVisibility com.fasterxml.jackson.annotation.JsonAutoDetect$Visibility/ANY,
               :fieldVisibility com.fasterxml.jackson.annotation.JsonAutoDetect$Visibility/NONE}})
gen-class is a macro, it does its thing at compile time, with-meta is a function which does things at runtime. Same reason `(let (vector a 1) a)` doesn't work
OOM errors are not scoped to a specific stack (though I guess in theory you could capture all active stack traces when the OOM happens?). it's really something you want heap profiling data for, as @ben.sless mentions. heap profiling with clojure is hard because the same classes are used everywhere and laziness can mess with the tool's idea of who owns something, but profiling does help
That makes sense... is there another way I can get that `^{...}` or the same effect generated from within a macro?
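One approach that should be equivalent (an untested sketch; the gen-json-class name is made up): since ^{...} just attaches metadata to the symbol at read time, do the with-meta while your macro builds the gen-class form, i.e. at expansion time rather than in the expansion:
;; untested sketch: the annotation map mirrors the commented-out reader-macro version above
(defmacro gen-json-class [class-name]
  `(gen-class
    :name ~(with-meta class-name
             '{com.fasterxml.jackson.annotation.JsonAutoDetect
               {:getterVisibility   com.fasterxml.jackson.annotation.JsonAutoDetect$Visibility/ANY
                :isGetterVisibility com.fasterxml.jackson.annotation.JsonAutoDetect$Visibility/ANY
                :fieldVisibility    com.fasterxml.jackson.annotation.JsonAutoDetect$Visibility/NONE}})
    ;; ...remaining gen-class options elided...
    ))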
I mean technically you can grab the stack of the code that hit the OOM condition, but that's only sometimes the code that's leaking memory
and sometimes the issue isn't a leak, but rather you are trying to use an algorithm that consumes more memory than you provide the vm
another possible source of the problem you could look into (unlikely but possible) is that `Thread` doesn't propagate dynamic bindings from the caller's context and always uses the root bindings
you can replace `(doto (Thread. ^Runnable (fn [] code goes here)) (.start))` with `(future code goes here)` - it's simpler and reuses thread objects from a pool instead of creating new ones on every call so it performs better too
and it propagates dynamic bindings from the call context, which is nearly always what you want
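Applied to the earlier snippet, that would look roughly like this (an untested sketch; halt-key! would then log the :future entry instead of :thread):
;; sketch: same names as the original init-key, but future instead of a raw Thread
(defmethod ig/init-key ::cli-planner [_ {:keys [supplier cache-key-prefix] :as opts}]
  (declare-queue opts supplier cache-key-prefix)
  (let [p (promise)]
    {:future  (future (shop-supplier! opts supplier cache-key-prefix p))
     :promise p}))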
OOMs are sometimes caused by stack consumption, which can be accurately correlated with the stack that threw the exception. Anyway, a stacktrace can be useful; there are other reasons also (e.g. sometimes it details the kind, like ran out of metaspace)
I see eastwood has a dependency on {:group org.clojars.brenton, :artifact google-diff-match-patch, :version 0.1} but i cannot find any licensing information for this artifact. doesn't seem to have a repo associated with it through clojars. does anyone know where i could source this information?
Does anybody know about an article or some documentation on why `clojure.lang.BigInt` exists in addition to `java.math.BigInteger`, but no such thing was done for `java.math.BigDecimal`? I read some comments that it's done to prevent autoboxing, but I'd like to read a little deeper into the reasoning.
yeah, that's it
BigInt uses longs in long range, and BigInteger beyond
Thanks!
feel free to create an issue on eastwood, we should have a nice dep tree
otoh eastwood is dev tooling, so licensing matters are more relative (i.e. you'll never bundle eastwood to a prod artifact, so one doesn't have to worry nearly as much)
Yeah, this isn't a one shot callback; it's an evented parser that emits an event (calling the callback/listener) on each "form" parsed.
Control passes to the parser and isn't relinquished until the whole file is processed. I'd really like to load all the forms into a db (hence the connection object), with an optional transformation (hence the xform), but there's an impedance mismatch due to the inversion of control… Similar I guess to why you often need to resort to using `PipedInputStream` when redirecting input to an output stream.
I'd really like to handle this without spawning an extra thread (which is what I've done in the past), and I see transducers as a solution to this, as the reducing function can be passed into the parser's thread.
Hence I was thinking I'd write an `into-connection` function that would wrap all this up nicely.
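For the record, a rough sketch of that shape. Everything named here (parse-forms!, add-form!, the connection) is hypothetical rather than a real API, and early termination via reduced is ignored for brevity:
;; hypothetical: parse-forms! invokes its callback once per parsed form and only
;; returns once the whole input is consumed; add-form! mutates the connection
(defn forms-reducible [input]
  (reify clojure.lang.IReduceInit
    (reduce [_ rf init]
      (let [acc (volatile! init)]
        (parse-forms! input (fn [form] (vswap! acc rf form))) ; rf runs on the parser's thread
        @acc))))

(defn into-connection
  "Load every parsed form into conn, optionally through xform."
  ([conn input] (into-connection conn identity input))
  ([conn xform input]
   (transduce xform
              (completing (fn [c form] (add-form! c form) c))
              conn
              (forms-reducible input))))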
There's also a hash code issue with `BigInteger`, IIRC
yup
user=> (.hashCode (Long. 12345678901))
-539222985
user=> (.hashCode (java.math.BigInteger/valueOf 12345678901))
-539222925
user=> (.hashCode (bigint 12345678901))
-539222985
what's the incantation i'm looking for here: `clj -Dillegal-access=debug`. is it `-J-Dillegal-access=debug`?
answer: clj -J--illegal-access=debug
and i'm getting illegal access warnings from clojure.data.xml, which i thought was free of them
`clojure.data.xml` or `clojure.xml`?
clojure.data.xml
at clojure.data.xml$parse.invokeStatic(xml.clj:346)
(amongst many other stack frames 🙂)
reading in a google group message i was expecting this to not be an issue with clojure.data.xml
Ah, reflective access, not reflection. My bad!
So, yeah, I am surprised. I would have expected reflective access to not be an issue with that contrib lib. @alexmiller?
i'll go post it on http://ask.clojure.org and not bother him
Is it perhaps due to an older transitive dependency or something?
let me check. only three deps but one was cheshire so i'll try again
remained the same. moving to "0.2.0-alpha6" from "0.0.8" actually changed the behavior of the program and it failed to find some things in the poms
Despite the name, you should use 0.2.0-alpha6
What's the issue?
gonna have to debug why it failed to hit some info in the poms. but i'm switching now. thanks
`BigInteger` has a different hash code than the equivalent `Long` value, whereas clojure `BigInt` is identical
I have a hazy memory of that being given as the explanation for why Clojure needed `BigInt` instead of just building on top of the Java type
ah, what was previously a `:tag groupId` is now `:tag :xmlns.http%3A%2F%2Fmaven.apache.org%2FPOM%2F4.0.0/groupId`
the bump to "0.2.0-alpha6" does solve the illegal access though. unfortunately breaks all navigation into the csv. but i can patch that. thanks all
Because 0.0.8 didnāt handle XML namespaces but 0.2.0-alpha6 does?
isn't what you're showing the opposite of that?
(also fyi, Clojure doesn't use .hashCode, `hash` is the relevant function here)
most likely
yeah. also lots of "\n" in the content
Which also changed because 0.0.8's behavior was incorrect?
I think the whitespace thing broke us when we upgraded ā but that was a long time ago I think?
i'm building a dep license concator and having to parse poms. haven't done that in a while. so had been a bit since i'd looked at xml at all, much less parse it
there's a flag or something for the whitespace thing
content type to skip or something
I believe that's an example
Oh thank you Alex. I vaguely remembered that but nothing in the docstring was catching my attention