@huahaiy - that isn't the case. I have three 'ffi's implemented. libpython-clj, for example, already works with JNA direct mapping and with JDK-16, and it just depends on what dtype-next's ffi system finds on the system. You write one binding - https://github.com/clj-python/libpython-clj/blob/master/src/libpython_clj2/python/ffi.clj - and it works across https://github.com/cnuernber/dtype-next/blob/master/src/tech/v3/datatype/ffi/jna.clj, https://github.com/cnuernber/dtype-next/blob/master/src/tech/v3/datatype/ffi/mmodel.clj, and https://github.com/cnuernber/dtype-next/blob/master/src/tech/v3/datatype/ffi/graalvm.clj. So you, as the implementor, do not have to write multiple FFI layers yourself. You write one, describe it with data, and I translate that data into the subsystem-appropriate classes and implementations.
https://github.com/cnuernber/dtype-next/blob/master/test/tech/v3/datatype/ffi_test.clj.
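To make "describe it with data" concrete, the shape is roughly like this - memset/memcpy here are just illustrative, not the real libpython-clj binding, and the exact dt-ffi entry point that consumes the map may have shifted, so check the linked ffi_test.clj for the canonical version:

```clojure
;; Sketch only - modeled loosely on libpython_clj2.python.ffi; the macro or
;; function that consumes this map lives in tech.v3.datatype.ffi.
(def library-fns
  ;; each key names a C function; each value is pure data describing it
  {:memset {:rettype :pointer
            :argtypes [['buffer     :pointer]
                       ['byte-value :int32]
                       ['n-bytes    :size-t]]}
   :memcpy {:rettype :pointer
            :argtypes [['dst     :pointer]
                       ['src     :pointer]
                       ['n-bytes :size-t]]}})
;; dtype-next translates this one map into JNA direct mappings, JDK-16
;; foreign-API bindings, or GraalVM native bindings - whichever ffi backend
;; it finds on the system.
```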
That's great, no more writing JNI stuff I guess
I have a link just for you - https://graalvm.slack.com/archives/CN9KSFB40/p1616200527026800?thread_ts=1616189277.022900&cid=CN9KSFB40. I made a bit of noise on the graalvm native-image channel and got a fantastic response.
@chris441 Does this also work if you want to expose Clojure stuff as a native lib? like I did here? https://github.com/borkdude/sci/blob/master/doc/libsci.md
I can try to expose avclj as a native lib and we can find out. My overall goal was to get some nontrivial portion of libpython-clj working both as an executable and as a native lib.
So people could write python extensions in Clojure but be able to talk back to the python system that loaded them.
My question is just out of curiosity, not that I have any plans
As far as JNI is concerned, the best way to do that is javacpp.
Nothing else is even close IMO.
I used that for cortex and aside from some teething issues it worked great.
But I don't like JNI. I like to find libraries dynamically so I can account for version changes in code without separate releases bound to specific binary versions of things.
I guess I could ask the question like this: avclj compiles to an executable. Is it possible to also compile it as a shared library (which kind of defeats the purpose of that app, but just as an example), so you can call that shared lib from python as a native lib?
It should be. Nothing I did precludes that.
Well, nothing that I know that I did precludes that.
cool stuff.
🙂. Appreciated. Agreed 🙂.
I do have a question. Sometimes I have an extra require or something that really explodes both the build time and the executable size. It usually takes me quite a long time to track down the pprint statement or whatever that causes it. Are there any tricks to tracking down these issues faster?
For instance, RT/var will cause the explosion, as will pprint.
@chris441 I usually just do this by divide and conquer: start as small as possible and then start adding things. Things known to explode: looking up vars at runtime, e.g. require, resolve, requiring-resolve. I have made a patch in babashka to avoid the explosion of pprint. Also I made this variation of dynaload that won't blow up the binary: https://github.com/borkdude/dynaload
Also I have logged a JIRA issue about pprint
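The pattern looks roughly like this for the pprint case (going from the dynaload README, so check it for the exact call shape):

```clojure
(ns example.debug ;; placeholder namespace, just for the sketch
  (:require [borkdude.dynaload :refer [dynaload]]))

;; Instead of (:require [clojure.pprint ...]) - which drags clojure.pprint
;; into everything GraalVM has to analyze - resolve it lazily:
(def pprint (dynaload 'clojure.pprint/pprint))

(defn debug-dump [x]
  ;; resolves clojure.pprint/pprint on first use; if it isn't on the
  ;; classpath you get a clear runtime error instead of a bloated image
  (pprint x))
```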
Beautiful, dynaload is a great insurance policy.
Possibly you can use https://github.com/borkdude/grasp to find these likely causes
Grasp for non-top-level require, find-var, requiring-resolve, resolve
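Roughly like this (going from the grasp README, so the exact grasp.api call and the location-metadata keys may differ a bit):

```clojure
(ns example.find-exploders ;; placeholder namespace for the sketch
  (:require [clojure.spec.alpha :as s]
            [grasp.api :as g]))

;; any list form whose head is one of the usual suspects
(s/def ::runtime-lookup
  (s/cat :f #{'require 'resolve 'requiring-resolve 'find-var}
         :args (s/* any?)))

(defn report
  "Matching forms plus their location metadata for src-dir.  Top-level
  requires match too - eyeball the output for the non-top-level ones."
  [src-dir]
  (->> (g/grasp src-dir ::runtime-lookup)
       (map (fn [form] (assoc (meta form) :form form)))))
```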
Yep, makes sense. Interestingly enough, RT/var in the static initialization code section of a class doesn't cause it, but RT/var in any instance-level code definitely will.
I think GraalVM static analysis accounts for static initialization, which is influenced by --initialize-at-runtime and --initialize-at-build-time. Is (var x) triggering something? This is a special form which is compiled away to the var object, I think.
If GraalVM can't decide that an entire namespace isn't needed, it may keep all of its vars/functions and everything related to them around, which may blow up the binary, is my guess.
Sometimes -J-Dclojure.compiler.direct-linking=true also helps.
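As far as I understand it, direct linking matters at the point where the Clojure code actually gets compiled, so if you AOT before calling native-image you can set the plain system property there; the -J-D... spelling passes the same property to the JVM running the image build, for when compilation happens at build time. A sketch of the AOT side (alias name and namespace are placeholders):

```clojure
;; deps.edn - run with: mkdir -p classes && clj -M:native-prep
{:paths ["src" "classes"]
 :aliases
 {:native-prep
  {:jvm-opts  ["-Dclojure.compiler.direct-linking=true"]
   :main-opts ["-e" "(compile 'example.core)"]}}}
```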
Maybe I should add a graalvm native-image linter to clj-kondo :thinking_face:
Well, and with multimethods and protocol implementations Graal often can't figure out whether a namespace is OK to elide, and I can't blame them.
true
It looks like we can include the Truffle NFI in a native image. That would give you a Cython-equivalent system, I believe.
nice