@blueberry thank you for commenting! this library is kind of amazing. Docs are quite good, I already ran some code with it yesterday.
I still think idiomatic Clojure is less CPU-efficient than mutable Java code, and I've yet to be convinced otherwise, e.g. for mutation-heavy algorithms (take Levenshtein distance calculation):

> Properly written Clojure is equivalent to Java. It compiles to the same bytecode.

In a nutshell, can you help me hammer this down?

> Moreover, Neanderthal is faster even than Java libraries that use the same MKL binaries.

Why? Better design around MKL? Or is it best seen in the benchmarks? Feeling curious.
Thanks for this cool lib!
@matan Of course idiomatic Clojure is slower for numerical tasks than mutable Java. That's the point: use the right tool for the job. Clojure supports mutable arrays, buffers, etc. That's what I was talking about. When you use equivalent code, you get (almost) equivalent bytecode, and can get (practically) the same speed. The problem is that even Java-level speed is many times slower than what the hardware can actually execute.
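(Illustration, not from the thread: a minimal sketch of the mutable, type-hinted style meant here, applied to the Levenshtein example mentioned above. With primitive arrays and hints, `aget`/`aset` compile to plain array ops, much like the equivalent Java loop.)

```clojure
;; Two-row dynamic-programming Levenshtein over primitive long arrays.
(defn levenshtein ^long [^String a ^String b]
  (let [m    (.length a)
        n    (.length b)
        prev (long-array (inc n))
        curr (long-array (inc n))]
    (dotimes [j (inc n)] (aset prev j j))          ; base row: 0..n
    (dotimes [i m]
      (aset curr 0 (inc i))
      (dotimes [j n]
        (let [cost (if (= (.charAt a i) (.charAt b j)) 0 1)]
          (aset curr (inc j)
                (min (inc (aget curr j))           ; insertion
                     (inc (aget prev (inc j)))     ; deletion
                     (+ (aget prev j) cost)))))    ; substitution
      (System/arraycopy curr 0 prev 0 (inc n)))    ; curr becomes prev
    (aget prev n)))

(levenshtein "kitten" "sitting") ;; => 3
```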
Better design around MKL.
You're welcome.
Now when I think about what I have said... The design is not around MKL; it is an overall better design for this particular domain. It accommodates MKL, cuBLAS, and a few other libraries quite efficiently.
and could even support pure Java implementations transparently... if there were any.
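(A taste of that uniform API, sketched from memory of Neanderthal's docs; verify the namespaces against the current documentation.)

```clojure
;; The native backend dispatches to MKL; CUDA/OpenCL matrices would go
;; through the same core functions to cuBLAS/CLBlast instead.
(require '[uncomplicate.neanderthal.core :refer [mm]]
         '[uncomplicate.neanderthal.native :refer [dge]])

(def a (dge 2 3 [1 2 3 4 5 6]))  ; 2x3 double matrix, column-major
(def b (dge 3 2 [1 2 3 4 5 6]))  ; 3x2 double matrix

(mm a b)                         ; 2x2 result, computed by MKL
```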
@blueberry Would you happen to know of any machine learning clojure libraries using Neanderthal?
nice hominid species naming style 🙂 hope they don't end up the same....
@whilo is core.matrix itself a very good API? I've never used it yet
@jsa-aerial thanks! does Bayadera have any explicit docs?
@jsa-aerial
> A Clojure Library for Bayesian Data Analysis and Machine Learning on the GPU.

The tagline left me a bit unsure, e.g. does it implement Bayesian machine learning à la Bayesian neural networks? I should read the source.
@matan Bayadera is one of Dragan's projects, so he would be able to give the best advice. I don't think it is 'officially' released yet. I have not actually used it, but I don't believe it uses any NN stuff.
@jsa-aerial thanks!
@matan I think core.matrix is fine if you care about a numpy-like high-level API that is polymorphic w.r.t. implementations. Unfortunately it was not as focused on performance as Neanderthal; I hope denisovan provides a reasonable tradeoff for code implemented in core.matrix. Still need to check https://github.com/whilo/boltzmann against it.
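(Illustration of that polymorphism, not from the thread; assumes the vectorz-clj backend is on the classpath. The same calls run on whichever implementation you select.)

```clojure
(require '[clojure.core.matrix :as m])

;; Pick a backend; the core.matrix API stays identical across backends.
(m/set-current-implementation :vectorz)

(def a (m/matrix [[1 2] [3 4]]))
(m/mmul a a) ;; => [[7.0 10.0] [15.0 22.0]]
```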
@matan Bayadera does not use NNs, nor does it make sense to use them in the general case. Bayadera is closest to an implementation of automated probabilistic decision making, where you combine knowledge and data to update the knowledge, and evaluate it taking utility (or cost) into account. In theory, it could be used to make automatic decisions about the parameters and/or structure of neural networks, but it is not practical to do this with real-world million-node deep networks. Nor is it necessary, IMHO. When people use the term "Bayesian" together with deep learning, they usually mean "I replaced some numbers with (normal) distributions of those numbers". That is not Bayesian in the usual sense of the word: updating your priors with data to find out what your updated knowledge is.
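(To make that last sentence concrete: a toy sketch, nothing to do with Bayadera's actual API. Prior times likelihood, renormalized, gives the updated knowledge.)

```clojure
;; Toy discrete Bayesian update: posterior ∝ prior × likelihood.
(defn posterior [prior likelihood]
  (let [unnormalized (merge-with * prior likelihood)
        evidence     (reduce + (vals unnormalized))]
    (into {} (map (fn [[h p]] [h (/ p evidence)])) unnormalized)))

;; Two hypotheses about a coin, after observing one head:
(posterior {:fair 0.5 :biased 0.5}   ; prior over hypotheses
           {:fair 0.5 :biased 0.9})  ; likelihood of the data under each
;; => {:fair ~0.357, :biased ~0.643}
```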
BTW, I'm in the process of polishing Bayadera for the first release to Clojars, so expect docs and tutorials this spring/summer. I already updated all engines to work on both AMD (OpenCL) and Nvidia (CUDA) GPUs.
@matan You mean end with Homo sapiens? BTW, there's something like a few percent of Neanderthal genes in the European population. Being used in a few percent of each European Clojure project might actually be a nice thing for Neanderthal (the library) 🙂
... and in a large part of North and South American projects, now that I think of it 🙂
BTW, it's odd that the Java libraries wrapping MKL have been so sloppy; maybe they weren't important enough for anyone, but it's still kind of odd.
Either way I'll use Neanderthal for my current project
Managed to set up MKL despite Intel's terrible docs websites 🤪
@matan They are not sloppy at all; they just miss some performance opportunities here and there. It's not an easy problem: there are thousands of ways to lose performance, and those add up.
I have also provided https://github.com/cailuno/denisovan, so all core.matrix code can use Neanderthal now. Still missing the GPU backends, though.
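(If it works like other core.matrix backends, switching should be just a keyword away. The namespace and keyword below are guesses, not confirmed by the thread; check denisovan's README for the real ones.)

```clojure
;; Guesswork: loading a backend namespace usually registers its
;; implementation with core.matrix; the exact ns/keyword may differ.
(require '[denisovan.core])
(require '[clojure.core.matrix :as m])

(m/set-current-implementation :neanderthal)
;; existing core.matrix code now runs on Neanderthal underneath
```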