uncomplicate

matan 2018-04-12T07:37:24.000235Z

@blueberry thank you for commenting! this library is kind of amazing. Docs are quite good, I already ran some code with it yesterday.

matan 2018-04-12T07:38:50.000437Z

Notwithstanding that idiomatic Clojure is less CPU-efficient than mutable Java code (I've yet to be convinced otherwise, e.g. for mutation-heavy algorithms such as Levenshtein distance calculation):
> Properly written Clojure is equivalent to Java. It compiles to the same bytecode.

matan 2018-04-12T07:39:28.000408Z

In a nutshell, can you help me hammer this down?
> Moreover, Neanderthal is faster even than Java libraries that use the same MKL binaries.
Why? Better design around MKL? Or is it best seen in the benchmarks? Feeling curious

matan 2018-04-12T07:47:45.000230Z

Thanks for this cool lib!

2018-04-12T08:20:00.000368Z

@matan of course idiomatic clojure is slower for numerical tasks than mutable Java. That's the point: use the tool that is proper for the job. Clojure supports mutable arrays, buffers, etc. That's what I was talking about. When you use equivalent code, you get (almost) equivalent bytecode, and can get (practically) the same speed. The problem is that the speed achievable at the Java level is still many times slower than what the hardware can deliver.
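To make the contrast above concrete, here is a minimal sketch (my own code, not from Neanderthal or any of the libraries discussed): the same Levenshtein DP written once with persistent vectors and once with mutable `long-array`s, which compiles down to tight primitive loops much closer to hand-written Java.

```clojure
(defn levenshtein-idiomatic
  "Levenshtein distance using persistent vectors (idiomatic, allocates
   a fresh row per step)."
  [a b]
  (let [row0 (vec (range (inc (count b))))]
    (last
     (reduce
      (fn [prev i]
        (reduce
         (fn [curr j]
           (conj curr
                 (min (inc (curr (dec j)))            ; insertion
                      (inc (prev j))                  ; deletion
                      (+ (prev (dec j))               ; substitution
                         (if (= (nth a (dec i)) (nth b (dec j))) 0 1)))))
         [i]
         (range 1 (inc (count b)))))
      row0
      (range 1 (inc (count a)))))))

(defn levenshtein-mutable
  "Same algorithm with mutable long arrays and reused buffers."
  [^String a ^String b]
  (let [m (.length a)
        n (.length b)
        prev (long-array (inc n))
        curr (long-array (inc n))]
    (dotimes [j (inc n)] (aset prev j j))
    (dotimes [i* m]
      (let [i (inc i*)]
        (aset curr 0 i)
        (dotimes [j* n]
          (let [j (inc j*)
                cost (if (= (.charAt a (dec i)) (.charAt b (dec j))) 0 1)]
            (aset curr j (min (inc (aget curr (dec j)))
                              (inc (aget prev j))
                              (+ (aget prev (dec j)) cost)))))
        (System/arraycopy curr 0 prev 0 (inc n))))
    (aget prev n)))
```

Both return the same distances; the mutable version simply trades persistence for the in-place updates blueberry is referring to.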

2018-04-12T08:20:12.000160Z

Better design around MKL.

2018-04-12T08:20:26.000044Z

You're welcome.

2018-04-12T08:23:49.000186Z

Now that I think about what I said... The design is not around MKL; it is an overall better design for this particular domain. It accommodates MKL, cuBLAS, and a few other libraries quite efficiently.

2018-04-12T08:25:05.000168Z

and could even support pure Java implementations transparently... if there were any.

matan 2018-04-12T17:03:31.000314Z

@blueberry Would you happen to know of any machine learning clojure libraries using Neanderthal?

matan 2018-04-14T13:35:39.000059Z

nice hominid species naming style 🙂 hope they don't end up the same....

matan 2018-04-14T13:36:39.000038Z

@whilo is core.matrix itself a very good API? I've never used it yet

matan 2018-04-14T13:37:13.000077Z

@jsa-aerial thanks! does bayadera have any explicit docs to it?

matan 2018-04-14T13:39:14.000081Z

@jsa-aerial
> A Clojure Library for Bayesian Data Analysis and Machine Learning on the GPU.
The tagline left me a bit unsure, e.g. does it implement Bayesian machine learning à la Bayesian neural networks? I should read the source

jsa-aerial 2018-04-14T17:00:55.000040Z

@matan bayadera is one of Dragan's projects and so he would be able to give best advice. I don't think it is 'officially' released yet. I have not actually used it, but I don't believe it uses any NN stuff.

matan 2018-04-14T18:22:32.000096Z

@jsa-aerial thanks!

whilo 2018-04-14T21:27:43.000098Z

@matan i think core.matrix is fine if you care about a numpy-like high-level API that is polymorphic w.r.t. implementations. unfortunately it was not as focused on performance as neanderthal; i hope denisovan provides a reasonable tradeoff for code implemented in core.matrix. still need to check https://github.com/whilo/boltzmann against it

2018-04-14T22:26:16.000050Z

@matan Bayadera does not use NNs, nor does it make sense to use them in the general case. Bayadera is closest to an implementation of automated probabilistic decision making, where you combine knowledge and data to update the knowledge and evaluate it taking utility (or cost) into account. In theory, it could be used to make automatic decisions on parameters and/or structure of neural networks, but it is not practical to do this with real-world million-node deep networks. Nor is it necessary, IMHO. When people use the term "bayesian" together with deep learning, they usually mean "I replaced some numbers with (normal) distributions of those numbers". That is not Bayesian in the usual sense of what "Bayesian" means: updating your priors with data to find out what your updated knowledge is.
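For readers unfamiliar with the "updating your priors with data" phrase, here is a tiny self-contained illustration using the conjugate Beta-Binomial model. This is my own sketch of the idea, not Bayadera's API (Bayadera runs MCMC on the GPU and does far more than conjugate updates):

```clojure
;; Conjugate Bayesian update for a success probability:
;; Beta(alpha, beta) prior + binomial data -> Beta posterior.
(defn update-beta
  "Given a Beta prior over a success probability and observed
   successes/failures, return the posterior Beta parameters."
  [{:keys [alpha beta]} successes failures]
  {:alpha (+ alpha successes)
   :beta  (+ beta failures)})

(defn beta-mean
  "Posterior mean of a Beta distribution."
  [{:keys [alpha beta]}]
  (/ alpha (+ alpha beta)))

;; Start from a uniform prior Beta(1,1); observe 7 successes, 3 failures.
(def posterior (update-beta {:alpha 1 :beta 1} 7 3))
;; (beta-mean posterior) => 2/3
```

The posterior is the "updated knowledge"; a decision step would then weigh it against a utility or cost function, which is the part Bayadera automates.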

2018-04-14T22:27:39.000089Z

BTW, I'm in the phase of polishing Bayadera for the first release to Clojars, so expect docs and tutorials this spring/summer. I have already updated all engines to work on both AMD (OpenCL) and Nvidia (CUDA) GPUs.

2018-04-14T22:32:58.000113Z

@matan You mean end like Homo sapiens did to them? BTW, there's something like a few percent of Neanderthal genes in the European population. Being used in a few percent of every European Clojure project might actually be a nice thing for Neanderthal (the library) 🙂

2018-04-14T22:34:39.000036Z

... and in a large part of North and South American projects, now that I think of it 😛

matan 2018-04-12T17:04:31.000781Z

BTW, it's odd that the Java libraries wrapping MKL have been so sloppy. Maybe they weren't important enough for anyone, but it's still kind of odd

matan 2018-04-12T17:04:55.000680Z

Either way I'll use Neanderthal for my current project

matan 2018-04-12T17:06:07.000146Z

Managed to set up MKL despite Intel's terrible docs websites 🤪

2018-04-12T19:29:28.000601Z

@matan they are not sloppy at all; they just miss some performance opportunities here and there. It's not an easy problem. There are thousands of ways to lose performance, and those add up.

whilo 2018-04-12T21:03:15.000647Z

I have also provided https://github.com/cailuno/denisovan, so all core.matrix code can use Neanderthal now. Still missing the GPU backends though.