@blueberry I would love to have proper support for convolutions (CNNs) when you work on tensor support. Nvidia has cuDNN as a dedicated offering for deep-learning people, but it would be really cool to offer something that also works on AMD cards and with OpenCL (open source).
Let me know if I can help.
I plan to do this, but probably only as a part of paid consultancy service. OTOH, who knows...
I'd like to try and port a Python/PyMC3 model over to Bayadera to try that library out. Where's the best source for Bayadera documentation?
https://github.com/uncomplicate/bayadera/tree/master/test/clojure/uncomplicate/bayadera/examples/ in conjunction with the book Doing Bayesian Data Analysis 2nd ed.
Hmm, I know a lot of people in optimization, and very few of them deal with kernels for parallel processors or are willing to go that far. I agree that "low-level" is a relative and often misused perception, but when I explore mathematics I definitely don't want to write CL code at the same time. I am willing to do so to improve my slow code, though, and to occasionally build better primitives. I think you are right that neanderthal should not provide too much false convenience, so maybe I will write these "higher-level" functions in a separate library then.
signum is fairly basic though; I'm not sure whether it is a good idea not to have it.
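For what it's worth, something like signum is nearly a one-liner on top of fluokitten's fmap!, which neanderthal structures support. A minimal sketch (the signum! helper name is mine, not part of neanderthal):
```clojure
;; Minimal sketch: an in-place signum built on fluokitten's fmap!,
;; which works on neanderthal vectors and matrices.
(require '[uncomplicate.neanderthal.native :refer [dv]]
         '[uncomplicate.fluokitten.core :refer [fmap!]])

(defn signum! [x]
  ;; Math/signum is the JDK's primitive double signum; fmap! mutates x.
  (fmap! (fn ^double [^double e] (Math/signum e)) x))

(signum! (dv -3 0 7)) ;; => entries become -1.0, 0.0, 1.0
```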
I have just trained the first toy NN with autograd and neanderthal (through the core.matrix wrapper for now).
:)
Thinking about it, I should definitely try ClojureCL for the convolutional stuff.
@blueberry Have you thought about exporting a Java API? I think your audience would be much larger on the JVM in general, than just Clojure. It would also probably bring quite a few people to Clojure if they see that it can be used that well for high-performance code.
I did, but I only have two hands, the day only lasts so many hours, and I have my "real" job to do too. So, there are only two things that make me add functionality: 1) I need it or like it 2) Someone pays me enough money to do it (and it's still kinda interesting)
Java API falls into category 2 :)
I'd prefer to use the time to do something that is missing to uncomplicate than to work on stuff for imaginary users
Especially when it is unlikely that those users are going to become my customers anytime soon
OTOH, I'd love it if someone else (you?) created that Java API! :)
Hehe, I know the hours problem too well. Actually I should be working on my master's thesis about Bayesian GANs and probabilistic programming. I am doing this at the moment to prepare a proper optimization toolbox for my needs in Clojure, so I can hopefully get away from Python. The Java API was not interesting for me, since I do almost everything I can in Clojure, but I thought there might be customers in Java/Scala/Kotlin land, since the general library landscape on the JVM is rather high-level and not really nice for scientific computing. uncomplicate, on the other hand, provides something unique: really nice low-level primitives for doing the things on the JVM that people normally use other environments for. I don't know about the market size, though, or whether it would pay off. It depends on how large the Java API surface would need to be, I guess.
There are many of them, but making them pay for software is a tough task...
I see. You could make the Java API proprietary.
Licence fees are still by far the biggest income source, I think. I am a proponent of open-source, but it sucks that there is no income model.
Sure, but getting developers to use such libraries is difficult if you are not Microsoft or Oracle.
It's much better IMO to create something and sell it to non-programmers.
Or to sell service/consulting.
That is also true!
Programming tools are bad business and have been for the last two/three decades.
Even Oracle and Microsoft have difficulties in selling stuff to programmers.
You see that I give away all kinds of cool uncomplicate libraries for free, and it is still hard to convince people to try them.
Linear algebra has been free for 100 years, it is forced on students in university, and they still refuse to make the effort to learn it :)
Hehe ^^
Right. Your audience is not very big, that is true.
Or, for that matter, our audience. I am not working at the same technical level as you, but I have similar problems.
I think machine-learning is sellable right now, though.
It is insane. I will be a teaching assistant starting next week, and 120 students have signed up for the lecture, with a waiting list of another 20. It started with 60 free slots and was already full 6 weeks before the lecture starts.
Yes, but not the tools. The final product/service. Currently it's more hyped fog, but there is and will be space for working solutions.
Yes.
But data scientists are often not really good programmers. There are also some solutions in this space, like Microsoft's R integration in SQL Server, where you can sell stuff to the people doing the programming.
You can try to sell :)
Not as a teaching assistant. Maybe I can let Clojure slip in a bit, but the prof is a C++/Python guy.
Everyone and their grandmother is interested in AI now due to the hype. The problem is that the majority underestimate the task and overestimate their ability or willingness to put in the work.
Damn prof++ :)
Hehe ^^
Yes, from what I've heard, that is generally the case for most trendy things.
On the other hand I don't think AI will be short-lived this time.
I'm kinda grateful for the DL hype though, because it distracts the competition from other ML/AI approaches (like the Bayesian stuff) :)
That gives us some time to brush up our ml-foo.
Agreed. :)
Although Bayesian methods are hard to sell even to ML people. Uncertainty estimates are not always helpful, and their value is not that obvious.
On the other hand, deep learning can be nicely integrated (that is what my friend and colleague and I are working on most of the time).
But we are the lonely Bayesians here at the research group.
About the audience: I had 30,000 users reading 50,000 pages this year (not including the filtered spam traffic)
Not that bad
I have found some OpenCL literature on convolution; not sure how appropriate it is: https://www.evl.uic.edu/kreda/gpu/image-convolution/
Of course, it's mainly due to HN, but still
Interesting.
And that is only the blog, excluding the documentation of the projects, which should be another 10,000-15,000
I've just looked. I underestimated the projects' traffic. Almost 25,000 users, 40,000 pageviews
Hmm, I think Clojure is in a very good spot for large-scale optimization in combination with neanderthal, autograd, and different sampling methods in Anglican or Bayadera. But first I need to be able to incorporate standard neural network architectures more easily. Cortex is not cutting it for me, and core.matrix without neanderthal is not good enough to build fast optimizers for Bayesian inference. You are right about that. I think we can integrate it anyway, though.
Did you do some benchmarks?
Only to verify that the operations I need from neanderthal are not slowed down by the core.matrix protocols (matrix multiplication, matrix initialization, and in-place scaled additions).
I have to do more to verify that it is optimal, but writing a fast core.matrix backend for everything is not my goal atm. Having autograd is.
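Concretely, the check looks roughly like this (a criterium sketch; the 512x512 sizes are arbitrary):
```clojure
;; Sketch of the sanity check, using criterium. Sizes are arbitrary.
(require '[uncomplicate.neanderthal.core :refer [mm!]]
         '[uncomplicate.neanderthal.native :refer [dge]]
         '[clojure.core.matrix :as m]
         '[criterium.core :refer [quick-bench]])

;; 1) raw neanderthal: C = 1.0*A*B + 0.0*C (BLAS gemm)
(let [a (dge 512 512 (repeatedly (* 512 512) rand))
      b (dge 512 512 (repeatedly (* 512 512) rand))
      c (dge 512 512)]
  (quick-bench (mm! 1.0 a b 0.0 c)))

;; 2) the same multiplication through the core.matrix protocols; with
;; the neanderthal-backed implementation selected, this should come
;; out close to the raw mm! numbers above.
(let [a (m/matrix (repeatedly 512 #(vec (repeatedly 512 rand))))
      b (m/matrix (repeatedly 512 #(vec (repeatedly 512 rand))))]
  (quick-bench (m/mmul a b)))
```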
Meaning: did you measure other core.matrix backends? Maybe they are not slow/slower for the kind of task that you need.
Hopefully they are not too bad for general deep learning stuff. That would make the autograd more generally useful, e.g. with nd4j + dl4j and enterprisey architectures like that, but I really like to have access to the lower levels as well, and only the uncomplicate libraries provide this in Clojure land. Most other stuff I have seen in the past for OpenCL programming etc. was quickly abandoned and not really exhaustive.
I have to think about the convolutions for example, I don't think that building them on top of core.matrix primitives will cut it.
nd4j is rather slow for non-large matrices
nd4j with mkl backend...
hmm, ok. This was my best bet before you built neanderthal.
I am not familiar with all the other Java matrix libraries, only clatrix and jblas. But they were too slow 3 years ago compared to Python+theano.
Which distracted me from doing optimization in Clojure.
Does neanderthal work ok for you so far on that front?
I have not done comparisons to pytorch yet.
I guess this is not too easy, since pytorch does not use mkl, I think.
I have to compile it, I think.
But in general I really like it.
The only thing that would be nice on the JVM would be to have some control over heap layout, I guess. This is what Julia provides and what low-level people miss in Java. You cannot really ensure that you avoid cache misses most of the time. As long as I stay inside neanderthal, I should be fine though.
But I have never actually benchmarked the JVM in that regard, I guess it is smart enough to avoid cache misses in most cases. It is just not controllable.
Neanderthal takes care of the cache for you for the functions it provides. If you want to control the cache for your custom stuff, use ClojureCL.
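To make "control the cache" concrete: in OpenCL you manage the fast on-chip memory explicitly, as __local buffers. A hypothetical kernel sketch (each work-group stages a tile once, then reads it cheaply):
```clojure
;; Hypothetical OpenCL kernel source, as a ClojureCL-style Clojure
;; string: the __local tile is the explicitly managed "cache".
(def tile-sum-src "
  __kernel void tile_sum(__global const float* in,
                         __global float* out,
                         __local float* tile) {
      int lid = get_local_id(0);
      int gid = get_global_id(0);
      tile[lid] = in[gid];              // one global read per work-item
      barrier(CLK_LOCAL_MEM_FENCE);     // tile is now on-chip
      float acc = 0.0f;
      for (int i = 0; i < get_local_size(0); i++)
          acc += tile[i];               // cheap local reads
      out[gid] = acc;
  }")
```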
Ok.
But I am not sure how you can control that with Julia either. CPU cache should be a pretty automatic thing. You "control" it by re-using the locations that you suppose should be in the cache?
I mean that objects are allocated in an order that you know is optimal. I think the CLR allows this for value types. The JVM can basically relocate objects during GC.
Oh, that...
In general that things are located together in memory.
Neanderthal data structures are optimized in that way as much as possible.
GC won't move them.
Yes, it is just something that is not optimal in Java land for optimization people. Julia deals with this.
Other than that I am not sure why yet another runtime is necessary, but there are smart optimization people in Julia land.
That's fine. However, that's why I created ClojureCL. It does not work only on the GPU. You can program CPUs with it.
I think I will go home now and be back online later.
Yes, I really like that!
Do you have some ideas of how to do image convolutions with ClojureCL?
or convolutions in general.
Convolutional neural networks work well in many settings, not just images, btw.
Will be back later.
I didn't look into that...
Maybe it would be easy, maybe not. But I don't think you can reach cuDNN performance by doing it part-time (or at all :)
Probably not. Yes.
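For the record, a deliberately naive 2D convolution in ClojureCL might look roughly like this. Everything here (sizes, the Laplacian-ish filter, names) is made up for illustration, and the kernel is untuned — no local-memory tiling, no vectorization, so nothing like cuDNN:
```clojure
;; Naive 2D convolution sketch with ClojureCL. Illustrative only.
(require '[uncomplicate.clojurecl.core :refer :all]
         '[uncomplicate.commons.core :refer [with-release]])

(def conv-src "
  __kernel void convolve(__global const float* src,
                         __global const float* filt,
                         __global float* dst,
                         const int w, const int h, const int fsize) {
      int x = get_global_id(0);
      int y = get_global_id(1);
      int r = fsize / 2;
      float acc = 0.0f;
      for (int j = -r; j <= r; j++)
          for (int i = -r; i <= r; i++) {
              int xx = clamp(x + i, 0, w - 1);  // clamp at the borders
              int yy = clamp(y + j, 0, h - 1);
              acc += src[yy * w + xx] * filt[(j + r) * fsize + (i + r)];
          }
      dst[y * w + x] = acc;
  }")

(let [w 256 h 256 fsize 3
      src (float-array (* w h))
      filt (float-array [0 -1 0, -1 4 -1, 0 -1 0])
      dst (float-array (* w h))]
  (with-release [dev (first (devices (first (platforms))))
                 ctx (context [dev])
                 queue (command-queue ctx dev)
                 cl-src (cl-buffer ctx (* w h Float/BYTES) :read-only)
                 cl-filt (cl-buffer ctx (* fsize fsize Float/BYTES) :read-only)
                 cl-dst (cl-buffer ctx (* w h Float/BYTES) :write-only)
                 prog (build-program! (program-with-source ctx [conv-src]))
                 conv (kernel prog "convolve")]
    (enq-write! queue cl-src src)
    (enq-write! queue cl-filt filt)
    ;; scalar kernel args are passed as primitive arrays
    (set-args! conv cl-src cl-filt cl-dst
               (int-array [w]) (int-array [h]) (int-array [fsize]))
    (enq-nd! queue conv (work-size [w h]))
    (enq-read! queue cl-dst dst)))
```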