What is UN-complicated about Intel MKL?
It requires an email address, signing up, and creating an account/password just to install.
WTF 🙂
Why does 0.9.0 depend on MKL? 🙂
Is there a way to view the 0.8.0 docs?
http://neanderthal.uncomplicate.org/codox/index.html only shows 0.9.0
okay 0.9.0 installed
Does Neanderthal 0.9 have any ops for strides? I.e., I have an existing matrix and I want to take every c-th row from rows a to b.
Suppose I need to do 2D matrix convolutions.
I need to write a custom kernel for that, right?
@qqq There are various strides, but not that exact use case. Create an issue on GitHub and I will implement it in the next version. For now, you could create such a (sub)matrix by calling the RealGEMatrix constructor yourself and setting the appropriate m, n, fd, sd, and ld arguments.
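Until then, a copying workaround that sticks to the public API looks something like this (a sketch only; take-strided-rows is a made-up name, and unlike the constructor trick it allocates a fresh matrix instead of a zero-copy view):
```clojure
(require '[uncomplicate.neanderthal.core :refer [ncols row copy!]]
         '[uncomplicate.neanderthal.native :refer [dge]])

(defn take-strided-rows
  "Copies rows a, a+c, a+2c, ... (stopping before b) of the GE matrix x
  into a freshly allocated matrix."
  [x a b c]
  (let [idxs (range a b c)
        res  (dge (count idxs) (ncols x))]
    (doseq [[i src] (map-indexed vector idxs)]
      (copy! (row x src) (row res i)))
    res))

;; e.g. every 2nd row from row 1 up to (not including) row 7:
;; (take-strided-rows m 1 7 2)
```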
BTW, you do not need to register and run the installer to get MKL. You can acquire the needed shared libraries from any other source, or from your friends, and put them on the LD_LIBRARY_PATH.
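For example, you can sanity-check from the REPL that the JVM can see them (a sketch; mkl_rt is MKL's single runtime dispatch library, the exact set of .so files Neanderthal needs may differ, and the directory has to be on LD_LIBRARY_PATH before the JVM starts):
```clojure
;; What did the JVM inherit from the shell?
(System/getenv "LD_LIBRARY_PATH")
;;=> "/opt/intel/mkl/lib/intel64:..." (assumed path; varies per machine)

;; Throws UnsatisfiedLinkError if libmkl_rt.so is not on the library path:
(System/loadLibrary "mkl_rt")
```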
I'm sorry, where is RealGEMatrix documented? I'm looking at...
Oh, it's probably rgem?
okay, I think you are referring to http://neanderthal.uncomplicate.org/codox/uncomplicate.neanderthal.core.html#var-ge
So I see m and n; where do I set the strides?
@qqq It is not documented because it is internal and you are not supposed to use it directly. I suggested it as a temporary remedy until I add that functionality.
The matrix stride is determined by the ld (leading dimension) parameter.
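Concretely, in classical column-major BLAS storage, entry (i, j) of a GE matrix lives at buffer offset i + j * ld, so ld is the stride between consecutive columns. A tiny illustration (ge-offset is a throwaway helper, not part of Neanderthal):
```clojure
(defn ge-offset
  "Buffer offset of entry (i, j) in a column-major GE matrix whose
  leading dimension is ld. Taking ld >= m leaves padding between
  columns, which is how a submatrix can share its parent's buffer."
  [ld i j]
  (+ i (* j ld)))

(ge-offset 4 1 2) ;;=> 9: row 1 of column 2 when columns start 4 entries apart
```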
Two things: (1) does this undocumented feature work for OpenCL GPU matrices too? And (2) if I'm doing neural-net stuff, should I just bite the bullet and learn how to write custom OpenCL kernels by hand?
Basically, I need to rewrite TensorFlow in Clojure 🙂
1) you'd have to do the same thing using CLGEMatrix
2) yes (a plain-Clojure sketch of what such a kernel computes is below)
3) you might also find http://github.com/uncomplicate/clojurecuda useful
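For reference, the computation such a kernel implements is just a double loop over output positions; here it is as a slow reference sketch in plain Clojure (conv2d-valid is a made-up name, and like most neural-net libraries it skips the kernel flip, i.e. computes cross-correlation):
```clojure
(defn conv2d-valid
  "Naive 'valid' 2D convolution of x with kernel k, both given as vectors
  of row vectors. Each output entry is the sum of the elementwise product
  of k with the overlapping patch of x."
  [x k]
  (let [xm (count x) xn (count (first x))
        km (count k) kn (count (first k))]
    (vec (for [i (range (inc (- xm km)))]
           (vec (for [j (range (inc (- xn kn)))]
                  (reduce + (for [a (range km)
                                  b (range kn)]
                              (* (get-in k [a b])
                                 (get-in x [(+ i a) (+ j b)]))))))))))

(conv2d-valid [[1 2 3] [4 5 6] [7 8 9]]
              [[1 0] [0 -1]])
;;=> [[-4 -4] [-4 -4]]
```
A hand-written OpenCL or CUDA kernel would typically assign each (i, j) output entry to one work-item/thread instead of looping.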
Also, I'm looking at https://github.com/uncomplicate/neanderthal/blob/3aeeaa9554fcaefd12799a4d2090067abe7d98dc/src/clojure/uncomplicate/neanderthal/core.clj#L173
Where is the :ld option?
It's not there; it's an implementation detail from the /internal namespaces.
@blueberry: ha, I was debating OpenCL vs. CUDA myself; why are you suggesting CUDA?
It's an option.