uncomplicate

qqq 2017-04-19T10:15:13.118050Z

what is UN-complicated about Intel MKL?

qqq 2017-04-19T10:15:24.119843Z

it requires email addr + sign up + creating an account/password just to install

qqq 2017-04-19T10:15:26.120176Z

WTF 🙂

qqq 2017-04-19T10:15:33.121229Z

why does 0.9.0 depend on MKL 🙂

qqq 2017-04-19T10:24:00.198895Z

Is there a way to view the 0.8.0 docs?

qqq 2017-04-19T10:24:09.200047Z

http://neanderthal.uncomplicate.org/codox/index.html only shows 0.9.0

qqq 2017-04-19T12:20:19.236173Z

okay 0.9.0 installed

qqq 2017-04-19T12:33:54.381532Z

does Neanderthal 0.9 have any ops for strides? i.e., I have an existing matrix and I want to take rows a to b, every c-th row

qqq 2017-04-19T13:16:42.915559Z

suppose I need to do 2D matrix convolutions,

qqq 2017-04-19T13:17:02.920337Z

kernels right?

qqq 2017-04-19T13:17:12.922711Z

I need to write a custom kernel to do 2D matrix convolutions, right?

2017-04-19T14:27:11.057711Z

@qqq there are various stride options, but not that exact use case. Create an issue on GitHub and I will implement it in the next version. For now, you could create such a (sub)matrix by calling the RealGEMatrix constructor yourself and setting the appropriate m, n, fd, sd, and ld arguments.
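A rough sketch of a workaround that stays on the public API instead of the internal RealGEMatrix constructor: copy the selected rows into a fresh matrix. The function name take-strided-rows is made up for this example; it assumes the documented core functions row, ncols, and copy! plus the native constructor dge, and it copies data rather than producing a zero-copy view like the constructor trick above.

```clojure
(ns strided-rows-sketch
  (:require [uncomplicate.neanderthal
             [core :refer [copy! ncols row]]
             [native :refer [dge]]]))

;; Copy rows a (inclusive) to b (exclusive) of x, taking every c-th row.
;; This allocates a new matrix; it is not a zero-copy view.
(defn take-strided-rows [x a b c]
  (let [idxs (vec (range a b c))
        res  (dge (count idxs) (ncols x))]
    (doseq [[k i] (map-indexed vector idxs)]
      (copy! (row x i) (row res k)))
    res))
```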

2017-04-19T14:29:18.096035Z

BTW, you do not need to register with Intel and run its installer to get MKL. You can acquire the needed shared libraries from any other source (or from friends) and put them on the LD_LIBRARY_PATH.

qqq 2017-04-19T14:37:02.236712Z

I'm sorry, where is RealGEMatrix documented? I'm looking at

qqq 2017-04-19T14:37:08.238331Z

oh, it's probably rgem?

qqq 2017-04-19T14:39:20.278172Z

okay, I think you are referring to http://neanderthal.uncomplicate.org/codox/uncomplicate.neanderthal.core.html#var-ge

qqq 2017-04-19T14:40:37.301067Z

so I see m and n; where do I set the strides?

2017-04-19T14:40:39.301614Z

@qqq it is not documented since it is internal and not something you should use yourself. I suggested it as a temporary remedy until I add that functionality.

2017-04-19T14:41:03.308729Z

the matrix stride is determined by the ld (leading dimension) parameter
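For orientation only (standard BLAS GE layout, not a Neanderthal call): in a row-major GE matrix, entry (i, j) sits at buffer offset i*ld + j, so a view whose ld is c times the original row stride skips c-1 physical rows between consecutive logical rows. For column-major storage it is the columns that ld spaces apart. A tiny sketch of that arithmetic, with made-up numbers:

```clojure
;; Index arithmetic only -- illustrative, not Neanderthal API.
;; Row-major GE: entry (i, j) lives at offset i*ld + j.
(defn entry-offset ^long [^long ld ^long i ^long j]
  (+ (* i ld) j))

;; Original matrix: 10 columns, so its row stride (ld) is 10.
;; A view with ld = 30 (= 3 * 10) sees every 3rd physical row:
(entry-offset 30 2 4) ;=> 64, i.e. physical row 6, column 4 of the original
```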

qqq 2017-04-19T14:41:35.317987Z

two things: (1) does this undocumented feature work with OpenCL GPU matrices too? and (2) if I'm doing neural-net stuff, should I just bite the bullet and learn how to write custom OpenCL kernels by hand?

qqq 2017-04-19T14:42:05.327085Z

basically, I need to rewrite TensorFlow in Clojure 🙂

2017-04-19T14:42:12.329218Z

1) you'd have to do the same thing using CLGEMatrix

2017-04-19T14:42:19.331321Z

2) yes

2017-04-19T14:43:03.344819Z

3) you might also find http://github.com/uncomplicate/clojurecuda useful
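To give a feel for what "writing a custom kernel by hand" means here, below is a sketch of a naive OpenCL C kernel for a "valid" 2D convolution, kept as a Clojure string so it could be compiled and enqueued with ClojureCL (or ported to CUDA via ClojureCUDA). The kernel name, argument layout, and the surrounding host plumbing are assumptions for illustration, not anything Neanderthal ships.

```clojure
;; Naive "valid" 2D convolution kernel, one work-item per output element.
;; Purely illustrative: no padding, no local-memory tiling, no batching.
(def conv2d-source "
__kernel void conv2d (const int in_w, const int k_w, const int k_h,
                      __global const float* in,
                      __global const float* w,
                      __global float* out) {
    const int x = get_global_id(0);            // output column
    const int y = get_global_id(1);            // output row
    const int out_w = in_w - k_w + 1;
    float acc = 0.0f;
    for (int ky = 0; ky < k_h; ky++) {
        for (int kx = 0; kx < k_w; kx++) {
            acc += in[(y + ky) * in_w + (x + kx)] * w[ky * k_w + kx];
        }
    }
    out[y * out_w + x] = acc;
}")
```

The global work size would be (output-width, output-height); compiling the source and enqueueing the kernel is a handful of ClojureCL host calls.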

qqq 2017-04-19T14:43:35.354491Z

where is the :ld option?

2017-04-19T14:43:59.361559Z

it's not there. It's an implementation detail from /internal

qqq 2017-04-19T14:44:00.361722Z

@blueberry: ha, I was debating OpenCL vs. CUDA myself -- why are you suggesting CUDA?

2017-04-19T14:44:30.370987Z

It's an option.