uncomplicate

qqq 2017-12-25T02:09:33.000081Z

@blueberry: can you point me at the right way to do 2D convolutions in JCuda? (I expect there to be a builtin, but I am not finding one.)
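For reference: JCuda mirrors the CUDA toolkit APIs rather than offering high-level array operations, so there is no built-in convolution in JCuda itself; cuDNN's convolution routines are reachable through the JCudnn binding from the same jcuda.org family, and the other common route is to write a small kernel and launch it yourself. Below is a minimal sketch of such a kernel, kept as a Java string so it can be compiled at runtime with NVRTC and launched through the driver API (see the host-side sketch further down). The class and constant names are hypothetical, and the kernel is a naive single-channel "valid" convolution (strictly a cross-correlation, as in the DL frameworks), not production code.

    // Naive single-channel "valid" 2D convolution kernel, embedded as a Java
    // string so JNvrtc can compile it to PTX at runtime. Names are illustrative.
    public class Conv2dKernelSource {
        public static final String CONV2D_SRC =
            "extern \"C\" __global__ void conv2d(const float *in, const float *k,\n" +
            "                                    float *out, int inW, int inH,\n" +
            "                                    int kW, int kH) {\n" +
            "    int outW = inW - kW + 1;\n" +
            "    int outH = inH - kH + 1;\n" +
            "    int x = blockIdx.x * blockDim.x + threadIdx.x;\n" +
            "    int y = blockIdx.y * blockDim.y + threadIdx.y;\n" +
            "    if (x >= outW || y >= outH) return;\n" +
            "    float acc = 0.0f;\n" +
            "    for (int j = 0; j < kH; j++) {\n" +
            "        for (int i = 0; i < kW; i++) {\n" +
            "            acc += in[(y + j) * inW + (x + i)] * k[j * kW + i];\n" +
            "        }\n" +
            "    }\n" +
            "    out[y * outW + x] = acc;\n" +
            "}\n";
    }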

qqq 2017-12-25T02:40:22.000049Z

@blueberry: also, do you know of any efforts to bind the *.cu files from TensorFlow, PyTorch, or CuPy with JCuda? At the end of the day, there's no reason to rewrite the .cu files; I just want a way to use them from Clojure, and the .cu/JCuda layer seems like a great place to intercept.

blueberry 2017-12-25T11:12:50.000013Z

I do not understand what you are asking.

qqq 2017-12-25T12:02:53.000009Z

@blueberry: libraries like TensorFlow / PyTorch / CuPy provide, at some level, an accelerated nd-array plus various ops on CUDA

qqq 2017-12-25T12:03:01.000010Z

then there's some python wrapper over it

qqq 2017-12-25T12:03:26.000017Z

this "accelerated cuda nd-array + ops" seems like something that can be extracted, and then bound to via jcuda, and thenuseable via clojure

blueberry 2017-12-25T12:45:54.000006Z

Those .cu kernels are a small percentage of the overall code, and they do not work in isolation from the host code. You'd have to take the .cu files (respecting the license, of course) and write matching host code in Java and/or Clojure.
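To make "matching host code" concrete, this is roughly what the Java side of compiling and launching a standalone .cu kernel through JCuda's NVRTC and driver bindings looks like. It is only a sketch under the assumption of a self-contained kernel like the one above (the Conv2dHost class and conv2d method are hypothetical); the framework kernels in question additionally depend on framework-specific host structures, which is exactly the point being made here.

    import jcuda.Pointer;
    import jcuda.Sizeof;
    import jcuda.driver.*;
    import jcuda.nvrtc.JNvrtc;
    import jcuda.nvrtc.nvrtcProgram;
    import static jcuda.driver.JCudaDriver.*;

    public class Conv2dHost {
        public static float[] conv2d(String src, float[] in, int inW, int inH,
                                     float[] kernel, int kW, int kH) {
            // Compile the .cu source to PTX at runtime with NVRTC.
            nvrtcProgram prog = new nvrtcProgram();
            JNvrtc.nvrtcCreateProgram(prog, src, "conv2d.cu", 0, null, null);
            JNvrtc.nvrtcCompileProgram(prog, 0, null);
            String[] ptx = new String[1];
            JNvrtc.nvrtcGetPTX(prog, ptx);

            // Minimal driver-API setup: context, module, function.
            cuInit(0);
            CUdevice dev = new CUdevice();
            cuDeviceGet(dev, 0);
            CUcontext ctx = new CUcontext();
            cuCtxCreate(ctx, 0, dev);
            CUmodule module = new CUmodule();
            cuModuleLoadData(module, ptx[0]);
            CUfunction fn = new CUfunction();
            cuModuleGetFunction(fn, module, "conv2d");

            // Copy inputs to the device and allocate the output buffer.
            int outW = inW - kW + 1, outH = inH - kH + 1;
            CUdeviceptr dIn = new CUdeviceptr();
            cuMemAlloc(dIn, (long) in.length * Sizeof.FLOAT);
            cuMemcpyHtoD(dIn, Pointer.to(in), (long) in.length * Sizeof.FLOAT);
            CUdeviceptr dK = new CUdeviceptr();
            cuMemAlloc(dK, (long) kernel.length * Sizeof.FLOAT);
            cuMemcpyHtoD(dK, Pointer.to(kernel), (long) kernel.length * Sizeof.FLOAT);
            CUdeviceptr dOut = new CUdeviceptr();
            cuMemAlloc(dOut, (long) outW * outH * Sizeof.FLOAT);

            // Launch: one thread per output element, 16x16 thread blocks.
            Pointer params = Pointer.to(
                Pointer.to(dIn), Pointer.to(dK), Pointer.to(dOut),
                Pointer.to(new int[]{inW}), Pointer.to(new int[]{inH}),
                Pointer.to(new int[]{kW}), Pointer.to(new int[]{kH}));
            cuLaunchKernel(fn, (outW + 15) / 16, (outH + 15) / 16, 1,
                           16, 16, 1, 0, null, params, null);
            cuCtxSynchronize();

            // Copy the result back to the host and release device memory.
            float[] out = new float[outW * outH];
            cuMemcpyDtoH(Pointer.to(out), dOut, (long) out.length * Sizeof.FLOAT);
            cuMemFree(dIn); cuMemFree(dK); cuMemFree(dOut);
            return out;
        }
    }

From Clojure, the same calls go through ordinary Java interop, e.g. (Conv2dHost/conv2d Conv2dKernelSource/CONV2D_SRC input-floats w h kernel-floats kw kh), assuming the hypothetical classes above are on the classpath.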