The math operations in uncomplicate.neanderthal.math are supposed to work with fmap, I think. Wouldn't it be beneficial to do them on the device, e.g. the GPU?
Basically a neural network graph will be matrix-matrix multiplications + point-wise non-linearities. If I do them with fmap, I will implicitly copy the intermediate results to the CPU on each layer, right?
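For concreteness, the fmap route I have in mind looks roughly like this (a sketch using fluokitten's fmap on a native vector; tanh as the non-linearity is just an example):

```clojure
(require '[uncomplicate.neanderthal.native :refer [dv]]
         '[uncomplicate.fluokitten.core :refer [fmap]])

;; A pointwise non-linearity applied element by element through a
;; Clojure function. This runs on the CPU, so GPU data would first
;; have to be transferred back to the host.
(def v (dv 1 -2 3))
(fmap (fn ^double [^double x] (Math/tanh x)) v)
```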
That's what's coming in 0.17.0. There will be vectorized equivalents for all math functions on CPU & GPU, for all the different types of vectors and matrices.
fmap works only with Clojure functions, of course, since GPU kernels cannot meaningfully implement IFn
the namespace that you're interested in is vect-math
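Usage mirrors math, just vectorized over whole structures; a minimal sketch, assuming the 0.17.0 snapshot API:

```clojure
(require '[uncomplicate.neanderthal.native :refer [dv]]
         '[uncomplicate.neanderthal.vect-math :as vm])

;; One vectorized call per structure instead of one Clojure fn call
;; per element.
(def v (dv 0.1 0.2 0.3))
(vm/exp v)  ;; elementwise e^x
(vm/tanh v) ;; elementwise tanh
```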
got it.
Now, if you need some mathematical function that is not in math (and thus not in vect-math), you'd have to implement the kernel yourself (relatively easy with clojurecuda/clojurecl).
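Something along these lines with clojurecuda (an untested sketch following its getting-started flow; the sign kernel and all names here are just for illustration):

```clojure
(require '[uncomplicate.clojurecuda.core
           :refer [init device context current-context! program compile!
                   module function launch! grid-1d parameters
                   mem-alloc memcpy-host!]])

;; A custom elementwise kernel, here the sign function.
(def kernel-source
  "extern \"C\" __global__ void vector_sign (const int n, float* x) {
     int i = blockIdx.x * blockDim.x + threadIdx.x;
     if (i < n) {
       x[i] = (float) ((x[i] > 0.0f) - (x[i] < 0.0f));
     }
   }")

(init)
(current-context! (context (device 0)))

;; Compile the source and fetch a handle to the kernel.
(def sign-kernel
  (-> kernel-source program compile! module (function "vector_sign")))

(let [n 4
      gpu-x (mem-alloc (* n Float/BYTES))]
  (memcpy-host! (float-array [1 -2 0 3]) gpu-x)         ;; host -> device
  (launch! sign-kernel (grid-1d n) (parameters (int n) gpu-x))
  (memcpy-host! gpu-x (float-array n)))                 ;; device -> host: [1.0 -1.0 0.0 1.0]
```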
The current snapshot has implementations for vectors and GE matrices on the CPU. Other matrix types and the GPU are on the TODO list, but should be available soon.
Ok, perfect.
It does not support GE matrices for me, only vectors:
As an exercise, I have reactivated @mikera's core.matrix backend and made it work with the current master branch and with the high-performance BLAS routines for dense matrices. I think the autograd code could work with this core.matrix backend without major performance penalties compared to direct usage of Neanderthal, since it only needs a small subset of Neanderthal's routines, plus a few things that core.matrix provides, like automatic broadcasting.
Broadcasting and reshaping (common ops in scientific computing languages like Python, Matlab or Julia) will be expensive, right? I could copy into a new matrix, but usually these other languages provide a view on the data. I don't know how they do it; do you have any suggestions?
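For comparison, Neanderthal's sub-structures already behave like views as far as I can tell; a sketch with submatrix and row from core, which share the underlying buffer instead of copying:

```clojure
(require '[uncomplicate.neanderthal.core :refer [submatrix row]]
         '[uncomplicate.neanderthal.native :refer [dge]])

(def a (dge 4 4 (range 16)))
(submatrix a 0 0 2 2) ;; top-left 2x2 block, a zero-copy view
(row a 1)             ;; second row, also a view into a's buffer
```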
@blueberry Thanks!
untested (yet!)
@whilo untested
Seems to work. core.matrix tests pass.
(and my manual tests)
I meant the vect-math engine
I have just added it, with your commit, for Vector and Matrix.
Old stuff, of course, passes the tests
?
what did you add?
`vm/pow`, `vm/mul`, `vm/div`, etc.
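e.g., roughly this now works on GE matrices too (my quick check, assuming the snapshot API):

```clojure
(require '[uncomplicate.neanderthal.native :refer [dge]]
         '[uncomplicate.neanderthal.vect-math :as vm])

(def a (dge 2 2 [1 2 3 4]))
(vm/mul a a) ;; elementwise (Hadamard) product
(vm/div a a) ;; elementwise division
```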
Those have been there and already have tests.
But the matrix implementations of those functions do not. They work generally, but there might be a few bugs to fix.
Ok.
What is the preferred way to add a scalar to a vector/matrix?
`(linear-frac a 3.3)`
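A quick sketch of that, assuming linear-frac from vect-math (the two-argument form adds the scalar elementwise; the full form computes (scale-a*a + shift-a) / (scale-b*b + shift-b)):

```clojure
(require '[uncomplicate.neanderthal.native :refer [dv]]
         '[uncomplicate.neanderthal.vect-math :refer [linear-frac]])

(def a (dv 1 2 3))
(linear-frac a 3.3) ;; => a new vector: a + 3.3 elementwise
```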
Thanks!
The sign function and elementwise <, >, and = (+ eps) would be nice.
Just as a note, I can try to do it, but I guess you have a list of missing features where things like this can be tracked, so they don't get lost.
Open an issue on GitHub.
= + eps is already in math, as `f=`
you also have `f<`, `f>`, etc.
Yes, but those are for use with fluokitten. I think it would be nice to have something on the device. And max would also be cool for ReLU activations.
I would like to select only positive entries in a vector for example.
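i.e., today I'd write these with fmap on the CPU, roughly like this (a sketch; f< from math is a scalar comparison):

```clojure
(require '[uncomplicate.neanderthal.native :refer [dv]]
         '[uncomplicate.neanderthal.math :refer [f<]]
         '[uncomplicate.fluokitten.core :refer [fmap]])

(def v (dv -1 2 -3 4))
;; ReLU, element by element through a Clojure function:
(fmap (fn ^double [^double x] (max 0.0 x)) v)
;; keep only positive entries (zero out the rest):
(fmap (fn ^double [^double x] (if (f< 0.0 x) x 0.0)) v)
```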
I will open an issue.
Good night 🙂.