@blueberry ok, makes sense.
Hi, I am new to using Neanderthal. It's a pretty awesome library, and I am trying to do the basic exercises in the http://deeplearning.ai course using this matrix library. My question is: is there an example of broadcasting, similar to what NumPy has, where I can take an (m rows, n columns) matrix and add, subtract, multiply, or divide by an (m, 1) matrix, with the (m, 1) matrix turned into an (m, n) matrix prior to the operation being performed? Right now I use a map over a range of columns to copy this matrix out, but if there is a better way it would be good to know.
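For reference, this is the NumPy behavior being described: a minimal sketch (the shapes and values are just illustrative), where an (m, 1) column is broadcast against an (m, n) matrix as if it were tiled into (m, n) first:

```python
import numpy as np

A = np.arange(6, dtype=float).reshape(2, 3)  # (m, n) = (2, 3)
b = np.array([[10.0], [20.0]])               # (m, 1) column

# NumPy broadcasts the (2, 1) column across A's 3 columns,
# as if b had been copied into a (2, 3) matrix beforehand.
C = A + b
print(C)  # [[10. 11. 12.], [23. 24. 25.]]
```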
@rohit_ Broadcast is not a well-defined linear algebra operation. It is a convenience useful in neural networks. I might add it in the future, when I do some work on NNs, but for now, there are a couple of ways you could do it: 1. (a bit slower) loop over columns or rows and do the destructive vector operation you need (be careful not to copy anything); 2. (the right way) write your own CUDA or OpenCL kernel, using ClojureCUDA or ClojureCL, that implements the broadcast for the type of matrix (I assume GE) that you use. It is a rather ordinary kernel, nothing fancy is needed, and it is a good way to learn GPU programming.
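Option 1 (the column loop with destructive updates) can be sketched like this. This is NumPy standing in for the idea, not Neanderthal code: each column is taken as a view (no copy) and updated in place with a vector operation, which is what a Neanderthal loop over `cols` with a destructive axpy-style op would do:

```python
import numpy as np

A = np.arange(6, dtype=float).reshape(2, 3)  # (m, n) matrix
b = np.array([10.0, 20.0])                   # length-m vector to "broadcast"

# Loop over columns; A[:, j] is a view into A, so the
# addition mutates A in place without copying any data.
for j in range(A.shape[1]):
    A[:, j] += b

print(A)  # [[10. 11. 12.], [23. 24. 25.]]
```

The important part is that the per-column operation is destructive and operates on a view, so no (m, n) temporary is ever materialized.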
Maybe I had an easier way of doing broadcasts, but I can't remember it now 🙂
Gotcha, I just wanted to make sure I wasn't doing something wrong with the looping. This response really helps 🙂
The documentation and the examples available on the blog are awesome, BTW.
thanks for that
@rohit_ Great! Could you write a bit about your experience following that course with Neanderthal? It would be really helpful to other people who are starting out, or who are thinking about it but need something to push them 🙂
I could. I don't have a blog running, but I could just send it to you 🙂
Sure! If you write a good experience report, maybe it would be a good idea to publish it on my blog?