Chiyuan Zhang
@nikolaypavlov Thanks for the suggestions. Based on my understanding, `maxout` is simply a max pooling over some units, so we can achieve it with the existing `PoolingLayer` or `ChannelPoolingLayer`....
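For illustration, here is a minimal sketch of that idea in Mocha.jl: produce `k * m` linear units, then take the max over each group of `k` consecutive channels, which is exactly the maxout activation. The layer names, the `:data`/`:ip` blob names, and the exact constructor keywords are assumptions on my part; please check the Mocha.jl docs for the current API.

```julia
using Mocha

# k = 4 linear pieces per maxout unit, m = 100 maxout units.
# The inner product layer produces k * m = 400 channels...
ip = InnerProductLayer(name="ip", output_dim=4*100,
                       bottoms=[:data], tops=[:ip])

# ...and max pooling over the channel dimension with kernel = stride = k
# reduces each group of 4 channels to its maximum, i.e. maxout.
maxout = ChannelPoolingLayer(name="maxout", kernel=4, stride=4,
                             pooling=Pooling.Max(),
                             bottoms=[:ip], tops=[:maxout])
```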
@outlace, this is more like Torch than Theano in that sense. There is no planned OpenCL support unless Julia gets better native support for GPU targets.
@nstiurca Thanks! This could be cool! Yes, I'm OK with the renaming if we have a working OpenCL backend!
I would suggest doing it in your branch, but opening a pull request here, with "[WIP]" in the title and a description of the goal and current progress in the...
@lqh20 I recently joined a new project, MXNet. We are building a Julia interface called [MXNet.jl](https://github.com/dmlc/MXNet.jl). It is still at a relatively early stage, but some features are already working. For...
@philtomson It depends. Mocha.jl still has the advantage of simplicity and portability, but in terms of computational efficiency and feature richness, I think MXNet.jl should eventually replace Mocha.jl, because it...
@philtomson That could be one possible option. I will wait and see whether that is feasible, since using MXNet.jl introduces an external dependency on libmxnet. If that dependency itself is...
@philtomson Glad to hear that it works out nicely for you. The single-GPU performance of Mocha.jl might be similar to MXNet.jl's. MXNet.jl has a more flexible symbolic API to define...
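To give a flavor of that symbolic API, here is a rough sketch modeled on the MXNet.jl MNIST multi-layer-perceptron example of the time; the specific layer names and keyword conventions are assumptions and may differ between MXNet.jl versions.

```julia
using MXNet

# Build the network as a symbolic graph: each call returns a symbol,
# and nothing is executed until the graph is bound to data.
data = mx.Variable(:data)
fc1  = mx.FullyConnected(data=data, name=:fc1, num_hidden=128)
act1 = mx.Activation(data=fc1, name=:relu1, act_type=:relu)
fc2  = mx.FullyConnected(data=act1, name=:fc2, num_hidden=10)
mlp  = mx.SoftmaxOutput(data=fc2, name=:softmax)
```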
For those who are interested in RNN/LSTM in Julia: there is now [a char-rnn LSTM implementation in MXNet.jl](https://github.com/dmlc/MXNet.jl/tree/master/examples/char-lstm). It uses explicit unrolling so everything fits in the current `FeedForward` model,...
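To clarify what explicit unrolling means here, below is a minimal plain-Julia sketch (a simple RNN cell rather than the actual char-lstm code, and all names are hypothetical): the same weights are reused at every time step, so the unrolled computation over a fixed-length sequence is just a fixed-depth feed-forward pass.

```julia
# A single recurrent cell whose parameters are shared across time steps.
struct RNNCell
    Wxh::Matrix{Float64}  # input-to-hidden weights
    Whh::Matrix{Float64}  # hidden-to-hidden weights
    b::Vector{Float64}    # bias
end

# One step of the recurrence: h_t = tanh(Wxh * x_t + Whh * h_{t-1} + b).
rnn_step(c::RNNCell, x, h) = tanh.(c.Wxh * x .+ c.Whh * h .+ c.b)

# Explicit unrolling: loop over the sequence, applying the SAME cell at
# every step. The resulting computation graph has fixed depth, so it can
# live inside a feed-forward model.
function unrolled_forward(cell::RNNCell, xs::Vector{Vector{Float64}}, h0)
    h = h0
    for x in xs
        h = rnn_step(cell, x, h)
    end
    return h
end

cell = RNNCell(randn(8, 4), randn(8, 8), zeros(8))
h = unrolled_forward(cell, [randn(4) for _ in 1:5], zeros(8))
```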
I agree that it's a shame that we do not have a Theano implementation in Julia yet. We do not even have a feature-complete cuArray.jl-like package in Julia...