xt::mean is much slower than numpy
Hey, I have tested some code with xt::mean and found it hundreds of times slower than numpy. The data is a 2-d array.
xtensor version
// Subtract each channel's mean in place; data has shape (channels, samples).
xt::pyarray<double> mean(xt::pyarray<double> &data)
{
    int channels_num = data.shape(0);
    for (int i = 0; i < channels_num; i++)
    {
        auto dd = xt::view(data, i, xt::all());  // lazy view of row i
        auto trend = xt::mean(dd);               // scalar mean of that row
        xt::view(data, i, xt::all()) -= trend;   // de-mean the row in place
    }
    return data;
}
numpy version
data -= np.mean(data)
Does anybody know what is wrong with my code? I am on an M1 Max, using the clang++ compiler.
BTW, I want to add OpenMP to the for loop; that's why I need the loop.
It is probably xt::view that makes it slow. What about doing the same as in NumPy? Probably:
data -= xt::mean(data, 1);
(unchecked)
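One caveat with the suggestion above, illustrated here in NumPy since the broadcasting semantics match: reducing over axis 1 drops that axis, so the result has shape `(channels,)`, which does not broadcast against `(channels, samples)` for the subtraction. The axis has to be kept (NumPy's `keepdims=True`; xtensor reducers have an analogous `xt::keep_dims` option in recent versions, assumed available here). Note also that `data -= np.mean(data)` in the original post subtracts the single global mean, not per-channel means.

```python
import numpy as np

data = np.ones((3, 5))           # (channels, samples)

m = np.mean(data, axis=1)        # shape (3,): axis 1 is dropped
try:
    data - m                     # (3, 5) vs (3,): broadcasting fails
except ValueError as e:
    print("broadcast error:", e)

m = np.mean(data, axis=1, keepdims=True)  # shape (3, 1)
print((data - m).shape)          # (3, 5): broadcasts as intended
```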
Thanks for your reply. Actually, when I just test a 1-d array with xt::mean and np.mean, even a moderately large array makes xt::mean much slower than np.mean. Demo code follows:
xt::xarray<double> data = {1, 2, 3, ..., 100000};
xt::mean(data);
But numpy is very fast.
I will give you a full test later, and I do need to use xt::mean in my code.
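A minimal way to time the NumPy side of such a comparison (a sketch; the xtensor side would be timed analogously, e.g. with google/benchmark or a chrono loop):

```python
import timeit
import numpy as np

# 1-D array of 100000 doubles, like the demo above
data = np.arange(1.0, 100001.0)

# Sanity check: mean of 1..100000 is 50000.5
print(np.mean(data))

# Time many repetitions to amortize per-call overhead
t = timeit.timeit(lambda: np.mean(data), number=1000)
print(f"np.mean over 100k doubles: {t / 1000 * 1e6:.1f} us per call")
```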
That's a bit surprising. How did you compile? How much slower?
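For reference, xtensor's performance depends heavily on optimization flags: its expression templates rely on inlining, so an unoptimized debug build can easily be orders of magnitude slower than NumPy. A typical release compile might look like the sketch below (the include path is a placeholder, and enabling xsimd assumes you have the xsimd headers installed):

```shell
# -O3 and -DNDEBUG are essential for xtensor; XTENSOR_USE_XSIMD enables
# SIMD-accelerated kernels (requires xsimd). On Apple clang for arm64,
# -mcpu=native may be needed instead of -march=native.
clang++ -std=c++17 -O3 -DNDEBUG -DXTENSOR_USE_XSIMD -march=native \
    -I/path/to/xtensor/include bench.cpp -o bench
```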
As a reference, I am experimenting with running benchmarks to get on top of this in the future: https://github.com/xtensor-stack/xtensor-python/pull/288 . At the moment it is not completely obvious how to distinguish the cost of the pybind11 binding from actual performance issues in xtensor. If you are interested in contributing, you are more than welcome.