caffe
This fork of BVLC/Caffe is dedicated to improving the performance of the deep learning framework when running on CPUs, in particular Intel® Xeon® processors.
Threshold-driven filtering into a fixed-size array leaves noisy (default-initialized) pair values at the end of the vector, which results in low-confidence, noisy output.
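A minimal sketch of the failure mode (my own illustration, not the actual bbox_util.cpp code; names like `score_index` and `top_k` are hypothetical): when detections above the threshold are written into a vector pre-sized to the top-k count, the slots that are never written keep their default (0, 0) pairs, and downstream code that reads the whole vector sees them as low-confidence detections.

```cpp
#include <cstdio>
#include <utility>
#include <vector>

int main() {
  const int top_k = 5;
  const float threshold = 0.4f;
  const float scores[] = {0.9f, 0.1f, 0.7f, 0.2f, 0.3f};

  // Vector pre-sized to top_k: every slot starts as the default pair (0.f, 0).
  std::vector<std::pair<float, int>> score_index(top_k);
  int count = 0;
  for (int i = 0; i < 5; ++i) {
    if (scores[i] > threshold) {
      score_index.at(count++) = std::make_pair(scores[i], i);
    }
  }
  // Only two real detections were written; the remaining three entries at the
  // end of the vector are still the default (0, 0) pairs.
  for (const auto& p : score_index) {
    std::printf("score=%.2f index=%d\n", p.first, p.second);
  }
  return 0;
}
```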
```
/usr/include/c++/5/bits/unique_ptr.h:49:25: note: declared here
   template class auto_ptr;
                  ^
/home/gavin/anaconda3/include/boost/smart_ptr/shared_ptr.hpp:255:64: warning: ‘template class std::auto_ptr’ is deprecated [-Wdeprecated-declarations]
   template< class T, class R > struct sp_enable_if_auto_ptr< std::auto_ptr< T >, R >
                                                                ^
...
```
This change, https://github.com/intel/caffe/blame/7010334f159da247db3fe3a9d96a3116ca06b09a/src/caffe/util/bbox_util.cpp#L2266, which uses `at` instead of `push_back`, selectively overwrites some results and leaves some low-scoring results within the top-k entries if there are not many strong detections in...
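A minimal sketch of the difference between the two calls (illustrative only, not the actual Intel Caffe code): `at` overwrites only the slots it reaches, so entries left over from weaker detections survive past the write cursor, whereas appending with `push_back` to a cleared vector keeps only what was actually written.

```cpp
#include <cstdio>
#include <utility>
#include <vector>

int main() {
  // Pretend these low-scoring pairs were left over from an earlier pass.
  std::vector<std::pair<float, int>> results = {
      {0.05f, 7}, {0.04f, 8}, {0.03f, 9}};

  // Variant A: overwrite in place with at(). Only slot 0 is written, so the
  // stale low-scoring pairs at slots 1 and 2 remain in the top-k entries.
  std::vector<std::pair<float, int>> a = results;
  int count = 0;
  a.at(count++) = std::make_pair(0.9f, 1);  // one strong detection

  // Variant B: start from an empty vector and push_back. Only the strong
  // detection remains.
  std::vector<std::pair<float, int>> b;
  b.push_back(std::make_pair(0.9f, 1));

  std::printf("at():        %zu entries\n", a.size());  // 3
  std::printf("push_back(): %zu entries\n", b.size());  // 1
  return 0;
}
```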
We change the SGDFusion flow ("normalize" -> "GetLocalRate" -> "regularization & update") and the non-SGDFusion flow ("normalize" -> "GetLocalRate" -> "regularization" -> "update") for LARS. In the original Intel Caffe code, only SGD (not NESTEROV,...
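A rough sketch of the two call orders described above (the method names mirror the issue text and are not the exact sgd_solver.cpp signatures):

```cpp
// Hypothetical solver illustrating the two flows; names follow the issue
// description, not the real Intel Caffe API.
struct Solver {
  void Normalize(int param_id) {}
  float GetLocalRate(int param_id) { return 1.0f; }        // LARS local LR
  void Regularize(int param_id) {}
  void Update(int param_id, float local_rate) {}
  void RegularizeAndUpdate(int param_id, float local_rate) {}  // fused step

  // SGDFusion flow: normalize -> GetLocalRate -> regularization & update.
  void FusedStep(int param_id) {
    Normalize(param_id);
    float local_rate = GetLocalRate(param_id);
    RegularizeAndUpdate(param_id, local_rate);
  }

  // Non-SGDFusion flow: normalize -> GetLocalRate -> regularization -> update.
  void UnfusedStep(int param_id) {
    Normalize(param_id);
    float local_rate = GetLocalRate(param_id);
    Regularize(param_id);
    Update(param_id, local_rate);
  }
};

int main() {
  Solver s;
  s.FusedStep(0);
  s.UnfusedStep(0);
  return 0;
}
```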
I'm using a synthetic test:

```python
import cv2 as cv
import caffe
import numpy as np

proto = 'pose_deploy.prototxt'
weights = 'pose_iter_102000.caffemodel'

np.random.seed(223)
k = 368
inp = np.random.standard_normal([1, 3, k,...
```
Hi, I'm an Intel Caffe user. I think I found a wrong flow in the SGDFusion function (sgd_solver.cpp). When using the GCC compiler, or when not using "iter_size", it doesn't cause any problem. But,...
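A small numeric sketch of why the order matters once iter_size > 1 (my own illustration, not code from sgd_solver.cpp): the accumulated gradient has to be divided by iter_size before weight decay is added; applying regularization first also scales the decay term down by iter_size and changes the update.

```cpp
#include <cstdio>

int main() {
  const float iter_size = 4.0f;         // gradients accumulated over 4 mini-batches
  const float accumulated_grad = 8.0f;  // sum of per-batch gradients
  const float weight = 2.0f;
  const float decay = 0.1f;

  // Normalize first, then regularize: decay is applied at full strength.
  float g1 = accumulated_grad / iter_size;       // 2.0
  g1 += decay * weight;                          // 2.2

  // Regularize first, then normalize: the decay term is also divided by
  // iter_size, giving a different (weaker) update.
  float g2 = accumulated_grad + decay * weight;  // 8.2
  g2 /= iter_size;                               // 2.05

  std::printf("normalize-then-regularize: %.2f\n", g1);
  std::printf("regularize-then-normalize: %.2f\n", g2);
  return 0;
}
```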
I followed https://github.com/intel/caffe/wiki/Build-Caffe-with-Intel--Compiler to build intel-caffe.

Command line: `CC=icc CXX=icpc CPATH="" make all -j$(nproc)`

Error:
```
/root/caffe/src/caffe/layers/image_data_layer.cpp(122): error: identifier "CV_BGR2RGB" is undefined
    cv::cvtColor(cv_img, cv_img, CV_BGR2RGB);
                                 ^
/root/caffe/src/caffe/layers/image_data_layer.cpp(182): error: identifier "CV_BGR2RGB"...
```
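If the build is picking up OpenCV 4, the old C-API constants such as CV_BGR2RGB are no longer visible from the default C++ headers; assuming that is the cause here, one common fix is to switch to the cv::COLOR_* names (or to include opencv2/imgproc/types_c.h, which still defines the CV_* macros):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Sketch of the cvtColor call using the OpenCV 3/4 enum instead of the
// removed C-API macro CV_BGR2RGB (assuming an OpenCV 4 build is the cause).
void ToRGB(cv::Mat& cv_img) {
  cv::cvtColor(cv_img, cv_img, cv::COLOR_BGR2RGB);
}
```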
I want to use the quantization capabilities of Intel Caffe; however, it does not compile for GPU.
When searching the internet, I found the legacy code of intel/caffe. In this code, there is a use_cache_when_full option which can reduce network overhead. Why is this option not...
I need faster object detection on CPU. As far as I know, the way to do that is to use Intel Caffe. I tested the current trained model on an Intel® Xeon® Processor E5-1660. The...