gpu-rest-engine
A REST API for Caffe using Docker and Go
Using gpu-rest-engine, I created a service that can detect, classify, or detect and then classify. The first step is to do detection with SSD using a single class (other than...
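The three modes described above could be sketched roughly as follows. This is a minimal illustration, not the real gpu-rest-engine code: the `detect` and `classify` functions, the `box` type, and the mode names are all hypothetical stand-ins for the actual cgo bindings.

```go
package main

import "fmt"

// box is a hypothetical detection region (not the real engine's type).
type box struct{ x, y, w, h int }

// detect is a stand-in for an SSD detection call.
func detect(img []byte) []box {
	return []box{{0, 0, 64, 64}} // placeholder result
}

// classify is a stand-in for a classification call.
func classify(img []byte) string {
	return "label" // placeholder result
}

// run dispatches on the requested mode, as the service described above does.
func run(mode string, img []byte) []string {
	switch mode {
	case "detect":
		var out []string
		for _, b := range detect(img) {
			out = append(out, fmt.Sprintf("box %v", b))
		}
		return out
	case "classify":
		return []string{classify(img)}
	case "detect+classify":
		// classify each detected region (cropping elided for brevity)
		var out []string
		for range detect(img) {
			out = append(out, classify(img))
		}
		return out
	}
	return nil
}

func main() {
	fmt.Println(run("detect+classify", nil))
}
```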
Please help! I'm using the latest Caffe.
Here I did not modify the code as you did, and the time cost looks like this: `classifitime=0.56s current thread===1577723648 0.99840999 - "Test" classifitime=0.57000000s current thread===1586116352 0.99840999 - "Test"` When I modified...
`gpu-rest-engine-master$ nvidia-docker run --name=server --net=host --rm inference_server` `2018/09/18 02:31:30 Initializing TensorRT classifiers` I am just trying to get the TensorRT server started, on two different servers with fresh downloads...
Hi! I think that in the current implementation the engine cannot serve models if they have fewer than five outputs. The [classify](https://github.com/NVIDIA/gpu-rest-engine/blob/d8d2255884f965b2feca855cb9e18a13cbf2d8ae/tensorrt/classification.cpp#L102) method accepts, as one of its input parameters, N, which...
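A natural fix for the issue described above is to clamp the requested top-N to the number of outputs the model actually has, rather than assuming at least five. The sketch below is illustrative only; `topN` and its signature are assumptions, not the real `classification.cpp` API.

```go
package main

import (
	"fmt"
	"sort"
)

// topN returns the indices of the n highest scores, clamping n to the
// number of model outputs so that small models (e.g. 2 classes) work.
func topN(scores []float32, n int) []int {
	if n > len(scores) {
		n = len(scores) // a 2-output model can return at most 2 predictions
	}
	idx := make([]int, len(scores))
	for i := range idx {
		idx[i] = i
	}
	// sort indices by descending score
	sort.Slice(idx, func(a, b int) bool { return scores[idx[a]] > scores[idx[b]] })
	return idx[:n]
}

func main() {
	// requesting top-5 from a 2-output model no longer overruns
	fmt.Println(topN([]float32{0.1, 0.9}, 5)) // prints [1 0]
}
```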
I have started to implement this on a server. When I run the `docker build -t inference_server -f Dockerfile.caffe_server .` command, I get the following error during step 7/26:...