Documentation bug? Inference server on port 9001
The docs say that a notebook is exposed on port 9002, but for me nothing is running on port 9002 (I get "connection refused"). Port 9001 shows the intro page.
❯ inference server start --dev
No GPU detected. Using a CPU image.
Pulling image: roboflow/roboflow-inference-server-cpu:latest
...
Image roboflow/roboflow-inference-server-cpu:latest pulled.
Starting inference server container...
❯ docker ps
CONTAINER ID   IMAGE                                           COMMAND                  CREATED         STATUS         PORTS                                                           NAMES
7b7b4fde8446   roboflow/roboflow-inference-server-cpu:latest   "/bin/sh -c 'uvicorn…"   2 minutes ago   Up 2 minutes   0.0.0.0:9001-9002->9001-9002/tcp, :::9001-9002->9001-9002/tcp   elegant_murdock
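Since `docker ps` shows both 9001 and 9002 published, one way to confirm which ports actually accept connections (independent of what curl or a browser reports) is a tiny stdlib check. This is just a hypothetical helper for debugging, not part of the inference CLI:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # "Connection refused" (nothing listening) lands here.
        return False

# Example: check the two ports the container publishes.
print(port_open("127.0.0.1", 9001), port_open("127.0.0.1", 9002))
```

If 9002 reports closed while Docker maps it, the port is published but no process inside the container is listening on it, which would match the docs being out of date about the notebook.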
Another bug?
The page at https://deploy-quickstart.roboflow.com/results.html says to use inference.load_roboflow_model, but that function does not seem to exist.
@capjamesg - looks like the deploy-quickstart page is out of date. Could you update it to use get_model (or maybe get_roboflow_model.. not really sure of the difference without digging deeper) instead?
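For reference, a hedged sketch of what the updated quickstart snippet might look like with get_model. The model ID and image path below are placeholders, and this assumes the `inference` package is installed (and, for hosted models, a Roboflow API key is configured):

```python
# Sketch only: requires `pip install inference`; model_id and image are examples.
from inference import get_model

model = get_model(model_id="yolov8n-640")   # placeholder model ID
results = model.infer("your_image.jpg")     # placeholder image path
print(results)
```

Whichever of get_model / get_roboflow_model the docs settle on, the page should probably show the full import path so the next reader doesn't have to guess.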