infer
🔮 Use TensorFlow models in Go to evaluate Images (and more soon!)
Infer is a Go package for running predictions in TensorFlow models.
Overview
This package provides abstractions for running inference on TensorFlow models for common input types. At the moment it only has methods for images, but it can certainly support more in the future.
Getting Started
The easiest way to get started is to look at the examples; two have been provided:
- Image Recognition API using Inception.
- MNIST
Setup
This package requires TensorFlow and its Go bindings. Installation instructions can be found here. Additionally, a Dockerfile is included that can be used to run the examples.
Usage
Overview
To use infer, a TensorFlow Graph is required, as well as a defined Input and Output.
Classes, a slice of possible output values, may also be included. It's assumed that the results of any model execution map to these classes, in order (e.g. 0 -> mountain, 1 -> cat, 2 -> apple).
m, _ := infer.New(&infer.Model{
    Graph:   graph,
    Classes: classes,
    Input: &infer.Input{
        Key:        "input",
        Dimensions: []int32{100, 100},
    },
    Output: &infer.Output{
        Key: "output",
    },
})
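As a side note, the classes slice is typically read from a labels file shipped with the model. Here is a minimal sketch, assuming a hypothetical labels.txt with one label per line, ordered to match the model's output (uses bufio, os, and log from the standard library):

f, err := os.Open("labels.txt") // hypothetical path; one label per line
if err != nil {
    log.Fatal(err)
}
defer f.Close()

var classes []string
scanner := bufio.NewScanner(f)
for scanner.Scan() {
    classes = append(classes, scanner.Text())
}
if err := scanner.Err(); err != nil {
    log.Fatal(err)
}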
Once a new model is defined, inferences can be executed.
predictions, _ := m.FromImage(file, &infer.ImageOptions{})
Predictions are returned sorted by score (highest score first). A Prediction looks like:
Prediction{
    Class: "mountain",
    Score: 0.97,
}
Graph
An infer.Model requires a tf.Graph. The Graph defines the computations required to determine an output based on a provided input. The Graph can be included in two ways:
- Create the Graph using Go in your application.
- Load an existing model.
For the latter, an existing model (containing the graph and weights) can be loaded using Go:
model, _ := ioutil.ReadFile("/path/to/model.pb")
graph := tf.NewGraph()
graph.Import(model, "")
For more information on TensorFlow model files, see here.
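For completeness, here is the same loading step with error handling, wrapped in a small helper. This is just a sketch: loadGraph is an illustrative name, and the import path shown is the standard TensorFlow Go binding.

import (
    "io/ioutil"

    tf "github.com/tensorflow/tensorflow/tensorflow/go"
)

// loadGraph reads a frozen, serialized GraphDef from disk and imports it
// into a new tf.Graph.
func loadGraph(path string) (*tf.Graph, error) {
    model, err := ioutil.ReadFile(path)
    if err != nil {
        return nil, err
    }
    graph := tf.NewGraph()
    // An empty prefix keeps the original operation names.
    if err := graph.Import(model, ""); err != nil {
        return nil, err
    }
    return graph, nil
}

The returned graph can then be passed to infer.New as shown above.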
Input & Output
infer.Input and infer.Output describe TensorFlow layers. In practice, these are the layer the input data should be fed to and the layer from which to fetch results.
Each requires a Key. This is the unique identifier (name) of that layer in the TensorFlow graph. To see a list of layer names and types, the following can be run:
ops := graph.Operations()
for _, o := range ops {
    log.Println(o.Name(), o.Type())
}
If you're not using a pre-trained model, the layers can be named when the graph is built, which makes identifying the appropriate layers easier.
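The same loop can also be filtered by operation type to narrow down candidates. A small sketch continuing from the snippet above (the "Placeholder" type is common for input layers, though this depends on the model):

for _, o := range graph.Operations() {
    if o.Type() == "Placeholder" {
        log.Println("possible input layer:", o.Name())
    }
}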
Analyzing Results
In the MNIST example, we can execute a prediction and inspect the results like so:
predictions, err := m.FromImage(img, opts)
if err != nil {
    panic(err)
}
// predictions[0].Class -> 8
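Beyond the top result, the full slice can be inspected as well, since it is sorted by score. A small sketch continuing from the snippet above, using the Prediction fields shown earlier:

for i, p := range predictions {
    log.Printf("%d. %v (score: %.2f)", i+1, p.Class, p.Score)
}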