Inference

Run inference on an image and retrieve predictions.

Roboflow provides an API through which you can upload an image and retrieve predictions from your model. This API is available in the Roboflow Python SDK, REST API, and CLI.
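For example, you can call the hosted REST endpoint directly. The sketch below uses Python's requests library and assumes an object detection model served at detect.roboflow.com (the host differs for other model types); PROJECT_ID, the version number, and the image URL are placeholders.

import requests

# Sketch of a direct REST call for a hosted detection model.
# The image is referenced by URL; confidence/overlap are percentages.
response = requests.post(
    "https://detect.roboflow.com/PROJECT_ID/1",
    params={
        "api_key": "YOUR_API_KEY_HERE",
        "image": "https://path.to/your/image.jpg",  # placeholder URL
        "confidence": 50,
        "overlap": 25,
    },
)
print(response.json())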

To run inference using the Python SDK, use the predict() method.

import roboflow

rf = roboflow.Roboflow(api_key="YOUR_API_KEY_HERE")

project = rf.workspace().project("PROJECT_ID")
model = project.version("1").model

# optionally, change the confidence and overlap thresholds
# values are percentages
model.confidence = 50
model.overlap = 25

# predict on a local image
prediction = model.predict("YOUR_IMAGE.jpg")

# predict on a hosted image via URL
prediction = model.predict("https://...", hosted=True)

# Plot the prediction in an interactive environment
prediction.plot()

# Convert predictions to JSON
prediction.json()
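
prediction.json() returns a dictionary; for object detection models it includes a predictions list, one entry per box. The loop below is a sketch that assumes the detection response fields (class, confidence, and center-based box coordinates in pixels):

# iterate over detections; x/y are the box center, width/height
# are the box size, all in pixels of the input image
for detection in prediction.json()["predictions"]:
    print(detection["class"], detection["confidence"])
    print(detection["x"], detection["y"], detection["width"], detection["height"])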

Use a Locally Running Inference Server Container

If you have a Roboflow inference server running locally through any of our container deploys, such as the NVIDIA Jetson or Raspberry Pi containers, you can pass its address to version() to point predictions at that local server instead of the remote endpoint.

The local inference server must be running and reachable before you execute the Python script! When using our Docker containers with the --net=host flag, we recommend addressing the server via localhost, as shown below.

local_inference_server_address = "http://localhost:9001/"
version_number = 1

local_model = project.version(
    version_number=version_number,
    local=local_inference_server_address
).model
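
Once created, the local model exposes the same predict() interface as the hosted one, so the earlier examples apply unchanged. A brief sketch:

# inference is now served by the container at localhost:9001
local_prediction = local_model.predict("YOUR_IMAGE.jpg")
print(local_prediction.json())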