Running neural network inference inside Supervisely takes just a few mouse clicks. Here you can find all the information you need to start an inference task.
Applicable for all Neural Networks
The procedure described below is applicable to all Neural Networks inside Supervisely. The only difference is the JSON configuration used for inference. Such configurations are similar for most models but may differ for some of them.
You start inference from an existing neural network. Please refer to this page to learn how to add a model to your workspace.
Open the "Neural Networks" page and start inference by clicking the "Inference" button in the models list.
The "Run Plugin" page will load with the necessary fields set automatically.
Step 1: Inference Settings
Configure the following fields:
Agent: choose an agent from the Cluster page on which the model will be applied.
Input project: choose a project from the current workspace to apply the model to.
Result title: enter a name for the resulting project. You can change it later; you will see it in the Projects list after inference. If a project with the same name already exists, a random suffix will be added automatically.
Configuration: JSON-based settings that are passed directly to the model. The plugin associated with the source model may provide pre-configured options. Inference configs are almost the same for all models but may have some differences; read the "Configurations" chapter to learn more. Depending on the model, you can choose the desired classes, the GPU device to use, and other options. Please refer to the pre-filled configuration as an example.
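For illustration only, a minimal inference configuration might look like the sketch below. The field names here (`gpu_devices`, `model_classes`, `save_classes`, `add_suffix`) are examples and may not match your model; the actual schema depends on the specific model, so always start from the pre-filled configuration and consult the "Configurations" chapter:

```json
{
  "gpu_devices": [0],
  "model_classes": {
    "save_classes": ["person", "car"],
    "add_suffix": "_model"
  }
}
```

In this sketch, `gpu_devices` selects which GPU the model runs on, `save_classes` restricts predictions to the listed classes, and `add_suffix` is appended to predicted class names so they do not clash with existing annotations.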
Click "Run" to start inference.
Step 2: Monitor progress
A new task will be started and the "Tasks" page will open.
You can select "Logs" in the task context menu ("three dots" icon) to monitor the task output or stop the inference.
Step 3: Task finished
The project with model predictions will appear on the Projects page.