# Object tracking - YOLO and Norfair
This example uses a YOLO model through the Ultralytics SDK to detect objects, and then a Norfair tracker to track their positions. You can achieve the same result with any object detector, such as the one from the ONNX Runtime example.
In this case we connect two Pipeless stages, using the `user_data` field of the frames to store the detections in the first stage and read them back in the second.
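The hand-off between stages can be sketched with plain Python. Note this is a simplified illustration: the `hook(frame, context)` signature and the exact frame keys follow the Pipeless Python hook convention, but you should verify them against your Pipeless version.

```python
# Sketch of passing detections between two stages via the frame's
# user_data field. The frame layout here is an assumption for
# illustration; check the Pipeless docs for the exact keys.

# Stage 1 (e.g. yolo/process.py): store detections on the frame.
def detection_hook(frame, context):
    # In the real stage these boxes come from the YOLO model.
    frame["user_data"] = {"bboxes": [[10, 20, 110, 220], [300, 40, 380, 150]]}
    return frame

# Stage 2 (e.g. object-tracking/process.py): read them back.
def tracking_hook(frame, context):
    bboxes = frame.get("user_data", {}).get("bboxes", [])
    # In the real stage these boxes feed the Norfair tracker.
    frame["user_data"] = {"tracked_count": len(bboxes)}
    return frame

frame = {"original": None, "user_data": {}}
frame = detection_hook(frame, context=None)
frame = tracking_hook(frame, context=None)
```

Because the frame travels through the stages in order, whatever the first stage writes to `user_data` is available to the second.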
## Requirements

- Pipeless: check the installation guide.
- The Python OpenCV, NumPy, Ultralytics and Norfair packages. Install them by running:

```shell
pip install opencv-python numpy ultralytics norfair
```
```shell
pipeless init my-project --template empty # Using the empty template we avoid the interactive shell
```

Feel free to change `my-project` to any name you want.
In this example we use two different stages. One is the same as the YOLO example and the other is the stage that performs the actual tracking.
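Norfair tracks points rather than boxes, so the tracking stage typically reduces each YOLO `xyxy` box to a point (e.g. its centroid) before building `norfair.Detection` objects. A minimal conversion helper in NumPy (the `[x1, y1, x2, y2]` box format is an assumption based on Ultralytics' `xyxy` output):

```python
import numpy as np

def boxes_to_centroids(xyxy):
    """Convert an (N, 4) array of [x1, y1, x2, y2] boxes to (N, 2) centroids."""
    xyxy = np.asarray(xyxy, dtype=float)
    cx = (xyxy[:, 0] + xyxy[:, 2]) / 2.0
    cy = (xyxy[:, 1] + xyxy[:, 3]) / 2.0
    return np.stack([cx, cy], axis=1)

# Each centroid can then be wrapped as a norfair.Detection(points=...)
# and the list passed to tracker.update(detections=...).
boxes = np.array([[0, 0, 10, 10], [20, 40, 40, 80]])
centroids = boxes_to_centroids(boxes)  # [[5, 5], [30, 60]]
```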
```shell
wget -O - https://github.com/pipeless-ai/pipeless/archive/main.tar.gz | tar -xz --strip=2 "pipeless-main/examples/yolo"
wget -O - https://github.com/pipeless-ai/pipeless/archive/main.tar.gz | tar -xz --strip=2 "pipeless-main/examples/object-tracking"
```
The following command leaves Pipeless running in the current terminal:

```shell
pipeless start --stages-dir .
```
In this case we concatenate the processing of two stages. The first stage performs object detection with YOLO and passes that data to the next stage; the second performs the actual object tracking with Norfair.
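Conceptually, a tracker matches each new detection to an existing track by distance and keeps stable IDs across frames. Norfair's real tracker adds Kalman-filter prediction and hit counters on top of this; the toy greedy nearest-neighbor matcher below only illustrates the core idea and is not the Norfair API.

```python
import math

class ToyTracker:
    """Greedy nearest-neighbor ID assignment; illustration only, not Norfair."""
    def __init__(self, distance_threshold=30.0):
        self.threshold = distance_threshold
        self.tracks = {}   # track id -> last known (x, y) position
        self.next_id = 0

    def update(self, points):
        assigned = {}
        free = dict(self.tracks)  # tracks not yet matched this frame
        for point in points:
            # Match to the closest unclaimed track, if it is near enough.
            best = min(free, key=lambda tid: math.dist(free[tid], point), default=None)
            if best is not None and math.dist(free[best], point) <= self.threshold:
                free.pop(best)
                assigned[best] = point
            else:
                # Too far from every known track: start a new one.
                assigned[self.next_id] = point
                self.next_id += 1
        self.tracks = assigned
        return assigned

tracker = ToyTracker()
tracker.update([(10, 10), (100, 100)])  # two new tracks: ids 0 and 1
tracker.update([(12, 11), (98, 103)])   # same objects keep ids 0 and 1
```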
Note the list of stages passed as the `--frame-path` option.
Open a new terminal and run:
```shell
pipeless add stream --input-uri "v4l2" --output-uri "screen" --frame-path "yolo,object-tracking"
```
This command assumes you have a webcam available. If you don't, change the input URI to point to a video file instead.
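For a file input, the stream URI uses the `file://` scheme. The path below is a placeholder; point it at any video on your machine:

```shell
# Hypothetical example: replace the path with a real video file.
pipeless add stream --input-uri "file:///home/user/my-video.mp4" --output-uri "screen" --frame-path "yolo,object-tracking"
```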