This demo is a video analysis tool that counts and highlights objects in specific zones of a video. Each zone and the objects within it are marked in different colors, making it easy to see and count the objects in each area. The tool can save this enhanced video or display it live on the screen.
Result video: `market-square-result.mp4`
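At its core, the analysis is a frame-by-frame loop: run a detector on each frame, keep only the detections that fall inside a zone polygon, then draw the zone and its count. The snippet below is a minimal sketch of that idea, assuming recent `ultralytics` and `supervision` releases; the polygon coordinates and file paths are placeholders, not values taken from this repository.

```python
import numpy as np
import supervision as sv
from ultralytics import YOLO

# Placeholder zone: a single rectangular polygon given as [x, y] vertices.
ZONE_POLYGON = np.array([[100, 100], [600, 100], [600, 500], [100, 500]])

model = YOLO("yolov8x.pt")
zone = sv.PolygonZone(polygon=ZONE_POLYGON)
box_annotator = sv.BoxAnnotator()
zone_annotator = sv.PolygonZoneAnnotator(zone=zone, color=sv.Color.RED)

for frame in sv.get_video_frames_generator("data/market-square.mp4"):
    result = model(frame, verbose=False)[0]
    detections = sv.Detections.from_ultralytics(result)
    detections = detections[zone.trigger(detections)]  # keep only objects inside the zone
    frame = box_annotator.annotate(scene=frame, detections=detections)
    frame = zone_annotator.annotate(scene=frame)       # draws the polygon and its current count
    print(zone.current_count)
```

The bundled scripts extend this pattern to several zones at once, assigning each zone and its detections a distinct color.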
- Clone the repository and navigate to the example directory:

  ```bash
  git clone --depth 1 -b develop https://github.com/roboflow/supervision.git
  cd supervision/examples/count_people_in_zone
  ```

- Set up a Python environment and activate it [optional]:

  ```bash
  python3 -m venv venv
  source venv/bin/activate
  ```

- Install the required dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Download the `traffic_analysis.pt` and `traffic_analysis.mov` files:

  ```bash
  ./setup.sh
  ```
- `ultralytics`

  - `--source_weights_path` (optional): The path to the YOLO model's weights file. Defaults to `"yolov8x.pt"` if not specified.
  - `--zone_configuration_path`: Specifies the path to the JSON file containing zone configurations. This file defines the polygonal areas in the video where objects will be counted.
  - `--source_video_path`: The path to the source video file that will be analyzed.
  - `--target_video_path` (optional): The path to save the output video with annotations. If not provided, the processed video will be displayed in real time.
  - `--confidence_threshold` (optional): Sets the confidence threshold for the YOLO model to filter detections. Default is `0.3`.
  - `--iou_threshold` (optional): Specifies the IOU (Intersection over Union) threshold for the model. Default is `0.7`.
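  For reference, here is a rough sketch of how these flags typically feed into an ultralytics inference call; the weight path and threshold values simply mirror the documented defaults, and this is not the exact code of `ultralytics_example.py`.

  ```python
  from ultralytics import YOLO

  model = YOLO("yolov8x.pt")          # --source_weights_path
  results = model.predict(
      "data/market-square.mp4",       # --source_video_path
      conf=0.3,                       # --confidence_threshold
      iou=0.7,                        # --iou_threshold
      stream=True,                    # yield results frame by frame
  )
  ```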
- `inference`

  - `--roboflow_api_key` (optional): The API key for Roboflow services. If not provided directly, the script tries to fetch it from the `ROBOFLOW_API_KEY` environment variable. Follow this guide to acquire your `API KEY`.
  - `--model_id` (optional): Designates the Roboflow model ID to be used. The default value is `"yolov8x-1280"`.
  - `--zone_configuration_path`: Specifies the path to the JSON file containing zone configurations. This file defines the polygonal areas in the video where objects will be counted.
  - `--source_video_path`: The path to the source video file that will be analyzed.
  - `--target_video_path` (optional): The path to save the output video with annotations. If not provided, the processed video will be displayed in real time.
  - `--confidence_threshold` (optional): Sets the confidence threshold for the model to filter detections. Default is `0.3`.
  - `--iou_threshold` (optional): Specifies the IOU (Intersection over Union) threshold for the model. Default is `0.7`.
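  The API-key fallback described above can be pictured as a small helper like the one below (a hypothetical sketch, not code taken from `inference_example.py`):

  ```python
  import os

  def resolve_api_key(cli_value: str | None) -> str:
      # Prefer the value passed via --roboflow_api_key, otherwise fall back to
      # the ROBOFLOW_API_KEY environment variable.
      api_key = cli_value or os.environ.get("ROBOFLOW_API_KEY")
      if api_key is None:
          raise ValueError("Pass --roboflow_api_key or set ROBOFLOW_API_KEY")
      return api_key
  ```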
- `horizontal-zone-config.json`: Defines zones divided horizontally across the frame.
- `multi-zone-config.json`: Configures multiple zones with custom shapes and positions.
- `quarters-zone-config.json`: Splits the frame into four equal quarters.
- `vertical-zone-config.json`: Divides the frame into vertical zones of equal width.
- `ultralytics`

  ```bash
  python ultralytics_example.py \
      --zone_configuration_path data/multi-zone-config.json \
      --source_video_path data/market-square.mp4 \
      --confidence_threshold 0.3 \
      --iou_threshold 0.5
  ```

- `inference`

  ```bash
  python inference_example.py \
      --roboflow_api_key <ROBOFLOW API KEY> \
      --zone_configuration_path data/multi-zone-config.json \
      --source_video_path data/market-square.mp4 \
      --confidence_threshold 0.3 \
      --iou_threshold 0.5
  ```
This demo integrates two main components, each with its own licensing:
- `ultralytics`: The object detection model used in this demo, YOLOv8, is distributed under the AGPL-3.0 license. You can find more details about this license here.
- `supervision`: The analytics code that powers the zone-based analysis in this demo is based on the Supervision library, which is licensed under the MIT license. This makes the Supervision part of the code fully open source and freely usable in your projects.