Merged
4 changes: 2 additions & 2 deletions .github/workflows/merge.yml
@@ -19,10 +19,10 @@ jobs:
# do not abort other jobs on job failure
fail-fast: false
matrix:
os: [ubuntu-22.04, windows-2019]
os: [ubuntu-22.04, windows-2022]
include:
- default_shell: bash -eo pipefail -l {0}
- os: windows-2019
- os: windows-2022
default_shell: cmd

runs-on: ${{ matrix.os }}
1 change: 1 addition & 0 deletions README.md
@@ -177,6 +177,7 @@ and execute it from there.
|[person age gender detection](https://colab.research.google.com/github/DeGirum/PySDKExamples/blob/main/examples/applications/person_age_gender_detection.ipynb)|Person detection followed by object tracking, zone presence detection, age detection, and gender classification with result averaging among all occurrences of detected person in the zone.|
|[car wrong direction detection](https://colab.research.google.com/github/DeGirum/PySDKExamples/blob/main/examples/applications/car_wrong_direction_detection.ipynb)|Detect a car going in the wrong direction using object detection, object tracking, line cross counting, and event detection. When an event is detected, the notification is sent to the notification service of your choice and a video clip around that event is uploaded to S3-compatible storage of your choice.|
|[parking management](https://colab.research.google.com/github/DeGirum/PySDKExamples/blob/main/examples/applications/parking_management.ipynb)|Monitor a parking lot's occupancy using object detection and zone intrusion detection in a video. Zones defined for parking spaces are checked for occupancy and counted, and the video is annotated with occupancy/vacancy counts.|
|[smart nvr](https://colab.research.google.com/github/DeGirum/PySDKExamples/blob/main/examples/applications/smart_nvr.ipynb)|NVR triggered by object detection: when a person is detected in a zone, a notification is sent and a video clip is saved.|


## Full List of Variables in `env.ini` Configuration File
233 changes: 233 additions & 0 deletions examples/applications/smart_nvr.ipynb
@@ -0,0 +1,233 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "438aa03a",
"metadata": {},
"source": [
"![Degirum banner](https://raw.githubusercontent.com/DeGirum/PySDKExamples/main/images/degirum_banner.png)\n",
"## This notebook shows how to implement an NVR triggered by object detection.\n",
"A video stream is processed by a person detection model. Once a person is detected, a notification event is generated and a video clip is saved.\n",
"\n",
"This example uses `degirum_tools.streams` streaming toolkit.\n",
"\n",
"This script works with the following inference options:\n",
"\n",
"1. Run inference on DeGirum Cloud Platform;\n",
"2. Run inference on DeGirum AI Server deployed on a localhost or on some computer in your LAN or VPN;\n",
"3. Run inference on AI accelerator directly installed on your computer.\n",
"\n",
"To try different options, you need to specify the appropriate `hw_location` option. \n",
"\n",
"When running this notebook locally, you need to specify your cloud API access token in the [env.ini](../../env.ini) file, located in the same directory as this notebook.\n",
"\n",
"The script can use either a local camera, a web camera connected to the machine, or a video file. The camera index, URL, or video file path needs to be specified in the code below by assigning `video_source`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "88e17ec2",
"metadata": {},
"outputs": [],
"source": [
"# make sure degirum-tools package is installed\n",
"!pip show degirum-tools || pip install degirum-tools"
]
},
{
"cell_type": "markdown",
"id": "3ac1ad6f-2290-44fe-bcfd-4715f594ce57",
"metadata": {
"tags": []
},
"source": [
"#### Specify where you want to run inference, the model zoo URL, the model name, and the video source"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6d33374c-e516-4b5f-b306-d18bf6392c52",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# hw_location: where you want to run inference\n",
"# \"@cloud\" to use DeGirum cloud\n",
"# \"@local\" to run on local machine\n",
"# IP address for AI server inference\n",
"#\n",
"# model_zoo_url: url/path for model zoo\n",
"# cloud_zoo_url: valid for @cloud, @local, and ai server inference options\n",
"# '': ai server serving models from local folder\n",
"# path to json file: single model zoo in case of @local inference\n",
"#\n",
"# model_name: name of the object detection model\n",
"#\n",
"# video_source: video source for inference\n",
"# camera index for local camera\n",
"# URL of RTSP stream\n",
"# URL of YouTube Video\n",
"# path to video file (mp4 etc)\n",
"#\n",
"# zones: list of polygon zones for zone presence detection\n",
"# holdoff_sec: holdoff duration in seconds to suppress repeated notifications\n",
"# notification_config: configuration for Apprise notifications (see https://github.com/caronc/apprise)\n",
"# clip_duration: duration in frames of video clips to save\n",
"# storage_config: configuration for object storage to save video clips (can be S3 or local)\n",
"\n",
"import degirum as dg, degirum_tools\n",
"from degirum_tools import streams as dgstreams\n",
"\n",
"hw_location = \"@cloud\"\n",
"model_zoo_url = \"degirum/public\"\n",
"model_name = \"yolo_v5s_person_det--512x512_quant_n2x_orca1_1\"\n",
"video_source = \"https://raw.githubusercontent.com/DeGirum/PySDKExamples/main/images/images/WalkingPerson.mp4\"\n",
"zones = [\n",
" [[450, 100], [950, 100], [950, 600], [450, 600]],\n",
"]\n",
"holdoff_sec = 3.0\n",
"notification_config = \"json://console\" # just prints to console\n",
"clip_duration = 100\n",
"storage_config = degirum_tools.ObjectStorageConfig(\n",
" endpoint=\"./temp\", # Object storage endpoint URL or local path\n",
" access_key=\"\", # Access key for the storage account\n",
" secret_key=\"\", # Secret key for the storage account\n",
" bucket=\"nvr_clips\", # Bucket name for S3 or local directory name\n",
")"
]
},
{
"cell_type": "markdown",
"id": "e036ab35-cc8f-4e67-bf5b-f01c470db2a4",
"metadata": {
"tags": []
},
"source": [
"#### The rest of the cells below should run without any modifications"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "65d4cd90",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# load model\n",
"model = dg.load_model(\n",
" model_name,\n",
" hw_location,\n",
" model_zoo_url,\n",
" degirum_tools.get_token(),\n",
" overlay_show_probabilities=True,\n",
" overlay_line_width=1,\n",
")\n",
"\n",
"#\n",
"# create analyzers:\n",
"#\n",
"\n",
"anchor = degirum_tools.AnchorPoint.CENTER\n",
"window_name = \"Live Display (press 'q' to quit)\"\n",
"event_name = \"object_detected\"\n",
"\n",
"# object tracker\n",
"object_tracker = degirum_tools.ObjectTracker(\n",
" track_thresh=0.35,\n",
" track_buffer=100,\n",
" match_thresh=0.9999,\n",
" trail_depth=10,\n",
" anchor_point=anchor,\n",
")\n",
"\n",
"# zone counter\n",
"zone_counter = degirum_tools.ZoneCounter(\n",
" zones,\n",
" use_tracking=True,\n",
" triggering_position=[anchor],\n",
" annotation_color=(0, 255, 0),\n",
" window_name=window_name, # attach display window for interactive zone adjustment\n",
")\n",
"\n",
"# event detector: object in zone\n",
"zone_detector = degirum_tools.EventDetector(\n",
" f\"\"\"\n",
" Trigger: {event_name}\n",
" when: ZoneCount\n",
" is greater than: 0\n",
" during: [10, frames]\n",
" for at least: [90, percent]\n",
" \"\"\",\n",
" show_overlay=False,\n",
")\n",
"\n",
"# event notifier\n",
"notifier = degirum_tools.EventNotifier(\n",
" event_name,\n",
" event_name,\n",
" message=\"{time}: person is detected in zone\",\n",
" holdoff=holdoff_sec,\n",
" notification_config=notification_config,\n",
" clip_save=True,\n",
" clip_duration=clip_duration,\n",
" clip_pre_trigger_delay=clip_duration // 2,\n",
" storage_config=storage_config,\n",
")\n",
"\n",
"degirum_tools.attach_analyzers(\n",
" model, [object_tracker, zone_counter, zone_detector, notifier]\n",
")\n",
"\n",
"#\n",
"# create gizmos\n",
"#\n",
"\n",
"# video source gizmo\n",
"cam_source = dgstreams.VideoSourceGizmo(video_source)\n",
"\n",
"# detection gizmo\n",
"detector = dgstreams.AiSimpleGizmo(model)\n",
"\n",
"# local display gizmo (just for debugging)\n",
"display = dgstreams.VideoDisplayGizmo(window_name, show_ai_overlay=True)\n",
"\n",
"# start composition\n",
"dgstreams.Composition(cam_source >> detector >> display).start()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "28e6038b",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "base",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
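The `EventDetector` spec in the notebook fires `object_detected` when `ZoneCount` is greater than 0 during a 10-frame window for at least 90 percent of the frames. A minimal pure-Python sketch of that sliding-window idea (illustrative only; `make_window_trigger` is a made-up helper, not degirum_tools internals):

```python
from collections import deque

def make_window_trigger(window=10, min_fraction=0.9):
    """Return a callable that receives per-frame zone counts and
    reports True once the condition (count > 0) held for at least
    `min_fraction` of the last `window` frames."""
    history = deque(maxlen=window)

    def step(zone_count: int) -> bool:
        history.append(zone_count > 0)  # per-frame condition: ZoneCount > 0
        # require a full window before the trigger can fire
        return len(history) == window and sum(history) >= min_fraction * window

    return step

trigger = make_window_trigger()
# 9 frames with a person followed by 1 empty frame: 90% of the window
fired = [trigger(c) for c in [1] * 9 + [0]]
```

With the default parameters the trigger stays quiet until a full 10-frame window is accumulated, then fires because 9 of 10 frames meet the condition.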
Binary file added tests/reference/smart_nvr_3.1.png
1 change: 1 addition & 0 deletions tests/test_notebooks.py
@@ -68,6 +68,7 @@
("applications/person_age_gender_detection.ipynb", "Masked.mp4", {4:2}, []),
("applications/car_wrong_direction_detection.ipynb", "TrafficHD_short.mp4", [3], []),
("applications/parking_management.ipynb", "TrafficHD_short.mp4", [6], []),
("applications/smart_nvr.ipynb", "Masked.mp4", [3], []),
]

# _imageless_notebooks is a list of notebooks without an image cell output
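The notebook's `EventNotifier` uses `holdoff_sec` to suppress repeated notifications that arrive within the holdoff window. A toy illustration of that idea in plain Python (illustrative only; `HoldoffNotifier` is a hypothetical class, not the degirum_tools implementation):

```python
class HoldoffNotifier:
    """Deliver at most one notification per `holdoff_sec` seconds."""

    def __init__(self, holdoff_sec: float):
        self.holdoff_sec = holdoff_sec
        self._last_fired = None  # timestamp of the last delivered notification

    def notify(self, timestamp: float, message: str) -> bool:
        """Return True if delivered, False if suppressed by the holdoff."""
        if (
            self._last_fired is not None
            and timestamp - self._last_fired < self.holdoff_sec
        ):
            return False  # still within the holdoff window: suppress
        self._last_fired = timestamp
        print(f"{timestamp:.1f}: {message}")
        return True

notifier = HoldoffNotifier(holdoff_sec=3.0)
results = [
    notifier.notify(t, "person is detected in zone") for t in (0.0, 1.0, 2.9, 3.5)
]
```

Only the events at 0.0 s and 3.5 s are delivered; the two in between fall inside the 3-second holdoff and are dropped.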