From c18cdb991088fbfc27feff447330518ed29794b5 Mon Sep 17 00:00:00 2001 From: Raytao Xia Date: Mon, 20 Jan 2025 15:27:48 -0800 Subject: [PATCH 1/8] Replaced degirum_cloud_token with degirum_tools.get_token(). Updated text blocks. Updated order of hardware and runtimes to match hub.degirum landing page. --- examples/basic/pysdk_hello_world.ipynb | 47 +++++++++++--------------- 1 file changed, 20 insertions(+), 27 deletions(-) diff --git a/examples/basic/pysdk_hello_world.ipynb b/examples/basic/pysdk_hello_world.ipynb index 76f4b65..1769a8e 100644 --- a/examples/basic/pysdk_hello_world.ipynb +++ b/examples/basic/pysdk_hello_world.ipynb @@ -33,15 +33,6 @@ "\n" ] }, - { - "cell_type": "code", - "execution_count": 5, - "metadata": {}, - "outputs": [], - "source": [ - "degirum_cloud_token = ''" - ] - }, { "cell_type": "markdown", "metadata": {}, @@ -62,7 +53,7 @@ "\n", "4. **Display the Results**: The numeric results of the inference, including detected bounding boxes and class IDs, are printed to the console. Additionally, using `degirum_tools.Display`, the inference results are visualized with object overlays on the input image.\n", "\n", - "This code provides a simple yet powerful introduction to the DeGirum PySDK, demonstrating how to load a cloud-hosted model, run inference, and interpret the results in just a few lines of code." + "When you run this code, you should see hardware hosted on the cloud run an object classification model, identify detected objects, and show bounding boxes for those detected objects with only a few lines of code." ] }, { @@ -78,7 +69,7 @@ " model_name=\"yolov8n_relu6_coco--640x640_quant_hailort_hailo8_1\",\n", " inference_host_address=\"@cloud\",\n", " zoo_url='degirum/models_hailort',\n", - " token=degirum_cloud_token,\n", + " token=degirum_tools.get_token(),\n", ")\n", "\n", "# perform AI model inference on given image source\n", @@ -95,7 +86,9 @@ "metadata": {}, "source": [ "### Unified AI Inference with DeGirum PySDK\n", - "We now highlight the core capabilities of the DeGirum PySDK, showcasing its flexibility and simplicity in deploying AI models across multiple hardware platforms. By specifying the appropriate model name and model zoo, users can seamlessly run different types of models, such as classification, detection, keypoint detection, or segmentation, using the same unified interface. The JSON configuration provided below illustrates how users can select the model name based on the desired AI task and hardware option, as well as how to choose the appropriate model zoo URL. This approach abstracts hardware-specific complexities, enabling a consistent experience for diverse AI tasks. Additionally, the `result.image_overlay` feature ensures that outputs—whether bounding boxes, keypoints, or segmentation masks—are visualized in a unified manner, simplifying result interpretation. Combined with `degirum_tools.Display`, this enables intuitive visualization of inference outputs directly overlaid on the input image. This example underscores PySDK’s design philosophy: different models, same code, unified visualization." + "We will now highlight the core capabilities of the DeGirum PySDK, showcasing its flexibility and simplicity in deploying AI models across multiple hardware platforms. By specifying the appropriate model name and model zoo, you can seamlessly run different types of models, such as classification, detection, keypoint detection, or segmentation, using the same unified interface. 
The JSON configuration provided in the code below enables you to select the hardware you want to use for the AI task you want to perform. We assign a model zoo URL as well as the model names from the DeGirum AI Hub for each AI task. Our approach abstracts hardware-specific complexities, driving a consistent experience for diverse AI tasks. \n", + "\n", + "In the code below, `degirum_tools.Display`, overlays an intuitive visualization of the model's inference results on the input image. This example underscores PySDK’s design philosophy: different models, same code, unified visualization." ] }, { @@ -119,6 +112,15 @@ " \"segmentation\": [\"yolov8n_relu6_coco_seg--640x640_quant_hailort_hailo8_1\"],\n", " },\n", " },\n", + " \"Google\": {\n", + " \"zoo_url\": \"degirum/models_tflite\",\n", + " \"model_names\": {\n", + " \"classification\": [\"yolov8s_silu_imagenet--224x224_quant_tflite_edgetpu_1\"],\n", + " \"detection\": [\"yolov8n_relu6_coco--640x640_quant_tflite_edgetpu_1\"],\n", + " \"keypoint_detection\": [\"yolov8n_relu6_widerface_kpts--640x640_quant_tflite_edgetpu_1\"],\n", + " \"segmentation\": [\"yolov8n_relu6_coco_seg--640x640_quant_tflite_edgetpu_1\"],\n", + " },\n", + " },\n", " \"Intel\": {\n", " \"zoo_url\": \"degirum/models_openvino\",\n", " \"model_names\": {\n", @@ -131,7 +133,7 @@ " \"DeGirum\": {\n", " \"zoo_url\": \"degirum/models_n2x\",\n", " \"model_names\": {\n", - " \"classification\": [\"yolov8s_silu_imagenet--224x224_quant_n2x_orca1_1\"],\n", + " \"classification\": [\"yolov8s _silu_imagenet--224x224_quant_n2x_orca1_1\"],\n", " \"detection\": [\"yolov8n_relu6_coco--640x640_quant_n2x_orca1_1\"],\n", " \"keypoint_detection\": [\"yolov8n_relu6_widerface_kpts--640x640_quant_n2x_orca1_1\"],\n", " \"segmentation\": [\"yolov8n_relu6_coco_seg--640x640_quant_n2x_orca1_1\"],\n", @@ -146,15 +148,6 @@ " \"segmentation\": [\"yolov8n_relu6_coco_seg--640x640_quant_rknn_rk3588_1\"],\n", " },\n", " },\n", - " \"Google\": {\n", - " \"zoo_url\": \"degirum/models_tflite\",\n", - " \"model_names\": {\n", - " \"classification\": [\"yolov8s_silu_imagenet--224x224_quant_tflite_edgetpu_1\"],\n", - " \"detection\": [\"yolov8n_relu6_coco--640x640_quant_tflite_edgetpu_1\"],\n", - " \"keypoint_detection\": [\"yolov8n_relu6_widerface_kpts--640x640_quant_tflite_edgetpu_1\"],\n", - " \"segmentation\": [\"yolov8n_relu6_coco_seg--640x640_quant_tflite_edgetpu_1\"],\n", - " },\n", - " },\n", "}\n", "\n", "source_images = {\n", @@ -167,7 +160,7 @@ "# Create widgets\n", "hardware_dropdown = widgets.Dropdown(\n", " options=model_configurations.keys(),\n", - " value=\"Intel\",\n", + " value=\"Hailo\",\n", " description=\"Hardware:\",\n", ")\n", "\n", @@ -205,7 +198,7 @@ " model_name=model_name,\n", " inference_host_address=inference_host_address,\n", " zoo_url=zoo_url,\n", - " token=degirum_cloud_token,\n", + " token=degirum_tools.get_token()\n", " )\n", " inference_result = model(source_image)\n", "\n", @@ -227,7 +220,7 @@ "source": [ "### Explore and Experiment with DeGirum PySDK\n", "\n", - "This example demonstrates how to perform AI inference using DeGirum PySDK with a pre-trained model hosted in the cloud. We encourage you to experiment further by trying out different **model zoos**, **model names**, and **image sources**. For instance, you can explore various AI tasks such as object detection, classification, keypoint detection, or segmentation by selecting models from different model zoos (e.g., `models_hailort`, `models_openvino`, `models_n2x`, etc.) and providing images of your choice. 
This flexibility showcases PySDK's unified API, allowing you to customize and test AI inference across multiple hardware platforms and tasks effortlessly. Dive in and discover the possibilities!" + "This example demonstrates how to perform AI inference using DeGirum PySDK with a pre-trained model hosted in the cloud. We encourage you to experiment further by trying out different **model zoos**, **model names**, and **image sources**. For instance, you can explore various AI tasks such as object detection, classification, keypoint detection, or segmentation by selecting models from different model zoos (e.g., `models_hailort`, `models_tflite`, `models_openvino`, etc.) and providing images of your choice. This flexibility showcases PySDK's unified API, allowing you to customize and test AI inference across multiple hardware platforms and tasks effortlessly. Dive in and discover the possibilities!" ] }, { @@ -239,7 +232,7 @@ "import degirum as dg, degirum_tools\n", "\n", "hw_location = \"@cloud\" # \"@cloud\" for cloud inference, \"@local\" for local inference, or IP address for AI server inference\n", - "model_zoo_url = \"degirum/models_hailort\" # models_hailort, models_openvino, models_n2x, models_rknn, models_tflite\n", + "model_zoo_url = \"degirum/models_hailort\" # models_hailort, models_tflite, models_openvino, models_n2x, models_rknn\n", "model_name = \"yolov8n_relu6_coco--640x640_quant_hailort_hailo8_1\"\n", "image_source = \"https://raw.githubusercontent.com/DeGirum/PySDKExamples/main/images/Mask1.jpg\"\n", "\n", @@ -248,7 +241,7 @@ " model_name=model_name,\n", " inference_host_address=hw_location,\n", " zoo_url=model_zoo_url,\n", - " token=degirum_cloud_token\n", + " token=degirum_tools.get_token()\n", ")\n", "\n", "# perform AI model inference on given image source\n", From 14ba8b8d17c9e8166092472109222a61130b4ac9 Mon Sep 17 00:00:00 2001 From: Raytao Xia Date: Mon, 20 Jan 2025 15:39:45 -0800 Subject: [PATCH 2/8] Readded degirum_cloud_token. 
--- examples/basic/pysdk_hello_world.ipynb | 15 ++++++++++++--- 1 file changed, 12 insertions(+), 3 deletions(-) diff --git a/examples/basic/pysdk_hello_world.ipynb b/examples/basic/pysdk_hello_world.ipynb index 1769a8e..4c9dd3a 100644 --- a/examples/basic/pysdk_hello_world.ipynb +++ b/examples/basic/pysdk_hello_world.ipynb @@ -33,6 +33,15 @@ "\n" ] }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "degirum_cloud_token = 'dg_AJx4QK96F8xL1d1Ziq3rBRfWbV9NbaM5EBEfJ'" + ] + }, { "cell_type": "markdown", "metadata": {}, @@ -69,7 +78,7 @@ " model_name=\"yolov8n_relu6_coco--640x640_quant_hailort_hailo8_1\",\n", " inference_host_address=\"@cloud\",\n", " zoo_url='degirum/models_hailort',\n", - " token=degirum_tools.get_token(),\n", + " token=degirum_cloud_token,\n", ")\n", "\n", "# perform AI model inference on given image source\n", @@ -198,7 +207,7 @@ " model_name=model_name,\n", " inference_host_address=inference_host_address,\n", " zoo_url=zoo_url,\n", - " token=degirum_tools.get_token()\n", + " token=degirum_cloud_token\n", " )\n", " inference_result = model(source_image)\n", "\n", @@ -241,7 +250,7 @@ " model_name=model_name,\n", " inference_host_address=hw_location,\n", " zoo_url=model_zoo_url,\n", - " token=degirum_tools.get_token()\n", + " token=degirum_cloud_token\n", ")\n", "\n", "# perform AI model inference on given image source\n", From 5f8a601baeaee98fef5d08307f2b7b50564be6a7 Mon Sep 17 00:00:00 2001 From: Raytao Xia Date: Mon, 20 Jan 2025 15:52:01 -0800 Subject: [PATCH 3/8] Changed degirum_cloud_token to a placeholder. --- examples/basic/pysdk_hello_world.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/examples/basic/pysdk_hello_world.ipynb b/examples/basic/pysdk_hello_world.ipynb index 4c9dd3a..2d3c192 100644 --- a/examples/basic/pysdk_hello_world.ipynb +++ b/examples/basic/pysdk_hello_world.ipynb @@ -39,7 +39,7 @@ "metadata": {}, "outputs": [], "source": [ - "degirum_cloud_token = 'dg_AJx4QK96F8xL1d1Ziq3rBRfWbV9NbaM5EBEfJ'" + "degirum_cloud_token = ''" ] }, { From 0ceec9d76ca29eb9c756b87f2a8167e732fa5902 Mon Sep 17 00:00:00 2001 From: Raytao Xia Date: Mon, 20 Jan 2025 16:28:08 -0800 Subject: [PATCH 4/8] Shortened "the DeGirum PySDK" --- examples/basic/pysdk_hello_world.ipynb | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/examples/basic/pysdk_hello_world.ipynb b/examples/basic/pysdk_hello_world.ipynb index 2d3c192..2b2f9d3 100644 --- a/examples/basic/pysdk_hello_world.ipynb +++ b/examples/basic/pysdk_hello_world.ipynb @@ -48,7 +48,7 @@ "source": [ "### Running Your First Inference\n", "\n", - "In this section, we’ll demonstrate how to use the DeGirum PySDK to perform object detection on an example image using a cloud-hosted AI model. Here’s what the code does:\n", + "In this section, we’ll demonstrate how to use PySDK to perform object detection on an example image using a cloud-hosted AI model. Here’s what the code does:\n", "\n", "1. **Import Required Libraries**: The `degirum` package is used for loading and using AI models, and `degirum_tools` provides utility functions such as obtaining a token and displaying results.\n", "\n", @@ -95,7 +95,7 @@ "metadata": {}, "source": [ "### Unified AI Inference with DeGirum PySDK\n", - "We will now highlight the core capabilities of the DeGirum PySDK, showcasing its flexibility and simplicity in deploying AI models across multiple hardware platforms. 
By specifying the appropriate model name and model zoo, you can seamlessly run different types of models, such as classification, detection, keypoint detection, or segmentation, using the same unified interface. The JSON configuration provided in the code below enables you to select the hardware you want to use for the AI task you want to perform. We assign a model zoo URL as well as the model names from the DeGirum AI Hub for each AI task. Our approach abstracts hardware-specific complexities, driving a consistent experience for diverse AI tasks. \n", + "We will now highlight the core capabilities of PySDK, showcasing its flexibility and simplicity in deploying AI models across multiple hardware platforms. By specifying the appropriate model name and model zoo, you can seamlessly run different types of models, such as classification, detection, keypoint detection, or segmentation, using the same unified interface. The JSON configuration provided in the code below enables you to select the hardware you want to use for the AI task you want to perform. We assign a model zoo URL as well as the model names from the DeGirum AI Hub for each AI task. Our approach abstracts hardware-specific complexities, driving a consistent experience for diverse AI tasks. \n", "\n", "In the code below, `degirum_tools.Display`, overlays an intuitive visualization of the model's inference results on the input image. This example underscores PySDK’s design philosophy: different models, same code, unified visualization." ] @@ -229,7 +229,7 @@ "source": [ "### Explore and Experiment with DeGirum PySDK\n", "\n", - "This example demonstrates how to perform AI inference using DeGirum PySDK with a pre-trained model hosted in the cloud. We encourage you to experiment further by trying out different **model zoos**, **model names**, and **image sources**. For instance, you can explore various AI tasks such as object detection, classification, keypoint detection, or segmentation by selecting models from different model zoos (e.g., `models_hailort`, `models_tflite`, `models_openvino`, etc.) and providing images of your choice. This flexibility showcases PySDK's unified API, allowing you to customize and test AI inference across multiple hardware platforms and tasks effortlessly. Dive in and discover the possibilities!" + "This example demonstrates how to perform AI inference using PySDK with a pre-trained model hosted in the cloud. We encourage you to experiment further by trying out different **model zoos**, **model names**, and **image sources**. For instance, you can explore various AI tasks such as object detection, classification, keypoint detection, or segmentation by selecting models from different model zoos (e.g., `models_hailort`, `models_tflite`, `models_openvino`, etc.) and providing images of your choice. This flexibility showcases PySDK's unified API, allowing you to customize and test AI inference across multiple hardware platforms and tasks effortlessly. Dive in and discover the possibilities!" ] }, { From b3987160fcd32437cb2ca68dbce6aa76afccd11c Mon Sep 17 00:00:00 2001 From: Raytao Xia Date: Mon, 20 Jan 2025 16:33:36 -0800 Subject: [PATCH 5/8] Corrected a typo. 
--- examples/basic/pysdk_hello_world.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/examples/basic/pysdk_hello_world.ipynb b/examples/basic/pysdk_hello_world.ipynb index 2b2f9d3..a3d623e 100644 --- a/examples/basic/pysdk_hello_world.ipynb +++ b/examples/basic/pysdk_hello_world.ipynb @@ -142,7 +142,7 @@ " \"DeGirum\": {\n", " \"zoo_url\": \"degirum/models_n2x\",\n", " \"model_names\": {\n", - " \"classification\": [\"yolov8s _silu_imagenet--224x224_quant_n2x_orca1_1\"],\n", + " \"classification\": [\"yolov8s_silu_imagenet--224x224_quant_n2x_orca1_1\"],\n", " \"detection\": [\"yolov8n_relu6_coco--640x640_quant_n2x_orca1_1\"],\n", " \"keypoint_detection\": [\"yolov8n_relu6_widerface_kpts--640x640_quant_n2x_orca1_1\"],\n", " \"segmentation\": [\"yolov8n_relu6_coco_seg--640x640_quant_n2x_orca1_1\"],\n", From e319cdc375db274c56346cfd4d60cf59f1127fc5 Mon Sep 17 00:00:00 2001 From: Raytao Xia Date: Tue, 21 Jan 2025 15:05:21 -0800 Subject: [PATCH 6/8] Added links to YT videos and used .get_token() --- examples/basic/pysdk_hello_world.ipynb | 20 ++++++-------------- 1 file changed, 6 insertions(+), 14 deletions(-) diff --git a/examples/basic/pysdk_hello_world.ipynb b/examples/basic/pysdk_hello_world.ipynb index a3d623e..825e734 100644 --- a/examples/basic/pysdk_hello_world.ipynb +++ b/examples/basic/pysdk_hello_world.ipynb @@ -25,23 +25,15 @@ "### Setting the token\n", "Since we’ll be using hardware hosted in the cloud by DeGirum, you’ll need to set a token to access these resources. This token serves as your secure key to connect to DeGirum’s cloud infrastructure and enables you to run the examples provided in this notebook seamlessly.\n", "\n", - "To guide you through the process of obtaining your token, refer to the video below: \n", + "To guide you through the process of obtaining your token and adding it as a new secret in Google Colab, refer to the videos below: \n", "\n", - "[![How to Obtain and Set Your Token](https://img.youtube.com/vi/PLACEHOLDER/0.jpg)](https://www.youtube.com/watch?v=PLACEHOLDER) \n", + "[![Generating a Token for the DeGirum AI Hub](https://img.youtube.com/vi/iyii0RzyFm8/0.jpg)](https://youtu.be/iyii0RzyFm8)\n", + "[![How to Add a New Secret in Google Colab](https://img.youtube.com/vi/GmevDVlT0OQ/0.jpg)](https://youtu.be/GmevDVlT0OQ) \n", "\n", "*Click on the thumbnail to watch the tutorial video.* \n", "\n" ] }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "degirum_cloud_token = ''" - ] - }, { "cell_type": "markdown", "metadata": {}, @@ -78,7 +70,7 @@ " model_name=\"yolov8n_relu6_coco--640x640_quant_hailort_hailo8_1\",\n", " inference_host_address=\"@cloud\",\n", " zoo_url='degirum/models_hailort',\n", - " token=degirum_cloud_token,\n", + " token=degirum_tools.get_token(),\n", ")\n", "\n", "# perform AI model inference on given image source\n", @@ -207,7 +199,7 @@ " model_name=model_name,\n", " inference_host_address=inference_host_address,\n", " zoo_url=zoo_url,\n", - " token=degirum_cloud_token\n", + " token=degirum_tools.get_token()\n", " )\n", " inference_result = model(source_image)\n", "\n", @@ -250,7 +242,7 @@ " model_name=model_name,\n", " inference_host_address=hw_location,\n", " zoo_url=model_zoo_url,\n", - " token=degirum_cloud_token\n", + " token=degirum_tools.get_token()\n", ")\n", "\n", "# perform AI model inference on given image source\n", From a09fd925bbc40bff7c921e7acf4f8fc4c3e6e20f Mon Sep 17 00:00:00 2001 From: Raytao Xia Date: Wed, 22 Jan 2025 09:17:18 -0800 
Subject: [PATCH 7/8] Added code block to check get_token(). --- examples/basic/pysdk_hello_world.ipynb | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/examples/basic/pysdk_hello_world.ipynb b/examples/basic/pysdk_hello_world.ipynb index 825e734..5785d48 100644 --- a/examples/basic/pysdk_hello_world.ipynb +++ b/examples/basic/pysdk_hello_world.ipynb @@ -34,6 +34,24 @@ "\n" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You may verify your token with this code:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import degirum_tools\n", + "# print cloud token\n", + "degirum_tools.get_token()" + ] + }, { "cell_type": "markdown", "metadata": {}, From 3e08dd5825881f6b8528ea87508cea79fb4d4574 Mon Sep 17 00:00:00 2001 From: Raytao Xia Date: Wed, 22 Jan 2025 09:38:03 -0800 Subject: [PATCH 8/8] Removed text in .Display("AI Camera:[...]") --- examples/basic/pysdk_hello_world.ipynb | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/examples/basic/pysdk_hello_world.ipynb b/examples/basic/pysdk_hello_world.ipynb index 5785d48..e27c0bb 100644 --- a/examples/basic/pysdk_hello_world.ipynb +++ b/examples/basic/pysdk_hello_world.ipynb @@ -96,7 +96,7 @@ "\n", "# show results of inference\n", "print(inference_result) # numeric results\n", - "with degirum_tools.Display(\"AI Camera: Press q or x to Quit\") as display:\n", + "with degirum_tools.Display() as display:\n", " display.show_image(inference_result)" ] }, @@ -223,7 +223,7 @@ "\n", " print(\"Inference result:\")\n", " print(inference_result)\n", - " with degirum_tools.Display(\"AI Camera: Press x or q to quit\") as display:\n", + " with degirum_tools.Display() as display:\n", " display.show_image(inference_result)\n", "\n", "# Link the Run button to the inference function\n", @@ -268,7 +268,7 @@ "\n", "# show results of inference\n", "print(inference_result) # numeric results\n", - "with degirum_tools.Display(\"AI Camera\") as display:\n", + "with degirum_tools.Display() as display:\n", " display.show_image(inference_result)" ] }
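
After the full series is applied, the hello-world cell authenticates with `degirum_tools.get_token()` and shows results with an argument-free `degirum_tools.Display()`. The following is a rough sketch of that end-to-end flow, assembled from the calls that appear in the diffs above; the model name, zoo URL, and image URL are taken from the notebook, but the exact final cell contents are not reproduced verbatim here.

    import degirum as dg, degirum_tools

    # load a cloud-hosted detection model from the DeGirum AI Hub model zoo;
    # get_token() retrieves the token set up per the videos referenced in the notebook
    model = dg.load_model(
        model_name="yolov8n_relu6_coco--640x640_quant_hailort_hailo8_1",
        inference_host_address="@cloud",   # "@cloud", "@local", or an AI server IP address
        zoo_url="degirum/models_hailort",
        token=degirum_tools.get_token(),
    )

    # run inference on an example image from the PySDKExamples repository
    image_source = "https://raw.githubusercontent.com/DeGirum/PySDKExamples/main/images/Mask1.jpg"
    inference_result = model(image_source)

    # print the numeric results, then show the detections overlaid on the input image
    print(inference_result)
    with degirum_tools.Display() as display:
        display.show_image(inference_result)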
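
The interactive cell reordered in PATCH 1/8 drives the same `dg.load_model()` call from a per-vendor configuration table. Below is a pared-down sketch of that lookup pattern; `resolve()` is a hypothetical helper (the notebook instead wires the lookup to ipywidgets dropdowns and a Run-button callback), and only two of the vendors and two of the tasks from the table are shown.

    # abbreviated version of the notebook's model_configurations table:
    # one model zoo URL plus per-task model names for each hardware vendor
    model_configurations = {
        "Hailo": {
            "zoo_url": "degirum/models_hailort",
            "model_names": {
                "detection": ["yolov8n_relu6_coco--640x640_quant_hailort_hailo8_1"],
                "segmentation": ["yolov8n_relu6_coco_seg--640x640_quant_hailort_hailo8_1"],
            },
        },
        "Google": {
            "zoo_url": "degirum/models_tflite",
            "model_names": {
                "detection": ["yolov8n_relu6_coco--640x640_quant_tflite_edgetpu_1"],
                "segmentation": ["yolov8n_relu6_coco_seg--640x640_quant_tflite_edgetpu_1"],
            },
        },
    }

    def resolve(hardware, task):
        # look up the zoo URL and the first model listed for the requested task
        config = model_configurations[hardware]
        return config["zoo_url"], config["model_names"][task][0]

    zoo_url, model_name = resolve("Google", "detection")
    print(zoo_url, model_name)
    # -> degirum/models_tflite yolov8n_relu6_coco--640x640_quant_tflite_edgetpu_1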