Welcome to the Azure Cloud Workshop! In this hands-on session, you'll learn how to leverage powerful Azure AI services to extract insights from video content, make those insights searchable, and then interact with them using an intelligent large language model. This workshop demonstrates a practical end-to-end solution for intelligent video analysis, showcasing the seamless integration capabilities within the Azure ecosystem.
- Azure AI Video Indexer: How to upload videos and automatically extract rich metadata, including transcripts, keywords, sentiment, and more, in a JSON output format.
- Azure AI Search (formerly Azure Cognitive Search): How to ingest and index the JSON output from Video Indexer, making its content quickly searchable.
- Azure OpenAI Service: How to use a large language model to query and interact with the indexed video content, enabling natural language understanding and intelligent responses.
Before you begin, please ensure you have the following:
- An active Azure subscription. If you don't have one, you can sign up for a free Azure account.
- Basic understanding of Azure portal navigation.
- Familiarity with JSON format.
- (Optional but Recommended) A basic understanding of REST APIs and `curl` or similar tools for making HTTP requests.
- (Optional) Python installed on your local machine if you wish to run any provided sample scripts outside of a cloud shell.
This workshop is structured into the following key sections:
- Setting up Azure Resources: We'll provision all necessary Azure services (Video Indexer, AI Search, Azure OpenAI Service).
- Ingesting Video with Azure AI Video Indexer: You'll learn how to upload a sample video and trigger its analysis, then retrieve the comprehensive JSON output.
- Indexing Data with Azure AI Search: We'll walk through creating a search index and populating it with the insights extracted by Video Indexer.
- Interacting with Azure OpenAI Service: You'll send queries to your Azure OpenAI model, referencing the indexed video content to get intelligent answers.
Follow the steps outlined in the workshop guide (or your instructor's directions) to set up your environment and begin the hands-on labs.
A sample document has been provided that demonstrates the core workflow; illustrative Python sketches of each step follow the list:
- Video Ingestion: A sample video is processed using Azure AI Video Indexer.
- JSON Output: The resulting detailed JSON file, containing all extracted insights (transcripts, keywords, etc.), serves as the raw data.
- AI Search Indexing: This JSON output is then indexed into an Azure AI Search instance, making the video's content searchable.
- Azure OpenAI Interaction: Finally, the Azure OpenAI Service is used to query the AI Search index, allowing natural language questions to be answered based on the video's content.
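The Video Ingestion and JSON Output steps can be scripted against the public Video Indexer REST API. The sketch below is illustrative only: it assumes you have already obtained an account access token (for example through the Video Indexer developer portal or the ARM-based token API), and the location, account ID, and video URL are placeholders; verify parameter and field names against the current API reference.

```python
import json
import time

import requests

# Placeholders -- substitute your own Video Indexer account details.
LOCATION = "trial"                      # or your Azure region, e.g. "westeurope"
ACCOUNT_ID = "<account-id>"
ACCESS_TOKEN = "<access-token>"         # obtained separately; tokens expire periodically
VIDEO_URL = "https://example.com/sample-video.mp4"

BASE = f"https://api.videoindexer.ai/{LOCATION}/Accounts/{ACCOUNT_ID}"

# 1) Upload the video by URL and start indexing.
upload = requests.post(
    f"{BASE}/Videos",
    params={
        "accessToken": ACCESS_TOKEN,
        "name": "workshop-sample",
        "videoUrl": VIDEO_URL,
        "privacy": "Private",
    },
)
upload.raise_for_status()
video_id = upload.json()["id"]

# 2) Poll until indexing completes, then save the full insights JSON.
while True:
    index = requests.get(
        f"{BASE}/Videos/{video_id}/Index",
        params={"accessToken": ACCESS_TOKEN},
    )
    index.raise_for_status()
    insights = index.json()
    if insights.get("state") == "Processed":
        break
    time.sleep(30)                       # indexing can take several minutes

with open("video-insights.json", "w", encoding="utf-8") as f:
    json.dump(insights, f, indent=2)
```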
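For the AI Search Indexing step, one option is to flatten the parts of the insights JSON you care about (here, just the transcript lines) into simple documents and push them with the `azure-search-documents` Python SDK. The index name, field names, and JSON paths below are illustrative assumptions; adapt them to the insights you actually want to make searchable.

```python
import json

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.indexes.models import (
    SearchFieldDataType, SearchIndex, SearchableField, SimpleField,
)

SEARCH_ENDPOINT = "https://<your-search-service>.search.windows.net"  # placeholder
SEARCH_KEY = "<admin-key>"                                             # placeholder
INDEX_NAME = "video-insights"                                          # illustrative name

credential = AzureKeyCredential(SEARCH_KEY)

# Define a minimal index: one document per transcript line.
index = SearchIndex(
    name=INDEX_NAME,
    fields=[
        SimpleField(name="id", type=SearchFieldDataType.String, key=True),
        SearchableField(name="text", type=SearchFieldDataType.String),
        SimpleField(name="start", type=SearchFieldDataType.String),
        SimpleField(name="videoName", type=SearchFieldDataType.String, filterable=True),
    ],
)
SearchIndexClient(SEARCH_ENDPOINT, credential).create_or_update_index(index)

# Flatten the Video Indexer output into documents (the JSON shape may vary by API version).
with open("video-insights.json", encoding="utf-8") as f:
    insights = json.load(f)

docs = []
for video in insights.get("videos", []):
    for line in video.get("insights", {}).get("transcript", []):
        docs.append({
            "id": f'{insights["id"]}-{line["id"]}',
            "text": line.get("text", ""),
            "start": line.get("instances", [{}])[0].get("start", ""),
            "videoName": insights.get("name", ""),
        })

SearchClient(SEARCH_ENDPOINT, INDEX_NAME, credential).upload_documents(documents=docs)
```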
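For the Azure OpenAI Interaction step, one convenient route is the "on your data" option of the chat completions API, which grounds the model's answers in an Azure AI Search index. The sketch below uses the `openai` Python package's `AzureOpenAI` client with an `extra_body` payload; the payload shape and supported `api-version` change over time, so treat this as a starting point and verify against the current Azure OpenAI documentation.

```python
from openai import AzureOpenAI  # openai >= 1.x

# Placeholders -- substitute your own endpoints, keys, and names.
client = AzureOpenAI(
    azure_endpoint="https://<your-openai-resource>.openai.azure.com",
    api_key="<azure-openai-key>",
    api_version="2024-02-01",  # use a version that supports "on your data"
)

response = client.chat.completions.create(
    model="<chat-deployment-name>",  # your deployment name, not the base model name
    messages=[
        {"role": "user", "content": "What topics are discussed in the video?"},
    ],
    # "On your data": ground answers in the AI Search index built earlier.
    extra_body={
        "data_sources": [
            {
                "type": "azure_search",
                "parameters": {
                    "endpoint": "https://<your-search-service>.search.windows.net",
                    "index_name": "video-insights",
                    "authentication": {
                        "type": "api_key",
                        "key": "<search-query-key>",
                    },
                },
            }
        ]
    },
)

print(response.choices[0].message.content)
```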
Once you've completed the core workshop, consider these challenges to deepen your understanding and extend the solution:
- Dynamic YouTube Video Selection:
  - Challenge: Modify the solution so that instead of working with a pre-defined video, you can pass a YouTube video URL or ID as part of the request (e.g., in an API call or a function parameter).
  - Goal: Make your solution dynamic, allowing the AI to answer questions that span multiple videos.
- Building a User Interface: Create a simple web application or a Streamlit app to provide a user-friendly interface for uploading videos, querying the search index, and interacting with Azure OpenAI (a minimal Streamlit sketch follows this list).
- Adding More Metadata: Explore other features of Azure AI Video Indexer (e.g., facial recognition, object detection) and integrate that additional metadata into your AI Search index.
- Custom Skills in AI Search: For more advanced scenarios, investigate how to create custom skills in Azure AI Search to further process or enrich your data before indexing.
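As a starting point for the user-interface challenge, a minimal Streamlit page could look like the sketch below. The `ask()` helper and the `workshop_helpers` module are hypothetical stand-ins for whatever wrapper you write around the Azure OpenAI call shown earlier.

```python
import streamlit as st

# Hypothetical helper that wraps the "on your data" call sketched earlier.
from workshop_helpers import ask  # assumed module/function, not part of the workshop code

st.title("Ask the Video")

question = st.text_input("What would you like to know about the video?")

if st.button("Ask") and question:
    with st.spinner("Querying Azure OpenAI..."):
        answer = ask(question)
    st.write(answer)
```

Run it with `streamlit run app.py`.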
We hope you enjoy this workshop and discover the immense potential of Azure AI services!
A simple web application that embeds Microsoft Video Indexer insights using an iframe and provides Azure OpenAI search functionality.
- Clean, modern UI with responsive design
- Embedded Video Indexer insights
- Azure OpenAI search with knowledge base integration
- AI-powered answers with source references
- Loading indicators and error handling
- Mobile-friendly layout
- Navigation between Video Indexer and AI Search pages
- Install dependencies: `npm install`
- Start the development server: `npm start` (or `npm run dev` for development with auto-reload)
- Open your browser: the application will open automatically at http://localhost:3000
- `npm start` - Start the production server
- `npm run dev` - Start the development server with auto-reload
- `npm run build` - Build the project (no build step required for static HTML)
videoindex-webapp/
├── index.html # Video Indexer page with embedded iframe
├── search.html # Azure OpenAI search page
├── package.json # NPM configuration
└── README.md # This file
Displays the embedded Video Indexer insights iframe. To use a different Video Indexer embed URL, replace the `src` attribute of the iframe in index.html.
Provides Azure OpenAI search functionality with the following features:
- Configuration for Azure OpenAI and Cognitive Search endpoints
- Search query interface
- AI-generated answers based on search results
- Display of source references with relevance scores
- Persistent configuration storage
The AI Search page requires the following Azure services:
- Azure OpenAI Service (with GPT-4 or other model deployment)
- Azure Cognitive Search (with an index containing your documents)
Enter your configuration details in the search page:
- Azure OpenAI Endpoint
- API Key
- Deployment Name
- Search Service Endpoint
- Search Service Key
- Index Name
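For reference, these six values map directly onto the Azure OpenAI "on your data" chat completions request that the search page issues from the browser. The sketch below shows an equivalent call in Python (rather than the page's client-side JavaScript) purely to make that mapping explicit; the `api-version` and payload shape may differ from what the page actually sends.

```python
import requests

# The six configuration fields from the search page (placeholders shown here).
AOAI_ENDPOINT   = "https://<your-openai-resource>.openai.azure.com"   # Azure OpenAI Endpoint
AOAI_KEY        = "<azure-openai-api-key>"                            # API Key
DEPLOYMENT      = "<chat-deployment-name>"                            # Deployment Name
SEARCH_ENDPOINT = "https://<your-search-service>.search.windows.net"  # Search Service Endpoint
SEARCH_KEY      = "<search-query-key>"                                # Search Service Key
INDEX_NAME      = "<index-name>"                                      # Index Name

url = (
    f"{AOAI_ENDPOINT}/openai/deployments/{DEPLOYMENT}"
    "/chat/completions?api-version=2024-02-01"
)

payload = {
    "messages": [{"role": "user", "content": "What is this video about?"}],
    "data_sources": [
        {
            "type": "azure_search",
            "parameters": {
                "endpoint": SEARCH_ENDPOINT,
                "index_name": INDEX_NAME,
                "authentication": {"type": "api_key", "key": SEARCH_KEY},
            },
        }
    ],
}

resp = requests.post(url, headers={"api-key": AOAI_KEY}, json=payload)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```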
- HTML5
- CSS3 (with modern features like Flexbox and CSS Grid)
- Vanilla JavaScript
- Azure OpenAI API
- Azure Cognitive Search API
- http-server (for local development)
MIT
