3 changes: 3 additions & 0 deletions .gitignore
@@ -42,6 +42,9 @@ MANIFEST
# exclude the provider.tf file on arc-enabled k8s microhack (contains subscription id)
03-Azure/01-03-Infrastructure/03_Hybrid_Azure_Arc_Kubernetes/**/provider.tf

# Exclude Arc credentials configuration files
03-Azure/01-03-Infrastructure/03_Hybrid_Azure_Arc_Kubernetes/**/arc-data-credentials.yaml

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
@@ -1,12 +1,49 @@
# Challenge 4 - Deploy SQL Managed Instance to your cluster

In this challenge, you'll deploy Azure Arc-enabled data services to your K3s cluster, focusing on SQL Managed Instance. This lets you run an Azure SQL Managed Instance directly on your on-premises Kubernetes cluster while keeping cloud-connected management, monitoring, and security capabilities.

Azure Arc-enabled data services provide:
* **Cloud-connected database services** running on your own infrastructure
* **Centralized management** through Azure portal, Azure CLI, and Azure Resource Manager
* **Automatic updates and patching** managed through Azure Arc
* **Built-in monitoring and observability** with Log Analytics integration
* **Enterprise-grade security** with Azure Active Directory integration

💡*Hint*: Arc data services require a data controller that acts as the control plane for all data services in the cluster. This controller manages the lifecycle, updates, and monitoring of database instances.
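
Once the controller is deployed (later in this challenge), its control-plane role is visible from the cluster itself. A minimal sketch; the namespace name is whatever you choose at deployment time:

```bash
# The controller's API surface shows up as custom resource definitions in the cluster
kubectl get crd | grep arcdata

# The controller itself is a custom resource; -A searches all namespaces
kubectl get datacontrollers -A
```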

💡*Hint*: Custom locations allow you to use your Arc-enabled Kubernetes cluster as a deployment target for Azure services, creating a seamless hybrid cloud experience.
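
Creating a custom location is a two-step CLI flow against your connected cluster. A hedged sketch, not the exact walkthrough commands; the cluster name, resource group, namespace, and extension resource ID below are placeholders:

```bash
# Enable the cluster-connect and custom-locations features on the Arc-enabled cluster
# (you may also need --custom-locations-oid if your account cannot query the Custom Locations app)
az connectedk8s enable-features \
  --name <arc-cluster-name> \
  --resource-group <arc-resource-group> \
  --features cluster-connect custom-locations

# Create the custom location, tying a Kubernetes namespace to the connected cluster
# and the Arc data services cluster extension
az customlocation create \
  --name <custom-location-name> \
  --resource-group <arc-resource-group> \
  --namespace arc \
  --host-resource-id $(az connectedk8s show -g <arc-resource-group> -n <arc-cluster-name> --query id -o tsv) \
  --cluster-extension-ids <arc-data-extension-resource-id>
```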

## Goal
* Deploy Azure Arc data controller to enable data services on your K3s cluster
* Create a SQL Managed Instance running on your on-premises Kubernetes cluster
* Configure monitoring and management capabilities for the data services

## Actions
* Install required Azure CLI extensions for Arc data services (`arcdata`); a condensed CLI sketch of the full flow follows this list
* Enable custom locations feature on your Arc-enabled Kubernetes cluster
* Create a custom location that represents your cluster as an Azure deployment target
* Deploy the Azure Arc data controller with appropriate configuration for K3s
* Configure Log Analytics workspace integration for monitoring and telemetry
* Set up authentication credentials for monitoring dashboards (Grafana and Kibana)
* Create a SQL Managed Instance using the data controller
* Verify connectivity and management capabilities
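
Put together, the actions above map onto a handful of CLI calls. A condensed sketch under a few assumptions (direct connectivity mode, placeholder names, `local-path` as the K3s storage class); the exact sequence is in the solution walkthrough:

```bash
# Install / upgrade the Arc data services CLI extension
az extension add --name arcdata --upgrade

# Deploy the Arc data controller in direct connectivity mode onto the custom location
az arcdata dc create \
  --name arc-dc \
  --resource-group <arc-resource-group> \
  --location westeurope \
  --connectivity-mode direct \
  --cluster-name <arc-cluster-name> \
  --custom-location <custom-location-name> \
  --profile-name azure-arc-kubeadm \
  --storage-class local-path

# Create a SQL Managed Instance on top of the data controller
# (the instance admin login is usually supplied via AZDATA_USERNAME / AZDATA_PASSWORD)
az sql mi-arc create \
  --name arc-sqlmi \
  --resource-group <arc-resource-group> \
  --custom-location <custom-location-name>
```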

## Success Criteria
* Azure Arc data controller is successfully deployed and running in your cluster (`kubectl get datacontrollers`)
* Custom location is created and visible in Azure portal under Azure Arc > Infrastructure > Custom locations
* Data controller appears in Azure portal under Azure Arc > Data services > Data controllers
* SQL Managed Instance is deployed and shows as "Ready" in both Kubernetes (`kubectl get sqlmi`) and Azure portal
* Monitoring dashboards (Grafana for metrics, Kibana for logs) are accessible and showing data
* You can connect to the SQL Managed Instance using Azure Data Studio or SQL Server Management Studio (a quick check with `sqlcmd` is sketched after this list)
* Telemetry and logs are flowing to the configured Log Analytics workspace
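
A few shell-side checks for these criteria; a sketch, assuming the controller namespace is `arc` and the instance is named `arc-sqlmi`:

```bash
# Data controller and SQL Managed Instance custom resources should report Ready
kubectl get datacontrollers -A
kubectl get sqlmi -n arc

# Pods backing the controller, the monitoring stack, and the instance
kubectl get pods -n arc

# The PRIMARY-ENDPOINT column of `kubectl get sqlmi` gives the connection target
# (typically <ip>,<port>); sqlcmd works as an alternative to Azure Data Studio / SSMS.
# Credentials here are placeholders.
sqlcmd -S <primary-endpoint> -U <admin-user> -P '<password>' -Q "SELECT @@VERSION"
```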

## Learning Resources
* [What are Azure Arc-enabled data services?](https://learn.microsoft.com/en-us/azure/azure-arc/data/overview)
* [Create Azure Arc data services cluster extension](https://learn.microsoft.com/en-us/azure/azure-arc/kubernetes/conceptual-extensions)
* [Create a custom location on your arc-enabled k8s](https://learn.microsoft.com/en-us/azure/azure-arc/kubernetes/custom-locations#create-custom-location)
* [Create the Arc data controller](https://learn.microsoft.com/en-us/azure/azure-arc/data/create-data-controller-direct-cli)
* [Deploy SQL Managed Instance on Arc-enabled Kubernetes](https://learn.microsoft.com/en-us/azure/azure-arc/data/create-sql-managed-instance)
* [Connect to SQL Managed Instance on Arc](https://learn.microsoft.com/en-us/azure/azure-arc/data/connect-managed-instance)

## Solution - Spoiler Warning
[Solution Steps](../walkthroughs/challenge-04/solution.md)
@@ -188,6 +188,7 @@ resource "azurerm_linux_virtual_machine" "onprem_master" {
os_disk {
caching = "ReadWrite"
storage_account_type = "Premium_LRS"
disk_size_gb = 128 # P10 managed disk for better IOPS (500) and throughput (100 MB/s)
}

source_image_reference {
@@ -226,6 +227,7 @@ resource "azurerm_linux_virtual_machine" "onprem_worker" {
os_disk {
caching = "ReadWrite"
storage_account_type = "Premium_LRS"
disk_size_gb = 128 # P10 managed disk for better IOPS (500) and throughput (100 MB/s)
}

source_image_reference {
@@ -267,6 +269,7 @@ resource "azurerm_linux_virtual_machine" "onprem_worker2" {
os_disk {
caching = "ReadWrite"
storage_account_type = "Premium_LRS"
disk_size_gb = 128 # P10 managed disk for better IOPS (500) and throughput (100 MB/s)
}

source_image_reference {
@@ -128,7 +128,7 @@ terraform apply tfplan
- Master node: Installs K3s server, configures networking, sets up kubeconfig
- Worker nodes: Wait for master, then join the cluster as K3s agents (sketched after this list)
3. **Cluster becomes ready** in ~5-10 minutes after VM deployment
4. **SSH access** is available immediately with the mhadmin user and your password
4. **SSH access** is available immediately with your user and your password
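
The provisioning scripts themselves are not part of this diff, but the K3s portion of step 2 usually boils down to the standard install flow; a sketch, with the master IP and token as placeholders:

```bash
# On the master node: install K3s in server mode
curl -sfL https://get.k3s.io | sh -s - server --write-kubeconfig-mode 644

# The join token the workers need
sudo cat /var/lib/rancher/k3s/server/node-token

# On each worker node: install K3s in agent mode and join the cluster
curl -sfL https://get.k3s.io | K3S_URL=https://<master-ip>:6443 K3S_TOKEN=<node-token> sh -
```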

The expected Terraform output looks approximately like this, depending on the `start_index` and `end_index` parameters:
```bash
@@ -173,7 +173,7 @@ rg_names_onprem = {
### 1. Access your cluster
```bash
# Set admin username (must match the admin_user value in fixtures.tfvars)
admin_user="<replace-with-admin-user-from-fixtures.tfvars>" # e.g., "mhadmin"
admin_user="<replace-with-admin-user-from-fixtures.tfvars>"

# Extract user number from Azure username (e.g., LabUser-37 -> 37)
azure_user=$(az account show --query user.name --output tsv)
@@ -0,0 +1,128 @@
#!/bin/bash
# This script connects an existing K3s cluster to Azure Arc with Azure RBAC enabled
echo "Exporting environment variables"

# Extract user number from Azure username (e.g., LabUser-37 -> 37)
azure_user=$(az account show --query user.name --output tsv)
user_number=$(echo $azure_user | sed -n 's/.*LabUser-\([0-9]\+\).*/\1/p')

if [ -z "$user_number" ]; then
echo "Error: Could not extract user number from Azure username: $azure_user"
echo "Please make sure you're logged in as LabUser-XX"
exit 1
fi

echo "Detected user number: $user_number"

echo "Setting up kubectl access to the K3s cluster..."
# Get public ip of master node via Azure cli according to user-number
master_pip=$(az vm list-ip-addresses --resource-group "${user_number}-k8s-onprem" --name "${user_number}-k8s-master" --query "[0].virtualMachine.network.publicIpAddresses[0].ipAddress" --output tsv)

# Retrieve admin_user and admin_password from fixtures.tfvars
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
LAB_DIR="$(cd "$SCRIPT_DIR/.." && pwd)"
FIXTURES_FILE="$LAB_DIR/fixtures.tfvars"

if [ ! -f "$FIXTURES_FILE" ]; then
echo "Error: fixtures.tfvars not found at $FIXTURES_FILE"
echo "Please ensure fixtures.tfvars exists in the lab directory"
exit 1
fi

# Extract admin_user and admin_password from fixtures.tfvars
admin_user=$(grep -E '^\s*admin_user\s*=' "$FIXTURES_FILE" | sed -E 's/.*=\s*"(.*)".*/\1/')
admin_password=$(grep -E '^\s*admin_password\s*=' "$FIXTURES_FILE" | sed -E 's/.*=\s*"(.*)".*/\1/')

if [ -z "$admin_user" ] || [ -z "$admin_password" ]; then
echo "Error: Could not extract admin_user or admin_password from fixtures.tfvars"
echo "Please ensure fixtures.tfvars contains admin_user and admin_password variables"
exit 1
fi

echo "Using admin user: $admin_user"

# Create .kube directory if it doesn't exist
mkdir -p ~/.kube

# Copy the kubeconfig to standard location using sshpass for silent authentication
# and SSH options to accept host keys automatically
if ! command -v sshpass &> /dev/null; then
echo "Error: sshpass is not installed. Installing sshpass..."
sudo apt-get update -qq && sudo apt-get install -y -qq sshpass
fi

sshpass -p "$admin_password" scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
${admin_user}@${master_pip}:/home/${admin_user}/.kube/config ~/.kube/config
# replace localhost address with the public ip of master node
sed -i "s/127.0.0.1/$master_pip/g" ~/.kube/config
# Now kubectl works directly on your local client - no need to ssh into the master node anymore
kubectl get nodes

# Set variables based on detected user number
export onprem_resource_group="${user_number}-k8s-onprem"
export arc_resource_group="${user_number}-k8s-arc"
export arc_cluster_name="${user_number}-k8s-arc-enabled"
export location="westeurope"

echo "Using resource groups: $onprem_resource_group (onprem) and $arc_resource_group (arc)"

# Registering Azure Arc providers
echo "Registering Azure Arc providers"
az provider register --namespace Microsoft.Kubernetes --wait
az provider register --namespace Microsoft.KubernetesConfiguration --wait
az provider register --namespace Microsoft.ExtendedLocation --wait

az provider show -n Microsoft.Kubernetes -o table
az provider show -n Microsoft.KubernetesConfiguration -o table
az provider show -n Microsoft.ExtendedLocation -o table

echo "Clear cached helm Azure Arc Helm Charts"
rm -rf ~/.azure/AzureArcCharts

# Installing Azure Arc k8s CLI extensions
echo "Checking if you have up-to-date Azure Arc AZ CLI 'connectedk8s' extension..."
az extension show --name "connectedk8s" &> extension_output
if grep -q "not installed" extension_output; then
az extension add --name "connectedk8s"
else
az extension update --name "connectedk8s"
fi
rm extension_output
echo ""

echo "Checking if you have up-to-date Azure Arc AZ CLI 'k8s-configuration' extension..."
az extension show --name "k8s-configuration" &> extension_output
if grep -q "not installed" extension_output; then
az extension add --name "k8s-configuration"
else
az extension update --name "k8s-configuration"
fi
rm extension_output
echo ""

echo "Connecting the cluster to Azure Arc"
az connectedk8s connect --name $arc_cluster_name \
--resource-group $arc_resource_group \
--location $location \
--infrastructure 'generic' \
--distribution 'k3s'

echo "Waiting for Arc connection to be established..."
sleep 30

echo "Verifying Arc connection status..."
az connectedk8s show --resource-group $arc_resource_group --name $arc_cluster_name --query "{name:name, connectivityStatus:connectivityStatus}"

echo "Creating a clusterRoleBinding for the user..."
kubectl create clusterrolebinding demo-user-binding --clusterrole cluster-admin --user=$azure_user

echo ""
echo "✅ Azure Arc connection completed successfully!"
echo ""
echo "📋 Summary:"
echo " - Cluster: $arc_cluster_name"
echo " - Resource Group: $arc_resource_group"
echo " - Status: Connected"
echo ""
echo "🌐 You can view the cluster in Azure Portal:"
echo " https://portal.azure.com/#@/resource/subscriptions/$(az account show --query id --output tsv)/resourceGroups/$arc_resource_group/providers/Microsoft.Kubernetes/connectedClusters/$arc_cluster_name"
@@ -0,0 +1,133 @@
#!/bin/bash

# Bootstrap script for complete K3s + Azure Arc deployment
# This script:
# 1. Deploys K3s cluster using Terraform
# 2. Connects the cluster to Azure Arc
# 3. Provides verification and status checks

set -e # Exit on any error

echo "🚀 Starting complete K3s + Azure Arc bootstrap deployment"
echo "=================================================="

# Detect user information
azure_user=$(az account show --query user.name --output tsv)
user_number=$(echo $azure_user | sed -n 's/.*LabUser-\([0-9]\+\).*/\1/p')

if [ -z "$user_number" ]; then
echo "❌ Error: Could not extract user number from Azure username: $azure_user"
echo "Please make sure you're logged in as LabUser-XX"
exit 1
fi

echo "✅ Detected user number: $user_number"
echo "📧 Azure user: $azure_user"

# Determine script locations
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
LAB_DIR="$(cd "$SCRIPT_DIR/.." && pwd)"
TERRAFORM_DIR="$LAB_DIR"
ARC_CONNECT_SCRIPT="$SCRIPT_DIR/az_connect_k8s.sh"

echo "📁 Working directories:"
echo " Script dir: $SCRIPT_DIR"
echo " Lab dir: $LAB_DIR"
echo " Terraform dir: $TERRAFORM_DIR"

# Validate prerequisites
echo ""
echo "🔍 Validating prerequisites..."

# Check if terraform is available
if ! command -v terraform &> /dev/null; then
echo "❌ Terraform is not installed or not in PATH"
exit 1
fi

# Check if terraform files exist
if [ ! -f "$TERRAFORM_DIR/main.tf" ]; then
echo "❌ Terraform files not found in $TERRAFORM_DIR"
exit 1
fi

# Check if Arc connection script exists
if [ ! -f "$ARC_CONNECT_SCRIPT" ]; then
echo "❌ Arc connection script not found at $ARC_CONNECT_SCRIPT"
exit 1
fi

echo "✅ All prerequisites validated"

# Change to terraform directory
cd "$TERRAFORM_DIR"

echo ""
echo "🏗️ Phase 1: Deploying K3s cluster with Terraform"
echo "================================================"

# Setup terraform provider with current subscription
subscription_id=$(az account show --query id --output tsv)
echo "📋 Using subscription ID: $subscription_id"

echo "🔧 Updating provider.tf with current subscription..."
sed -i "s|subscription_id = \".*\"|subscription_id = \"$subscription_id\"|" provider.tf

# Initialize Terraform if needed
if [ ! -d ".terraform" ]; then
echo "⚙️ Initializing Terraform..."
terraform init
fi

# Plan and apply terraform
echo "📋 Creating Terraform plan..."
terraform plan -var-file=fixtures.tfvars -out=tfplan

echo "🚀 Applying Terraform deployment..."
terraform apply -parallelism=3 tfplan

# Verify deployment
echo "✅ Terraform deployment completed"

# Wait for VMs to be fully ready
echo "⏳ Waiting for VMs to be fully provisioned (60 seconds)..."
sleep 60

echo ""
echo "🔗 Phase 2: Connecting cluster to Azure Arc"
echo "============================================"

# Execute the Arc connection script
echo "🚀 Running Azure Arc connection script..."
bash "$ARC_CONNECT_SCRIPT"

echo ""
echo "🔍 Phase 3: Final verification and status"
echo "========================================="

# Additional verification steps
echo "📊 Cluster status:"
kubectl get nodes -o wide

echo ""
echo "🌐 Azure Arc status:"
az connectedk8s show --resource-group "${user_number}-k8s-arc" --name "${user_number}-k8s-arc-enabled" --query "{name:name, connectivityStatus:connectivityStatus, kubernetesVersion:kubernetesVersion}" -o table

echo ""
echo "🎉 Bootstrap deployment completed successfully!"
echo "=============================================="
echo ""
echo "📋 Summary:"
echo " 👤 User: $azure_user ($user_number)"
echo " 🏗️ On-premises RG: ${user_number}-k8s-onprem"
echo " ☁️ Azure Arc RG: ${user_number}-k8s-arc"
echo " 🔗 Arc Cluster: ${user_number}-k8s-arc-enabled"
echo ""
echo "🌐 View your cluster in Azure Portal:"
echo " https://portal.azure.com/#@/resource/subscriptions/$subscription_id/resourceGroups/${user_number}-k8s-arc/providers/Microsoft.Kubernetes/connectedClusters/${user_number}-k8s-arc-enabled"
echo ""
echo "💡 Next steps:"
echo " • Your K3s cluster is now running and connected to Azure Arc"
echo " • You can deploy Arc-enabled data services using the dataservice.sh script"
echo " • Use kubectl commands to interact with your cluster"
echo " • Explore Azure Arc features in the Azure Portal"