Container App Deployment Examples with AZD

March 27, 2026

This directory contains comprehensive examples for deploying containerized applications to Azure Container Apps using Azure Developer CLI (AZD). These examples demonstrate real-world patterns, best practices, and production-ready configurations.

Overview

Azure Container Apps is a fully managed serverless container platform that enables you to run microservices and containerized applications without managing infrastructure. When combined with AZD, you get:

  • Simplified Deployment: Single command deploys containers with infrastructure
  • Automatic Scaling: Scale to zero and scale out based on HTTP traffic or events
  • Integrated Networking: Built-in service discovery and traffic splitting
  • Managed Identity: Secure authentication to Azure resources
  • Cost Optimization: Pay only for resources you use

Prerequisites

Before getting started, ensure you have:

# Check AZD installation
azd version

# Check Azure CLI
az version

# Check Docker (for building custom images)
docker --version

# Authenticate for AZD deployments
azd auth login

# Optional: sign in to Azure CLI if you plan to run az commands directly
az login

Required Azure Resources:

  • Active Azure subscription
  • Resource group creation permissions
  • Container Apps environment access

Quick Start Examples

1. Simple Web API (Python Flask)

Deploy a basic REST API with Azure Container Apps.

Example: Python Flask API

# azure.yaml
name: flask-api-demo
metadata:
  template: flask-api-demo@0.0.1-beta
services:
  api:
    project: ./src/api
    language: python
    host: containerapp
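
The service itself can be a single file. A minimal sketch (the `src/api/app.py` path and route layout are assumed, matching the azure.yaml above) that backs the `/health` check used in the deployment steps:

```python
# src/api/app.py — minimal sketch of the Flask service (assumed layout)
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/health')
def health():
    # Target for the post-deploy curl check and for container health probes
    return jsonify({"status": "ok"})

# In the container, serve on 0.0.0.0:8000 (the ingress targetPort), e.g.:
#   gunicorn --bind 0.0.0.0:8000 app:app
```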

Deployment Steps:

# Initialize from template
azd init --template todo-python-mongo

# Provision infrastructure and deploy
azd up

# Test the deployment
azd show
curl $(azd show --output json | jq -r '.services.api.endpoint')/health

Key Features:

  • Auto-scaling from 0 to 10 replicas
  • Health probes and liveness checks
  • Environment variable injection
  • Application Insights integration

2. Node.js Express API

Deploy a Node.js backend with MongoDB integration.

# Initialize Node.js API template
azd init --template todo-nodejs-mongo

# Configure environment variables
azd env set DATABASE_NAME todosdb
azd env set COLLECTION_NAME todos

# Deploy
azd up

# View logs via Azure Monitor
azd monitor --logs

Infrastructure Highlights:

// Bicep snippet from infra/main.bicep
resource containerApp 'Microsoft.App/containerApps@2023-05-01' = {
  name: 'api-${resourceToken}'
  location: location
  properties: {
    managedEnvironmentId: containerEnv.id
    configuration: {
      ingress: {
        external: true
        targetPort: 3000
        transport: 'auto'
      }
      secrets: [
        {
          name: 'mongodb-connection'
          value: mongoConnection
        }
      ]
    }
    template: {
      containers: [
        {
          name: 'api'
          image: containerImage
          env: [
            {
              name: 'DATABASE_URL'
              secretRef: 'mongodb-connection'
            }
          ]
        }
      ]
      scale: {
        minReplicas: 0
        maxReplicas: 10
      }
    }
  }
}

3. Static Frontend + API Backend

Deploy a full-stack application with React frontend and API backend.

# Initialize full-stack template
azd init --template todo-csharp-sql-swa-func

# Review configuration
cat azure.yaml

# Deploy both services
azd up

# Open the application (xdg-open on Linux; use `open` on macOS or `start` on Windows)
azd show --output json | jq -r '.services.web.endpoint' | xargs xdg-open

Production Examples

Example 1: Microservices Architecture

Scenario: E-commerce application with multiple microservices

Directory Structure:

microservices-demo/
├── azure.yaml
├── infra/
│   ├── main.bicep
│   ├── app/
│   │   ├── container-env.bicep
│   │   ├── product-service.bicep
│   │   ├── order-service.bicep
│   │   └── payment-service.bicep
│   └── core/
│       ├── storage.bicep
│       └── database.bicep
└── src/
    ├── product-service/
    ├── order-service/
    └── payment-service/

azure.yaml Configuration:

name: microservices-ecommerce
services:
  product-service:
    project: ./src/product-service
    language: python
    host: containerapp
    
  order-service:
    project: ./src/order-service
    language: csharp
    host: containerapp
    
  payment-service:
    project: ./src/payment-service
    language: nodejs
    host: containerapp
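
Within a shared Container Apps environment, services can resolve each other by app name over internal ingress, so the order service reaches the product service without any external hostname. A sketch (service names come from azure.yaml above; the `/products/<id>` route is an assumption):

```python
import json
import urllib.request

# Built-in service discovery: apps in the same Container Apps environment
# can address each other by service name (here, the name from azure.yaml).
PRODUCT_SERVICE = "http://product-service"

def product_endpoint(product_id: str) -> str:
    # Route is illustrative; adjust to the product service's real API
    return f"{PRODUCT_SERVICE}/products/{product_id}"

def fetch_product(product_id: str) -> dict:
    # e.g. called by the order service to validate a product before ordering
    with urllib.request.urlopen(product_endpoint(product_id)) as resp:
        return json.load(resp)
```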

Deployment:

# Initialize project
azd init

# Set production environment
azd env new production

# Configure production settings
azd env set ENVIRONMENT production
azd env set MIN_REPLICAS 2
azd env set MAX_REPLICAS 50

# Deploy all services
azd up

# Monitor deployment
azd monitor --overview

Example 2: AI-Powered Container App

Scenario: AI chat application with Microsoft Foundry Models integration

File: src/ai-chat/app.py

import os

from flask import Flask, request, jsonify
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient
from openai import AzureOpenAI

app = Flask(__name__)

# Use Managed Identity for secure access to Key Vault
credential = DefaultAzureCredential()
vault_url = "https://{vault-name}.vault.azure.net"
secret_client = SecretClient(vault_url=vault_url, credential=credential)

@app.route('/api/chat', methods=['POST'])
def chat():
    user_message = request.json.get('message')

    # Get the Azure OpenAI key from Key Vault
    openai_key = secret_client.get_secret("openai-api-key").value
    ai_client = AzureOpenAI(
        azure_endpoint=os.environ['AZURE_OPENAI_ENDPOINT'],
        api_key=openai_key,
        api_version='2024-02-01'
    )

    response = ai_client.chat.completions.create(
        model=os.environ['AZURE_OPENAI_DEPLOYMENT'],
        messages=[{"role": "user", "content": user_message}]
    )

    return jsonify({"response": response.choices[0].message.content})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8000)
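
One refinement worth considering: the handler fetches the Key Vault secret on every request, which adds latency and can hit Key Vault throttling limits. A small hypothetical helper (not part of the template) caches the first fetch for the life of the process:

```python
from functools import lru_cache

def make_cached_getter(fetch_secret):
    """Wrap a secret-fetching callable so Key Vault is only hit once.

    fetch_secret would wrap the SecretClient lookup from the app above;
    the helper name and wiring here are illustrative.
    """
    @lru_cache(maxsize=1)
    def get() -> str:
        return fetch_secret()
    return get
```

Rotation-sensitive apps would add a TTL instead of caching forever; `lru_cache` is the simplest version of the idea.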

File: azure.yaml

name: ai-chat-app
services:
  api:
    project: ./src/ai-chat
    language: python
    host: containerapp

File: infra/main.bicep

param location string = resourceGroup().location
param environmentName string

var resourceToken = uniqueString(subscription().id, environmentName, location)

// Container Apps Environment
module containerEnv './app/container-env.bicep' = {
  name: 'container-env-${resourceToken}'
  params: {
    location: location
    environmentName: environmentName
  }
}

// Key Vault for secrets
resource keyVault 'Microsoft.KeyVault/vaults@2023-02-01' = {
  name: 'kv-${resourceToken}'
  location: location
  properties: {
    sku: {
      family: 'A'
      name: 'standard'
    }
    tenantId: subscription().tenantId
    enableRbacAuthorization: true
  }
}

// Container App with Managed Identity
module aiChatApp './app/container-app.bicep' = {
  name: 'ai-chat-app-${resourceToken}'
  params: {
    location: location
    environmentId: containerEnv.outputs.environmentId
    containerImage: 'your-registry.azurecr.io/ai-chat:latest'
    keyVaultName: keyVault.name
  }
}

Deployment Commands:

# Set up environment
azd init --template ai-chat-app
azd env new dev

# Configure OpenAI
azd env set AZURE_OPENAI_ENDPOINT "https://your-openai.openai.azure.com/"
azd env set AZURE_OPENAI_DEPLOYMENT "gpt-4.1"

# Deploy
azd up

# Test the API
curl -X POST $(azd show --output json | jq -r '.services.api.endpoint')/api/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello, how are you?"}'

Example 3: Background Worker with Queue Processing

Scenario: Order processing system with message queue

Directory Structure:

queue-worker/
├── azure.yaml
├── infra/
│   ├── main.bicep
│   ├── app/
│   │   ├── api.bicep
│   │   └── worker.bicep
│   └── core/
│       ├── storage-queue.bicep
│       └── servicebus.bicep
└── src/
    ├── api/
    └── worker/

File: src/worker/processor.py

import os
import time

from azure.storage.queue import QueueClient
from azure.identity import DefaultAzureCredential

def process_orders():
    credential = DefaultAzureCredential()
    queue_url = os.getenv('AZURE_QUEUE_URL')

    queue_client = QueueClient.from_queue_url(
        queue_url=queue_url,
        credential=credential
    )

    while True:
        messages = queue_client.receive_messages(max_messages=10)
        processed = 0
        for message in messages:
            # Process order
            print(f"Processing order: {message.content}")

            # Complete the message so it is not redelivered
            queue_client.delete_message(message)
            processed += 1

        if processed == 0:
            # Back off briefly when the queue is empty instead of busy-polling
            time.sleep(5)

if __name__ == '__main__':
    process_orders()

File: azure.yaml

name: order-processing
services:
  api:
    project: ./src/api
    language: python
    host: containerapp
    
  worker:
    project: ./src/worker
    language: python
    host: containerapp

Deployment:

# Initialize
azd init

# Deploy with queue configuration
azd up

# Scale worker based on queue length
az containerapp update \
  --name worker \
  --resource-group rg-order-processing \
  --scale-rule-name queue-scaling \
  --scale-rule-type azure-queue \
  --scale-rule-metadata queueName=orders accountName=storageaccount

Advanced Patterns

Pattern 1: Blue-Green Deployment

# Switch the app to multiple-revision mode so two revisions can run side by side
az containerapp revision set-mode \
  --name api \
  --resource-group rg-myapp \
  --mode multiple

# Roll out the new code as a "blue" revision
az containerapp update \
  --name api \
  --resource-group rg-myapp \
  --revision-suffix blue

# Test the new revision via its direct URL
curl https://api--blue.nicegrass-12345.eastus.azurecontainerapps.io/health

# Split traffic (20% to blue, 80% to the previous revision)
az containerapp ingress traffic set \
  --name api \
  --resource-group rg-myapp \
  --revision-weight <previous-revision-name>=80 api--blue=20

# Full cutover to blue
az containerapp ingress traffic set \
  --name api \
  --resource-group rg-myapp \
  --revision-weight api--blue=100

Pattern 2: Canary Deployment with AZD

AZD has no built-in canary strategy; the rollout is scripted around azd deploy (to publish a new revision) and az containerapp ingress traffic set (to shift traffic toward it in small increments while metrics are watched).

Deployment Script:

#!/bin/bash
# deploy-canary.sh

# Publish the new code as a fresh revision (multiple-revision mode assumed)
azd deploy api

# Watch live metrics while the canary bakes
azd monitor --live

# Increase traffic gradually in 10% steps
for i in {20..100..10}; do
  echo "Increasing traffic to $i%"
  az containerapp ingress traffic set \
    --name api \
    --resource-group rg-myapp \
    --revision-weight latest=$i <previous-revision-name>=$((100 - i))

  sleep 300  # Wait 5 minutes between steps
done

Pattern 3: Multi-Region Deployment

File: azure.yaml (note: the azure.yaml schema has no per-service region setting; the regional fan-out is handled in Bicep)

name: global-app
services:
  api:
    project: ./src/api
    language: python
    host: containerapp

File: infra/multi-region.bicep

param environmentName string
param regions array = ['eastus', 'westeurope', 'southeastasia']

module containerApps './app/container-app.bicep' = [for region in regions: {
  name: 'app-${region}'
  params: {
    location: region
    environmentName: environmentName
  }
}]

// Traffic Manager for global load balancing
resource trafficManager 'Microsoft.Network/trafficManagerProfiles@2022-04-01' = {
  name: 'tm-global-app'
  location: 'global'
  properties: {
    trafficRoutingMethod: 'Performance'
    endpoints: [for i in range(0, length(regions)): {
      name: 'endpoint-${regions[i]}'
      type: 'Microsoft.Network/trafficManagerProfiles/externalEndpoints'
      properties: {
        target: containerApps[i].outputs.fqdn
        endpointStatus: 'Enabled'
      }
    }]
  }
}

Deployment:

# Deploy to all regions
azd up

# Verify endpoints
azd show --output json | jq '.services.api.endpoints'

Pattern 4: Dapr Integration

File: infra/app/dapr-enabled.bicep

resource containerApp 'Microsoft.App/containerApps@2023-05-01' = {
  name: 'dapr-app'
  properties: {
    configuration: {
      dapr: {
        enabled: true
        appId: 'order-service'
        appPort: 8000
        appProtocol: 'http'
      }
    }
    template: {
      containers: [
        {
          name: 'app'
          image: containerImage
        }
      ]
    }
  }
}

Application Code with Dapr:

import json

from flask import Flask
from dapr.clients import DaprClient

app = Flask(__name__)

@app.route('/orders', methods=['POST'])
def create_order():
    with DaprClient() as client:
        # Save state (the SDK expects str or bytes, so serialize explicitly)
        client.save_state(
            store_name='statestore',
            key='order-123',
            value=json.dumps({'status': 'pending'})
        )

        # Publish event
        client.publish_event(
            pubsub_name='pubsub',
            topic_name='orders',
            data=json.dumps({'orderId': '123'}),
            data_content_type='application/json'
        )

    return {'status': 'created'}

Best Practices

1. Resource Organization

# Use consistent naming conventions
azd env set AZURE_ENV_NAME "myapp-prod"
azd env set AZURE_LOCATION "eastus"

# Tag resources for cost tracking
azd env set AZURE_TAGS "Environment=Production,CostCenter=Engineering"

2. Security Best Practices

// Always use managed identity
resource containerApp 'Microsoft.App/containerApps@2023-05-01' = {
  identity: {
    type: 'SystemAssigned'
  }
}

// Store secrets in Key Vault
resource keyVault 'Microsoft.KeyVault/vaults@2023-02-01' = {
  properties: {
    enableRbacAuthorization: true
    networkAcls: {
      defaultAction: 'Deny'
      bypass: 'AzureServices'
    }
  }
}

// Use private endpoints
resource privateEndpoint 'Microsoft.Network/privateEndpoints@2023-04-01' = {
  properties: {
    subnet: {
      id: subnetId
    }
    privateLinkServiceConnections: [
      {
        name: 'containerapp-connection'
        properties: {
          privateLinkServiceId: containerApp.id
        }
      }
    ]
  }
}

3. Performance Optimization

CPU, memory, and scale settings are not part of the azure.yaml schema; set them on the Container App template in Bicep instead:

// Bicep snippet: resources and scaling for the api container
template: {
  containers: [
    {
      name: 'api'
      image: containerImage
      resources: {
        cpu: json('1.0')
        memory: '2Gi'
      }
    }
  ]
  scale: {
    minReplicas: 2
    maxReplicas: 20
    rules: [
      {
        name: 'http-rule'
        http: {
          metadata: {
            concurrentRequests: '100'
          }
        }
      }
    ]
  }
}
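
A quick sanity check on those numbers: with 20 replicas each admitting 100 concurrent requests, the app tops out around 2,000 in-flight requests (a rough ceiling that ignores cold starts and per-request latency):

```python
# Back-of-envelope capacity check for the scale settings above
def peak_concurrency(max_replicas: int, concurrent_per_replica: int) -> int:
    """Upper bound on simultaneous in-flight requests at full scale-out."""
    return max_replicas * concurrent_per_replica

print(peak_concurrency(20, 100))  # 2000
```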

4. Monitoring and Observability

# Enable Application Insights
azd env set APPLICATIONINSIGHTS_CONNECTION_STRING "InstrumentationKey=..."

# View logs in real-time
azd monitor --logs
# Or use Azure CLI for Container Apps:
az containerapp logs show --name api --resource-group rg-myapp --follow

# Monitor metrics
azd monitor --live

# Create alerts
az monitor metrics alert create \
  --name high-cpu-alert \
  --resource-group rg-myapp \
  --scopes $(azd show --output json | jq -r '.services.api.resourceId') \
  --condition "avg CPU > 80" \
  --description "Alert when CPU exceeds 80%"

5. Cost Optimization

# Scale to zero when not in use
az containerapp update \
  --name api \
  --resource-group rg-myapp \
  --min-replicas 0

# For dev environments, prefer the serverless Consumption plan with
# scale-to-zero over dedicated workload profiles, so idle capacity costs nothing

# Set up budget alerts
az consumption budget create \
  --budget-name myapp-budget \
  --category cost \
  --amount 100 \
  --time-grain Monthly

6. CI/CD Integration

GitHub Actions Example:

name: Deploy to Azure Container Apps

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      
      - name: Setup AZD
        uses: Azure/setup-azd@v1
      
      - name: Login to Azure
        run: |
          azd auth login --client-id ${{ secrets.AZURE_CLIENT_ID }} \
            --client-secret ${{ secrets.AZURE_CLIENT_SECRET }} \
            --tenant-id ${{ secrets.AZURE_TENANT_ID }}
      
      - name: Deploy
        run: azd up --no-prompt
        env:
          AZURE_ENV_NAME: ${{ secrets.AZURE_ENV_NAME }}
          AZURE_LOCATION: ${{ secrets.AZURE_LOCATION }}

Common Commands Reference

# Initialize new container app project
azd init --template <template-name>

# Deploy infrastructure and application
azd up

# Deploy only application code (skip infrastructure)
azd deploy

# Provision only infrastructure
azd provision

# View deployed resources
azd show

# Stream logs using azd monitor or Azure CLI
azd monitor --logs
# az containerapp logs show --name <service-name> --resource-group <rg-name> --follow

# Monitor application
azd monitor --overview

# Clean up resources
azd down --force --purge

Troubleshooting

Issue: Container fails to start

# Check logs using Azure CLI
az containerapp logs show --name api --resource-group rg-myapp --tail 100

# View container events
az containerapp revision show \
  --name api \
  --resource-group rg-myapp \
  --revision latest

# Test locally
docker build -t api:local ./src/api
docker run -p 8000:8000 api:local

Issue: Can't access container app endpoint

# Verify ingress configuration
az containerapp show \
  --name api \
  --resource-group rg-myapp \
  --query properties.configuration.ingress

# Switch internal-only ingress to external
az containerapp ingress update \
  --name api \
  --resource-group rg-myapp \
  --type external

Issue: Performance problems

# Check resource utilization
az monitor metrics list \
  --resource $(azd show --output json | jq -r '.services.api.resourceId') \
  --metric "CPUPercentage,MemoryPercentage"

# Scale up resources
az containerapp update \
  --name api \
  --resource-group rg-myapp \
  --cpu 2.0 \
  --memory 4Gi

Additional Resources and Examples

Contributing

To contribute new container app examples:

  1. Create a new subdirectory with your example
  2. Include complete azure.yaml, infra/, and src/ files
  3. Add comprehensive README with deployment instructions
  4. Test deployment with azd up
  5. Submit a pull request

Need Help? Join the Microsoft Foundry Discord community for support and questions.