# Incident Response
TSC mapping: CC7.3 (Incident evaluation), CC7.4 (Incident response), CC7.5 (Recovery from incidents)
SOC 2 auditors do not expect zero incidents. They expect a documented, tested process for detecting, classifying, containing, and recovering from incidents — and evidence that the process was followed.
## 1. Detection Pipeline
Connect Defender for Cloud, Sentinel, and Azure Monitor into an automated alert pipeline:

```text
Defender for Cloud alert    ─┐
Microsoft Sentinel incident ─┼─→ Action Group / Logic App ─→ PagerDuty / Teams / Email
Azure Monitor alert         ─┘
```
### Create an Action Group for security notifications

```bash
# Create an action group targeting the security team
# (the sms receiver takes a country code and number as separate arguments)
az monitor action-group create \
  --resource-group rg-security \
  --name ag-security-oncall \
  --short-name SecOncall \
  --action email security-lead [email protected] \
  --action sms oncall-sms 1 5555550100
```
### Route Defender for Cloud HIGH alerts to a Logic App

```bash
# Create a workflow automation that forwards Defender HIGH severity alerts
# (security automations are resource-group-scoped resources)
az rest --method PUT \
  --uri "https://management.azure.com/subscriptions/<sub>/resourceGroups/rg-security/providers/Microsoft.Security/automations/route-high-alerts?api-version=2019-01-01-preview" \
  --body '{
    "location": "eastus",
    "properties": {
      "isEnabled": true,
      "scopes": [{"scopePath": "/subscriptions/<sub>"}],
      "sources": [{
        "eventSource": "Alerts",
        "ruleSets": [{
          "rules": [{
            "propertyJPath": "Severity",
            "propertyType": "String",
            "expectedValue": "High",
            "operator": "Equals"
          }]
        }]
      }],
      "actions": [{
        "actionType": "LogicApp",
        "logicAppResourceId": "/subscriptions/<sub>/resourceGroups/rg-security/providers/Microsoft.Logic/workflows/logic-security-alert"
      }]
    }
  }'
```
### Route Microsoft Sentinel incidents to Teams via Logic App

```bash
# Create a Logic App playbook for Sentinel incident notification
# (requires the "logic" CLI extension: az extension add --name logic)
az logic workflow create \
  --resource-group rg-security \
  --name logic-sentinel-alert \
  --definition @sentinel-teams-alert-playbook.json \
  --location eastus
```
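The referenced `sentinel-teams-alert-playbook.json` is a standard Logic Apps workflow definition. A minimal sketch is shown below; it is an assumption, not the file from this repo — it uses a plain HTTP trigger and posts to a hypothetical Teams incoming-webhook URL, whereas production Sentinel playbooks typically use the Microsoft Sentinel connector trigger and an API connection:

```json
{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {},
    "triggers": {
      "manual": { "type": "Request", "kind": "Http", "inputs": { "schema": {} } }
    },
    "actions": {
      "Post_to_Teams": {
        "type": "Http",
        "inputs": {
          "method": "POST",
          "uri": "<teams-incoming-webhook-url>",
          "body": { "text": "Sentinel incident: @{triggerBody()?['object']?['properties']?['title']}" }
        }
      }
    },
    "outputs": {}
  }
}
```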
## 2. Incident Classification
| Severity | Defender for Cloud score | Example | Response SLA |
|---|---|---|---|
| P1 — Critical | 9–10 | Compromised credentials, cryptominer detected, data exfiltration | 30 minutes |
| P2 — High | 7–8 | Brute force attack, anomalous sign-in from new country, key vault access spike | 2 hours |
| P3 — Medium | 4–6 | Policy violations, misconfiguration alerts, exposed storage accounts | 24 hours |
| P4 — Low | 1–3 | Informational alerts, expected behaviour flagged as anomalous | Next business day |
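For scripts that consume the numeric score used in the table above, the mapping can be expressed as a small triage helper (function name and output format are illustrative):

```bash
#!/usr/bin/env bash
# Map a Defender for Cloud alert score (1-10) to the internal
# severity class and response SLA defined in the table above.
classify_alert() {
  local score=$1
  if   (( score >= 9 )); then echo "P1 30m"
  elif (( score >= 7 )); then echo "P2 2h"
  elif (( score >= 4 )); then echo "P3 24h"
  else                        echo "P4 next-business-day"
  fi
}

classify_alert 9   # → P1 30m
classify_alert 5   # → P3 24h
```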
## 3. Incident Response Runbook
Document and maintain this runbook. Auditors will ask to see it and may ask responders to walk through it.
### Step 1 — Contain
```bash
# Isolate a compromised VM by applying a quarantine NSG to its NIC
NIC_ID=$(az vm show --resource-group rg-prod --name vm-compromised \
  --query "networkProfile.networkInterfaces[0].id" -o tsv)

# Create a quarantine NSG and add explicit deny-all rules — a fresh
# NSG's default rules still allow VNet inbound and all outbound traffic
az network nsg create --resource-group rg-security --name nsg-quarantine
for dir in Inbound Outbound; do
  az network nsg rule create --resource-group rg-security \
    --nsg-name nsg-quarantine --name "deny-all-${dir,,}" \
    --priority 100 --direction $dir --access Deny --protocol '*' \
    --source-address-prefixes '*' --destination-address-prefixes '*' \
    --source-port-ranges '*' --destination-port-ranges '*'
done

# Associate the quarantine NSG with the NIC (pass the full resource ID,
# since the NSG lives in a different resource group than the NIC)
NSG_ID=$(az network nsg show --resource-group rg-security \
  --name nsg-quarantine --query id -o tsv)
az network nic update --ids $NIC_ID --network-security-group $NSG_ID
```
```bash
# Disable a compromised Entra ID account immediately
az ad user update \
  --id [email protected] \
  --account-enabled false

# Revoke all active sessions and refresh tokens for a user
az rest --method POST \
  --uri "https://graph.microsoft.com/v1.0/users/<user-id>/revokeSignInSessions"

# Disable a compromised service principal
az ad sp update \
  --id <service-principal-id> \
  --set "accountEnabled=false"
```
### Step 2 — Investigate
```bash
# Query Entra ID audit logs for a specific user's activity (last 24 hours)
az monitor log-analytics query \
  --workspace $WORKSPACE_ID \
  --analytics-query "
    AuditLogs
    | where TimeGenerated > ago(24h)
    | where InitiatedBy.user.userPrincipalName == '[email protected]'
    | project TimeGenerated, OperationName, Result, TargetResources
    | order by TimeGenerated desc
  " --output table

# Query Azure Activity Log for operations from a specific IP
az monitor log-analytics query \
  --workspace $WORKSPACE_ID \
  --analytics-query "
    AzureActivity
    | where TimeGenerated > ago(24h)
    | where CallerIpAddress == '<suspicious-ip>'
    | project TimeGenerated, OperationNameValue, Caller, ResourceGroup, Level
    | order by TimeGenerated desc
  " --output table

# Get a Defender for Cloud alert's details
az security alert show \
  --resource-group rg-prod \
  --name <alert-name> \
  --location eastus \
  --query "{Name:alertName,Severity:severity,Description:description,RemediationSteps:remediationSteps}"

# Query Key Vault audit logs for unusual access
az monitor log-analytics query \
  --workspace $WORKSPACE_ID \
  --analytics-query "
    AzureDiagnostics
    | where ResourceType == 'VAULTS'
    | where TimeGenerated > ago(24h)
    | where OperationName in ('SecretGet', 'KeyDecrypt', 'KeyUnwrap')
    | project TimeGenerated, identity_claim_oid_g, requestUri_s, ResultSignature
    | order by TimeGenerated desc
  " --output table
```
### Step 3 — Eradicate

- Remove malicious resources (VMs, app registrations, role assignments, runbooks created by the attacker).
- Revoke all active sessions via `revokeSignInSessions`.
- Reset credentials for affected service principals; rotate Key Vault secrets and certificates.
- Patch or redeploy the affected system from a known-good IaC state.
- Verify no persistent access mechanisms remain (new admin accounts, new OAuth app consents, forwarding rules).
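One way to hunt for leftover persistence is to query the audit log for recent privilege grants and app consents. A sketch against the standard Entra ID `AuditLogs` schema (the operation names listed are the common ones; adjust for your tenant):

```kusto
// Surface recent role assignments and OAuth consent grants that
// could indicate attacker persistence (last 7 days)
AuditLogs
| where TimeGenerated > ago(7d)
| where OperationName in (
    "Add member to role",
    "Consent to application",
    "Add app role assignment to service principal")
| project TimeGenerated, OperationName,
    Actor = tostring(InitiatedBy.user.userPrincipalName),
    Target = tostring(TargetResources[0].displayName)
| order by TimeGenerated desc
```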
```bash
# Rotate a compromised Key Vault secret
az keyvault secret set \
  --vault-name kv-soc2-prod \
  --name compromised-secret \
  --value "<new-strong-secret>"

# Remove a malicious role assignment
az role assignment delete \
  --assignee <malicious-principal-id> \
  --role Contributor \
  --scope /subscriptions/<subscription-id>
```
### Step 4 — Recover
```bash
# Restore an Azure SQL Database to a point in time
az sql db restore \
  --dest-name prod-db-restored \
  --resource-group rg-prod \
  --server sql-prod \
  --name prod-db \
  --time "2024-01-15T03:00:00Z"

# Restore a VM's disks from an Azure Backup recovery point
# (quote the container/item names — they contain semicolons;
#  a staging storage account is required for the restore)
az backup restore restore-disks \
  --resource-group rg-prod \
  --vault-name rsv-prod \
  --container-name "iaasvmcontainerv2;rg-prod;vm-prod" \
  --item-name "vm;iaasvmcontainerv2;rg-prod;vm-prod" \
  --restore-mode OriginalLocation \
  --rp-name <recovery-point-name> \
  --storage-account <staging-storage-account>

# Redeploy infrastructure from Terraform (IaC — preferred)
terraform apply -target=module.vm_prod -auto-approve
```
### Step 5 — Post-incident review

Complete a post-incident review within 5 business days. Retain it for audit evidence.

```text
Incident ID: INC-YYYY-NNN
Date/Time detected:
Date/Time resolved:
Severity: P1 / P2 / P3 / P4

Timeline:
  HH:MM — Alert triggered (Sentinel incident / Defender alert / manual detection)
  HH:MM — On-call notified
  HH:MM — Incident declared / severity assigned
  HH:MM — Containment action taken
  HH:MM — Root cause identified
  HH:MM — System restored
  HH:MM — Incident closed

Root cause:
Impact (systems affected, data involved, customers notified Y/N):
Containment actions taken:
Eradication actions taken:
Recovery actions taken:
Customer notification required? (Y/N)
  If yes, date/method of notification:

Action items (owner, due date):
  1.
  2.
```
## 4. Microsoft Sentinel — Incident Management
Sentinel provides a structured, auditable incident lifecycle — with built-in severity classification, assignment, investigation graph, and timeline — that directly generates evidence for SOC 2 CC7.3–CC7.5.
```bash
# List open Sentinel incidents with High severity
az rest --method GET \
  --uri "https://management.azure.com/subscriptions/<sub>/resourceGroups/rg-security/providers/Microsoft.OperationalInsights/workspaces/law-soc2-prod/providers/Microsoft.SecurityInsights/incidents?api-version=2022-11-01&\$filter=properties/severity eq 'High' and properties/status eq 'Active'" \
  --query "value[*].[properties.incidentNumber,properties.title,properties.severity,properties.status,properties.createdTimeUtc]"

# Assign an incident to a responder. The incidents API uses PUT
# (Create Or Update) with the full properties object, so GET the
# incident first and resubmit it with the new owner.
az rest --method PUT \
  --uri "https://management.azure.com/subscriptions/<sub>/resourceGroups/rg-security/providers/Microsoft.OperationalInsights/workspaces/law-soc2-prod/providers/Microsoft.SecurityInsights/incidents/<incident-id>?api-version=2022-11-01" \
  --body '{"properties":{"title":"<existing-title>","severity":"High","status":"Active","owner":{"assignedTo":"[email protected]"}}}'
```
References: Microsoft Sentinel documentation · Sentinel incident management · Sentinel playbooks (Logic Apps)
## 5. Customer Notification Requirements

For SOC 2 CC7.4, auditors evaluate whether affected customers were notified within the timeframe documented in your security policy and customer agreements:
- Define the notification SLA in your Terms of Service or DPA (commonly 72 hours for personal data incidents, 30 days for others).
- Identify who approves customer notifications (Legal, CISO, CEO).
- Maintain a customer contact list accessible during an incident.
- Prepare a notification email template in advance.
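A pre-approved template keeps notification drafting off the critical path during an incident. A minimal illustrative example (all angle-bracket values are placeholders to fill per incident):

```text
Subject: Security incident notification — <service name>

On <date>, we detected <brief description of the incident>.
Affected systems/data: <scope>
Actions we have taken: <containment and remediation summary>
Actions we recommend you take: <e.g. rotate API keys, reset passwords>
Contact: <security-contact-email> | Incident reference: INC-YYYY-NNN
```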
## SOC 2 Evidence for Incident Response

| Evidence item | Retention / location |
|---|---|
| Documented incident response policy | Permanent |
| Incident response runbook (version-controlled in Git) | Permanent |
| Sentinel incident list and timelines | Available via Sentinel portal or Log Analytics |
| Defender for Cloud alert history | `az security alert list` |
| Post-incident review reports | 3 years minimum |
| Tabletop or live IR exercise records | Annual, retained 3 years |
| Entra ID audit log for incident period | Log Analytics workspace (retain 1+ year) |
| Customer notification records (if applicable) | 3 years minimum |
| Logic App playbook run history | Azure portal → Logic App → Run History |
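The Sentinel incident register can be exported for the audit period straight from Log Analytics. A sketch assuming the standard `SecurityIncident` table that Sentinel populates (the date range is an illustrative audit period):

```kusto
// Incident register for the audit period: one row per incident,
// latest status, with creation/closure timestamps
SecurityIncident
| where CreatedTime between (datetime(2024-01-01) .. datetime(2024-12-31))
| summarize arg_max(LastModifiedTime, Title, Severity, Status, Owner,
    CreatedTime, ClosedTime) by IncidentNumber
| order by IncidentNumber asc
```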