Incident Response
TSC mapping: CC7.3 (Incident evaluation), CC7.4 (Incident response), CC7.5 (Recovery from incidents)
SOC 2 auditors do not expect zero incidents. They expect a documented, tested process for detecting, classifying, containing, and recovering from incidents — and evidence that the process was followed.
1. Detection Pipeline
Connect Security Command Center, Event Threat Detection, and Cloud Monitoring into an automated alert pipeline.
SCC finding (ETD / SHA) ─┐
Cloud Monitoring alert  ─┼─→ Pub/Sub → Cloud Function / webhook → PagerDuty / Slack
Log-based alert         ─┘
Create a Pub/Sub topic for SCC notifications
# Create a Pub/Sub topic for security findings
gcloud pubsub topics create scc-security-findings \
--project=<project-id>
# Create a Pub/Sub subscription for the on-call responder
gcloud pubsub subscriptions create scc-oncall-sub \
--topic=scc-security-findings \
--project=<project-id> \
--ack-deadline=60
# Grant SCC permission to publish to the topic
gcloud pubsub topics add-iam-policy-binding scc-security-findings \
--member="serviceAccount:service-org-<org-id>@gcp-sa-scc-notification.iam.gserviceaccount.com" \
--role="roles/pubsub.publisher" \
--project=<project-id>
Configure SCC notification feed for HIGH/CRITICAL findings
# Create an SCC notification config — only HIGH and CRITICAL findings
gcloud scc notifications create high-critical-findings \
--organization=<org-id> \
--description="Route HIGH and CRITICAL SCC findings to Pub/Sub" \
--pubsub-topic=projects/<project-id>/topics/scc-security-findings \
--filter="severity=\"HIGH\" OR severity=\"CRITICAL\""
# Verify the notification config
gcloud scc notifications list --organization=<org-id>
Route alerts to a webhook (e.g., PagerDuty or Slack)
# Create a Cloud Monitoring notification channel
gcloud beta monitoring channels create \
--display-name="Security Webhook" \
--type=webhook_tokenauth \
--channel-labels=url=https://events.pagerduty.com/integration/<key>/enqueue \
--project=<project-id>
# Create a log-based metric for owner-role grants (a Cloud Monitoring
# condition filter must reference a metric, not a raw log query)
gcloud logging metrics create iam-owner-role-assigned \
--description="SetIamPolicy calls that include a roles/owner binding" \
--log-filter='protoPayload.methodName="SetIamPolicy" AND protoPayload.request.policy.bindings.role="roles/owner"' \
--project=<project-id>
# Create an alerting policy on the log-based metric
gcloud alpha monitoring policies create \
--display-name="IAM Owner Role Assigned" \
--condition-display-name="Owner role binding created" \
--condition-filter='metric.type="logging.googleapis.com/user/iam-owner-role-assigned"' \
--if="> 0" \
--duration=0s \
--aggregation='{"alignmentPeriod": "60s", "perSeriesAligner": "ALIGN_COUNT"}' \
--notification-channels=$(gcloud beta monitoring channels list \
--filter="displayName='Security Webhook'" \
--format="value(name)" --project=<project-id>) \
--project=<project-id>
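The "Cloud Function / webhook" hop in the pipeline can be sketched as below: a minimal Pub/Sub-triggered handler that decodes an SCC notification and forwards a trimmed alert to a webhook. The `WEBHOOK_URL` variable, the function names, and the outgoing payload shape are illustrative assumptions, not a PagerDuty- or Slack-specific contract.

```python
import base64
import json
import os
import urllib.request

# Assumed webhook endpoint — set as an environment variable on the function.
WEBHOOK_URL = os.environ.get("WEBHOOK_URL", "")

def build_alert(pubsub_message: dict) -> dict:
    """Decode an SCC notification from a Pub/Sub message and pull out
    the fields an on-call responder needs."""
    payload = base64.b64decode(pubsub_message["data"]).decode("utf-8")
    notification = json.loads(payload)
    finding = notification.get("finding", {})
    return {
        "summary": finding.get("category", "Unknown finding"),
        "severity": finding.get("severity", "UNSPECIFIED"),
        "resource": finding.get("resourceName", ""),
        "finding_name": finding.get("name", ""),
    }

def handle_scc_finding(event: dict, context=None) -> None:
    """Pub/Sub-triggered Cloud Function entry point (sketch)."""
    alert = build_alert(event)
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(alert).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=10)
```

Keeping the payload small and structured makes it easy to route the same alert to both a pager and a chat channel.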
2. Incident Classification
| Severity | SCC severity | Example | Response SLA |
|---|---|---|---|
| P1 — Critical | CRITICAL | Cryptomining detected, data exfiltration, compromised service account | 30 minutes |
| P2 — High | HIGH | Brute force login, anomalous API access from new region, open firewall to admin port | 2 hours |
| P3 — Medium | MEDIUM | Policy violations, public GCS bucket, unused service account key found | 24 hours |
| P4 — Low | LOW | Informational findings, expected behaviour flagged | Next business day |
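The classification table above can be encoded directly in triage automation so that every alert gets a priority and a response deadline mechanically. A minimal sketch (the mapping mirrors the table; the function name is illustrative):

```python
from datetime import datetime, timedelta

# SCC severity → (priority, response SLA), mirroring the classification table.
SEVERITY_MAP = {
    "CRITICAL": ("P1", timedelta(minutes=30)),
    "HIGH": ("P2", timedelta(hours=2)),
    "MEDIUM": ("P3", timedelta(hours=24)),
    "LOW": ("P4", None),  # next business day — no fixed offset
}

def triage(severity: str, detected_at: datetime):
    """Return (priority, respond-by deadline) for an SCC finding severity.
    Unknown severities fall back to P4."""
    priority, sla = SEVERITY_MAP.get(severity, ("P4", None))
    deadline = detected_at + sla if sla else None
    return priority, deadline
```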
3. Incident Response Runbook
Document and maintain this runbook. Auditors will ask to see it and may ask responders to walk through it.
Step 1 — Contain
# Isolate a compromised VM: block all network traffic with a deny-all firewall tag
gcloud compute instances add-tags vm-compromised \
--tags=quarantine \
--zone=us-central1-a \
--project=<project-id>
gcloud compute firewall-rules create deny-all-quarantine \
--project=<project-id> \
--network=prod-vpc \
--direction=INGRESS \
--priority=1 \
--action=DENY \
--rules=all \
--source-ranges=0.0.0.0/0 \
--target-tags=quarantine
gcloud compute firewall-rules create deny-all-quarantine-egress \
--project=<project-id> \
--network=prod-vpc \
--direction=EGRESS \
--priority=1 \
--action=DENY \
--rules=all \
--destination-ranges=0.0.0.0/0 \
--target-tags=quarantine
# Suspend a compromised Cloud Identity / Workspace user via the Admin console
# or the Admin SDK (gcloud cannot suspend users). As an interim measure,
# remove the user from groups that grant access:
gcloud identity groups memberships delete \
--group-email=all-users@<domain>.com \
--member-email=compromised-user@<domain>.com
# Disable a compromised service account immediately
gcloud iam service-accounts disable \
compromised-sa@<project-id>.iam.gserviceaccount.com \
--project=<project-id>
# Delete all user-managed keys for the service account
gcloud iam service-accounts keys list \
--iam-account=compromised-sa@<project-id>.iam.gserviceaccount.com \
--managed-by=user \
--format="value(name)" | \
xargs -I{} gcloud iam service-accounts keys delete {} \
--iam-account=compromised-sa@<project-id>.iam.gserviceaccount.com \
--quiet
Step 2 — Investigate
# Query Admin Activity logs for a specific principal (last 24 hours)
gcloud logging read \
'protoPayload.authenticationInfo.principalEmail="<principal-email>" AND
timestamp>="<24h-ago-timestamp>"' \
--project=<project-id> \
--format="table(timestamp,protoPayload.methodName,protoPayload.resourceName,protoPayload.requestMetadata.callerIp)" \
--limit=200
# Query for activity from a specific IP address
gcloud logging read \
'protoPayload.requestMetadata.callerIp="<suspicious-ip>"' \
--project=<project-id> \
--format="table(timestamp,protoPayload.methodName,protoPayload.authenticationInfo.principalEmail)" \
--limit=200
# Get SCC finding details
gcloud scc findings list <org-id> \
--filter="name:<finding-name>" \
--format=json
# Query Secret Manager access logs for a specific secret
# (requires Data Access audit logs enabled for Secret Manager)
gcloud logging read \
'resource.type="audited_resource" AND
protoPayload.resourceName=~"secrets/prod-db-password" AND
protoPayload.methodName="google.cloud.secretmanager.v1.SecretManagerService.AccessSecretVersion"' \
--project=<project-id> \
--format="table(timestamp,protoPayload.authenticationInfo.principalEmail,protoPayload.requestMetadata.callerIp)"
# Check IAM changes in the last 7 days
gcloud logging read \
'protoPayload.methodName="SetIamPolicy" AND
timestamp>="<7-days-ago-timestamp>"' \
--organization=<org-id> \
--format="table(timestamp,protoPayload.authenticationInfo.principalEmail,resource.labels.project_id)"
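The `<24h-ago-timestamp>` and `<7-days-ago-timestamp>` placeholders above must be RFC 3339 UTC timestamps. A small helper that computes the window and assembles the principal-activity filter (the function name is illustrative):

```python
from datetime import datetime, timedelta, timezone

def principal_activity_filter(principal_email: str, hours: int = 24) -> str:
    """Build a Cloud Logging filter for one principal's activity over the
    trailing window, with the RFC 3339 UTC timestamp filled in."""
    since = datetime.now(timezone.utc) - timedelta(hours=hours)
    ts = since.strftime("%Y-%m-%dT%H:%M:%SZ")
    return (
        f'protoPayload.authenticationInfo.principalEmail="{principal_email}" '
        f'AND timestamp>="{ts}"'
    )
```

The returned string can be passed straight to `gcloud logging read` or to the Cloud Logging client library.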
Step 3 — Eradicate
- Remove malicious resources (VMs, service accounts, IAM bindings, Cloud Functions created by attacker).
- Rotate all secrets in Secret Manager that may have been accessed.
- Re-apply IAM policy from a known-good Terraform state.
- Scan all remaining resources with SCC / VM Threat Detection.
- Check for persistence mechanisms: new IAM bindings, org-level grants, IAP tunnels, OAuth app grants.
# Rotate a compromised Secret Manager secret
echo -n "new-strong-secret" | \
gcloud secrets versions add prod-db-password \
--data-file=- \
--project=<project-id>
# Remove a malicious IAM binding
gcloud projects remove-iam-policy-binding <project-id> \
--member="serviceAccount:<malicious-sa>@<project-id>.iam.gserviceaccount.com" \
--role="roles/owner"
# Re-apply IAM policy from Terraform
terraform apply -target=module.iam -auto-approve
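Piping a literal like `"new-strong-secret"` leaves the value in shell history. A sketch that generates a random replacement and feeds it to `gcloud` on stdin instead; the subprocess wrapper is an assumption about your rotation flow, not a required pattern:

```python
import secrets
import string
import subprocess

def new_secret(length: int = 32) -> str:
    """Generate a random alphanumeric secret of the requested length."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def rotate(secret_id: str, project: str) -> None:
    """Add a new version to a Secret Manager secret without the value
    ever appearing on a command line or in shell history."""
    subprocess.run(
        ["gcloud", "secrets", "versions", "add", secret_id,
         "--data-file=-", f"--project={project}"],
        input=new_secret().encode(),
        check=True,
    )
```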
Step 4 — Recover
# Restore a Cloud SQL database to a point in time
gcloud sql instances clone prod-pg prod-pg-restored \
--point-in-time=2024-01-15T03:00:00Z \
--project=<project-id>
# Restore a Compute Engine VM from a snapshot
gcloud compute disks create restored-disk \
--source-snapshot=<snapshot-name> \
--zone=us-central1-a \
--project=<project-id>
# Restore a Cloud Storage object version (if versioning enabled)
gcloud storage cp \
gs://prod-bucket/<object>#<generation-number> \
gs://prod-bucket/<object>
# Re-deploy from source (Terraform / Cloud Deploy — preferred)
gcloud deploy releases create release-$(date +%Y%m%d%H%M) \
--delivery-pipeline=prod-pipeline \
--region=us-central1 \
--project=<project-id>
Step 5 — Post-incident review
Complete a post-incident review within 5 business days. Retain for audit evidence.
Incident ID: INC-YYYY-NNN
Date/Time detected:
Date/Time resolved:
Severity: P1 / P2 / P3 / P4
Timeline:
HH:MM — Alert triggered (SCC finding / Cloud Monitoring alert / manual detection)
HH:MM — On-call notified
HH:MM — Incident declared / severity assigned
HH:MM — Containment action taken
HH:MM — Root cause identified
HH:MM — System restored
HH:MM — Incident closed
Root cause:
Impact (systems affected, data involved, customers notified Y/N):
Containment actions taken:
Eradication actions taken:
Recovery actions taken:
Customer notification required? (Y/N)
If yes, date/method of notification:
Action items (owner, due date):
1.
2.
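Auditors often ask for time-to-contain and time-to-resolve figures across incidents. They can be derived mechanically from the timeline in the template above; a sketch (the field names are illustrative):

```python
from datetime import datetime

def incident_metrics(detected: str, contained: str, resolved: str) -> dict:
    """Compute containment and resolution durations (in minutes) from
    ISO 8601 timestamps taken from the post-incident timeline."""
    t_detected = datetime.fromisoformat(detected)
    t_contained = datetime.fromisoformat(contained)
    t_resolved = datetime.fromisoformat(resolved)
    return {
        "time_to_contain_min": (t_contained - t_detected).total_seconds() / 60,
        "time_to_resolve_min": (t_resolved - t_detected).total_seconds() / 60,
    }
```

Tracking these per incident makes it easy to show auditors that response SLAs from the classification table are actually being met.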
4. Customer Notification Requirements
SOC 2 CC7.4 expects a defined incident communication process; affected customers must be notified within the timeframe documented in your security policy and customer agreements:
- Define the notification SLA in your Terms of Service or DPA (commonly 72 hours for personal data incidents).
- Identify who approves customer notifications (Legal, CISO, CEO).
- Maintain a customer contact list accessible during an incident.
- Prepare notification templates in advance.
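The notification SLA can be tracked mechanically from the detection timestamp. A minimal sketch, assuming the common 72-hour DPA term mentioned above (the value is configurable):

```python
from datetime import datetime, timedelta

def notification_deadline(detected_at: datetime, sla_hours: int = 72) -> datetime:
    """Deadline by which affected customers must be notified,
    counted from first detection of the incident."""
    return detected_at + timedelta(hours=sla_hours)
```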
SOC 2 Evidence for Incident Response
| Evidence item | Retention / where to find it |
|---|---|
| Documented incident response policy | Permanent |
| Incident response runbook (version-controlled in Git) | Permanent |
| SCC finding history | GCP Console → Security Command Center → Findings (export to CSV) |
| SCC notification configs | gcloud scc notifications list --organization=<org-id> |
| Cloud Audit Logs for incident period | Cloud Logging / log bucket (retain 1+ year) |
| Post-incident review reports | 3 years minimum |
| Tabletop or live IR exercise records | Annual, retained 3 years |
| Customer notification records (if applicable) | 3 years minimum |
| Pub/Sub message history for incident alerts | Cloud Logging → Pub/Sub audit logs |
Official references: