# GCP Resource Protection
GCP offers resource-level protection mechanisms that prevent deletion even when IAM permissions would allow it. These are your last line of defense before a resource is destroyed.
## Project Liens: Preventing Project Deletion
A Project Lien prevents a project from being deleted. This is the single most important protection for preventing catastrophic data loss because deleting a project destroys every resource inside it.
```shell
# Create a lien on a project to prevent deletion
gcloud alpha resource-manager liens create \
    --project=prod-web-app \
    --restrictions=resourcemanager.projects.delete \
    --reason="Production project - protected from deletion" \
    --origin="ai-agent-guardrails"

# List liens on a project
gcloud alpha resource-manager liens list --project=prod-web-app

# To remove a lien (requires resourcemanager.projects.updateLiens)
gcloud alpha resource-manager liens delete LIEN_ID

# Verify the project is protected
gcloud projects describe prod-web-app --format="json(lifecycleState)"
```
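The lien's effect can be sketched as a pre-flight check. This is an illustrative pure-Python model, not a GCP API; only the permission name and restriction string are real, taken from the commands above.

```python
# Toy model of lien semantics: a project delete succeeds only if no lien
# exists, and a lien can only be removed by a caller who holds the
# resourcemanager.projects.updateLiens permission.

def can_delete_project(liens: set[str], permissions: set[str]) -> bool:
    """True if the caller could delete the project outright."""
    return not liens and "resourcemanager.projects.delete" in permissions

def can_remove_lien(permissions: set[str]) -> bool:
    """True if the caller could first strip the lien."""
    return "resourcemanager.projects.updateLiens" in permissions

# An agent granted delete rights but not updateLiens cannot break through:
agent_perms = {"resourcemanager.projects.delete"}
liens = {"liens/p123-abc"}
assert not can_delete_project(liens, agent_perms)
assert not can_remove_lien(agent_perms)
```

The model makes the two-step failure explicit: the delete is blocked by the lien, and the lien removal is blocked by the missing permission.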
Crucially, do not grant the agent's identity the resourcemanager.projects.updateLiens permission. Without this permission, the agent cannot remove the lien, making it impossible for the agent to delete the project.

## Compute Engine Instance Deletion Protection
Compute Engine instances support a deletionProtection flag that prevents the instance from being deleted:
```shell
# Enable deletion protection on an existing instance
gcloud compute instances update prod-web-server \
    --zone=us-central1-a \
    --deletion-protection

# Create a new instance with deletion protection enabled
gcloud compute instances create prod-api-server \
    --zone=us-central1-a \
    --machine-type=e2-medium \
    --deletion-protection \
    --image-family=debian-12 \
    --image-project=debian-cloud

# Verify deletion protection is on
gcloud compute instances describe prod-web-server \
    --zone=us-central1-a \
    --format="value(deletionProtection)"

# Attempt to delete (this will FAIL)
gcloud compute instances delete prod-web-server --zone=us-central1-a
# ERROR: The resource 'prod-web-server' is protected against deletion
```
The same protections expressed in Terraform:

```hcl
resource "google_compute_instance" "prod_server" {
  name         = "prod-web-server"
  machine_type = "e2-medium"
  zone         = "us-central1-a"

  # Enable GCP-level deletion protection
  deletion_protection = true

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12"
    }
  }

  network_interface {
    network = "default"
  }

  # Terraform-level lifecycle protection
  lifecycle {
    prevent_destroy = true
  }
}
```
## Cloud SQL Deletion Protection and Backups
Cloud SQL instances support deletion protection and have built-in backup mechanisms:
```shell
# Enable deletion protection on an existing Cloud SQL instance
gcloud sql instances patch prod-database --deletion-protection

# Create a new instance with deletion protection + automated backups
# (--enable-bin-log is the MySQL counterpart; for Postgres use
# point-in-time recovery instead)
gcloud sql instances create prod-database \
    --database-version=POSTGRES_15 \
    --tier=db-custom-4-16384 \
    --region=us-central1 \
    --deletion-protection \
    --backup-start-time=02:00 \
    --enable-point-in-time-recovery \
    --retained-backups-count=30 \
    --retained-transaction-log-days=7

# Verify settings
gcloud sql instances describe prod-database \
    --format="value(settings.deletionProtectionEnabled)"
```
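With --retained-transaction-log-days=7, point-in-time recovery can rewind roughly seven days. A minimal sketch of the recoverable window (a simplified model that assumes logs and a base backup exist for the full span; it ignores backup timing):

```python
from datetime import datetime, timedelta, timezone

def pitr_window(now: datetime, retained_log_days: int) -> tuple[datetime, datetime]:
    """Approximate the point-in-time-recovery window implied by the
    transaction-log retention setting (simplified model)."""
    return now - timedelta(days=retained_log_days), now

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
start, end = pitr_window(now, 7)
# Any timestamp in [start, end] is a valid recovery target in this model.
assert end - start == timedelta(days=7)
```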
## GKE Cluster Deletion Protection
Google Kubernetes Engine clusters can be protected against deletion:
```shell
# Create a GKE cluster with deletion protection
gcloud container clusters create prod-cluster \
    --zone=us-central1-a \
    --num-nodes=3 \
    --deletion-protection

# Enable deletion protection on an existing cluster
gcloud container clusters update prod-cluster \
    --zone=us-central1-a \
    --deletion-protection

# Attempt to delete (this will FAIL)
gcloud container clusters delete prod-cluster --zone=us-central1-a
# ERROR: Cluster 'prod-cluster' has deletion protection enabled
```
## Cloud Storage Protection
Cloud Storage offers multiple protection layers to prevent data loss:
### Object Versioning
```shell
# Enable versioning on a bucket
gcloud storage buckets update gs://company-critical-data --versioning

# When versioning is on, "deleted" objects become noncurrent
# and can be restored:
gcloud storage objects list gs://company-critical-data --all-versions

# Restore a specific generation (the number after '#')
gcloud storage cp "gs://company-critical-data/file.txt#1234567890" \
    gs://company-critical-data/file.txt
```
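The restore mechanics can be sketched with a tiny in-memory model (illustrative only, not the Cloud Storage API): deleting a live object archives it as a noncurrent generation, which can later be copied back to the live name.

```python
class VersionedBucket:
    """Toy model of a versioned bucket: deletes archive, restores copy back."""

    def __init__(self) -> None:
        self.live: dict[str, bytes] = {}
        self.noncurrent: dict[str, list[bytes]] = {}

    def delete(self, name: str) -> None:
        # "Delete" only moves the live generation to the noncurrent list.
        data = self.live.pop(name)
        self.noncurrent.setdefault(name, []).append(data)

    def restore(self, name: str) -> None:
        # Copy the most recent noncurrent generation back to the live name.
        self.live[name] = self.noncurrent[name][-1]

b = VersionedBucket()
b.live["file.txt"] = b"v1"
b.delete("file.txt")
assert "file.txt" not in b.live      # gone from the live namespace...
b.restore("file.txt")
assert b.live["file.txt"] == b"v1"   # ...but fully recoverable
```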
### Retention Policies and Bucket Lock
```shell
# Set a retention policy (objects cannot be deleted for 90 days)
gcloud storage buckets update gs://company-critical-data \
    --retention-period=90d

# Lock the retention policy permanently (IRREVERSIBLE)
gcloud storage buckets update gs://company-critical-data \
    --lock-retention-period

# Soft delete: keep deleted objects recoverable for 30 days
gcloud storage buckets update gs://company-critical-data \
    --soft-delete-duration=30d

# Object hold: prevent deletion of specific objects
gcloud storage objects update gs://company-critical-data/important.csv \
    --temporary-hold
```
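The rules these flags impose can be summarized in a small decision helper (an illustrative sketch of the semantics described here, not the Cloud Storage API):

```python
from datetime import datetime, timedelta, timezone

def object_delete_allowed(created: datetime, now: datetime,
                          retention_days: int, temporary_hold: bool) -> bool:
    """An object can be deleted only once it is older than the bucket's
    retention period and carries no hold."""
    if temporary_hold:
        return False
    return now - created >= timedelta(days=retention_days)

def retention_can_be_shortened(locked: bool) -> bool:
    """A locked retention policy can never be shortened or removed."""
    return not locked

created = datetime(2024, 1, 1, tzinfo=timezone.utc)
# Too young to delete under a 90-day policy:
assert not object_delete_allowed(created, created + timedelta(days=30), 90, False)
# Old enough, no hold -- deletion allowed:
assert object_delete_allowed(created, created + timedelta(days=91), 90, False)
# A hold blocks deletion regardless of age:
assert not object_delete_allowed(created, created + timedelta(days=91), 90, True)
```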
Warning: once a retention policy is locked with --lock-retention-period, it cannot be removed or shortened. Objects in the bucket cannot be deleted until the retention period expires. Use this only when you are certain about the retention duration.

## BigQuery Dataset and Table Protection
```shell
# Disable default table expiration (0 = tables never expire automatically)
bq update --default_table_expiration 0 my_project:production_dataset

# Extend time travel to the maximum (query data as of up to 7 days ago)
bq update --max_time_travel_hours=168 my_project:production_dataset

# Use table snapshots for point-in-time recovery
# (@-3600000 reads the table as it was 1 hour ago, in milliseconds)
bq cp my_project:production_dataset.orders@-3600000 \
    my_project:production_dataset.orders_backup_1hr_ago

# Label the dataset as critical so policies and audits can target it
# (restricting who can delete is done via dataset-level IAM, not labels)
bq update --set_label=protection:critical my_project:production_dataset
```
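The @-3600000 suffix is a relative time decorator expressed in milliseconds before now. A small helper (hypothetical, plain string formatting) makes the arithmetic explicit:

```python
def snapshot_decorator(hours_ago: float) -> str:
    """Build bq's relative time decorator: milliseconds before now."""
    ms = int(hours_ago * 3600 * 1000)
    return f"@-{ms}"

# One hour ago, as used in the bq cp example above:
assert snapshot_decorator(1) == "@-3600000"
```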
## Terraform prevent_destroy for GCP Resources
Terraform's prevent_destroy lifecycle rule adds an infrastructure-as-code level guardrail that prevents terraform destroy from removing critical resources:
```hcl
# Cloud SQL with prevent_destroy
resource "google_sql_database_instance" "main" {
  name             = "prod-database"
  database_version = "POSTGRES_15"
  region           = "us-central1"

  settings {
    tier                        = "db-custom-4-16384"
    deletion_protection_enabled = true

    backup_configuration {
      enabled                        = true
      point_in_time_recovery_enabled = true
      start_time                     = "02:00"
      transaction_log_retention_days = 7

      backup_retention_settings {
        retained_backups = 30
      }
    }
  }

  deletion_protection = true

  lifecycle {
    prevent_destroy = true
  }
}

# GCS bucket with prevent_destroy
resource "google_storage_bucket" "critical_data" {
  name     = "company-critical-data"
  location = "US"

  versioning {
    enabled = true
  }

  soft_delete_policy {
    retention_duration_seconds = 2592000 # 30 days
  }

  retention_policy {
    retention_period = 7776000 # 90 days in seconds
  }

  lifecycle {
    prevent_destroy = true
  }
}

# GKE cluster with prevent_destroy
resource "google_container_cluster" "prod" {
  name     = "prod-cluster"
  location = "us-central1-a"

  deletion_protection = true

  lifecycle {
    prevent_destroy = true
  }
}

# Project with prevent_destroy
resource "google_project" "production" {
  name       = "Production Web App"
  project_id = "prod-web-app"
  org_id     = var.org_id

  lifecycle {
    prevent_destroy = true
  }
}
```
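The Terraform and GCP layers block different deletion paths; a toy model of which guard stops which path (illustrative only, not provider code):

```python
def delete_blocked(path: str, gcp_flag: bool, prevent_destroy: bool) -> bool:
    """path is 'gcloud' or 'api' (stopped by the GCP deletion-protection
    flag) or 'terraform' (stopped by prevent_destroy at plan time, and
    by the GCP flag again at apply time)."""
    if path == "terraform":
        return prevent_destroy or gcp_flag
    return gcp_flag

# Only both layers together block every path:
assert delete_blocked("gcloud", gcp_flag=True, prevent_destroy=False)
assert not delete_blocked("gcloud", gcp_flag=False, prevent_destroy=True)
assert delete_blocked("terraform", gcp_flag=False, prevent_destroy=True)
```

The second assertion is the important one: prevent_destroy alone does nothing against a direct gcloud or API delete, which is why the note below recommends enabling both.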
For defense in depth, enable both the GCP-level protection (the deletion_protection argument) AND Terraform's prevent_destroy lifecycle rule. The GCP flag protects against gcloud CLI and API deletion, while prevent_destroy protects against terraform destroy.

## Resource Protection Summary
| GCP Service | Protection Mechanism | gcloud Flag | Terraform Argument |
|---|---|---|---|
| Projects | Project Liens | resource-manager liens create | google_resource_manager_lien resource |
| Compute Engine | Deletion Protection | --deletion-protection | deletion_protection = true |
| Cloud SQL | Deletion Protection + Backups | --deletion-protection | deletion_protection = true |
| GKE | Deletion Protection | --deletion-protection | deletion_protection = true |
| Cloud Storage | Versioning + Retention + Lock | --versioning, --retention-period | versioning, retention_policy |
| BigQuery | Time Travel + Snapshots | --max_time_travel_hours | max_time_travel_hours |
| All resources | Terraform lifecycle | N/A | prevent_destroy = true |