
PSE Certification Preparation Guide: Section 4 — Managing operations (~19% of the exam)


This guide helps candidates preparing for the Google Cloud Professional Cloud Security Engineer (PSE) certification explore Section 4 of the exam through the lens of the Tech Equity RAD platform at https://radmodules.dev. Three modules are relevant to this section: GCP Services, which establishes the foundational shared infrastructure; App CloudRun, which deploys serverless containerised applications on Cloud Run; and App GKE, which deploys containerised workloads on GKE Autopilot.

You interact with each module by configuring its variables in the RAD UI deployment portal, then exploring the resulting infrastructure in the GCP Console. This guide maps each exam topic to the relevant variables you can configure and the console locations where you can observe the outcomes. It also highlights PSE objectives that are not currently implemented by these modules, providing guidelines for self-guided research and exploration.


4.1 Automating infrastructure and application security

Secure CI/CD and Binary Authorization

Concept: Integrating automated security gates into the deployment pipeline and enforcing cryptographic supply chain integrity for all running workloads.

In the RAD UI:

  • Secure CI/CD with Cloud Build: The enable_cicd_trigger variable (Group 7) integrates the source repository with Cloud Build. Secrets required during the build (e.g., API keys for test environments) are injected directly from Secret Manager into the build step — never written to build logs or exposed as plaintext in build configuration files. Built images are pushed to Artifact Registry, which automatically runs container vulnerability scanning (powered by Container Analysis) on every image push.
  • Binary Authorization: The enable_binary_authorization variable (Group 11 in GCP Services) enforces a policy requiring all container images deployed to Cloud Run or GKE Autopilot to carry a cryptographic attestation from a trusted attestor. The attestor signs images only after they have passed the Cloud Build pipeline's security checks. Images that were not built and signed through the approved pipeline are rejected at deploy time — even if a developer attempts a manual gcloud run deploy with an unsigned image.
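A build configuration implementing this pattern could look roughly like the following — a sketch only, with an illustrative repository path (`app-repo/app`) and secret name (`test-api-key`); the actual cloudbuild.yaml generated by the RAD modules may differ:

```yaml
steps:
  # Build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'us-docker.pkg.dev/$PROJECT_ID/app-repo/app:$SHORT_SHA', '.']

  # Run tests; the API key is injected from Secret Manager at runtime
  # via secretEnv and never appears in logs or in this file
  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    args: ['-c', 'docker run -e API_KEY="$$API_KEY" us-docker.pkg.dev/$PROJECT_ID/app-repo/app:$SHORT_SHA ./run-tests.sh']
    secretEnv: ['API_KEY']

availableSecrets:
  secretManager:
    - versionName: 'projects/$PROJECT_ID/secrets/test-api-key/versions/latest'
      env: 'API_KEY'

# Pushing via the images field triggers the automatic vulnerability scan
images:
  - 'us-docker.pkg.dev/$PROJECT_ID/app-repo/app:$SHORT_SHA'
```

Note the `$$API_KEY` escaping: a double `$` tells Cloud Build to defer expansion to the shell at runtime rather than treating it as a build substitution.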

Console Exploration: Navigate to Cloud Build > History to view pipeline execution records. Click on a build to see the individual steps, the Cloud Build service account used (note its minimal IAM permissions), and the Secret Manager references used for secret injection. Navigate to Artifact Registry > [repository], click an image digest, and open the Vulnerabilities tab to review the CVE scan results and severity breakdown. Navigate to Security > Binary Authorization to view the active enforcement policy and the configured attestors.

Real-world example: A company's Cloud Build pipeline pushes a new image to Artifact Registry, which triggers an automatic vulnerability scan. The scan detects a critical CVE (CVSS 9.8) in an outdated base image. The Binary Authorization attestor's signing logic refuses to attest the image, so no attestation is written. When the CD pipeline attempts to deploy the image to the GKE cluster, the Binary Authorization admission controller rejects the deployment with a policy violation. The engineer receives an alert, updates the base image, rebuilds, and the corrected image passes both the vulnerability scan and the attestation step. The unsigned image with the critical CVE can never run in production — even if a developer tries to deploy it manually, bypassing the CI/CD pipeline entirely.
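The deploy-time rejection in this example corresponds to a Binary Authorization policy of roughly this shape (a sketch — the attestor name `build-attestor` is illustrative), which can be applied with `gcloud container binauthz policy import policy.yaml`:

```yaml
globalPolicyEvaluationMode: ENABLE
defaultAdmissionRule:
  evaluationMode: REQUIRE_ATTESTATION
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
  requireAttestationsBy:
    - projects/PROJECT_ID/attestors/build-attestor
```

With `ENFORCED_BLOCK_AND_AUDIT_LOG`, any image lacking an attestation from the listed attestor is blocked at admission time and the violation is written to the audit log — which is exactly the failure the engineer sees in the scenario above.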

💡 Additional Infrastructure Security Objectives & Learning Guidelines

  • Automating CVE Scanning in CI/CD: Research how to integrate Artifact Registry vulnerability scanning results into Cloud Build pipelines as a gate. A Cloud Build step can call the Container Analysis API to retrieve scan findings for the newly built image and fail the pipeline if any CRITICAL or HIGH severity CVEs are present — preventing vulnerable images from ever receiving a Binary Authorization attestation. Navigate to Artifact Registry > [image] > Security insights to explore the vulnerability data model.
  • VM and Container Image Hardening: Research Shielded VMs, which use three mechanisms to protect boot integrity: Secure Boot (prevents loading unsigned bootloaders and kernel modules), vTPM (virtual Trusted Platform Module that stores integrity measurements), and Integrity Monitoring (compares live boot measurements against a known-good baseline and alerts on deviation). For containers, study distroless base images (no shell, no package manager — minimal attack surface), running containers as non-root users, and using read-only root filesystems. Navigate to Compute Engine > VM instances > [instance] > Security to view the Shielded VM status for a running instance.
  • VM Patch Management: Research OS Patch Management in Compute Engine, which enables centralised scanning and automated patching of VM fleets. Patch deployments can target VMs by label, instance group, or zone and can be scheduled for defined maintenance windows to minimise operational impact. Compliance reports show which VMs in the fleet are out of date, with drill-down to the specific missing patches. Navigate to Compute Engine > Patch management to create a patch job and review the compliance dashboard.
  • Managing Policy and Drift Detection at Scale: Research Security Command Center's Security Health Analytics (SHA), which continuously evaluates all GCP resources in a project or organization against a library of security detectors — identifying open firewall rules allowing 0.0.0.0/0 on port 22, public Cloud Storage buckets, disabled audit logging, and over-privileged service accounts. Custom SHA modules allow you to author organization-specific detectors using CEL. Navigate to Security > Security Command Center > Findings and filter by source to explore SHA findings. Also research Cloud Asset Inventory, which tracks configuration changes to every GCP resource over time, enabling drift detection by comparing current state against a known-good baseline.
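The CVE-gating idea in the first bullet above can be sketched as an extra Cloud Build step — illustrative only; it assumes the image path used elsewhere in this guide and shells out to `gcloud artifacts docker images describe --show-package-vulnerability` rather than calling the Container Analysis API directly:

```yaml
# Hypothetical gate step: fail the build if the scan reports any CRITICAL CVE,
# so the image never reaches the attestation step
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      gcloud artifacts docker images describe \
        "us-docker.pkg.dev/$PROJECT_ID/app-repo/app:$SHORT_SHA" \
        --show-package-vulnerability --format=json > scan.json
      if grep -q '"effectiveSeverity": "CRITICAL"' scan.json; then
        echo "CRITICAL vulnerability found; failing build"
        exit 1
      fi
```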

4.2 Configuring logging, monitoring, and detection

Cloud Monitoring, Alert Policies, and Audit Logs

Concept: Capturing all security-relevant events, making them accessible only to authorized analysts, and generating actionable alerts for anomalous activity.

In the RAD UI:

  • Custom Dashboards and Alert Policies: The modules automatically provision Cloud Monitoring dashboards and MQL-based alert policies. In GCP Services, alert_cpu_threshold, alert_memory_threshold, and alert_disk_threshold (Group 17) monitor infrastructure health baselines that can indicate resource abuse.
  • Notification Channels: support_users (Group 1) and notification_alert_emails (Group 17) are mapped to Cloud Monitoring Notification Channels, routing alerts to designated operators when thresholds are breached.
  • Audit Log Generation: All Terraform operations executed via Cloud Build generate Admin Activity audit logs, providing an immutable, chronological record of every infrastructure change, IAM binding modification, and API call made during deployment.
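An MQL condition of the kind these alert policies use might look like the following — a sketch for a CPU-utilisation threshold on Compute Engine instances; the exact metrics, windows, and thresholds configured by the RAD modules may differ:

```
fetch gce_instance
| metric 'compute.googleapis.com/instance/cpu/utilization'
| group_by 1m, [value_utilization_mean: mean(value.utilization)]
| every 1m
| condition value_utilization_mean > 0.8
```

When the condition holds, the alert policy fires and routes to the notification channels derived from support_users and notification_alert_emails.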

Console Exploration: Navigate to Monitoring > Alerting to review the MQL-based alert policies and their notification channel bindings. Navigate to Logging > Logs Explorer. Filter with log_id("cloudaudit.googleapis.com/activity") to view Admin Activity audit logs showing who changed what and when. Filter with log_id("cloudaudit.googleapis.com/data_access") to view Data Access logs (note: these must be explicitly enabled per service and can generate high volume — apart from BigQuery, they are disabled by default to control costs).
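In the query pane these filters can be combined with field and time constraints — for example, to surface IAM policy changes in a window of interest (a sketch; the method name shown is real, the timestamp is illustrative):

```
log_id("cloudaudit.googleapis.com/activity")
protoPayload.methodName="SetIamPolicy"
timestamp >= "2025-01-01T00:00:00Z"
```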

Real-world example: A security operations team enables Data Access audit logs for BigQuery across their organization and exports them via a log sink to a dedicated BigQuery dataset for forensic analysis. When their SIEM raises an alert about an anomalous 50 GB data export at 03:00 UTC by a rarely-used service account, the analyst queries the log table directly in BigQuery using SQL to identify every table accessed, the full query text, bytes billed, and the source IP — reconstructing the complete scope of the potential data exfiltration in under five minutes rather than days of manual log correlation.
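The analyst's query might look like the following — a sketch assuming an illustrative project, export dataset, and service-account address; the table name and `protopayload_auditlog` fields follow the standard BigQuery export schema for Data Access audit logs:

```sql
SELECT
  timestamp,
  protopayload_auditlog.authenticationInfo.principalEmail AS principal,
  protopayload_auditlog.methodName AS method,
  protopayload_auditlog.requestMetadata.callerIp AS source_ip
FROM `my-project.audit_export.cloudaudit_googleapis_com_data_access`
WHERE protopayload_auditlog.authenticationInfo.principalEmail
        = 'etl-batch@my-project.iam.gserviceaccount.com'
  AND timestamp BETWEEN TIMESTAMP('2025-03-01 02:00:00 UTC')
                    AND TIMESTAMP('2025-03-01 04:00:00 UTC')
ORDER BY timestamp;
```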

💡 Additional Logging and Detection Objectives & Learning Guidelines

  • VPC Flow Logs: Enable VPC Flow Logs on subnets to capture metadata about all accepted and rejected network flows (source/destination IP and port, bytes transferred, protocol, latency). Flow logs are written to Cloud Logging and can be exported to BigQuery for analysis. Navigate to VPC network > Subnets > [subnet] > Edit > Flow logs: On and set an appropriate sampling rate (1.0 = 100% for forensic completeness; lower for cost reduction). Flow logs are essential for detecting lateral movement, exfiltration attempts, and unexpected inter-service communication.
  • Cloud NGFW Logs and Packet Mirroring: Enable logging on individual Cloud NGFW firewall rules (VPC network > Firewall rules > [rule] > Logs: On) to capture which rules matched each connection attempt — providing evidence of blocked intrusion attempts and validating that allow rules are behaving as expected. For deeper inspection, research Packet Mirroring, which copies full packet payloads (not just flow metadata) from specified VM instances to a collector Internal Load Balancer connected to a network analysis or IDS tool. This enables full packet capture for east-west traffic forensics.
  • Cloud Intrusion Detection System (Cloud IDS): Research Cloud IDS, which uses the Palo Alto Networks threat detection engine to inspect traffic for malware signatures, spyware, C2 (command-and-control) communication patterns, and known exploitation attempts. Cloud IDS receives mirrored traffic via Packet Mirroring from a monitored subnet and writes threat findings to Cloud Logging and Security Command Center. Navigate to Network Security > Cloud IDS to explore endpoint and traffic mirroring configuration. Understand that Cloud IDS is a passive detection tool (no blocking) — pair it with Cloud NGFW for automated blocking based on IDS findings.
  • Designing an Effective Logging Strategy: A complete logging strategy must answer: What is collected? (All Admin Activity logs — always on; Data Access logs — enable selectively for sensitive services; VPC Flow Logs and firewall rule logs for network visibility.) Where is it stored? (Default log bucket for operational use; dedicated locked log bucket with Bucket Lock for compliance immutability.) Who can access it? (Grant roles/logging.viewer for general operational access and roles/logging.viewAccessor on specific log views for analysts; restrict roles/logging.admin to the security team.) How long is it retained? (Admin Activity logs are kept for 400 days in the non-configurable _Required bucket, which satisfies most compliance frameworks; other logs default to 30 days in the _Default bucket — use log sinks to Cloud Storage with Bucket Lock for long-term archival beyond these windows.)
  • Designing Secure Access to Logs: Sensitive logs (Data Access logs for BigQuery, Cloud SQL, Secret Manager) often contain PII or reveal business-sensitive patterns. Use Log Views within Log Buckets to give different teams access to different subsets of logs without sharing all log data. Navigate to Logging > Log Storage > Log Buckets > [bucket] > Log Views to create a view filtered to a specific resource type or log name. Grant roles/logging.viewAccessor on the specific view rather than the entire bucket.
  • Exporting Logs via Log Sinks: Research log sinks (log routers) to export logs to: Cloud Storage (long-term archival at low cost — use Bucket Lock for WORM compliance), BigQuery (SQL-based forensic analysis and anomaly detection), or Pub/Sub (real-time streaming to a SIEM or downstream processing pipeline). Navigate to Logging > Log Router > Create sink to configure a sink with an appropriate inclusion filter. Study aggregated sinks, which collect logs from all projects within a folder or organization into a single centralized export destination — essential for organizations that need a unified security data lake across hundreds of projects.
  • Log Analytics: Research Log Analytics, an upgraded log bucket feature enabling SQL-based querying of log data directly in the Cloud Logging console without exporting to BigQuery. Navigate to Logging > Log Analytics and run ad-hoc queries — for example, identifying the top 10 principals by number of SetIamPolicy calls in the past 7 days, or calculating the 99th-percentile request latency per Cloud Run service revision. Log Analytics is faster for operational queries than exporting to BigQuery and does not incur BigQuery query costs.
  • Configuring and Monitoring Security Command Center (SCC): Research SCC as the centralized security management platform that consolidates findings from Security Health Analytics, Container Threat Detection (runtime threat detection for GKE workloads), Event Threat Detection (detects threats in Cloud Logging streams, including brute force, data exfiltration, and crypto-mining), Virtual Machine Threat Detection (memory-based threat detection for Compute Engine), and Web Security Scanner. Navigate to Security > Security Command Center > Dashboard to review the overall security posture score and open findings by severity. Configure SCC notifications to Pub/Sub for integration with ticketing systems or SIEM workflows by navigating to SCC > Settings > Notifications.
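As a concrete sketch of the aggregated-sink pattern described above (names and IDs are illustrative; the RAD modules do not necessarily create this), Terraform for an organization-wide export of Admin Activity logs to a central BigQuery dataset could look like:

```hcl
# Aggregated sink: route Admin Activity logs from every project in the
# organization into one central BigQuery dataset (a security data lake).
resource "google_logging_organization_sink" "audit_lake" {
  name             = "org-admin-activity-sink"
  org_id           = "123456789012" # illustrative org ID
  destination      = "bigquery.googleapis.com/projects/sec-central/datasets/audit_lake"
  filter           = "log_id(\"cloudaudit.googleapis.com/activity\")"
  include_children = true # captures child projects, making the sink aggregated
}

# The sink's writer identity must be granted write access on the dataset.
resource "google_bigquery_dataset_iam_member" "sink_writer" {
  project    = "sec-central"
  dataset_id = "audit_lake"
  role       = "roles/bigquery.dataEditor"
  member     = google_logging_organization_sink.audit_lake.writer_identity
}
```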