$ cat topics/kubernetes-troubleshooting

# Kubernetes Troubleshooting Scenarios

---
> Scenario #1: Zombie Pods Causing NodeDrain to Hang
> Scenario #2: API Server Crash Due to Excessive CRD Writes
> Scenario #3: Node Not Rejoining After Reboot
> Scenario #4: Etcd Disk Full Causing API Server Timeout
> Scenario #5: Misconfigured Taints Blocking Pod Scheduling
> Scenario #6: Kubelet DiskPressure Loop on Large Image Pulls
> Scenario #7: Node Goes NotReady Due to Clock Skew
> Scenario #8: API Server High Latency Due to Event Flooding
> Scenario #9: CoreDNS CrashLoop on Startup
> Scenario #10: Control Plane Unavailable After Flannel Misconfiguration
> Scenario #11: kube-proxy IPTables Rules Overlap Breaking Networking
> Scenario #12: Stuck CSR Requests Blocking New Node Joins
> Scenario #13: Failed Cluster Upgrade Due to Unready Static Pods
> Scenario #14: Uncontrolled Logs Filled Disk on All Nodes
> Scenario #15: Node Drain Fails Due to PodDisruptionBudget Deadlock
> Scenario #16: CrashLoop of Kube-Controller-Manager on Boot
> Scenario #17: Inconsistent Cluster State After Partial Backup Restore
> Scenario #18: kubelet Unable to Pull Images Due to Proxy Misconfig
> Scenario #19: Multiple Nodes Marked Unreachable Due to Flaky Network Interface
> Scenario #20: Node Labels Accidentally Overwritten by DaemonSet
> Scenario #21: Cluster Autoscaler Continuously Spawning and Deleting Nodes
> Scenario #22: Stale Finalizers Preventing Namespace Deletion
> Scenario #23: CoreDNS CrashLoop Due to Invalid ConfigMap Update
> Scenario #24: Pod Eviction Storm Due to DiskPressure
> Scenario #25: Orphaned PVs Causing Unscheduled Pods
> Scenario #26: Taints and Tolerations Mismatch Prevented Workload Scheduling
> Scenario #27: Node Bootstrap Failure Due to Unavailable Container Registry
> Scenario #28: kubelet Fails to Start Due to Expired TLS Certs
> Scenario #29: kube-scheduler Crash Due to Invalid Leader Election Config
> Scenario #30: Cluster DNS Resolution Broken After Calico CNI Update
> Scenario #31: Node Clock Drift Causing Authentication Failures
> Scenario #32: Inconsistent Node Labels Causing Scheduling Bugs
> Scenario #33: API Server Slowdowns from High Watch Connection Count
> Scenario #34: Etcd Disk Full Crashing the Cluster
> Scenario #35: ClusterConfigMap Deleted by Accident Bringing Down Addons
> Scenario #36: Misconfigured NodeAffinity Excluding All Nodes
> Scenario #37: Outdated Admission Webhook Blocking All Deployments
> Scenario #38: API Server Certificate Expiry Blocking Cluster Access
> Scenario #39: CRI Socket Mismatch Preventing kubelet Startup
> Scenario #40: Cluster-Wide Crash Due to Misconfigured Resource Quotas
> Scenario #41: Cluster Upgrade Failing Due to CNI Compatibility
> Scenario #42: Failed Pod Security Policy Enforcement Causing Privileged Container Launch
> Scenario #43: Node Pool Scaling Impacting StatefulSets
> Scenario #44: Kubelet Crash Due to Out of Memory (OOM) Errors
> Scenario #45: DNS Resolution Failure in Multi-Cluster Setup
> Scenario #46: Insufficient Resource Limits in Autoscaling Setup
> Scenario #47: Control Plane Overload Due to High Audit Log Volume
> Scenario #48: Resource Fragmentation Causing Cluster Instability
> Scenario #49: Failed Cluster Backup Due to Misconfigured Volume Snapshots
> Scenario #50: Failed Deployment Due to Image Pulling Issues
> Scenario #51: High Latency Due to Inefficient Ingress Controller Configuration
> Scenario #52: Node Draining Delay During Maintenance
> Scenario #53: Unresponsive Cluster After Large-Scale Deployment
> Scenario #54: Failed Node Recovery Due to Corrupt Kubelet Configuration
> Scenario #55: Resource Exhaustion Due to Misconfigured Horizontal Pod Autoscaler
> Scenario #56: Inconsistent Application Behavior After Pod Restart
> Scenario #57: Cluster-wide Service Outage Due to Missing ClusterRoleBinding
> Scenario #58: Node Overcommitment Leading to Pod Evictions
> Scenario #59: Failed Pod Startup Due to Image Pull Policy Misconfiguration
> Scenario #60: Excessive Control Plane Resource Usage During Pod Scheduling
> Scenario #61: Persistent Volume Claim Failure Due to Resource Quota Exceedance
> Scenario #62: Failed Pod Rescheduling Due to Node Affinity Misconfiguration
> Scenario #63: Intermittent Network Latency Due to Misconfigured CNI Plugin
> Scenario #64: Excessive Pod Restarts Due to Resource Limits
> Scenario #65: Cluster Performance Degradation Due to Excessive Logs
> Scenario #66: Insufficient Cluster Capacity Due to Unchecked CronJobs
> Scenario #67: Unsuccessful Pod Scaling Due to Affinity/Anti-Affinity Conflict
> Scenario #68: Cluster Inaccessibility Due to API Server Throttling
> Scenario #69: Persistent Volume Expansion Failure
> Scenario #70: Unauthorized Access to Cluster Resources Due to RBAC Misconfiguration
> Scenario #71: Inconsistent Pod State Due to Image Pull Failures
> Scenario #72: Pod Disruption Due to Insufficient Node Resources
> Scenario #73: Service Discovery Issues Due to DNS Resolution Failures
> Scenario #74: Persistent Volume Provisioning Delays
> Scenario #75: Deployment Rollback Failure Due to Missing Image
> Scenario #76: Kubernetes Master Node Unresponsive After High Load
> Scenario #77: Failed Pod Restart Due to Inadequate Node Affinity
> Scenario #78: ReplicaSet Scaling Issues Due to Resource Limits
> Scenario #79: Missing Namespace After Cluster Upgrade
> Scenario #80: Inefficient Resource Usage Due to Misconfigured Horizontal Pod Autoscaler
> Scenario #81: Pod Disruption Due to Unavailable Image Registry
> Scenario #82: Pod Fails to Start Due to Insufficient Resource Requests
> Scenario #83: Horizontal Pod Autoscaler Under-Scaling During Peak Load
> Scenario #84: Pod Eviction Due to Node Disk Pressure
> Scenario #85: Failed Node Drain Due to In-Use Pods
> Scenario #86: Cluster Autoscaler Not Scaling Up
> Scenario #87: Pod Network Connectivity Issues After Node Reboot
> Scenario #88: Insufficient Permissions Leading to Unauthorized Access Errors
> Scenario #89: Failed Pod Upgrade Due to Incompatible API Versions
> Scenario #90: High CPU Utilization Due to Inefficient Application Code
> Scenario #91: Resource Starvation Due to Over-provisioned Pods
> Scenario #92: Unscheduled Pods Due to Insufficient Affinity Constraints
> Scenario #93: Pod Readiness Probe Failure Due to Slow Initialization
> Scenario #94: Incorrect Ingress Path Handling Leading to 404 Errors
> Scenario #95: Node Pool Scaling Failure Due to Insufficient Quotas
> Scenario #96: Pod Crash Loop Due to Missing ConfigMap
> Scenario #97: Kubernetes API Server Slowness Due to Excessive Logging
> Scenario #98: Pod Scheduling Failure Due to Taints and Tolerations Misconfiguration
> Scenario #99: Unresponsive Dashboard Due to High Resource Usage
> Scenario #100: Resource Limits Causing Container Crashes
> Scenario #101: Pod Communication Failure Due to Network Policy Misconfiguration
> Scenario #102: DNS Resolution Failure Due to CoreDNS Pod Crash
> Scenario #103: Network Latency Due to Misconfigured Service Type
> Scenario #104: Inconsistent Pod-to-Pod Communication Due to MTU Mismatch
> Scenario #105: Service Discovery Failure Due to DNS Pod Resource Limits
> Scenario #106: Pod IP Collision Due to Insufficient IP Range
> Scenario #107: Network Bottleneck Due to Single Node in NodePool
> Scenario #108: Network Partitioning Due to CNI Plugin Failure
> Scenario #109: Misconfigured Ingress Resource Causing SSL Errors
> Scenario #110: Cluster Autoscaler Fails to Scale Nodes Due to Incorrect IAM Role Permissions
> Scenario #111: DNS Resolution Failure Due to Incorrect Pod IP Allocation
> Scenario #112: Failed Pod-to-Service Communication Due to Port Binding Conflict
> Scenario #113: Pod Eviction Due to Network Resource Constraints
> Scenario #114: Intermittent Network Disconnects Due to MTU Mismatch Between Nodes
> Scenario #115: Service Load Balancer Failing to Route Traffic to New Pods
> Scenario #116: Network Traffic Drop Due to Overlapping CIDR Blocks
> Scenario #117: Misconfigured DNS Resolvers Leading to Service Discovery Failure
> Scenario #118: Intermittent Latency Due to Overloaded Network Interface
> Scenario #119: Pod Disconnection During Network Partition
> Scenario #120: Pod-to-Pod Communication Blocked by Network Policies
> Scenario #121: Unresponsive External API Due to DNS Resolution Failure
> Scenario #122: Load Balancer Health Checks Failing After Pod Update
> Scenario #123: Pod Network Performance Degradation After Node Upgrade
> Scenario #124: Service IP Conflict Due to CIDR Overlap
> Scenario #125: High Latency in Inter-Namespace Communication
> Scenario #126: Pod Network Disruptions Due to CNI Plugin Update
> Scenario #127: Loss of Service Traffic Due to Missing Ingress Annotations
> Scenario #128: Node Pool Draining Timeout Due to Slow Pod Termination
> Scenario #129: Failed Cluster Upgrade Due to Incompatible API Versions
> Scenario #130: DNS Resolution Failure for Services After Pod Restart
> Scenario #131: Pod IP Address Changes Causing Application Failures
> Scenario #132: Service Exposure Failed Due to Misconfigured Load Balancer
> Scenario #133: Network Latency Spikes During Pod Autoscaling
> Scenario #134: Service Not Accessible Due to Incorrect Namespace Selector
> Scenario #135: Intermittent Pod Connectivity Due to Network Plugin Bug
> Scenario #136: Failed Ingress Traffic Routing Due to Missing Annotations
> Scenario #137: Pod IP Conflict Causing Service Downtime
> Scenario #138: Latency Due to Unoptimized Service Mesh Configuration
> Scenario #139: DNS Resolution Failure After Cluster Upgrade
> Scenario #140: Service Mesh Sidecar Injection Failure
> Scenario #141: Network Bandwidth Saturation During Large-Scale Deployments
> Scenario #142: Inconsistent Network Policies Blocking Internal Traffic
> Scenario #143: Pod Network Latency Caused by Overloaded CNI Plugin
> Scenario #144: TCP Retransmissions Due to Network Saturation
> Scenario #145: DNS Lookup Failures Due to Resource Limits
> Scenario #146: Service Exposure Issues Due to Incorrect Ingress Configuration
> Scenario #147: Pod-to-Pod Communication Failure Due to Network Policy
> Scenario #148: Unstable Network Due to Overlay Network Misconfiguration
> Scenario #149: Intermittent Pod Network Connectivity Due to Cloud Provider Issues
> Scenario #150: Port Conflicts Between Services in Different Namespaces
> Scenario #151: NodePort Service Not Accessible Due to Firewall Rules
> Scenario #152: DNS Latency Due to Overloaded CoreDNS Pods
> Scenario #153: Network Performance Degradation Due to Misconfigured MTU
> Scenario #154: Application Traffic Routing Issue Due to Incorrect Ingress Resource
> Scenario #155: Intermittent Service Disruptions Due to DNS Caching Issue
> Scenario #156: Flannel Overlay Network Interruption Due to Node Failure
> Scenario #157: Network Traffic Loss Due to Port Collision in Network Policy
> Scenario #158: CoreDNS Service Failures Due to Resource Exhaustion
> Scenario #159: Pod Network Partition Due to Misconfigured IPAM
> Scenario #160: Network Performance Degradation Due to Overloaded CNI Plugin
> Scenario #161: Network Performance Degradation Due to Overloaded CNI Plugin
> Scenario #162: DNS Resolution Failures Due to Misconfigured CoreDNS
> Scenario #163: Network Partition Due to Incorrect Calico Configuration
> Scenario #164: IP Overlap Leading to Communication Failure Between Pods
> Scenario #165: Pod Network Latency Due to Overloaded Kubernetes Network Interface
> Scenario #166: Intermittent Connectivity Failures Due to Pod DNS Cache Expiry
> Scenario #167: Flapping Network Connections Due to Misconfigured Network Policies
> Scenario #168: Cluster Network Downtime Due to CNI Plugin Upgrade
> Scenario #169: Inconsistent Pod Network Connectivity in Multi-Region Cluster
> Scenario #170: Pod Network Partition Due to Network Policy Blocking DNS Requests
> Scenario #171: Network Bottleneck Due to Overutilized Network Interface
> Scenario #172: Network Latency Caused by Overloaded VPN Tunnel
> Scenario #173: Dropped Network Packets Due to MTU Mismatch
> Scenario #174: Pod Network Isolation Due to Misconfigured Network Policy
> Scenario #175: Service Discovery Failures Due to CoreDNS Pod Crash
> Scenario #176: Pod DNS Resolution Failure Due to CoreDNS Configuration Issue
> Scenario #177: DNS Latency Due to Overloaded CoreDNS Pods
> Scenario #178: Pod Network Degradation Due to Overlapping CIDR Blocks
> Scenario #179: Service Discovery Failures Due to Network Policy Blocking DNS Traffic
> Scenario #180: Intermittent Network Connectivity Due to Overloaded Overlay Network
> Scenario #181: Pod-to-Pod Communication Failure Due to CNI Plugin Configuration Issue
> Scenario #182: Sporadic DNS Failures Due to Resource Contention in CoreDNS Pods
> Scenario #183: High Latency in Pod-to-Node Communication Due to Overlay Network
> Scenario #184: Service Discovery Issues Due to DNS Cache Staleness
> Scenario #185: Network Partition Between Node Pools in Multi-Zone Cluster
> Scenario #186: Pod Network Isolation Failure Due to Missing NetworkPolicy
> Scenario #187: Flapping Node Network Connectivity Due to MTU Mismatch
> Scenario #188: DNS Query Timeout Due to Unoptimized CoreDNS Config
> Scenario #189: Traffic Splitting Failure Due to Incorrect Service LoadBalancer Configuration
> Scenario #190: Network Latency Between Pods in Different Regions
> Scenario #191: Port Collision Between Services Due to Missing Port Ranges
> Scenario #192: Pod-to-External Service Connectivity Failures Due to Egress Network Policy
> Scenario #193: Pod Connectivity Loss After Network Plugin Upgrade
> Scenario #194: External DNS Not Resolving After Cluster Network Changes
> Scenario #195: Slow Pod Communication Due to Misconfigured MTU in Network Plugin
> Scenario #196: High CPU Usage in Nodes Due to Overloaded Network Plugin
> Scenario #197: Cross-Namespace Network Isolation Not Enforced
> Scenario #198: Inconsistent Service Discovery Due to CoreDNS Misconfiguration
> Scenario #199: Network Segmentation Issues Due to Misconfigured CNI
> Scenario #200: DNS Cache Poisoning in CoreDNS
> Scenario #201: Unauthorized Access to Secrets Due to Incorrect RBAC Permissions
> Scenario #202: Insecure Network Policies Leading to Pod Exposure
> Scenario #203: Privileged Container Vulnerability Due to Incorrect Security Context
> Scenario #204: Exposed Kubernetes Dashboard Due to Misconfigured Ingress
> Scenario #205: Unencrypted Communication Between Pods Due to Missing TLS Configuration
> Scenario #206: Sensitive Data in Logs Due to Improper Log Sanitization
> Scenario #207: Insufficient Pod Security Policies Leading to Privilege Escalation
> Scenario #208: Service Account Token Compromise
> Scenario #209: Lack of Regular Vulnerability Scanning in Container Images
> Scenario #210: Insufficient Container Image Signing Leading to Unverified Deployments
> Scenario #211: Insecure Default Namespace Leading to Unauthorized Access
> Scenario #212: Vulnerable OpenSSL Version in Container Images
> Scenario #213: Misconfigured API Server Authentication Allowing External Access
> Scenario #214: Insufficient Node Security Due to Lack of OS Hardening
> Scenario #215: Unrestricted Ingress Access to Sensitive Resources
> Scenario #216: Exposure of Sensitive Data in Container Environment Variables
> Scenario #217: Inadequate Container Resource Limits Leading to DoS Attacks
> Scenario #218: Exposure of Container Logs Due to Insufficient Log Management
> Scenario #219: Using Insecure Docker Registry for Container Images
> Scenario #220: Weak Pod Security Policies Leading to Privileged Containers
> Scenario #221: Unsecured Kubernetes Dashboard
> Scenario #222: Using HTTP Instead of HTTPS for Ingress Resources
> Scenario #223: Insecure Network Policies Exposing Internal Services
> Scenario #224: Exposing Sensitive Secrets in Environment Variables
> Scenario #225: Insufficient RBAC Permissions Leading to Unauthorized Access
> Scenario #226: Insecure Ingress Controller Exposed to the Internet
> Scenario #227: Lack of Security Updates in Container Images
> Scenario #228: Exposed Kubelet API Without Authentication
> Scenario #229: Inadequate Logging of Sensitive Events
> Scenario #230: Misconfigured RBAC Allowing Cluster Admin Privileges to Developers
> Scenario #231: Insufficiently Secured Service Account Permissions
> Scenario #232: Cluster Secrets Exposed Due to Insecure Mounting
> Scenario #233: Improperly Configured API Server Authorization
> Scenario #234: Compromised Image Registry Access Credentials
> Scenario #235: Insufficiently Secured Cluster API Server Access
> Scenario #236: Misconfigured Admission Controllers Allowing Insecure Resources
> Scenario #237: Lack of Security Auditing and Monitoring in Cluster
> Scenario #238: Exposed Internal Services Due to Misconfigured Load Balancer
> Scenario #239: Kubernetes Secrets Accessed via Insecure Network
> Scenario #240: Pod Security Policies Not Enforced
> Scenario #241: Unpatched Vulnerabilities in Cluster Nodes
> Scenario #242: Weak Network Policies Allowing Unrestricted Traffic
> Scenario #243: Exposed Dashboard Without Authentication
> Scenario #244: Use of Insecure Container Images
> Scenario #245: Misconfigured TLS Certificates
> Scenario #246: Excessive Privileges for Service Accounts
> Scenario #247: Exposure of Sensitive Logs Due to Misconfigured Logging Setup
> Scenario #248: Use of Deprecated APIs with Known Vulnerabilities
> Scenario #249: Lack of Security Context in Pod Specifications
> Scenario #250: Compromised Container Runtime
> Scenario #251: Insufficient RBAC Permissions for Cluster Admin
> Scenario #252: Insufficient Pod Security Policies Leading to Privilege Escalation
> Scenario #253: Exposed Service Account Token in Pod
> Scenario #254: Rogue Container Executing Malicious Code
> Scenario #255: Overly Permissive Network Policies Allowing Lateral Movement
> Scenario #256: Insufficient Encryption for In-Transit Data
> Scenario #257: Exposing Cluster Services via LoadBalancer with Public IP
> Scenario #258: Privileged Containers Running Without Seccomp or AppArmor Profiles
> Scenario #259: Malicious Container Image from Untrusted Source
> Scenario #260: Unrestricted Ingress Controller Allowing External Attacks
> Scenario #261: Misconfigured Ingress Controller Exposing Internal Services
> Scenario #262: Privileged Containers Without Security Context
> Scenario #263: Unrestricted Network Policies Allowing Lateral Movement
> Scenario #264: Exposed Kubernetes Dashboard Without Authentication
> Scenario #265: Use of Vulnerable Container Images
> Scenario #266: Misconfigured Role-Based Access Control (RBAC)
> Scenario #267: Insecure Secrets Management
> Scenario #268: Lack of Audit Logging
> Scenario #269: Unrestricted Access to etcd
> Scenario #270: Absence of Pod Security Policies
> Scenario #271: Service Account Token Mounted in All Pods
> Scenario #272: Sensitive Logs Exposed via Centralized Logging
> Scenario #273: Broken Container Escape Detection
> Scenario #274: Unauthorized Cloud Metadata API Access
> Scenario #275: Admin Kubeconfig Checked into Git
> Scenario #276: JWT Token Replay Attack in Webhook Auth
> Scenario #277: Container With Hardcoded SSH Keys
> Scenario #278: Insecure Helm Chart Defaults
> Scenario #279: Shared Cluster with Overlapping Namespaces
> Scenario #280: CVE Ignored in Base Image for Months
> Scenario #281: Misconfigured PodSecurityPolicy Allowed Privileged Containers
> Scenario #282: GitLab Runners Spawning Privileged Containers
> Scenario #283: Kubernetes Secrets Mounted in World-Readable Volumes
> Scenario #284: Kubelet Port Exposed on Public Interface
> Scenario #285: Cluster Admin Bound to All Authenticated Users
> Scenario #286: Webhook Authentication Timing Out, Causing Denial of Service
> Scenario #287: CSI Driver Exposing Node Secrets
> Scenario #288: EphemeralContainers Used for Reconnaissance
> Scenario #289: hostAliases Used for Spoofing Internal Services
> Scenario #290: Privilege Escalation via Unchecked securityContext in Helm Chart
> Scenario #291: Service Account Token Leakage via Logs
> Scenario #292: Escalation via Editable Validating WebhookConfiguration
> Scenario #293: Stale Node Certificates After Rejoining Cluster
> Scenario #294: ArgoCD Exploit via Unverified Helm Charts
> Scenario #295: Node Compromise via Insecure Container Runtime
> Scenario #296: Workload with Wildcard RBAC Access to All Secrets
> Scenario #297: Malicious Init Container Used for Reconnaissance
> Scenario #298: Ingress Controller Exposed /metrics Without Auth
> Scenario #299: Secret Stored in ConfigMap by Mistake
> Scenario #300: Token Reuse After Namespace Deletion and Recreation
> Scenario #301: PVC Stuck in Terminating State After Node Crash
> Scenario #302: Data Corruption on HostPath Volumes
> Scenario #303: Volume Mount Fails Due to Node Affinity Mismatch
> Scenario #304: PVC Not Rescheduled After Node Deletion
> Scenario #305: Long PVC Rebinding Time on StatefulSet Restart
> Scenario #306: CSI Volume Plugin Crash Loops Due to Secret Rotation
> Scenario #307: ReadWriteMany PVCs Cause IO Bottlenecks
> Scenario #308: PVC Mount Timeout Due to PodSecurityPolicy
> Scenario #309: Orphaned PVs After Namespace Deletion
> Scenario #310: StorageClass Misconfiguration Blocks Dynamic Provisioning
> Scenario #311: StatefulSet Volume Cloning Results in Data Leakage
> Scenario #312: Volume Resize Not Reflected in Mounted Filesystem
> Scenario #313: CSI Controller Pod Crash Due to Log Overflow
> Scenario #314: PVs Stuck in Released Due to Missing Finalizer Removal
> Scenario #315: CSI Driver DaemonSet Deployment Missing Tolerations for Taints
> Scenario #316: Mount Propagation Issues with Sidecar Containers
> Scenario #317: File Permissions Reset on Pod Restart
> Scenario #318: Volume Mount Succeeds but Application Can't Write
> Scenario #319: Volume Snapshot Restore Includes Corrupt Data
> Scenario #320: Zombie Volumes Occupying Cloud Quota
> Scenario #321: Volume Snapshot Garbage Collection Fails
> Scenario #322: Volume Mount Delays Due to Node Drain Stale Attachment
> Scenario #323: Application Writes Lost After Node Reboot
> Scenario #324: Pod CrashLoop Due to Read-Only Volume Remount
> Scenario #325: Data Corruption on Shared Volume With Two Pods
> Scenario #326: Mount Volume Exceeded Timeout
> Scenario #327: Static PV Bound to Wrong PVC
> Scenario #328: Pod Eviction Due to DiskPressure Despite PVC
> Scenario #329: Pod Gets Stuck Due to Ghost Mount Point
> Scenario #330: PVC Resize Broke StatefulSet Ordering
> Scenario #331: ReadAfterWrite Inconsistency on Object Store-Backed CSI
> Scenario #332: PV Resize Fails After Node Reboot
> Scenario #333: CSI Driver Crash Loops on VolumeAttach
> Scenario #334: PVC Binding Fails Due to Multiple Default StorageClasses
> Scenario #335: Zombie VolumeAttachment Blocks New PVC
> Scenario #336: Persistent Volume Bound But Not Mounted
> Scenario #337: CSI Snapshot Restore Overwrites Active Data
> Scenario #338: Incomplete Volume Detach Breaks Node Scheduling
> Scenario #339: App Breaks Due to Missing SubPath After Volume Expansion
> Scenario #340: Backup Restore Process Created Orphaned PVCs
> Scenario #341: Cross-Zone Volume Binding Fails with StatefulSet
> Scenario #342: Volume Snapshot Controller Race Condition
> Scenario #343: Failed Volume Resize Blocks Rollout
> Scenario #344: Application Data Lost After Node Eviction
> Scenario #345: Read-Only PV Caused Write Failures After Restore
> Scenario #346: NFS Server Restart Crashes Pods
> Scenario #347: VolumeBindingBlocked Condition Causes Pod Scheduling Delay
> Scenario #348: Data Corruption from Overprovisioned Thin Volumes
> Scenario #349: VolumeProvisioningFailure on GKE Due to IAM Misconfiguration
> Scenario #350: Node Crash Triggers Volume Remount Loop
> Scenario #351: VolumeMount Conflict Between Init and Main Containers
> Scenario #352: PVCs Stuck in “Terminating” Due to Finalizers
> Scenario #353: Misconfigured ReadOnlyMany Mount Blocks Write Operations
> Scenario #354: In-Tree Plugin PVs Lost After Driver Migration
> Scenario #355: Pod Deleted but Volume Still Mounted on Node
> Scenario #356: Ceph RBD Volume Crashes Pods Under IOPS Saturation
> Scenario #357: ReplicaSet Using PVCs Fails Due to VolumeClaimTemplate Misuse
> Scenario #358: Filesystem Type Mismatch During Volume Attach
> Scenario #359: iSCSI Volumes Fail After Node Kernel Upgrade
> Scenario #360: PVs Not Deleted After PVC Cleanup Due to Retain Policy
> Scenario #361: Concurrent Pod Scheduling on the Same PVC Causes Mount Conflict
> Scenario #362: StatefulSet Pod Replacement Fails Due to PVC Retention
> Scenario #363: HostPath Volume Access Leaks Host Data into Container
> Scenario #364: CSI Driver Crashes When Node Resource Is Deleted Prematurely
> Scenario #365: Retained PV Blocks New Claim Binding with Identical Name
> Scenario #366: CSI Plugin Panic on Missing Mount Option
> Scenario #367: Pod Fails to Mount Volume Due to SELinux Context Mismatch
> Scenario #368: VolumeExpansion on Bound PVC Fails Due to Pod Running
> Scenario #369: CSI Driver Memory Leak on Volume Detach Loop
> Scenario #370: Volume Mount Timeout Due to Slow Cloud API
> Scenario #371: Volume Snapshot Restore Misses Application Consistency
> Scenario #372: File Locking Issue Between Multiple Pods on NFS
> Scenario #373: Pod Reboots Erase Data on EmptyDir Volume
> Scenario #374: PVC Resize Fails on In-Use Block Device
> Scenario #375: Default StorageClass Prevents PVC Binding to Custom Class
> Scenario #376: Ceph RBD Volume Mount Failure Due to Kernel Mismatch
> Scenario #377: CSI Volume Cleanup Delay Leaves Orphaned Devices
> Scenario #378: Immutable ConfigMap Used in CSI Sidecar Volume Mount
> Scenario #379: PodMount Denied Due to SecurityContext Constraints
> Scenario #380: VolumeProvisioner Race Condition Leads to Duplicated PVC
> Scenario #381: PVC Bound to Deleted PV After Restore
> Scenario #382: Unexpected Volume Type Defaults to HDD Instead of SSD
> Scenario #383: ReclaimPolicy Retain Caused Resource Leaks
> Scenario #384: ReadWriteOnce PVC Mounted by Multiple Pods
> Scenario #385: VolumeAttach Race on StatefulSet Rolling Update
> Scenario #386: CSI Driver CrashLoop Due to Missing Node Labels
> Scenario #387: PVC Deleted While Volume Still Mounted
> Scenario #388: In-Tree Volume Plugin Migration Caused Downtime
> Scenario #389: Overprovisioned Thin Volumes Hit Underlying Limit
> Scenario #390: Dynamic Provisioning Failure Due to Quota Exhaustion
> Scenario #391: PVC Resizing Didn’t Expand Filesystem Automatically
> Scenario #392: StatefulSet Pods Lost Volume Data After Node Reboot
> Scenario #393: VolumeSnapshots Failed to Restore with Immutable Fields
> Scenario #394: GKE Autopilot PVCs Stuck Due to Resource Class Conflict
> Scenario #395: Cross-Zone Volume Scheduling Failed in Regional Cluster
> Scenario #396: Stuck Finalizers on Deleted PVCs Blocking Namespace Deletion
> Scenario #397: CSI Driver Upgrade Corrupted Volume Attachments
> Scenario #398: Stale Volume Handles After Disaster Recovery Cutover
> Scenario #399: Application Wrote Outside Mounted Path and Lost Data
> Scenario #400: Cluster Autoscaler Deleted Nodes with Mounted Volumes
> Scenario #401: HPA Didn't Scale Due to Missing Metrics Server
> Scenario #402: CPU Throttling Prevented Effective Autoscaling
> Scenario #403: Overprovisioned Pods Starved the Cluster
> Scenario #404: HPA and VPA Conflicted, Causing Flapping
> Scenario #405: Cluster Autoscaler Didn't Scale Due to Pod Affinity Rules
> Scenario #406: Load Test Crashed Cluster Due to Insufficient Node Quotas
> Scenario #407: Scale-To-Zero Caused Cold Starts and SLA Violations
> Scenario #408: Misconfigured Readiness Probe Blocked HPA Scaling
> Scenario #409: Custom Metrics Adapter Crashed, Breaking Custom HPA
> Scenario #410: Application Didn’t Handle Scale-In Gracefully
> Scenario #411: Cluster Autoscaler Ignored Pod PriorityClasses
> Scenario #412: ReplicaSet Misalignment Led to Excessive Scale-Out
> Scenario #413: StatefulSet Didn't Scale Due to PodDisruptionBudget
> Scenario #414: Horizontal Pod Autoscaler Triggered by Wrong Metric
> Scenario #415: Prometheus Scraper Bottlenecked Custom HPA Metrics
> Scenario #416: Kubernetes Downscaled During Rolling Update
> Scenario #417: KEDA Failed to Scale on Kafka Lag Metric
> Scenario #418: Spike in Load Exceeded Pod Init Time
> Scenario #419: Overuse of Liveness Probes Disrupted Load Balance
> Scenario #420: Scale-In Happened Before Queue Was Drained
> Scenario #421: Node Drain Race Condition During Scale Down
> Scenario #422: HPA Disabled Due to Missing Resource Requests
> Scenario #423: Unexpected Overprovisioning of Pods
> Scenario #424: Autoscaler Failed During StatefulSet Upgrade
> Scenario #425: Inadequate Load Distribution in a Multi-AZ Setup
> Scenario #426: Downscale Too Aggressive During Traffic Dips
> Scenario #427: Insufficient Scaling Under High Ingress Traffic
> Scenario #428: Nginx Ingress Controller Hit Rate Limit on External API
> Scenario #429: Resource Constraints on Node Impacted Pod Scaling
> Scenario #430: Memory Leak in Application Led to Excessive Scaling
> Scenario #431: Inconsistent Pod Scaling During Burst Traffic
> Scenario #432: Auto-Scaling Hit Limits with StatefulSet
> Scenario #433: Cross-Cluster Autoscaling Failures
> Scenario #434: Service Disruption During Auto-Scaling of StatefulSet
> Scenario #435: Unwanted Pod Scale-down During Quiet Periods
> Scenario #436: Cluster Autoscaler Inconsistencies with Node Pools
> Scenario #437: Disrupted Service During Pod Autoscaling in StatefulSet
> Scenario #438: Slow Pod Scaling During High Load
> Scenario #439: Autoscaler Skipped Scale-up Due to Incorrect Metric
> Scenario #440: Scaling Inhibited Due to Pending Jobs in Queue
> Scenario #441: Scaling Delayed Due to Incorrect Resource Requests
> Scenario #442: Unexpected Pod Termination Due to Scaling Policy
> Scenario #443: Unstable Load Balancing During Scaling Events
> Scenario #444: Autoscaling Ignored Due to Resource Quotas
> Scenario #445: Delayed Scaling Response to Traffic Spike
> Scenario #446: CPU Utilization-Based Scaling Did Not Trigger for High Memory Usage
> Scenario #447: Inefficient Horizontal Scaling of StatefulSets
> Scenario #448: Autoscaler Skipped Scaling Events Due to Flaky Metrics
> Scenario #449: Delayed Pod Creation Due to Node Affinity Misconfigurations
> Scenario #450: Excessive Scaling During Short-Term Traffic Spikes
> Scenario #451: Inconsistent Scaling Due to Misconfigured Horizontal Pod Autoscaler
> Scenario #452: Load Balancer Overload After Quick Pod Scaling
> Scenario #453: Autoscaling Failed During Peak Traffic Periods
> Scenario #454: Insufficient Node Resources During Scaling
> Scenario #455: Unpredictable Pod Scaling During Cluster Autoscaler Event
> Scenario #456: CPU Resource Over-Commitment During Scale-Up
> Scenario #457: Failure to Scale Due to Horizontal Pod Autoscaler Anomaly
> Scenario #458: Memory Pressure Causing Slow Pod Scaling
> Scenario #459: Node Over-Provisioning During Cluster Scaling
> Scenario #460: Autoscaler Fails to Handle Node Termination Events Properly
> Scenario #461: Node Failure During Pod Scaling Up
> Scenario #462: Unstable Scaling During Traffic Spikes
> Scenario #463: Insufficient Node Pools During Sudden Pod Scaling
> Scenario #464: Latency Spikes During Horizontal Pod Scaling
> Scenario #465: Resource Starvation During Infrequent Scaling Events
> Scenario #466: Autoscaler Delayed Reaction to Load Decrease
> Scenario #467: Node Resource Exhaustion Due to High Pod Density
> Scenario #468: Scaling Failure Due to Node Memory Pressure
> Scenario #469: Scaling Latency Due to Slow Node Provisioning
> Scenario #470: Slow Scaling Response Due to Insufficient Metrics Collection
> Scenario #471: Node Scaling Delayed Due to Cloud Provider API Limits
> Scenario #472: Scaling Overload Due to High Replica Count
> Scenario #473: Failure to Scale Down Due to Persistent Idle Pods
> Scenario #474: Load Balancer Misrouting After Pod Scaling
> Scenario #475: Cluster Autoscaler Not Triggering Under High Load
> Scenario #476: Autoscaling Slow Due to Cloud Provider API Delay
> Scenario #477: Over-provisioning Resources During Scaling
> Scenario #478: Incorrect Load Balancer Configuration After Node Scaling
> Scenario #479: Autoscaling Disabled Due to Resource Constraints
> Scenario #480: Resource Fragmentation Leading to Scaling Delays
> Scenario #481: Incorrect Scaling Triggers Due to Misconfigured Metrics Server
> Scenario #482: Autoscaler Misconfigured with Cluster Network Constraints
> Scenario #483: Scaling Delays Due to Resource Quota Exhaustion
> Scenario #484: Memory Resource Overload During Scaling
> Scenario #485: HPA Scaling Delays Due to Incorrect Metric Aggregation
> Scenario #486: Scaling Causing Unbalanced Pods Across Availability Zones
> Scenario #487: Failed Scaling Due to Insufficient Node Capacity for StatefulSets
> Scenario #488: Uncontrolled Resource Spikes After Scaling Large StatefulSets
> Scenario #489: Cluster Autoscaler Preventing Scaling Due to Underutilized Nodes