Kubernetes v1.36 drops on April 22, 2026 — three weeks away. Most teams will upgrade without reading the changelog. A handful will catch the breaking changes in staging. The rest will find out in production.
This post covers the three changes that will actually matter for your platform team, plus a checklist to run before you upgrade.
Change 1: HPA Scale-to-Zero Is Finally Default-On
The HPAScaleToZero feature gate has been sitting in alpha since Kubernetes v1.16. Nearly seven years of "coming soon." In v1.36, it's enabled by default.
What this means: your Horizontal Pod Autoscaler can now scale workloads down to zero replicas — not just to one. For dev and staging environments that sit idle nights and weekends, the savings are real. Early benchmarks show dev environment costs dropping from ~$450/month to ~$120/month (73% reduction).
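As a rough sanity check on where a number like that comes from, here's the arithmetic using the idle-hours pattern described later in this post (14 idle hours on weekdays, fully idle weekends). These are assumptions about a typical dev environment, not a benchmark:

```python
# Back-of-envelope estimate of scale-to-zero savings, assuming a dev
# environment idle 14 h/day on weekdays and all weekend (assumed
# figures for illustration, not measured data).
weekday_idle_hours = 14 * 5        # Mon-Fri nights
weekend_idle_hours = 24 * 2        # Sat + Sun
total_hours = 24 * 7

idle_fraction = (weekday_idle_hours + weekend_idle_hours) / total_hours
always_on_cost = 450               # $/month, the figure quoted above

new_cost = always_on_cost * (1 - idle_fraction)
print(f"idle fraction: {idle_fraction:.0%}")                  # -> idle fraction: 70%
print(f"estimated cost with scale-to-zero: ${new_cost:.0f}")  # -> estimated cost with scale-to-zero: $134
```

That lands in the same ballpark as the ~73% figure; the exact number depends entirely on your idle pattern and whether scale-down actually triggers.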
What you need to do
Set minReplicas: 0 in your HPA spec to opt in:
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 0   # <-- previously had to be 1
  maxReplicas: 10
  metrics:
    - type: External
      external:
        metric:
          name: queue_depth
        target:
          type: Value
          value: "10"
```
The catch: scaling from zero requires an external signal. Native HPA can only wake a workload from zero when an external or object metric goes non-zero, which means you'll want KEDA for event-driven workloads (SQS depth, Kafka lag, cron schedules). HPA scale-to-zero on its own works best for internal tooling, preview environments, and batch processors.
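For comparison, a KEDA ScaledObject wired to queue depth might look like the sketch below. The queue URL and names are placeholders, and this assumes KEDA is already installed in the cluster:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-app-scaler
spec:
  scaleTargetRef:
    name: my-app            # the Deployment to scale (placeholder name)
  minReplicaCount: 0        # KEDA handles the 0 -> 1 activation itself
  maxReplicaCount: 10
  triggers:
    - type: aws-sqs-queue
      metadata:
        # Placeholder queue URL -- substitute your own
        queueURL: https://sqs.us-east-1.amazonaws.com/123456789012/my-queue
        queueLength: "10"
        awsRegion: us-east-1
```

KEDA creates and manages the underlying HPA for you, so you don't maintain both objects by hand.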
Mandatory readiness probe: Without one, Kubernetes can't tell when a pod is ready to handle traffic after cold start. Add it or you'll route traffic to pods that aren't ready:
```yaml
readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5
```
Stabilization window: Prevent flapping — set stabilizationWindowSeconds: 300 to require five minutes of idleness before scale-down triggers.
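The window lives under the HPA's scale-down behavior block; a minimal fragment:

```yaml
# Fragment of an HPA spec: require 5 minutes of sustained low load
# before any scale-down (including to zero) takes effect.
behavior:
  scaleDown:
    stabilizationWindowSeconds: 300
```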
Who benefits most
- Ephemeral preview environments (one per PR): kill them at night automatically
- Dev namespaces: idle 14 hours out of 24 on weekdays, all weekend
- Batch jobs and async workers: zero replicas until the queue fills
Who should leave minReplicas: 1: anything with latency SLAs where cold-start delay would page someone.
Change 2: Ingress NGINX Is Dead — You Need a Migration Plan Today
The Kubernetes community officially announced in November 2025 that Ingress NGINX would receive best-effort maintenance until March 2026. That window is closed. There are no more releases, no security patches, no bug fixes.
If you're still running ingress-nginx in production, you're now running unmaintained software in a security-critical position (it sits in front of all your cluster traffic). The IngressNightmare disclosures (CVE-2025-1974 and related) showed how bad a compromise there can get. New CVEs will not be patched.
The migration options, in order of recommendation
Option A: Gateway API (recommended for greenfield or teams with bandwidth)
The Kubernetes Gateway API hit GA in October 2023. It's the designed successor to Ingress, with expressive routing, traffic splitting, and header matching built in. Supported by Envoy-based controllers (Istio, Cilium, kgateway).
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-app-route
spec:
  parentRefs:
    - name: my-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api
      backendRefs:
        - name: my-app-service
          port: 80
```
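An HTTPRoute attaches to a Gateway object via its parentRefs. A minimal Gateway sketch for `my-gateway` might look like this (the `istio` class name is an assumption; use whatever GatewayClass your controller installs):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-gateway          # matched by the HTTPRoute's parentRefs
spec:
  gatewayClassName: istio   # assumption: depends on your chosen controller
  listeners:
    - name: http
      protocol: HTTP
      port: 80
```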
Option B: Traefik (easiest drop-in for existing Ingress resources)
Traefik supports native NGINX compatibility mode — the closest thing to a zero-annotation migration. For teams that need to move fast and can't rewrite all their Ingress manifests, this is the pragmatic choice.
Option C: F5 NGINX Ingress Controller
F5 maintains an Apache 2.0 licensed NGINX Ingress Controller with a dedicated engineering team. If your organization is standardized on NGINX tooling and doesn't want to retrain on new APIs, this is a maintained path forward.
Option D: Cloud-managed
On EKS: AWS Load Balancer Controller. On GKE: GKE Ingress or GKE Gateway. Less portable, but fully managed and security-patched.
What to do this week
- `kubectl get ingress -A | grep nginx` — inventory every Ingress resource using the nginx class
- Check your cert-manager integrations (cert-manager works with Gateway API, but needs config updates)
- Pick your replacement and test in a non-production namespace
- Don't wait for v1.36 — this migration should have started in Q1
Change 3: Security Hardening That Will Surprise Complacent Clusters
gitRepo volume: removed, not deprecated
The gitRepo volume type has been deprecated since v1.11 — nearly eight years of deprecation warnings. In v1.36, it's gone. If you have any manifests using gitRepo volumes, your pods won't schedule after upgrade.
Why it mattered: gitRepo volumes cloned a git repository directly into a container at mount time. The implementation ran git as root on the node, creating a trivial path to node-level code execution (CVE-2024-10220 demonstrated exactly this via git hooks). It was never safe.
Migration: run git-sync as an init container (or as a sidecar, if you need continuous syncing), or pull your repo artifacts into a container image at build time:

```yaml
initContainers:
  - name: git-sync
    image: registry.k8s.io/git-sync/git-sync:v4.2.0
    args:
      - --repo=https://github.com/my-org/my-config
      - --depth=1
      - --one-time
    volumeMounts:
      - name: config-volume
        mountPath: /repo
volumes:
  - name: config-volume
    emptyDir: {}   # shared with the main container, replacing the gitRepo volume
```
externalIPs on Services: deprecated, with a timeline
The spec.externalIPs field on Service objects is deprecated in v1.36, with removal planned for v1.43. This field has been a known MitM attack vector (CVE-2020-8554) — it lets any cluster user route arbitrary external IPs to a service, potentially hijacking cluster traffic.
You won't break on v1.36, but you will start seeing deprecation warnings. Use this as your prompt to migrate:
- LoadBalancer services for cloud-managed external ingress
- NodePort for simple port exposure
- Gateway API for production traffic routing
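For most services the replacement is a straight type change. A before/after sketch (names and the documentation-range IP are illustrative):

```yaml
# Before: relies on the deprecated externalIPs field
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - port: 80
  externalIPs:
    - 203.0.113.10        # any cluster user could claim this IP (the MitM risk)
---
# After: let the cloud provider allocate and guard the external IP
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
```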
Ephemeral image pull tokens (graduating)
Rather than storing static imagePullSecrets (long-lived credentials that rotate badly), v1.36 continues advancing the move to ephemeral Kubernetes Service Account tokens for authenticating image pulls. Short-lived, pod-scoped, auto-rotating. Less blast radius when a secret leaks.
Platform teams running private registries should plan to adopt this pattern — it will eventually become the default.
Your Pre-Upgrade Checklist
Run this before upgrading to v1.36:
Breaking changes (will fail):
- `grep -r "gitRepo" your-manifests/` — remove any gitRepo volumes
- Inventory and migrate off ingress-nginx if still in use
Deprecation warnings (won't fail yet, but need a plan):
- `kubectl get svc -A -o json | jq '.items[] | select(.spec.externalIPs != null) | .metadata'` — find services using externalIPs
- Schedule Ingress → Gateway API migration if not already started
Optimization opportunities:
- Identify dev/staging namespaces where `minReplicas: 0` would save cost
- Add readiness probes to any deployments that lack them (required for scale-to-zero)
- Review image pull secrets for transition to ephemeral SA tokens
Networking:
- If using SELinux-enforcing nodes, test pod startup times post-upgrade (SELinux fast labeling is now GA and default — generally faster, but worth validating your specific volumes)
The Bigger Pattern
Each Kubernetes release follows the same arc: alpha flags that sat for years finally graduate to default-on. Deprecated APIs that nobody cleaned up get removed. Security debt from the 2015–2018 era gets paid down.
v1.36 is a "quiet significance" release — no flashy headline features, but HPA scale-to-zero and Ingress NGINX's end-of-life together represent years of accumulated decisions finally landing at the same time.
The teams that stay ahead of this aren't doing anything heroic. They have a runbook, they run the deprecation checks before each upgrade, and they track the CHANGELOG. That's it.
How We Help Platform Teams
At RankEdge, we work primarily with content-heavy B2B SaaS teams on SEO and GEO — but the platform engineering problems we describe above are the same ones our clients' engineering teams face. We've seen what happens when content infrastructure (CDN configs, redirect chains, Core Web Vitals) gets the same "we'll fix it eventually" treatment as deprecated Kubernetes APIs.
If your engineering org is also thinking about content infrastructure and SEO — how your site's technical foundation affects AI citation rates and search visibility — that's exactly what we help with.
Kubernetes v1.36 releases April 22, 2026. The official sneak peek and release information page are the authoritative sources for what ships.