
TL;DR (Executive Action Summary)
- TLS cutover is a hard boundary (not a suggestion): From February 24, 2026, DigiCert will stop accepting public TLS certificate requests with validity greater than 199 days, and certificates issued from that date have a 199-day maximum validity. This is the practical cutover for many operators—renewal velocity increases immediately.
- The 200→100→47-day roadmap is already defined: The CA/Browser Forum Baseline Requirements set a phased reduction: 200 days from March 15, 2026, 100 days from March 15, 2027, and 47 days from March 15, 2029.
- CRA adds a compliance clock: CRA reporting rules require early warning within 24 hours, full notification within 72 hours, and defined final reporting windows for actively exploited vulnerabilities and severe incidents.
- Top hidden risk isn’t expiry: The systemic failure mode is trust anchor drift—roots/intermediates/cross-signing changes out of sync across EVSE, local controllers, and backend validation paths.
- First investment to protect uptime: System-led automation (ACME + inventory + staged rollout) plus edge continuity (local validation/caching, evidence logs, and time-sync governance).
Introduction: 2026 turns Plug & Charge into an operational system
In 2026, Plug & Charge (P&C) stops being a “set-and-forget” feature and becomes a system you must operate continuously.
The ISO 15118 trust plane (PKI + TLS + revocation + updates) is now governed by timelines that do not tolerate manual workflows.
To understand the system boundary—what ISO 15118 is responsible for vs what OCPP is responsible for—start with our companion piece:
ISO 15118 vs OCPP deployment reality in 2026.
The immediate pressure is TLS lifecycle compression. Operationally, you cannot “wait until March.”
DigiCert will stop accepting public TLS requests exceeding 199 days starting February 24, 2026,
and certificates issued from that day onward will have a 199-day maximum validity.
DigiCert also emphasizes a critical operational detail: the maximum allowed validity is governed by the issuance date, not when the order is placed.
At the same time, the EU Cyber Resilience Act (CRA) introduces a second clock: reporting rules require
24-hour early warning and 72-hour notification for actively exploited vulnerabilities and severe incidents impacting products with digital elements.
This guide focuses on architecture and risk controls for operating ISO 15118 certificates under these constraints.
2024–2026 Milestones & Required Actions (Text Gantt)
| Window | 2024 H2 | 2025 H1 | 2025 H2 | 2026 Feb 24 | 2026 Mar 15 | 2026 Sep 11 |
|---|---|---|---|---|---|---|
| External change | CA transition signals | Pilot automation | Trust anchor drills | DigiCert 199-day issuance begins | 200-day BR cap phase begins | CRA reporting obligations active (per guidance) |
| What to do | Inventory endpoints | ACME pilot + telemetry | Offline strategy + trust-store rollout | Freeze manual renewal paths | Full system-led renewals | Run CRA tabletop + evidence drills |
Operational note: February 24, 2026 is often the real cutover point because DigiCert’s issuance behavior changes on that date, ahead of the March 15 Baseline Requirements milestone.
Policy note: The phased lifetime reductions are defined in Baseline Requirements (200/100/47 days).
The Life Cycle Landscape: Provisioning → Operation → Renewal → Revocation
Lifecycle map (what you must be able to operate)
- OEM provisioning: Keys generated/injected; root of trust established (HSM/secure element).
- Contract enrollment: Contract certificates bound to user contracts (ecosystem-dependent).
- EVSE commissioning: Trust-store baselines, policies, and time-sync baselines established.
- Operational validation: TLS handshakes, chain building, revocation checking, policy enforcement.
- Renewal / re-issuance: Automation + staged rollout + rollback.
- Revocation / incident response: Compromise/mis-issuance/exploitation → revoke/rotate/recover.
- Recovery & reconciliation: Restore service while preserving auditability and billing integrity.
The underestimated failure point: Trust Anchor Drift
Most “mysterious P&C failures” in multi-OEM environments aren’t a single expired certificate—they’re
path validation failures caused by trust anchor drift:
- New roots/intermediates appear (multi-root reality).
- Cross-signing changes alter feasible chains.
- Backend trust stores update faster than EVSE/local controllers.
- Revocation artifacts go stale at the edge.
Treat trust anchor updates as a safety-critical change process:
- Versioned trust stores
- Canary rollouts
- Rollback plans
- Telemetry on validation failures by issuer/serial/path
- An explicit owner for “who updates what, when”
Cross-signing and path-building failures (2026 reality): In multi-root ISO 15118 ecosystems,
Plug & Charge often fails not because a certificate is invalid, but because the EVSE cannot build a valid
certificate path after cross-signing changes (new intermediates, bridge CAs, re-issued chains).
As more OEMs and PKI domains join, path complexity increases. If edge trust stores (EVSE/local controllers)
lag behind backend updates, TLS handshakes can fail even when backend certificates appear “valid” in isolation.
Figure 1 (Recommended Visual): Path Validation in Multi-Root ISO 15118
(Show V2G Root / OEM Root / Contract Root, intermediates, and cross-sign bridges.
Highlight where a newly cross-signed intermediate breaks path-building on EVSE if trust stores are not updated in sync.)
Core message: Most P&C outages blamed on “PKI” are actually path validation failures driven by cross-signing drift and unsynchronized trust stores.
ACME & Automation: Human-led vs System-led under 199/200-day lifetimes
Why manual renewal becomes a deterministic outage generator
Short lifetimes make renewals continuous. DigiCert’s move to 199 days from February 24, 2026
makes this operational immediately for many fleets. And the broader industry timeline is already defined:
200 days (from March 15, 2026), then 100 days, then 47 days.
For any fleet, renewal events scale as:
Renewal events per year ≈ N × (365 / L)
Where N is the number of TLS endpoints and L is certificate lifetime (days).
As L decreases, human-led renewal becomes mathematically incompatible with uptime targets.
Scenario (Board-level sizing)
For a CPO operating 5,000 endpoints, a 199-day lifetime implies:
Renewal events/year ≈ 5000 × (365 / 199) ≈ 9,171
At this scale, even a 1% human error rate translates to roughly
92 certificate-driven outages per year—before accounting for peak-hour impact,
SLA penalties, or cascading failures across a hub.
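The board-level sizing above can be reproduced with a short script. This is a minimal sketch of the same arithmetic; the endpoint count and 1% human error rate are the scenario's assumptions, not measured values.

```python
# Illustrative sizing model from the scenario above.
# N (endpoints), L (lifetime in days), and the 1% error rate
# are assumptions taken from the text, not measurements.

def renewal_events_per_year(n_endpoints: int, lifetime_days: int) -> float:
    """Renewal events/year ≈ N × (365 / L)."""
    return n_endpoints * (365 / lifetime_days)

def expected_failed_renewals(events: float, human_error_rate: float) -> float:
    """Expected certificate-driven outages/year at a per-event error rate."""
    return events * human_error_rate

events = renewal_events_per_year(5000, 199)        # ≈ 9,171 events/year
outages = expected_failed_renewals(events, 0.01)   # ≈ 92 failures/year
print(f"{events:.0f} renewals/year, ~{outages:.0f} expected failures")
```

Re-running the same model at the later 100-day and 47-day caps shows why the renewal load roughly doubles and then quadruples without any fleet growth.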
ACME in charging networks: what it should automate
ACME (Automated Certificate Management Environment) turns renewals into policy-driven operations for:
- EVSE ↔ backend TLS
- Local Controller / Edge Proxy TLS
- Site gateways and hub controllers
System-led workflow (architecture pattern)
- Inventory every endpoint (issuer, serial, chain, expiry, last rotation).
- Renew-before policy (renew at a fixed threshold, not “near expiry”).
- Hardware-backed keys where feasible; avoid exporting private keys.
- Staged rollout with health checks (handshake + authorization + session start).
- Automatic rollback on elevated failure rates.
- Evidence logs for every issuance/deploy (compliance-grade traceability).
Human-led vs System-led
- Human-led: Tickets, spreadsheets, late renewals, ambiguous ownership, risky emergency changes.
- System-led: Deterministic policies, automated issuance, controlled rollout, continuous telemetry, auditable evidence.
Revocation Checks: the “P&C Killer” (CRL vs OCSP, weak networks, and defensible policies)
Why OCSP/CRL fail in garages and depots
- Weak/intermittent LTE/5G
- Restricted egress (firewalls/captive portals)
- Latency-sensitive validation steps
- External dependencies (OCSP responders, CRL distribution points)
Result: EVSE can initiate a session but fails to complete revocation validation reliably.
CRL vs OCSP: practical tradeoffs
- CRL: heavier downloads, but cacheable and refreshable on schedule (good for edge continuity).
- OCSP: lightweight per request, but often requires live reachability at the weakest edge.
In 2026, the correct posture is layered:
- Scheduled CRL caching for resilience
- OCSP where connectivity is reliable
- Explicit policy for degraded conditions
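The layered posture above implies a freshness gate on cached CRLs. A minimal sketch, assuming illustrative freshness and grace windows (real values belong in documented policy):

```python
# Sketch of a CRL cache freshness gate for edge validation.
# The CachedCRL shape and both windows are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta

CRL_MAX_AGE = timedelta(hours=24)  # normal freshness window
CRL_GRACE = timedelta(hours=48)    # degraded-mode grace, evidence-logged

@dataclass
class CachedCRL:
    issuer: str
    fetched_at: datetime

def crl_status(crl: CachedCRL, now: datetime) -> str:
    """'fresh' | 'grace' (usable, must log evidence) | 'stale'."""
    age = now - crl.fetched_at
    if age <= CRL_MAX_AGE:
        return "fresh"
    if age <= CRL_MAX_AGE + CRL_GRACE:
        return "grace"
    return "stale"
```

Any session admitted under a "grace" status should emit an evidence log entry, since degraded decisions are exactly what auditors and incident reviews will ask about.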
Why “soft-fail” is becoming harder to defend
Historically, “soft-fail” (allow session if revocation checks time out) preserved availability.
In 2026, soft-fail becomes harder to justify because:
- Lifetimes are shorter (less tolerance for stale assumptions)
- CRA’s reporting clock forces stronger incident discipline and evidence trails
A defensible design requires explicit, documented policy:
- Hard-fail for public/high-risk environments
- Grace-with-evidence for closed fleets (limited window + compensating controls)
- Evidence logging for every degraded decision
Architectural mitigations (patterns, not product promises)
Pattern 1: Edge pre-validation + caching
- Cache CRLs with defined freshness windows
- Cache intermediates and validated chains
- Pre-fetch during “good connectivity” periods
Pattern 2: OCSP stapling (where feasible)
OCSP stapling shifts revocation proof delivery away from the weakest edge—reducing live dependency on CA infrastructure during session establishment.
Implementation note (embedded reality): In EVSE environments, confirm stapling-related extension support
in your embedded TLS stack and build configuration (e.g., mbedTLS, wolfSSL) and validate behavior across legacy hardware,
because feature completeness and memory/RTOS constraints vary.
Pattern 3: Multi-root trust governance
- Unified trust store update channel for multiple OEM anchors
- Canary updates + rollback when path-building errors spike
Pattern 4: Time Sync Governance (non-negotiable)
- NTP policy (or PTP where appropriate)
- Drift monitoring and alert thresholds
- Defined behavior when clocks are untrusted
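The drift thresholds above can be enforced with a small gate: certificate validity checks should not trust a clock whose measured NTP offset exceeds policy limits. Both thresholds below are illustrative assumptions, not recommended values.

```python
# Minimal clock-drift gate for Time Sync Governance.
# Threshold values are illustrative assumptions only.

DRIFT_ALERT_MS = 500       # raise an alert, keep validating
DRIFT_UNTRUSTED_MS = 5000  # treat local time as untrusted for validation

def clock_state(ntp_offset_ms: float) -> str:
    """'ok' | 'alert' | 'untrusted' per drift policy."""
    drift = abs(ntp_offset_ms)
    if drift >= DRIFT_UNTRUSTED_MS:
        return "untrusted"
    if drift >= DRIFT_ALERT_MS:
        return "alert"
    return "ok"
```

An "untrusted" clock is a defined-behavior case: the device should fall back to documented degraded-mode policy rather than silently passing or failing notBefore/notAfter checks.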
Offline Continuity: keeping Plug & Charge usable during edge-to-cloud disconnects
What offline continuity is (and is not)
Offline continuity is not “bypassing PKI.” It is controlled degradation that preserves:
- Integrity of keys and trust stores
- Auditability for billing and incident response
- Explicit limits on what can be validated locally (and for how long)
Local Controllers / Edge Proxies as availability primitives
- Maintain local trust caches (anchors/intermediates/CRLs)
- Enforce limited local authorization policies
- Buffer metering/logs for later reconciliation
- Reduce WAN blast radius by acting as the local endpoint for EVSE
Figure 2 (Recommended Visual): Edge Proxy as a Trust Cache in Weak-Network Sites
(Show EVSEs connecting to an on-site Edge Proxy/Local Controller. The proxy maintains cached trust anchors/intermediates,
scheduled CRL refresh, time-sync monitoring, and evidence logs; it buffers events to the cloud CSMS/PKI when uplink is unstable.)
Core message: Edge proxies reduce live dependency on external OCSP/CRL endpoints and enable controlled offline continuity without bypassing PKI.
CRA & VMP: from Sept 2026 reporting deadlines to an auditable operating model
CRA reporting rules: design to the 24h/72h clock
CRA reporting rules require manufacturers to report actively exploited vulnerabilities and severe incidents that impact
the security of products with digital elements:
- Early warning within 24 hours of becoming aware
- Full notification within 72 hours
- Final report within defined windows (depending on incident class)
A large-scale Plug & Charge disruption caused by mass revocation or a trust-anchor compromise may qualify
as a severe incident depending on impact and exploitation evidence.
Vulnerability Management Process (VMP): minimum viable capabilities
- Fleet truth: asset + version inventory (EVSE firmware, controller images, trust store versions).
- SBOM integration (dynamic): SBOM mapped to deployable artifacts; continuous correlation to vulnerability intelligence.
- VEX-driven exposure management: Maintain VEX statements to distinguish “present but not exploitable” from “exploitable in our deployment,” enabling credible scoping within the T+24h window.
- Why VEX matters under the 24-hour clock: SBOM tells you what is present; VEX helps you determine what is exploitable, reducing false alarms and preventing operations teams from chasing non-exploitable noise.
- Intake & triage: supplier advisories, CVEs, internal findings; prioritize exploitability + exposure.
- T+24h scoping workflow: SBOM + VEX + inventory correlation to identify affected populations; initial containment decisions; evidence capture.
- T+72h notification workflow: confirmed scope, mitigations, rollout/rollback plan, comms record.
- Final report workflow: validation evidence + root cause + prevention improvements after corrective measure availability.
- Patch cadence engineering: staged rollout, rollback plans, signed artifacts, verification gates.
- Chain of Trust enforcement: secure boot + secure firmware updates; signing keys protected in HSM/secure elements.
- Evidence-first logging: cert events, trust store changes, revocation failures, time sync health.
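The SBOM + VEX + inventory correlation described above can be sketched as a single scoping query. The record shapes here are illustrative assumptions, not a standard SBOM/VEX schema; real implementations would consume CycloneDX/SPDX and VEX documents.

```python
# Sketch of T+24h scoping: find devices whose SBOM contains the affected
# component, minus devices covered by a VEX "not exploitable" statement.
# Data shapes are illustrative assumptions, not a standard schema.

def affected_population(cve_component: str,
                        sbom_by_device: dict[str, set[str]],
                        vex_not_affected: set[str]) -> list[str]:
    """Device IDs that contain the component and are not VEX-waived."""
    return [dev for dev, components in sbom_by_device.items()
            if cve_component in components and dev not in vex_not_affected]
```

This is the query that turns "component X has a CVE" into an evidence-backed affected population inside the 24-hour window, instead of a fleet-wide fire drill.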
High-severity trust scenario: If revocation is triggered by a compromised root or issuing key,
treat it as a top-severity trust incident requiring immediate containment, fleet-wide trust-store actions,
and CRA-aligned reporting readiness depending on impact and exploitation evidence.
CRA Incident Response Countdown Checklist (Operational Template)
T+0 (Detection / Awareness)
- Freeze evidence: logs, cert events, trust store versions, time sync status
- Identify affected surfaces: EVSE firmware, local controllers, backend TLS endpoints
- Engage PKI provider / backend security contact
T+24h (Early warning readiness)
- Core objective: Use SBOM + VEX + fleet inventory to determine affected population and submit an evidence-backed early warning
- Decide containment: revoke/rotate, trust-store rollback, site isolation
- Draft early warning package: scope, mitigations underway, interim posture
T+72h (Full notification readiness)
- Confirm affected populations by region/site; provide remediation plan + rollout method
- Produce customer/operator comms and escalation record
Final report window
- Submit final report aligned to CRA requirements (timing depends on incident class)
- Post-fix validation evidence + lessons learned
Cost & Risk Quantification (Templates you can plug into your fleet)
Manual renewal labor cost model
Let:
- N = number of TLS endpoints (EVSE + controllers + gateways + managed backend nodes)
- L = certificate lifetime (days)
- t = human time per renewal (hours)
- c = fully loaded labor cost (USD/hour)
Cost_labor ≈ N × (365 / L) × t × c
Outage risk model (expiry or failed deploy)
Let:
- P_miss = probability of missed/failed renewal per cycle
- H_down = expected downtime hours per incident
- C_hour = hourly business impact (lost revenue, penalties, SLA credits)
Cost_outage ≈ P_miss × H_down × C_hour
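Both templates are trivial to encode so finance and operations can plug in their own fleet numbers. All inputs below are placeholders, not benchmarks.

```python
# Direct encoding of the two cost templates above.
# All input values are fleet-specific placeholders.

def labor_cost(n: int, lifetime_days: int, hours_per_renewal: float,
               usd_per_hour: float) -> float:
    """Cost_labor ≈ N × (365 / L) × t × c."""
    return n * (365 / lifetime_days) * hours_per_renewal * usd_per_hour

def outage_cost(p_miss: float, downtime_hours: float,
                usd_per_hour_impact: float) -> float:
    """Cost_outage ≈ P_miss × H_down × C_hour."""
    return p_miss * downtime_hours * usd_per_hour_impact
```

Comparing `labor_cost` at L = 365 versus L = 199 for the same fleet is usually the fastest way to justify the automation investment to a non-technical audience.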
Decision Guide: When Online Revocation Checks Fail (OCSP/CRL Timeout)
- Public site or closed fleet/depot?
- Public → prefer Hard-fail (or strictly controlled grace only with evidence + compensating controls)
- Fleet/depot → Grace-with-evidence may be acceptable for limited windows
- Is network reliability predictable?
- Yes → Online OCSP/CRL + monitoring
- No → Edge pre-validation + caching (CRL refresh windows, cached chains)
- Can you reduce online dependency at session time?
- Where feasible → adopt OCSP stapling pattern (push proof closer to the edge)
- Do you have evidence logging + Time Sync Governance?
- If not → fix these first; degraded-mode policies are hard to defend without them
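The decision guide above can be expressed as policy code so the degraded-mode behavior is testable rather than tribal knowledge. This is an illustrative sketch; the category names and the CRL-cache precondition are assumptions, and any real policy must be documented and evidence-logged.

```python
# Sketch of the revocation-timeout decision guide as policy code.
# Site categories and cache states are illustrative assumptions.

def revocation_timeout_policy(site_type: str, crl_cache: str) -> str:
    """Decide session handling when online OCSP/CRL checks time out.

    site_type: 'public' or 'fleet' (closed fleet/depot)
    crl_cache: cached CRL status, 'fresh' | 'grace' | 'stale'
    Returns: 'hard_fail' | 'grace_with_evidence'
    """
    if site_type == "public":
        # Public/high-risk environments: hard-fail by default.
        return "hard_fail"
    if crl_cache in ("fresh", "grace"):
        # Closed fleets: limited grace window, evidence-logged.
        return "grace_with_evidence"
    # No usable revocation data at all: fail even in closed fleets.
    return "hard_fail"
```

Encoding the policy this way also gives the CRA evidence trail a concrete artifact: every degraded decision maps to one auditable branch.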
Practical Responsibility Matrix (Boundaries that prevent outages)
| Role | Issuance | Validation | Reporting | Update Cadence |
|---|---|---|---|---|
| CPOs | TLS/identity strategy; enforce automated renewal; maintain endpoint inventory; plan for CA cutover behavior (199-day issuance from Feb 24 for DigiCert) | Define hard/soft-fail policy; revocation artifact freshness; Time Sync Governance (NTP/PTP, drift monitoring, alerts) | Operate incident playbooks; drive CRA-aligned reporting readiness (24h/72h/final) | Continuous expiry monitoring; trust-store refresh; emergency trust-anchor changes; time-sync audits |
| EVSE OEMs | Hardware-backed key storage; device identity posture; automation hooks; secure boot/update primitives | TLS posture; chain building; revocation behavior; trust-store management; secure boot + secure firmware update chain | Product vulnerability handling; advisories; remediation packages; support operator reporting with technical facts | Regular releases + emergency patches; defined support windows; key rotation playbooks |
| Backend / V2G PKI Providers | Contract ecosystem issuance (where in scope); CA/RA ops; issuance policy | Backend validation; OCSP/CRL availability; trust anchor governance | Provide incident/vulnerability facts; support CRA timeline evidence packages | Frequent policy/trust-anchor updates; OCSP/CRL resilience engineering; continuous monitoring |
Glossary
- PKI: Public Key Infrastructure (issuance, validation, trust anchors, revocation)
- ACME: Automated Certificate Management Environment (automated issuance/renewal)
- OCSP / CRL: Online Certificate Status Protocol / Certificate Revocation List
- OCSP Stapling: Server presents revocation proof to reduce live OCSP dependency
- Trust Anchors: Root/intermediate certificates your validators trust
- SBOM: Software Bill of Materials (component inventory for vulnerability scoping)
- VEX: Vulnerability Exploitability eXchange (exploitability status statements)
- TLS 1.3: Modern TLS profile; handshake + certificate validation remain latency-sensitive
- VMP: Vulnerability Management Process (intake, triage, patching, reporting, evidence)
Forward-Looking Risk: Crypto Agility and PQC Readiness
While 2026 is dominated by short TLS lifetimes and CRA reporting, charging infrastructures should begin evaluating
crypto-agility. With long-lived assets (vehicles and chargers), architectures should avoid hardware lock-in by ensuring
HSM/secure elements and embedded stacks can support future algorithm and certificate profile updates without requiring a hardware refresh.
FAQ
Can Plug & Charge work offline?
Partially—by design. Offline P&C is controlled degradation using local trust caching (anchors/intermediates/CRLs where feasible),
explicit grace policies, and buffered audit logs for reconciliation. It should not bypass PKI; it should reduce live cloud dependency
while preserving integrity and auditability.
How often do we need to renew certificates under 199/200-day lifetimes?
Plan for multiple renewal cycles per year per endpoint. For many operators, the operational cutover starts
February 24, 2026 because DigiCert will issue public TLS certificates with a maximum 199-day validity from that date.
At the broader ecosystem level, Baseline Requirements define a phased reduction to 200/100/47 days.
What triggers CRA reporting obligations?
CRA reporting rules require 24-hour early warning and 72-hour notification for actively exploited vulnerabilities and severe incidents,
plus final reporting windows. A large-scale P&C trust disruption (e.g., malicious revocation or validation compromise) may qualify depending
on impact and exploitation evidence; a CRA-ready VMP should support SBOM + VEX + fleet inventory scoping inside the first 24 hours.
