By Simon Pamplin, CTO, Certes
Let's be clear from the outset: post-quantum cryptography (PQC) is not about infrastructure, and it is not about upgrading your firewalls. It is about data.
PQC transformation is not a future roadmap item. It is an engineering problem that already sits on your desk.
If you own architecture in a bank or large enterprise, the pressure is coming from every direction at once. You are expected to prepare for post-quantum cryptography while supporting decades of legacy systems. You are enabling AI across environments which you do not fully control. And you are being asked to prove, with evidence, that sensitive data remains protected under all conditions.
Most organisations are trying to solve this with the wrong tools.
They are treating PQC as an infrastructure upgrade. In practice, that means large-scale change programmes, long timelines, and heavy dependency on systems that are already difficult to manage.
That approach is slow and expensive, and at the end of the project it still does not solve the original problem: how do you keep the data you are responsible for out of the wrong hands?
The Engineering Constraint Everyone Recognises
Ask any CTO what a full PQC migration involves and the answer is predictable. Thousands of applications. Legacy platforms that cannot be modified. Multiple clouds. Third-party dependencies. Limited change windows. And don't get me started on how on earth you secure the edge devices!
Now add the reality of active threats.
Attackers are already in your network, capturing encrypted traffic and storing it to decrypt later ('harvest now, decrypt later'). At the same time, most breaches are not sophisticated break-ins. They rely on stolen credentials, compromised user accounts and valid access paths.
This creates a hard constraint. You cannot re-architect everything fast enough to keep pace with both current threats and future decryption risk. And once you have finished all the changes and a new set of algorithms is approved, you have to start all over again.
So the question becomes practical: what can you change that actually scales?
Why PQC Fails as an Infrastructure Project
Treating PQC as a stack-wide upgrade introduces two problems.
First, timelines extend beyond what risk allows. Multi-year transformation programmes do not align with threats that are already active.
Second, dependency chains multiply. Every application, network segment, and cloud service becomes part of the critical path. And you may not control all the parts of that chain; in practice, you will not.
From an engineering standpoint, this is fragile.
We have spent years building layered controls across networks, identity systems, and cloud environments. Yet data is still exposed once those controls are bypassed. And they are bypassed regularly.
Legacy approaches continue to assume that if the environment is trusted, the data will be safe. That assumption is repeatedly proven wrong.
Changing the Unit of Protection
Our v7 platform is built around a different model.
Instead of trying to make the infrastructure quantum-ready, we focus on the target of all attacks: the data itself. We enforce protection directly on the data flows that matter. Each application flow becomes the control point, and each data flow is cryptographically segmented from every other, giving you the spin-off benefit of lateral-movement protection.
This is the key engineering shift: a change of mindset, a move from defensive to offensive security.
Protection is no longer tied to physical network boundaries or infrastructure components. It is applied at the point where data begins its journey, as it leaves the application or system, and it stays with that data regardless of where it goes.
Security is attached to the data, not the infrastructure. Wherever the data goes, the security goes with it, over networks you control and those you don't.
That removes a large part of the dependency problem.
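To make that concrete, here is a minimal sketch of the underlying idea in Python: encrypt at the origin with an authenticated cipher and bind the flow identity into the ciphertext, so the protection travels with the data wherever it is carried. This is illustrative only, not v7's actual mechanism; the flow naming is hypothetical, and a quantum-safe deployment would pair the AEAD with a PQC key-establishment step.

```python
# Minimal sketch: security attached to the data, not the transport.
# Uses the pyca/cryptography package; flow names are illustrative.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # per-flow key, owned by you
aead = AESGCM(key)

flow_id = b"payments->clearing"             # binds ciphertext to its flow
nonce = os.urandom(12)
ciphertext = aead.encrypt(nonce, b"account=A1;amount=100", flow_id)

# The ciphertext can now cross AWS, Azure, a partner WAN, anything:
# without the key and the matching flow binding, it is unusable.
plaintext = aead.decrypt(nonce, ciphertext, flow_id)   # succeeds
# aead.decrypt(nonce, ciphertext, b"another-flow")     # raises InvalidTag
```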
What This Looks Like in Practice
In v7, each protected flow operates as its own controlled boundary:
- Quantum-safe algorithms are applied to data in motion between application endpoints
- Policies define exactly what, where and when data can be accessed
- Enforcement happens transparently, inline, from core systems to the edge
- Keys are generated and owned by you, never exposed externally, and never transmitted in their entirety across the network
- Each data flow has its own key and its own rotation schedule, automated and rotated frequently, without adding operational overhead (see the sketch after this list)
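To ground the per-flow key model, here is a small standard-library Python sketch of HKDF-style derivation (RFC 5869) that gives every flow its own key and its own rotation epoch. The function names, the rotation interval, and the local master secret are assumptions made for illustration; v7's actual key management will differ.

```python
import hashlib
import hmac
import os
import time

ROTATION_INTERVAL = 3600  # hypothetical: rotate each flow key hourly

def current_epoch(interval: int = ROTATION_INTERVAL) -> int:
    """Rotation epoch that advances automatically with wall-clock time."""
    return int(time.time()) // interval

def derive_flow_key(master_secret: bytes, flow_id: str, epoch: int) -> bytes:
    """HKDF-style extract-and-expand of a per-flow, per-epoch key.
    Different flows, or the same flow in different epochs, yield
    unrelated keys, so one exposed key compromises exactly one flow
    for one rotation window."""
    salt = flow_id.encode() + epoch.to_bytes(8, "big")
    prk = hmac.new(salt, master_secret, hashlib.sha256).digest()    # extract
    return hmac.new(prk, b"flow-key\x01", hashlib.sha256).digest()  # expand

# Usage: each application flow gets its own key and rotation schedule.
master = os.urandom(32)  # stand-in for a key you generate and own
k_payments = derive_flow_key(master, "payments->clearing", current_epoch())
k_trading  = derive_flow_key(master, "trading->risk", current_epoch())
assert k_payments != k_trading   # cryptographic segmentation between flows
```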
This model gives you consistency across environments that are otherwise inconsistent by design.
Whether data is moving across AWS, Azure, GCP, private infrastructure, or edge locations, the enforcement model does not change. Control remains with the organisation, not the platform.
It also gives you crypto agility. As standards evolve, policies can be updated centrally and applied across flows without touching applications or redesigning networks.
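As a sketch of what centrally managed agility can look like, the snippet below keeps the algorithm choice in a policy registry rather than in application code. The flow IDs and the migration helper are hypothetical; the suite names borrow NIST's ML-KEM labels from FIPS 203. The point is that adopting a newly approved algorithm becomes one policy update applied across every flow.

```python
from typing import Dict

# Hypothetical central policy: the quantum-safe suite each flow uses today.
flow_policy: Dict[str, str] = {
    "payments->clearing": "ML-KEM-768+AES-256-GCM",
    "trading->risk":      "ML-KEM-768+AES-256-GCM",
}

def migrate_suite(old: str, new: str) -> None:
    """Crypto agility as a policy operation: one central update re-points
    every flow bound to the old suite, with no application change and no
    network redesign."""
    for flow, suite in flow_policy.items():
        if suite == old:
            flow_policy[flow] = new

# When a new standard is approved, update centrally and roll out:
migrate_suite("ML-KEM-768+AES-256-GCM", "ML-KEM-1024+AES-256-GCM")
print(flow_policy)   # every flow now references the new suite
```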
Protection That Holds Under Failure
Every architecture looks strong on paper when controls behave as expected.
The real test is what happens when they fail.
Endpoints are compromised. Credentials are misused. Cloud configurations drift. AI introduces new access paths that did not exist six months ago.
v7 is engineered with those conditions assumed.
Protection does not depend on perfect infrastructure. It ensures that data remains unusable unless policy explicitly allows it to be accessed.
That leads to concrete outcomes:
- Lateral movement is prevented
- Ransomware cannot convert access into leverage
- Sensitive data cannot be accidentally exposed to untrusted AI systems
- Compromised environments do not automatically lead to compromised data
Access, on its own, stops being enough.
Deploying Without Re-Architecture
None of this matters if it requires rebuilding the enterprise.
v7 is designed as a transparent overlay so it can be deployed into existing estates without disruption:
- No application changes
- No network redesign
- No dependency on identity system overhaul
- No downtime
You can start with a single critical flow (payments, trading, regulated data) and expand from there.
This is how PQC transformation becomes achievable. Not as a single, large programme, but as a controlled rollout aligned to business priorities.
Timelines shift from years to weeks.
Closing the Engineering Gap
From a CTO standpoint, the problem is clear.
There is a gap between what current architectures can guarantee and what regulators, boards, and threat conditions now demand.
That gap sits at the data layer.
v7 is built to close it by enforcing protection directly where it matters: at the flow level, across any application, any infrastructure, anywhere.
It allows you to move forward on PQC, support AI adoption, and meet regulatory compliance expectations without waiting for a full rebuild of your environment.
If you are responsible for making this work in practice, not in theory, this is the problem we set out to solve.
My team and I are always open to a deeper technical discussion on how this model fits into real-world architectures.