On July 30th, 2022, a paper was published revealing a devastating attack on the Post-Quantum Algorithm (PQA) SIKE. The algorithm was well-respected as a candidate for further study in the fourth round of the NIST PQA process. It was particularly favoured by large technology companies such as AWS, Cloudflare and Google, who were experimenting with and deploying SIKE on their cloud infrastructure. The attack showed that keys for the most secure instances of SIKE could be recovered in under a day on a single core, says Daniel Shiu, chief cryptographer at Arqit.
This was the third piece of cryptanalysis in 2022 to force a reassessment of the security of NIST candidates (after Beullens' "Breaking Rainbow takes a weekend on a laptop" and the Israeli Defence Force's analysis of lattice algorithms such as Kyber, Saber, and Dilithium). This boom in cryptanalysis (including earlier results on GeMSS and SPHINCS+) might be expected as the list of candidates thins out and more attention is focussed on the survivors, but it is a sign that, five years after the process was started, we still do not have a mature understanding of the security of PQAs. In the past, similar experiences with Public Key Cryptography (PKC) have seen cryptographers scrambling to increase key sizes that were supposed to have been secure for millions of years, while dealing with legacy insecure systems vulnerable to downgrade attacks such as DROWN and Logjam. Why, then, are we rushing into this redesign of Internet cryptography?
The wrong answer to the wrong question
The NIST PQA process arises from a desire for “drop-in” replacements for existing key establishment and authentication methods on the Internet. This admirable goal of migrating to quantum-safe cryptography with the minimum disruption to users is proving hard to achieve. The PQA methods have greater resource requirements than their classical predecessors, whether in terms of bandwidth, computation, size of codebase or combinations of these.
The existing Internet protocols are proving to be a poor fit for the new algorithms, so that major changes to Internet communication are being considered. The SIKE attack is particularly painful from this point of view as it was the PQA with the best bandwidth properties by far. Beyond the world of standards, there are also the challenges of securely implementing the complex new ideas in code, of integrating that code into products and moving users away from legacy products. Major projects to manage the transition over the coming decades are now being started.
All this effort is to keep as close as possible to the dream of drop-in replacement, but it is worth taking a step back and considering why the existing methods were first adopted and how they are already evolving due to a changing Internet. PKC was introduced to the Internet of the early 90s, when much of the traffic was still based around enterprise mainframes, connectivity was sporadic, and services might be unavailable for days or weeks at a time.
The Public Key Infrastructure (PKI) of certificates allowed key establishment and authentication to be passively mediated offline so that Certifying Authorities did not have to be constantly available. Since then, the Internet has evolved through the AOL era of PCs and dial up, the laptop and Wi-Fi era, the smart phone and server farm era and is now moving towards the Internet of things. The core Internet (a concept that was meaningless in the 90s) assures us of very reliable connectivity and availability, while lightweight edge devices (another 21st century Internet concept) push us towards reducing bandwidth, computational burden, and memory footprint.
Major Internet service providers are the most acutely aware of these changes and have been driving a reduction in the use of PKC, using federated authentication methods (such as when we are invited to sign in using Google or Facebook) and session tickets to refresh keys without invoking PKC. The TLS 1.3 standard is especially forward-looking in its compatibility with keys established without PKC. These methods are also quantum-safe when founded on the well-established and assured methods of symmetric cryptography.
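To make the idea of refreshing keys without PKC concrete, here is a minimal sketch of deriving fresh session keys from a previously shared secret using only symmetric primitives, in the spirit of session-ticket resumption. It uses an HKDF-style extract-and-expand construction (RFC 5869) built on HMAC; the function names and the `"session key refresh"` label are illustrative, not taken from any particular protocol.

```python
import hmac, hashlib, secrets

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # RFC 5869 extract step: PRK = HMAC-Hash(salt, input keying material)
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    # RFC 5869 expand step: stretch the PRK into `length` bytes of key material
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def refresh_session_key(resumption_secret: bytes, nonce: bytes) -> bytes:
    # Derive a fresh session key from the shared secret and a public nonce --
    # no public-key operations are involved, so the step is quantum-safe.
    prk = hkdf_extract(nonce, resumption_secret)
    return hkdf_expand(prk, b"session key refresh", 32)

# Both endpoints already hold the resumption secret; the nonce can travel in clear.
secret = secrets.token_bytes(32)
nonce = secrets.token_bytes(16)
client_key = refresh_session_key(secret, nonce)
server_key = refresh_session_key(secret, nonce)
assert client_key == server_key
```

Each refresh with a new nonce yields an unrelated key, so long-lived connections can rotate keys cheaply without ever touching a public-key operation.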
Stronger, simpler encryption
Symmetric cryptography is more robust than PKC and massively more efficient in terms of bandwidth, computation, and memory footprint. These desiderata for the smartphone and server-farm era of the Internet become critical requirements when we start to think of IoT. Although primarily thought of as a means of bulk encryption requiring shared secret keys, symmetric cryptography can also provide authentication using Hash-based Message Authentication Codes (HMACs) and key establishment using methods such as Kerberos.
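The authentication point can be illustrated in a few lines. The sketch below, using only the Python standard library, shows a sender tagging a message with an HMAC and a receiver verifying it with a constant-time comparison; the message content and function names are made up for the example.

```python
import hmac, hashlib, secrets

def tag_message(key: bytes, message: bytes) -> bytes:
    # Sender computes an HMAC-SHA256 authentication tag over the message.
    return hmac.new(key, message, hashlib.sha256).digest()

def verify_message(key: bytes, message: bytes, tag: bytes) -> bool:
    # Receiver recomputes the tag and compares in constant time,
    # which guards against timing side channels.
    return hmac.compare_digest(tag_message(key, message), tag)

key = secrets.token_bytes(32)        # secret shared between the two parties
msg = b"telemetry reading: 21.5C"
tag = tag_message(key, msg)

assert verify_message(key, msg, tag)                         # genuine message accepted
assert not verify_message(key, b"tampered reading", tag)     # forgery rejected
```

Only a party holding the shared key can produce a valid tag, giving message authentication with a fraction of the cost of a public-key signature.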
Symmetric key establishment was not thought a good solution to the unreliable Internet of the 90s due to the need for reliable connectivity to and availability of a Key Distribution Centre. In the 21st century, these concerns go away. In particular, the era of Cloud computing allows us to decentralise Key Management Services and have extremely high availability and reduced latency at a global scale. Methods can be added to split trust away from the mediation services and provide end-to-end security. The methods of symmetric key establishment are robust, efficient, quantum-safe and integrate well with existing standards such as TLS and IPsec.
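To show the shape of KDC-mediated key establishment, here is a deliberately simplified toy sketch: the KDC shares a long-term key with each enrolled party, mints a session key, and sends each party a copy wrapped under its own long-term key. The XOR-with-HMAC-keystream wrapping below is purely illustrative; real systems such as Kerberos wrap keys with authenticated encryption (e.g. AES-based modes), and all names here are invented for the example.

```python
import hmac, hashlib, secrets

def keystream(key: bytes, context: bytes, length: int) -> bytes:
    # Toy keystream derived with HMAC-SHA256 over a context string and counter.
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(key, context + counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def wrap(key: bytes, session_key: bytes, context: bytes) -> bytes:
    # XOR the session key with a key- and context-dependent keystream.
    return bytes(a ^ b for a, b in zip(session_key, keystream(key, context, len(session_key))))

unwrap = wrap  # XOR wrapping is its own inverse

# The KDC shares a long-term key with each enrolled party.
k_alice = secrets.token_bytes(32)
k_bob = secrets.token_bytes(32)

# The KDC mints a session key and wraps one copy for each party.
session_key = secrets.token_bytes(32)
ticket_for_alice = wrap(k_alice, session_key, b"alice|bob|session-1")
ticket_for_bob = wrap(k_bob, session_key, b"alice|bob|session-1")

# Each party unwraps with its own long-term key and recovers the same session key.
assert unwrap(k_alice, ticket_for_alice, b"alice|bob|session-1") == session_key
assert unwrap(k_bob, ticket_for_bob, b"alice|bob|session-1") == session_key
```

Every step is a symmetric operation, so availability of the KDC, not the hardness of a mathematical problem, is the operational requirement — which is exactly the constraint that cloud-scale, highly available key management now removes.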
Demonstrations with smart cities and unmanned vehicles show that systems can be transitioned to these methods today, quickly and with minimal disruption to existing services and users. New IoT deployments become much easier to roll out using MQTT without the burden of PKI. The savings on energy consumption and memory requirements drive down costs and extend lifetimes of devices. These benefits in turn broaden use cases and increase the value of IoT approaches. All that is needed is the willingness to move away from the misplaced and unachievable dream of finding a new, drop-in way of solving the problems of the Internet of the 90s.
The author is Daniel Shiu, chief cryptographer at Arqit.