How can we establish trust in the technologies we offer as providers? This question is indisputably a key concern that merits attention, yet it is rarely discussed by cybersecurity solution companies and developers. Let’s take a closer look at the issue to better understand its strategic implications.
Understanding the general context
Recent events have shown that security solutions come under fire as soon as the slightest doubt is raised about their effectiveness or reliability. For example, the Snowden affair revealed to the world the existence of the ANT catalogue, which lists implants and other backdoors placed in network security solutions to protect and defend the interests of the United States. While the existence of such tools may come as no surprise, the fact that this information is now publicly disclosed changes the situation.
Of course, these backdoors could have been clumsily put in place for technical reasons or be linked to 0-day vulnerabilities. Many of the providers involved claimed that they did not deliberately weaken their solutions. However, beyond the potential impact on national sovereignty, backdoors can have other dramatic consequences. We saw the disastrous effects of the leak revealed by the Shadow Brokers, which exposed several MS Windows vulnerabilities used by the NSA as potential backdoors. WannaCry, NotPetya and, more recently, the Bad Rabbit ransomware were able to spread quickly because of such flaws.
This situation brings to light a major challenge that cybersecurity solution providers are facing. Our technologies manipulate and inspect sensitive files, process and store personal data, encrypt confidential information, access resources whose use is regulated, manage digital identities, analyse traffic and behaviour, and more. How can we guarantee our customers and our ecosystem that these operations are reliable? How can we respect national sovereignty amid today’s international political tensions? We all know that the digital economy can only flourish in an environment of trust. Many of these questions remain unanswered.
For network security solution providers, this question is particularly vital given that traffic encryption is one of the pillars of a trusted digital economy. According to Gartner, 80% of companies’ web traffic will be encrypted by 2019, which is a good thing. However, it also means that a growing number of attacks by malicious programs (including ransomware) will use HTTPS to hide the initial infection and take control of communications. Faced with this situation, Gartner recommends that companies and organisations formalise a multi-year plan to implement HTTPS decryption solutions and an inspection programme. This SSL inspection technique is based on the man-in-the-middle method: it deliberately introduces a break point into otherwise secure communications. A weakness in the products carrying out SSL decryption and inspection could then cause the entire chain of trust to collapse.
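The break in the chain of trust is easy to illustrate. With SSL inspection in place, the certificate the client actually receives is issued by the inspection proxy’s own CA rather than by the origin server’s public CA, so its fingerprint no longer matches what the client expects. A minimal sketch (the certificate bytes below are hypothetical stand-ins, not real DER-encoded certificates):

```python
import hashlib

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a certificate, as browsers display it."""
    return hashlib.sha256(cert_der).hexdigest()

# Hypothetical stand-ins for DER-encoded certificates.
origin_cert = b"CN=example.com, issued by a public CA"
proxy_cert = b"CN=example.com, re-signed by the corporate inspection CA"

# What a pinning client expects vs. what the inspecting proxy presents.
pinned = fingerprint(origin_cert)
presented = fingerprint(proxy_cert)

# The mismatch is exactly how certificate pinning detects interception --
# and why the inspection device itself becomes a single point of trust.
print(pinned != presented)
```

This is why a vulnerability in the inspection device is so serious: every client has been configured to trust its CA, so a compromise of that one component silently undermines every “secure” session it carries.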
In order to find a way through this, the first port of call is usually the tests carried out by external companies that specialise in security technology assessment. They are quite capable of evaluating the effectiveness of protection mechanisms. However, these tests — which can also prove very expensive — do not really focus on the security of the design itself.
We could also rely on the framework defined by the Common Criteria, which has been adopted by 26 countries. Even so, the provider is the one that defines the scope of the evaluation, called the security target, and this can be limited to a small part of the audited software. Unfortunately, only certain countries assess the importance and relevance of this security target. Moreover, the number of Common Criteria assurance levels (EAL1 to EAL7) makes them difficult for customers to follow.
There are also bug bounty programs, static code analysis software, or independent audits to detect and fix flaws. These initiatives effectively improve technology security, sometimes as early as the design phase; however, it is difficult to present them as a guarantee to users.
Lastly, official certifications play an important role. For example, in France, ANSSI (the French national information security agency) evaluates the level of reliability of security products using a specific qualification framework, which is an extension of Common Criteria principles. This framework defines three qualification levels based on predefined security targets, which makes them easier for customers to understand. Depending on the qualification level, an independent code audit is performed on components that are essential to security, such as encryption. Potential flaws are also evaluated, along with the physical development environment. This method provides proof that products are robust and that there are no vulnerabilities that could open up a backdoor.
A general framework is needed
The fact that this qualification framework is only recognised in France poses a problem. For example, Germany and the United Kingdom each have their own framework, from the BSI (federal office for information security) and the NCSC (National Cyber Security Centre) respectively. The current situation is therefore neither scalable nor financially acceptable for most providers, since they would need to be certified in each country. In order to create a single digital market in Europe that maintains confidence and ensures European sovereignty, we need to implement certifications that are recognised by all European countries. The European Commission seems to have understood this message, as it has recently launched an initiative to create a general certification framework in Europe. This measure will constitute a major step forward, provided that it is based on the experience and evaluation criteria of countries that already know what they are doing and does not weaken requirements to accommodate the laggards.
The light at the end of the tunnel
In the end, a framework that instils trust in security technologies will inevitably be developed through better collaboration and cooperation among all stakeholders in the cyber ecosystem. Ongoing exchange between the public and private sectors, the creation of alliances among cybersecurity solution providers, and customer involvement in the development process (i.e. collaborative design) will undoubtedly increase the reliability and effectiveness of protection technologies.