In the past decade, artificial intelligence (AI) has gone from a staple plot device of classic science fiction films to one of the most successful new technologies – and one of the most popular catchwords across industries. In cybersecurity, however, the advances made by AI have been comparatively mixed and often relatively inefficient.
Today, AI is being used or planned in just about every industry – manufacturing, agriculture, healthcare, transportation, banking, finance, retail. It is the leading technology trend of our time and can be found everywhere, from voice-activated consumer devices to factory robots. We know this because many of the companies in these areas advertise their use of AI. And what looks more modern than the announcement of a new AI-based project? AI as a marketing tool is just as important as its efficiency for products and services.
As in other areas, AI has also proven to be a driver of innovation in cybersecurity over the past five years. However, progress here has been comparatively mixed and often relatively inefficient. For example, cybersecurity companies still rely primarily on systems without AI support to identify weaknesses in their codebases or sophisticated attackers in their networks.
This lack of progress is due to a culture of secrecy combined with the empty marketing promises that shape the use of AI in the cybersecurity industry. This stands in stark contrast to other fields of AI application, such as computer vision, speech recognition or natural language understanding, where companies have created a common starting point in the form of public benchmarks, conferences and workshops. This is how innovations are shared and progress is driven.
The prevailing culture of secrecy and marketing hype in cybersecurity has significantly hampered AI development in IT security. On the one hand, companies doing their own AI research are discouraged from sharing their findings because they know their competitors wouldn’t. On the other hand, the lack of transparency concerning AI technologies enables free riders to position themselves in the market with inefficient AI systems.
International technology giants like Google, Amazon and Facebook are much more open about their AI research. The main reason for this is their need for top talent, including the best AI researchers from academia. These people often only accept a job if they can continue to publish their work, which pushes the companies toward greater openness about their AI research. In the security sector, by contrast, AI is treated as a closely guarded secret. Numerous cybersecurity companies claim to work with AI when, in reality, they are applying little more than elementary statistics.
These companies try to create the impression that a secret algorithm is at work behind closed doors, hoping to distract from what is actually happening behind the scenes. There is also a degree of obfuscation and even condescension. Many security companies claim to use AI and machine learning, but when asked about the basics, one often hears answers like: “You wouldn’t understand that. Just trust us. It’s too complex to explain. We can’t reveal that because our competitors would then copy us.” This defensive stance not only damages the credibility of the security industry as a whole; it also makes real AI innovators suffer from the vortex created by these dubious actors.
When cybersecurity firms shy away from disclosing details about their AI technology, it becomes more difficult to distinguish genuine innovations from empty marketing promises. To make real progress in using AI to solve cybersecurity problems, like spotting stealth attacks on networks or lousy code in software supply chains, cybersecurity AI innovators need to embrace the openness and scientific culture of the general AI community.
Cybersecurity Must Not Be A Non-Transparent Market
This narrow-minded way of thinking must come to an end. Otherwise, our industry will become an opaque market in which the lowest-quality products succeed because they cost less to develop, damaging the reputation of cybersecurity in general and that of AI in cybersecurity in particular. Smart regulatory requirements could be one way to steer how providers communicate about their solutions and their use of AI into uniform channels.
Another option would be for companies to take the proactive step of opening up themselves. The more open providers become, the more they will inspire other providers to follow suit and give buyers more specific information – and this would, at the same time, create an incentive for providers to develop new methods for innovating AI applications in cybersecurity.
This is what drives Sophos and its team of data scientists. Sophos is active in the cybersecurity AI space with several initiatives to encourage the community to be more open and less hype-driven. These include a commitment to publishing scientific articles on the AI systems used in its products, providing broad, high-quality benchmarks to other research groups from the private and academic sectors, and disclosing the sources of its technologies to the general research community.