Technology & Innovation

Putting machine learning to work on your cyber-security front line

July 10, 2017

Global

Ophir Bleiberg

VP, Head of Emerging Products, Research and Product Management

Ophir Bleiberg leads the charter on emerging products, including Imperva CounterBreach, and is also responsible for SecureSphere product management and data security research. He has held several positions of increasing responsibility at Imperva, starting as a system architect, then as director of the server team, and later as senior director of the application security, management and infrastructure team. Prior to Imperva he held management positions at Personeta as an R&D manager and group leader of application infrastructure. Ophir also served in the Israel Defense Forces. He is a graduate of Tel Aviv University with a degree in computer science and computational biology.

Cyber-security is a top concern for all businesses, with the annual global cost of cyber-crime predicted to reach US$6tn by 2021, up from US$3tn in 2015.

The British Chambers of Commerce published a report on the subject as recently as this month. This increase in malicious activity targeting business data runs parallel to a global shortage of qualified and talented security staff to fill essential cyber-security positions.

Any attempt to highlight weaknesses or irregularities in an organisation’s cyber defences is exceptionally time consuming and requires staff who combine analytical business-intelligence skills with a technical understanding of attacks on data. There are three layers that security professionals should be concerned with:

  • Monitoring and visibility: Solutions that allow you to see what’s going on and monitor who is accessing data in your network.
  • Policy layer: Having someone define what is okay and not okay in an organisation and putting the technology in place to enforce these policies.
  • Anomaly review: The third and most time-consuming layer is identifying behaviour that “slips through the cracks”, meaning activity that does not directly violate an explicit company policy. Alternatively, if policies are so strict that they are constantly violated just to keep the business running smoothly, someone needs to review those violations and check their legitimacy (a simplified sketch of this follows the list).
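
To make the second and third layers concrete, here is a minimal, hypothetical sketch in Python. The policy rules, event fields and thresholds are invented for illustration and are not drawn from any particular product: explicit policy violations are blocked outright, while events that satisfy policy but still look unusual are queued for human review.

    # Hypothetical sketch: enforce explicit access policies, then queue
    # "grey area" events (allowed by policy, but unusual) for human review.

    ALLOWED_ROLES = {"customer_db": {"billing_app", "dba"}}   # assumed policy
    OFFICE_HOURS = range(7, 20)                               # assumed policy

    def evaluate(event):
        """Return 'block', 'review' or 'allow' for a single access event."""
        allowed = ALLOWED_ROLES.get(event["resource"], set())
        if event["role"] not in allowed:
            return "block"                       # explicit policy violation
        if event["hour"] not in OFFICE_HOURS or event["rows_read"] > 10_000:
            return "review"                      # passes policy, but looks unusual
        return "allow"

    events = [
        {"role": "billing_app", "resource": "customer_db", "hour": 3, "rows_read": 50_000},
        {"role": "intern", "resource": "customer_db", "hour": 11, "rows_read": 10},
        {"role": "dba", "resource": "customer_db", "hour": 10, "rows_read": 200},
    ]
    for e in events:
        print(evaluate(e), "-", e["role"])
    # prints: review - billing_app, block - intern, allow - dba

In practice, that “review” queue is exactly the time sink described in the third layer, and it is where machine learning can take over.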

Yet a study by a jobs board suggests that almost every country is in dire need of more cyber-security professionals, or of a viable alternative that can fill the gap. Recent technological developments might provide a more structured response to this ongoing problem.

Working with the machines

There are very few areas of technological development that have created as much excitement as machine learning: a branch of AI (artificial intelligence) that gives computers the ability to learn without being explicitly programmed. Machine learning is at the forefront of technological innovation in a variety of industries: self-driving cars, social media channels scanning for hate speech and even Amazon’s purchase suggestions all use the technology.

Luckily, machine learning techniques can be put to work to address all three of these layers effectively. What’s more, machine learning is often more dependable than its human counterpart, and it frees highly skilled staff to focus their attention on more strategic work.

Machine learning products are programmed to learn as much as they can about any given situation, so a properly trained piece of machine learning software can perform the same preventative and analytical security measures as a member of staff. However, machine learning requires “test driving”. Just as self-driving cars need to be trained over many different roads and scenarios before they are deemed safe and usable, so do security machine learning algorithms. The most reliable algorithms have been trained over multiple data sets and scenarios, and preferably across multiple customers, before they become fully effective in the long term.
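
As a rough illustration of what that training looks like, the sketch below fits a generic anomaly detector (scikit-learn’s IsolationForest, standing in here for whatever proprietary algorithm a vendor actually uses) to simple numerical features derived from access logs. The feature choices and the synthetic data are assumptions made purely for the example.

    # Illustrative only: train a generic anomaly detector on access-log features.
    # Features per event (all assumed): hour of day, rows read, tables touched.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Synthetic "normal" behaviour: daytime access, modest row counts, few tables.
    normal = np.column_stack([
        rng.integers(8, 18, 5000),          # hour of day
        rng.poisson(200, 5000),             # rows read
        rng.integers(1, 4, 5000),           # tables touched
    ])

    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    # Score new events: -1 means anomalous, 1 means consistent with the baseline.
    new_events = np.array([
        [10, 180, 2],        # ordinary daytime query
        [3, 90000, 40],      # 3am bulk read across many tables
    ])
    print(model.predict(new_events))        # expected: [ 1 -1], the bulk read is flagged

Once trained on enough history, the same prediction step runs continuously over new events, surfacing only the handful that merit an analyst’s attention.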

Threats from the inside

For example, let’s look at the issue of insider threats. The most worrying cases involve people who already work with data inside the organisation attempting to compromise it, mostly for financial gain but in some cases for politically charged motives.

There are specific machine learning products that can detect threats at a much greater speed than human agents and can help to prevent potentially devastating consequences. One such example comes from a customer in the transport industry, who detected a pattern of unusual access to a data set. This turned out to be a person attempting to masquerade as an application to access a database only ever accessed by machines. When this was brought to the customer’s attention, it became apparent that the data set in question was only meant to be accessed by federal authorities, and most likely contained personal (and potentially very sensitive) information about users.

Less malicious threats have also been identified; these are referred to as careless threats. One such example is an endpoint that was detected scanning entire file systems from the central system. At first glance this looked like a malicious insider, but it turned out that the employee in question had backup software running on their laptop to back up personal photos. It is worth noting, however, that even though the employee had no malicious intent, the net effect of their action was to back up company data to storage that was not subject to company policy or standard enterprise security measures such as password complexity constraints, multi-factor authentication or data purge policies. Ultimately, the data was put at risk.
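
The same idea can be reduced to a very simple baseline comparison for cases like the file-scanning laptop above. In the sketch below the endpoint names and counts are invented; each endpoint’s latest daily file-access count is compared against its own history, and large deviations are flagged for review regardless of whether the cause turns out to be malicious or merely careless.

    # Hypothetical sketch: flag endpoints whose daily file-access volume deviates
    # sharply from their own historical baseline (mean plus 3 standard deviations).
    from statistics import mean, stdev

    history = {                  # files accessed per day, per endpoint (assumed data)
        "laptop-042": [120, 135, 110, 128, 140],
        "laptop-077": [90, 95, 88, 92, 30000],   # sudden full-disk scan or backup job
    }

    def flag_anomalies(history, sigma=3.0):
        flagged = []
        for endpoint, counts in history.items():
            baseline, today = counts[:-1], counts[-1]
            threshold = mean(baseline) + sigma * stdev(baseline)
            if today > threshold:
                flagged.append((endpoint, today, round(threshold, 1)))
        return flagged

    print(flag_anomalies(history))
    # prints: [('laptop-077', 30000, 100.2)], queued for an analyst to investigate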

The ability to sift through and analyse vast volumes of data is what makes machine learning so relevant for cyber-security professionals. With cyber-attacks highly likely to increase in both number and ferocity, and the security skills shortage showing no signs of disappearing, companies need to turn to alternatives to make sure they avoid the cataclysmic data breaches we have seen with alarming frequency in recent years.

 

The views and opinions expressed in this article are those of the authors and do not necessarily reflect the views of The Economist Intelligence Unit Limited (EIU) or any other member of The Economist Group. The Economist Group (including the EIU) cannot accept any responsibility or liability for reliance by any person on this article or any of the information, opinions or conclusions set out in the article.
