Addressing Security Vulnerabilities in Infrastructure

With dozens of breaches and millions of people affected, 2017 witnessed a historic amount of hacking. The year was stained with numerous incidents, including WannaCry, Petya and Cloudbleed. Of these many cases, the Equifax data breach can be crowned the most significant hack of the year, having exposed the personal data of nearly 148 million people.

Late last year, we found out that Uber was hacked in 2016 – an incident that compromised the information of 57 million customers. Uber responded by paying a ransom of $100,000 to the hackers – and tried to keep it quiet. The damage to the Equifax and Uber brands will be difficult to calculate, but some estimates put it in the billions of dollars.

Many businesses are finding that their software infrastructure becomes more challenging to secure each year. Some organizations have turned to purchasing cyber security insurance to mitigate their financial losses from this trend. PwC estimates that by 2020, businesses will spend $7.5 billion on cyber security insurance.

Why?

The mission to secure outward-facing software infrastructure has become incredibly chaotic, thanks to the following obstacles: the proliferation of open source, poor accumulation of institutional software memory, unknown software components delivered in third-party binaries, and the low priority placed on engineering debt.

Proliferation of Open Source Code

More than 90 percent of the software written these days integrates open source code. Such code is used in operating systems, network platforms and applications. This trend will only continue to grow because, by leveraging open source, developers can lower assembly costs and quickly add innovations.

Whether software code is proprietary or open source, it harbors security vulnerabilities. Because of its transparency, open source code tends to be better engineered than a comparable piece of proprietary code. And thanks to its flexibility, open source code is used extensively. This means that a security vulnerability in a piece of open source code is likely to exist across a multitude of applications and platforms. Consequently, open source software vulnerabilities become low-hanging fruit for hackers to target and attack.

Poor Accumulation of Institutional Software Memory

In just a few years, the makeup of an organization’s total computing system – through bug fixes and functionality additions – becomes fairly opaque to those who must manage and secure it. Even systems whose software additions were assembled, integrated and documented using industry best practices are challenging to manage and secure effectively, due to a shortage of accumulated institutional software memory.

While organizations benefit from purchasing code from custom software development firms, those firms’ team makeup changes over time, and some custom software vendors and ISVs go out of business. In-house engineering teams experience composition changes as well. So, even though organizations may own all of the source code for their platforms, years of software projects accumulated from multiple sources make it a painful and time-consuming endeavor to find where all of the source code resides, let alone understand it.

These conditions leave almost every computing platform with a very poor accounting of, and little insight into, its accumulated legacy code base.

Unknown Software Components Delivered in Third-Party Binaries

Most of the custom software in today’s enterprise is sourced externally or contains code from third-party vendors that is built using open source components. By sourcing third-party code instead of developing software on their own, enterprises lower their overall development costs and quickly add innovative capabilities to remain competitive. Yet it is nearly impossible to know what open source code elements reside in acquired off-the-shelf software.

Procuring software in this manner increases efficiency because it saves months or years of development time. However, this code is almost always delivered in binary format. Though this delivery protects the third-party development teams’ intellectual property, it makes it almost impossible to accurately account for all of the open source components that reside in the binaries provided. The problem is compounded when an enterprise platform is updated by different software vendors over extended periods of time and integrated with off-the-shelf applications.

Low-Level Priority for Engineering Debt

Every business, non-profit and government organization looks to increase its productivity. Software development teams are given a primary goal of increasing functionality, a secondary goal of stability and a tertiary goal of scalability. As engineering teams develop new revs of software, they are alerted to potential security vulnerabilities that need to be patched. Unfortunately, the software development world tends to give this obligation a very low priority. This lack of urgency may push patches to a later rev, with real-time patch administration occurring only infrequently. As a result, known security vulnerabilities can go unpatched for significant periods of time, further exacerbating engineering debt and a company’s exposure.

Businesses and organizations must begin to recognize that maintaining the integrity of their current computing infrastructure is just as critical as adding new functionality and capabilities. Given all of the issues raised about code and documentation accuracy, there is one way businesses can begin to accurately understand their existing code base – binary scanning.

Actually See Into Your Code

Newer binary code scanners evaluate the raw binary to positively identify which open source components, and which versions, are in the code. The scanners then compare their findings against established, frequently updated databases of known security vulnerabilities. Binary scanners can examine library functions or other software delivered exclusively in binary format, without disassembly.

To do all of this, the newer binary code scanners forgo reverse engineering and look for code fingerprints that enable them to accurately catalog what code has actually been employed. In addition to finding security vulnerabilities, these scanners provide an accurate accounting of an organization’s code base. They can also be used to examine new code.
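
To make the fingerprint-matching idea concrete, the following is a minimal Python sketch, not a description of any particular vendor’s product. The fingerprint catalog (KNOWN_COMPONENT_FINGERPRINTS), the component and CVE entries in it, and the binary file name (vendor_app.bin) are hypothetical placeholders; commercial scanners extract far richer signatures (function-level hashes, string constants, symbol data) rather than hashing raw fixed-size chunks.

# Illustrative sketch of fingerprint-based binary scanning.
# All catalog entries and file names below are hypothetical examples.

import hashlib
from pathlib import Path

# Hypothetical catalog mapping content fingerprints to open source
# components and their known vulnerabilities. A real scanner would query
# a large, frequently updated vulnerability database instead.
KNOWN_COMPONENT_FINGERPRINTS = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b": {
        "component": "examplelib",
        "version": "1.2.3",
        "vulnerabilities": ["CVE-0000-0001"],
    },
}

CHUNK_SIZE = 4096  # scan the raw binary in fixed-size chunks


def fingerprint_chunks(binary_path: Path):
    """Yield a SHA-256 fingerprint for each chunk of the raw binary."""
    with binary_path.open("rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            yield hashlib.sha256(chunk).hexdigest()


def scan_binary(binary_path: Path):
    """Report any catalogued components whose fingerprints appear in the binary."""
    findings = []
    for fp in fingerprint_chunks(binary_path):
        match = KNOWN_COMPONENT_FINGERPRINTS.get(fp)
        if match:
            findings.append(match)
    return findings


if __name__ == "__main__":
    # Hypothetical third-party binary delivered without source code.
    for finding in scan_binary(Path("vendor_app.bin")):
        print(f"{finding['component']} {finding['version']}: "
              f"known issues {', '.join(finding['vulnerabilities'])}")

The sketch captures the essential workflow described above: fingerprint the delivered binary, match the fingerprints against a vulnerability catalog, and report which components and versions are actually present, all without reverse engineering the code.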

While purchasing cyber insurance is one way to mitigate security-related losses, organizations should proactively address how they manage their legacy code base. They must reprioritize engineering debt while gaining a much truer understanding of their existing and newly acquired code. Doing so will lower cyber insurance premiums while significantly reducing the potential for breaches like the one that struck Equifax. Software security teams and MSPs neglect these necessary changes at their business’s peril.

Courtesy: Security Magazine
