By Taylor Armerding, Security Expert at Synopsys Software Integrity Group
It’s a pretty good bet that if a car dealership announced that the criminal underground was selling keys that could unlock its customers’ vehicles, and that it was offering free replacement locks that weren’t vulnerable, the response would be quick and nearly universal: owners would line up for the new locks.
Apparently, we’re not quite there when it comes to software. A recent survey of 340 information security professionals worldwide found that 27% of respondents acknowledged that their organisations had been breached because of unpatched vulnerabilities.
Organisations still ignoring patches
That should be no surprise. The ever-lengthening list of headlines about breaches — some catastrophic, like the SingHealth cyberattack in Singapore — reflects the reality that organisations still ignore patches for publicly reported vulnerabilities.
There is, of course, more than one way to view those statistics. “Only” about one in four might sound like a positive: it means roughly three-quarters weren’t breached. But most people’s expectation for physical security would be more like one chance in several hundred that a thief could defeat their home security system.
And keep in mind that these aren’t “zero-day” vulnerabilities that haven’t been seen before — these are known bugs or flaws with patches available. The victims simply failed to apply them.
Why can’t organisations prevent data breaches?
The report also offers plenty of reasons why organisations are vulnerable.
• While 59% of respondents said they could detect new hardware or software added to their network within minutes or hours, 31% said it would take days, weeks or even months. Another 11% said they couldn’t detect it at all.
• More than a third (35%) said they used automatic discovery solutions on less than half of their software and hardware assets. Another 13% said they didn’t use automatic discovery at all.
• While a large majority reported doing some kind of vulnerability scanning, 39% said they did it monthly or less often than that.
• A large majority (74%) reported that they fixed vulnerabilities in a month or less, but that still leaves the “one-in-four” that don’t. And while about half reported applying patches in two weeks or less, that means the other half don’t.
• For creators and vendors of software products, the survey also came with a warning. A majority of respondents said their organisations would, in some cases, stop using a product because of vulnerabilities. Few — only 6% — said they did it frequently, but another 31% said they did it occasionally and 44% said while it was rare, it happens. And 82% said a patch for a disclosed vulnerability should be available within two weeks or less.
Patching is fundamental
Why is rigorous patching — a fundamental of good security — not closer to universal? That is the multi-million-dollar question. According to an IBM study, the average cost of a data breach last year was $3.86 million.
Indeed, breaches are costly in multiple ways. Among the potential damages are loss of reputation, a drop in market value, compliance fines and legal liability. While most companies survive them, they can be an existential threat.
The irony is that it doesn’t have to be this way. While bulletproof security is impossible, organisations are not defenceless. There are multiple tools and other measures available that can improve the security of networks, applications and systems enough to prompt all but the most expert and motivated hackers to look for easier targets.
How to prevent data breaches
Doing that comes down to two fundamentals:
First, application vendors need to build security into the software of their products before they ever hit the market. It won’t be perfect, but it can come close. And in application security, unlike in most sports, close matters.
Second, organisations that use software — and all of them do — need to know what they have, and keep it up to date. As has now become a cliché (because it is true), you can’t protect what you don’t even know you have.
Prevent data breaches by shifting left
The template for how to do the first is now well established. It is preached from the podiums of every security conference. The universal expression is “shift left,” which means conducting security testing from the beginning and throughout the software development life cycle (SDLC). Don’t “save” it for penetration testing at the end.
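What that looks like in practice varies by team, but it can be as simple as having the build fail whenever a security scanner flags something. Below is a minimal, illustrative Python sketch of such a gate; it assumes the open source Bandit static analysis tool is installed and that first-party code lives in a src/ directory, so the tool, paths and thresholds are placeholders rather than a prescribed setup.

```python
"""Minimal CI gate sketch: fail the build when static analysis reports findings.

Assumes the open source Bandit scanner (pip install bandit) is on the PATH
and that first-party code lives under src/; adjust the tool, paths and
severity thresholds to fit your own pipeline.
"""
import subprocess
import sys


def run_static_analysis(source_dir: str = "src") -> int:
    # Bandit scans Python source recursively and exits non-zero when it
    # reports findings, so its return code doubles as a pass/fail signal.
    result = subprocess.run(
        ["bandit", "-r", source_dir],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print("Static analysis reported potential issues:")
        print(result.stdout)
    return result.returncode


if __name__ == "__main__":
    # A non-zero exit fails the CI job, keeping new findings out of the main branch.
    sys.exit(run_static_analysis())
```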
There is a comprehensive menu of tools to help developers find and fix bugs. They include (but are not limited to):
• Architecture risk analysis (ARA) – About half of the software defects that create security problems are flaws in design. ARA identifies those flaws and determines the level of risk to business information assets.
• Static application security testing (SAST) – This helps teams find and fix security and quality weaknesses in proprietary code as it is being developed.
• Dynamic application security testing (DAST) – This tool tests applications while they are running, simulating an attack by a hacker.
• Interactive application security testing (IAST) – This also tests running applications, but unlike DAST, it uses code instrumentation to observe application behaviour and data flow. It’s useful for CI/CD (continuous integration/continuous delivery) development environments, where the priorities are speed and automation.
• Software composition analysis (SCA) – Almost every application in existence today is built, at least in part, on open source software components. SCA finds those components, along with any associated security vulnerabilities that have been reported against them (a minimal sketch of that check follows this list).
• Pen testing – This is best done at the end of development, and is considered an extension of DAST. The goal is to find vulnerabilities in web applications and services and then try to exploit them so developers can fix them before a product hits the market.
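To make the SCA item above a little more concrete (the sketch referenced in that bullet), here is a minimal Python example of the core check: asking a public vulnerability database whether advisories have been reported against a specific open source component. It assumes the OSV.dev query API and uses an illustrative package and version; a real SCA tool walks the entire dependency tree, including transitive components, and keeps watching as new advisories are published.

```python
"""Sketch of the core SCA question: does this open source component have
known, publicly reported vulnerabilities?

Queries the public OSV.dev database; the package name and version below
are illustrative placeholders, not a finding about any real project.
"""
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"


def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list:
    # Ask OSV for advisories affecting this exact package/version pair.
    payload = json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }).encode("utf-8")
    request = urllib.request.Request(
        OSV_QUERY_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response).get("vulns", [])


if __name__ == "__main__":
    # Placeholder component: in practice, iterate over your full manifest.
    for vuln in known_vulnerabilities("requests", "2.19.1"):
        print(vuln.get("id"), "-", vuln.get("summary", "no summary available"))
```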
Of course, at the end of all that, even if software is close to perfect, vulnerabilities will inevitably be discovered, either by bad guys who’ll exploit them, or by good guys who’ll report them to the makers before going public.
Don’t forget to patch, patch, patch
And that leads to the second fundamental: Know what you have and keep it up to date — as in, patch, patch, patch. That applies both to vendors and their customers.
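As one small, hedged illustration of that principle, the Python sketch below inventories the packages installed in a single environment and flags any that lag behind the latest release on PyPI. It is deliberately narrow: real patch management spans operating systems, firmware and third-party applications, not just one language’s package manager, and a version gap on its own says nothing about whether the newer release is a security fix.

```python
"""Toy 'know what you have' check for one environment's Python packages.

Lists installed distributions and flags any whose installed version differs
from the latest release published on PyPI. A real programme covers far more
than a single package manager, but the principle is the same: inventory
first, then compare against what is current.
"""
import json
import urllib.request
from importlib import metadata
from typing import Optional


def latest_pypi_version(name: str) -> Optional[str]:
    # The public PyPI JSON API reports the latest published version of a package.
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url) as response:
            return json.load(response)["info"]["version"]
    except Exception:
        return None  # Private or unpublished packages will not resolve.


if __name__ == "__main__":
    for dist in metadata.distributions():
        name, installed = dist.metadata["Name"], dist.version
        latest = latest_pypi_version(name)
        if latest and latest != installed:
            print(f"{name}: installed {installed}, latest {latest}")
```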
Tim Erlin, Vice President of Product Management and Strategy at Tripwire, said besides secure development practices, vendors need “a process for remediation of any discovered vulnerabilities. For vendors, the problem isn’t really fixed until their customers actually apply a patch or other mitigation.”
Justin Hutchings, Senior Product Manager of Security at GitHub, the code-sharing and publishing service that also manages and stores revisions of projects, agreed: it is the responsibility of companies to disclose and provide fixes for vulnerabilities in their own software.
But once the vulnerability has been disclosed, “it’s the responsibility of downstream software projects and IT organisations to patch vulnerabilities,” he said.
And the reality, which confirms the Tripwire findings, is that not all of them do. “In the last year, we’ve sent nearly 27 million security vulnerability alerts to vulnerable software projects on GitHub,” he said.
Consider moving to the cloud
“While security vulnerability alerts provide users with information to secure their projects, industry data show that more than 70% of vulnerabilities remain unpatched after 30 days, and many can take as much as a year to patch,” said Hutchings.
One way to improve on that, he said, is to move business-critical software to the cloud. “Software-as-a-service apps tend to be patched much faster because they’re all centrally managed and don’t rely on thousands of customers’ individual upgrade cycles,” he said.
Yes, all of these measures cost money. But they are all much cheaper than dealing with the fallout from a major breach.