Monday, May 23, 2016

Next-Generation Vulnerability Assessment and Management (NVAM) by Tasawar Jalali, MBA, CISSP, SSCP, CEH

(This paper was sent to me and I worked with the author a bit before posting it. Full disclosure, I do hold stock in Tenable)

Stop Building Silos
NVAM - A New Approach to Security

By Tasawar Jalali, MBA, CISSP, SSCP, CEH
Abstract

Cyberspace has emerged as a whole new world, and it remains more or less alien territory today, with very few knowns and mostly unknowns. In this era of interconnected and interdependent technology, the nature and definition of security are going through a fundamental transformation. The revolution in information security technologies is altering everything, from how we design our defense in depth to how we respond to ever-increasing threats. A cyber-security system, like any system, is made up of a number of parts with complex inter-connectivity and inter-dependencies, designed to achieve a desired goal. In spite of this inter-connectivity and inter-dependency, there is currently no culture of a collective approach to identify, detect, respond to, and protect against the increasing threats. All technologies still tend to work in silos.

This paper is written for information security professionals who are responsible for developing and implementing security policies and managing risk. I have tried to address the shortcomings and risks of operating in information silos and to show how such deficiencies can be addressed by a Next-Generation Vulnerability Assessment and Management (NVAM) solution.




Introduction

I would be surprised to see a small or midsize enterprise (SME) that does not have cutting-edge technology in place to secure its assets. It is not uncommon these days for such organizations to have state-of-the-art security controls in place, such as next-generation firewalls, IDP, IDS, IPS, NAC, PKI, SIEMs, VA scanners, etc. All these controls work in isolation, or silos, without communicating with each other. Many organizations realized this problem early on. To effectively combat network attacks, it was imperative for all such controls to work in conjunction with each other. This led to the hyper-growth of the security information and event management (SIEM) market.

Although SIEMs were originally designed and built for compliance, organizations started using them for data aggregation and correlation to track breaches and thwart information security threats. SIEMs enable security teams to detect and respond to internal and external attacks by analyzing machine data streaming from security technologies, such as endpoints, servers, and networks.

Next-generation SIEMs like Splunk provide correlation searches that present a unified view of security across heterogeneous vendor data formats. Splunk does this through search-time mappings to a common set of field names and tags, which can be defined at any time after the data is captured, indexed, and made available for immediate search. This means that you do not need to write parsers before you can start collecting and searching the data. However, you do need to define the field extractions and tags for each data format before reports and correlation searches will work on that data. These tags and field extractions are defined in Splunk add-ons, but they pose a new challenge to already data-fatigued SOC and IT teams: implementing such solutions requires highly skilled IT security engineers, adding load to short-staffed IT and SOC teams. Instead of relying on one individual, why not have a dedicated R&D team take on this task and provide accurate, detailed reports with relevant information about security attacks and breaches?
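To make the search-time idea concrete, here is a rough Python sketch of the concept (my illustration, not Splunk's actual implementation): raw events are indexed untouched, and regex-based field extractions mapped onto a common field name are applied only when a search runs. The event formats, patterns, and the src_ip field name are all invented:

    import re

    # Raw events are indexed as-is; no parser runs at ingest time.
    raw_events = [
        'May 23 10:01:02 fw01 action=deny src=10.0.0.5 dst=8.8.8.8',
        'May 23 10:01:09 ids02 sig="ET TROJAN CnC" source_ip=10.0.0.5',
    ]

    # Search-time mappings: one regex per vendor format, each extracting
    # onto the same common field name ("src_ip").
    extractions = [
        re.compile(r'src=(?P<src_ip>\d+\.\d+\.\d+\.\d+)'),
        re.compile(r'source_ip=(?P<src_ip>\d+\.\d+\.\d+\.\d+)'),
    ]

    def search(events, field, value):
        """Apply extractions at query time; filter on the unified field."""
        for event in events:
            for pattern in extractions:
                match = pattern.search(event)
                if match and match.group(field) == value:
                    yield event
                    break

    # A "correlation search" across both vendor formats, defined long
    # after the data was indexed.
    for hit in search(raw_events, 'src_ip', '10.0.0.5'):
        print(hit)

The point of the sketch is the ordering: nothing about the data formats had to be decided before ingest, but someone still has to write and maintain those extractions, which is exactly the burden described above.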

Challenges

Despite having multilayer defense-in-depth architectures in place, organizations are experiencing many security breaches every year because of failures in malware detection and an inability to correlate data across the network. The volume of threats is so high that the firehose of information has become unmanageable. In 2008, the year the Conficker worm infected millions of computers in more than 190 countries, there were an estimated 1 million known viruses and malware samples [1]. The count is now in the hundreds of millions. Counting has become meaningless, as modern malware is customized, polymorphic, and often composed of multiple pieces of independent and unique malware. Ransomware variations have been doubling every year for the past two years.

The traditional reactive approach creates a "window of opportunity" for attackers, often measured in weeks or months. It relies on a distributed model whose limitations are evident: every day, tens of thousands of new signatures must be pushed to each and every endpoint. "The median number of days that attackers were present on a victim's network before being discovered was 146 days in 2015" [2]. Today, with nearly one million new malicious threats detected every day, even the best heuristics of the traditional model cannot keep pace. The idea of maintaining a "blacklist" of all known bad software is simply not sustainable given these numbers. The Canada Post example, in which a .doc attachment was detected upon receipt by only four of the 56 anti-malware engines checked, illustrates that organizations cannot rely solely on traditional antivirus for malware detection [3].
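A quick illustration of why hash blacklists collapse under polymorphism: a single appended byte produces an entirely different fingerprint, so every variant needs its own signature. (Python, standard library only; the payload bytes are a placeholder.)

    import hashlib

    payload = b"...malicious code..."      # placeholder bytes, not real malware
    variant = payload + b"\x00"            # a trivial one-byte repack

    print(hashlib.sha256(payload).hexdigest())   # one blacklist entry...
    print(hashlib.sha256(variant).hexdigest())   # ...useless for this variant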

The growth of targeted attacks has only continued. Attacks today are focused rather than opportunistic, and they are driven by human interaction. Advanced cyber-attacks are not just about malware; they are about achieving objectives.

Just deploying top-of-the-line security technologies that operate in silos and dump raw data into an already strained organization does not narrow the security problem; it compounds it. Gartner has it right: "Cyber threat intelligence needs to include much more than raw data" [4]. It requires rich contextual information, continuous monitoring capability, and tight integration with Cyber Threat Intelligence (CTI).

Traditional defense in depth is no longer enough. Current AV solutions and firewalls miss a good percentage of malware and viruses because both were created in an era when attacks were indiscriminate and spread across millions of systems. Today's attacks are targeted, focused, and sophisticated, appearing on only a few endpoints and sometimes in just one organization. We need contextual information, which includes an understanding of the past, present, and future tactics, techniques, and procedures (TTPs) of a wide variety of adversaries. It must also include the linkage between the technical indicators (e.g., IP addresses and domains associated with threats, or hashes that "fingerprint" malicious files), the adversaries, their motivations and intents, and information about who is being targeted.
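To see what that linkage buys in practice, here is a minimal Python sketch (all indicators, names, and context values are made up) in which a raw alert is enriched with adversary context before an analyst ever sees it:

    # Hypothetical CTI store: technical indicator -> adversary context.
    cti = {
        "198.51.100.7": {
            "adversary": "GROUP-X",
            "ttp": "C&C over HTTPS",
            "intent": "data theft",
            "targets": "financial sector",
        },
    }

    def enrich(alert):
        """Attach context to a raw alert; without it, the alert is just data."""
        context = cti.get(alert["indicator"], "unknown indicator")
        return {**alert, "context": context}

    # A bare IDS hit becomes an answer to "who, why, and against whom".
    print(enrich({"indicator": "198.51.100.7", "host": "ws-042"}))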

Who or what wrote the file before it was launched? What else was the system doing at that time? Because the file may have appeared hours, days, or even weeks earlier, answering those questions requires the ability to look back in time, or roll back the tape. Most attacks come from known vulnerabilities or stolen credentials. The historical and live record is equally important for determining scope: the systems, users, and configuration changes that are impacted by an attack. In most advanced attacks, there is rarely just one artifact, one file, or one configuration change made by the attacker to establish persistence. They may have left multiple files, even if only one has executed. They may have jumped to other processes to steal credentials or infect other systems in your organization. Given any thread (e.g., a file, a user, a system), unraveling goes in both directions: tracing the activity back to its source to identify the root cause, and following the activity forward to its destinations to determine scope.
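A minimal Python sketch of that two-direction unraveling, over hypothetical recorded endpoint events: walk parent links backward to the root cause, then walk child links forward to determine scope. The process chain shown is invented for illustration.

    # Hypothetical recorded process events: pid -> (parent pid, image name).
    events = {
        1: (None, "explorer.exe"),
        2: (1, "winword.exe"),        # user opened a document
        3: (2, "powershell.exe"),     # macro spawned a shell
        4: (3, "dropper.exe"),        # shell fetched a payload
        5: (4, "svchost.exe"),        # payload injected for persistence
    }

    def root_cause(pid):
        """Trace backward through parent links to the origin."""
        chain = []
        while pid is not None:
            parent, name = events[pid]
            chain.append(name)
            pid = parent
        return list(reversed(chain))

    def scope(pid):
        """Follow forward to every descendant the thread touched."""
        children = [c for c, (p, _) in events.items() if p == pid]
        return [events[c][1] for c in children] + \
               [n for c in children for n in scope(c)]

    print(root_cause(4))  # ['explorer.exe', 'winword.exe', 'powershell.exe', 'dropper.exe']
    print(scope(2))       # ['powershell.exe', 'dropper.exe', 'svchost.exe']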

Next Generation Vulnerability Assessment & Management (NVAM)

The new paradigm of Vulnerability Assessment and Management (VAM) changes the way risk management is achieved: it provides the real-time data and correlation needed to unravel and understand how an attack occurs, and it integrates actionable threat intelligence to provide visibility into unknown zero-day malware and Advanced Persistent Threats (APTs). By viewing the historical activity captured on a centralized console, you can quickly determine the root cause. For example, consider what happens when you discover C&C traffic originating from your network. If you just terminate the process and re-image the machine, you have addressed only a symptom, not the cause. Who or what launched the process, and how? What if there is a process running on your system that was not detected by your antivirus or anti-malware tools, which are heavily dependent on signature updates?

An NVAM solution must monitor key indicators of compromise such as unusual outbound network traffic, anomalies in privileged user account activity, mismatched port-application traffic (e.g., DNS over port 80), suspicious registry or system file changes, and DNS request anomalies/DNS exfiltration, to name a few. A key feature of NVAM is the ability to correlate, in real time, data from the different attack surface components of the environment, namely its "channels, methods, and data items" [5]. For example, an attacker uses channels (e.g., sockets) to invoke the system's methods (e.g., an API) and to send or receive data items (e.g., input strings). Equally important is the ability to integrate actionable threat intelligence from multiple CTI providers. Threat intelligence combines advanced malware analysis with deep threat analytics and content to empower security teams to defend proactively against attacks and malware outbreaks.
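One of the listed indicators, DNS request anomalies/exfiltration, lends itself to a simple heuristic: data encoded into query labels pushes character entropy well above that of ordinary hostnames. A rough Python sketch (the length and entropy thresholds are illustrative, not tuned values from any product):

    import math
    from collections import Counter

    def entropy(label):
        """Shannon entropy, in bits per character, of a DNS label."""
        counts = Counter(label)
        total = len(label)
        return -sum(c / total * math.log2(c / total) for c in counts.values())

    def looks_like_exfiltration(qname, threshold=3.0):
        # Long, high-entropy leftmost labels suggest encoded payloads.
        label = qname.split(".")[0]
        return len(label) > 30 and entropy(label) > threshold

    print(looks_like_exfiltration("www.example.com"))   # False
    hex_blob = "4a6f686e2d446f652d70617373776f7264733031"
    print(looks_like_exfiltration(hex_blob + ".evil.example"))   # True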

An effective NVAM solution must have the following five characteristics:

1.     Identify both types of attack vectors (exploit-driven attacks and unknown/zero-day exploits)
2.     Eliminate silos through data aggregation
3.     Capture forensic information about attacks
4.     Monitor passively and non-intrusively
5.     Integrate timely, actionable CTI

Recommendation

To perform a successful attack, an attacker has to go through a series of steps and succeed at each and every one. For example, the first step might be getting an exploit into an organization; the next is triggering that vulnerability, then perhaps downloading malware, installing it, and establishing C&C. Stopping the attacker at any one of these steps foils the entire attack; each opportunity to halt the attack is called a kill point, and the set of these kill points is known as the kill chain. To make the kill chain most effective, we must have a way to aggregate and correlate all the relevant information and data in a given environment, as the small calculation below suggests. In my research into the different technologies available to assess and manage risk, I evaluated several vendors, including Qualys, Rapid7, and Tenable. All of these vendors have their strengths and weaknesses, but to keep this paper brief, I chose to analyze the Tenable Network Security solution and how it addresses the cybersecurity challenges faced by organizations that operate in silos.
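The arithmetic behind the kill-chain argument is worth making explicit: the attacker must evade detection at every kill point, so per-stage detection compounds in the defender's favor. A toy Python calculation (the stage names and probabilities are invented purely for illustration):

    # Probability that the attacker evades detection at each kill point.
    stages = {
        "delivery": 0.7,
        "exploitation": 0.7,
        "malware download": 0.7,
        "installation": 0.7,
        "C&C": 0.7,
    }

    p_success = 1.0
    for stage, p_evade in stages.items():
        p_success *= p_evade

    # Even 30% detection per stage stops ~83% of end-to-end attacks.
    print(f"End-to-end attack success: {p_success:.2%}")  # 16.81%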

Tenable Network Security transforms security technology with comprehensive solutions that provide continuous visibility and critical context, enabling decisive action to protect organizations. Tenable solutions can help eliminate blind spots, aggregate data, prioritize threats, correlate data with Cyber Threat Intelligence (CTI), and reduce exposure and loss, allowing you to eliminate the silo mode of operation and enable better vulnerability assessment and risk management.

Tenable's SecurityCenter Continuous View™ (SCCV™) offers a true continuous network monitoring platform. SCCV provides the broadest coverage of the network environment; the deepest detection of vulnerabilities, misconfigurations, malware, and real-time threats; the most advanced analytics; and Assurance Report Cards (ARCs) that help CISOs map an organization's security policy to an ARC. An information security policy is a set of rules enacted by an organization to ensure that all users and networks of its IT structure abide by the prescriptions for securing digitally stored data within the boundaries of the organization's authority. Security policies help organizations engage employees, provide visibility into who does what and when, prioritize risk, and address threats. A security policy, which consists of several elements, can become very complex to implement and maintain in a large organization. ARCs provide a quick and easy way to measure whether the security policy is effectively implemented.

In fact, the Defense Information Systems Agency (DISA) chose SCCV as its Assured Compliance Assessment Solution (ACAS) in 2012. SCCV was selected because it met DISA's requirements for a fully integrated vulnerability assessment platform. SCCV consists of five major components that work in tandem to gather and analyze data across the entire organization. These are listed below:

1.     Nessus Scanner
2.     Nessus Agents
3.     Log Correlation Engine
4.     Passive Vulnerability Scanner
5.     Connectors

The "Nessus" Project was started by Renaud Deraison in 1998 to provide the Internet community with a free remote security scanner. Nessus is a remote security scanning tool that scans a computer and raises an alert if it discovers any vulnerabilities that malicious hackers could use to gain access to any computer you have connected to a network. It does this by running over 75,000 checks (plugins), testing to see whether any of these attacks could be used to break into the computer or otherwise harm it.
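Scan results can be exported in the .nessus (XML) format, which makes post-processing straightforward. Below is a minimal Python sketch that tallies findings by severity from such an export; the file name is hypothetical, and the element and attribute names follow the NessusClientData_v2 report format.

    import xml.etree.ElementTree as ET
    from collections import Counter

    SEVERITY = {0: "Info", 1: "Low", 2: "Medium", 3: "High", 4: "Critical"}

    tree = ET.parse("scan_export.nessus")      # hypothetical export file
    totals = Counter()

    # In the v2 format, each ReportHost holds one ReportItem per finding,
    # with the severity carried as an attribute.
    for host in tree.getroot().iter("ReportHost"):
        for item in host.iter("ReportItem"):
            totals[int(item.get("severity"))] += 1

    for level in sorted(totals, reverse=True):
        print(f"{SEVERITY[level]:>8}: {totals[level]}")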

According to the surveys reported by sectools.org [6], Nessus™ is the world's most popular vulnerability scanner, taking first place in the 2000, 2003, and 2006 security tools surveys. With over 10 million downloads since its inception, Nessus is the most popular vulnerability assessment technology in the world.

Tenable Network Security realized that some devices, such as the laptops and desktops of remote users, will not always stay connected to the corporate network. Nessus Agents increase scan flexibility by making it easy to scan assets that are offline or for which ongoing host credentials are unavailable, and they enable large-scale concurrent scanning with little network impact.

Tenable's Log Correlation Engine™ (LCE™) aggregates host data and offers the ability to perform in-depth event correlation. The technology provides a high-performance scripting language named TASL (Tenable Application Scripting Language). There are 9,500+ normalization rules (parsed events) out of the box, which can parse data from sources like Cisco, Fortinet, NetScreen, Snort, and other IDSs. LCE can also collect and parse data from Windows event logs, OS and application logs, Check Point firewalls, Splunk servers, Cisco IPS, and events in motion with the Tenable Network Monitor (TNM). On top of that sit additional rules that look for things like continuous activity, statistical anomalies, and so on.
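TASL itself is proprietary, but the normalization idea is easy to show in Python: per-source regexes turn heterogeneous log lines into one event schema that correlation rules can then run against. The patterns and sample line below are simplified approximations, not LCE's actual rules:

    import re

    # One regex per source format, all mapping onto the same event schema.
    RULES = [
        ("cisco-asa", re.compile(
            r"%ASA-\d-\d+: Deny tcp src \w+:(?P<src>[\d.]+)/\d+ "
            r"dst \w+:(?P<dst>[\d.]+)/(?P<port>\d+)")),
        ("snort", re.compile(
            r"\[\*\*\].* (?P<src>[\d.]+):\d+ -> (?P<dst>[\d.]+):(?P<port>\d+)")),
    ]

    def normalize(line):
        """Return a source-independent event dict, or None if unparsed."""
        for source, pattern in RULES:
            match = pattern.search(line)
            if match:
                return {"source": source, **match.groupdict()}
        return None

    print(normalize("%ASA-4-106023: Deny tcp src outside:203.0.113.9/4433 "
                    "dst inside:10.0.0.12/445"))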

Tenable's Passive Vulnerability Scanner™ (PVS™) eliminates network blind spots by continuously monitoring network traffic in real time to discover active assets, identify cloud applications, and detect anomalous activity. PVS™ monitors and analyzes network traffic continuously to see new assets as they become active on the network. It also identifies an asset's OS, active applications, services, network connections, and associated vulnerabilities. This ability to eliminate network blind spots is unique, especially when compared to traditional vulnerability management, which relies solely on active scanning to identify devices, services, applications, and vulnerabilities. Alerting on traffic anomalies is useful for understanding changes in how your network is being used, and it enables better situational awareness of which traffic is normal and which atypical traffic is worth investigating for a security or compliance impact.
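The passive technique itself is not exotic. The conceptual Python sketch below (an illustration of passive discovery, not PVS itself) uses the third-party scapy library to watch for SYN-ACK replies, which reveal live hosts and listening ports without a single probe being sent. It requires root privileges, and interface selection is omitted:

    from scapy.all import IP, TCP, sniff   # third-party: pip install scapy

    assets = {}   # ip -> set of ports observed listening

    def observe(pkt):
        """A SYN-ACK reply reveals a live host with that port open."""
        if pkt.haslayer(IP) and pkt.haslayer(TCP) and \
                (pkt[TCP].flags & 0x12) == 0x12:   # SYN and ACK bits set
            assets.setdefault(pkt[IP].src, set()).add(pkt[TCP].sport)

    # Watch 200 TCP packets without sending a single probe, then report.
    sniff(filter="tcp", prn=observe, count=200, store=False)
    for ip, ports in sorted(assets.items()):
        print(ip, sorted(ports))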

Tenable also has a broad set of connectors that allow integration with a wide range of vendors to build advanced workflows, simplify configuration management, query MDM solutions for vulnerabilities in mobile devices, centralize credential management, and integrate with NAC solutions to isolate and quarantine compromised devices in real time.

Nessus leverages several plugins backed by the analysis of millions of malware samples a month, harvested globally, which generates terabytes of rich, actionable content every day to provide customers unmatched scale, coverage, and protection from global threats. For example, using Nessus plugin 74442 (Microsoft Windows Known Bad AutoRuns & Scheduled Tasks), SecurityCenter users can pinpoint autoruns and/or scheduled tasks that are created by malware. Tenable continuously collects indicators of compromise (IOCs) from leading commercial threat intelligence vendors, enabling you to identify emerging threats in near real time without any additional licensing or configuration costs. The platform can automatically create a baseline of normal activity and includes built-in anomaly detection. By default, Nessus assesses all running processes against indicators of malware, and not just on Windows but on OS X and Linux flavors too!
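Conceptually, checking running processes against malware indicators looks like the Python sketch below, which hashes each process's executable and compares it against an IOC set. It uses the third-party psutil package, the hash value is a placeholder, and this is an illustration of the idea, not Tenable's implementation:

    import hashlib
    import psutil   # third-party: pip install psutil

    # Placeholder IOC feed: SHA-256 hashes of known-bad binaries.
    KNOWN_BAD = {
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    }

    def sha256_of(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    for proc in psutil.process_iter(["pid", "name", "exe"]):
        exe = proc.info["exe"]
        if not exe:
            continue                       # kernel thread or access denied
        try:
            if sha256_of(exe) in KNOWN_BAD:
                print(f"IOC hit: pid={proc.info['pid']} "
                      f"{proc.info['name']} ({exe})")
        except OSError:
            pass                           # binary unreadable or removed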

One of the newer techniques attackers use to evade detection is encrypting communications with Secure Sockets Layer (SSL) encryption. Perimeter security systems such as firewalls, intrusion detection/prevention systems (IDS/IPS), and sandboxes are unable to inspect encrypted traffic payloads. Once a host is compromised, the perimeter defenses are blind to such malicious activity. "Gartner believes that, in 2017, more than half of the network attacks targeting enterprises will use encrypted traffic to bypass controls, up from less than 5% today" [7]. This trend has led to the emergence of vendors like Venafi (www.venafi.com), which specializes in securing cryptographic keys and digital certificates; that solution alone can cost hundreds of thousands of dollars. Nessus has several plugins that can help you keep up the security of digital certificates. For example, Nessus plugin #72459 checks for Certificate Revocation List (CRL) expiry, plugin #83298 checks for SSL certificate chains containing expiring certificates, plugin #15901 checks for SSL certificate expiry, and there are many more. If your organization cannot afford such an expensive solution, you might want to leverage the built-in SSL health checks in Nessus.
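For comparison, even without Nessus, a basic certificate expiry check takes only a few lines of standard-library Python; the host list and the 30-day warning window below are arbitrary choices:

    import socket
    import ssl
    from datetime import datetime

    def days_until_expiry(host, port=443):
        """Fetch the server certificate; return days until its notAfter date."""
        context = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
        return (not_after - datetime.utcnow()).days

    for host in ("www.example.com",):      # arbitrary host list
        days = days_until_expiry(host)
        status = "OK" if days > 30 else "RENEW SOON"
        print(f"{host}: {days} days left [{status}]")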


Summary

To summarize, security is a process that requires the collaboration of systems, knowledge, and people. No security organization should deploy "go-it-alone" solutions, and we are starting to see a new era of cooperation between companies, vendors, and the security community at large. Security practitioners need to work together to raise the bar against our adversaries. Security vendors must deliver solutions that provide continuous monitoring and aggregate security data across an environment through collaboration, integration, and the sharing of intelligence feeds. The future of security is collective defense, correlation, and integration with actionable threat intelligence data and continuous monitoring, and that is what Tenable does best.

References

[5] Pratyusa K. Manadhata and Jeannette M. Wing, "An Attack Surface Metric," IEEE Transactions on Software Engineering, vol. 37, no. 3, pp. 371-386, 2011.
