Protect Your Hashes

Joseph Key, SSCP, Information Security Analyst

More specifically, protect your NTLM domain hashes. How are they at risk? Believe it or not, the explanation is quite simple and often overlooked.

Since the release of Windows Vista in 2007, every default Windows domain implementation has included the protocol Link-Local Multicast Name Resolution (LLMNR). This protocol is based on the Domain Name System (DNS) packet format and allows hosts to perform name resolution for other hosts on the same network. By default it serves as a fallback method for systems to locate one another when your network’s DNS servers fail. So, when your clumsy-fingered employees search for \\pintserver instead of \\printserver, and DNS fails to provide an IP address for the requested resource, LLMNR helps by asking every system on the local network, “who is \\pintserver?”

Sounds pretty nice, right? Not so much, unfortunately (or fortunately, depending on which color hat you wear). As an offensive security practitioner, seeing this protocol blasting through tcpdump often induces a maniacal grin, because I know that I am only a few steps away from attaining unauthorized access to your systems and data. Before we get too deep into how much I love LLMNR, or more importantly how much you shouldn’t, let’s go over exactly how it works.

How LLMNR Works

First, a user attempts to request a resource, such as an internally hosted website or network drive, typing \\Storag-1 instead of \\Storage-1, for example. That user’s computer sends the requested host name to the internal DNS server and, given the misspelling, the DNS server replies that the resource cannot be found.

Next, without any warning to the user, the computer falls back on LLMNR to resolve \\Storag-1 into an IP address it can use. The problem is that this “who has” request goes out to every system on the network. If a resource matching the misspelled name actually exists, it replies with a packet stating its location and all is right in the world.
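The fallback query itself is just a DNS-format packet sent to the LLMNR multicast group (224.0.0.252, UDP port 5355, per RFC 4795). As a rough sketch of what the client puts on the wire — the hostname and transaction ID here are purely illustrative:

```python
import struct

LLMNR_GROUP = ("224.0.0.252", 5355)  # IPv4 multicast group and port from RFC 4795

def build_llmnr_query(hostname: str, txid: int = 0x1234) -> bytes:
    """Build a minimal LLMNR A-record query. LLMNR reuses the DNS packet
    layout: a 12-byte header followed by a single question section."""
    # Header: ID, flags (all zero for a standard query), QDCOUNT=1,
    # then ANCOUNT, NSCOUNT, ARCOUNT all zero.
    header = struct.pack(">HHHHHH", txid, 0, 1, 0, 0, 0)
    # Question name: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    # QTYPE=1 (A record), QCLASS=1 (IN).
    question = qname + struct.pack(">HH", 1, 1)
    return header + question

# The mistyped lookup from the example above:
query = build_llmnr_query("Storag-1")
```

Because the packet format mirrors DNS, anyone on the local link can parse the question and answer it — which is exactly what the attack below exploits.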

The Vulnerability

But what if I am on your network conducting a penetration test, or I am an evil hacker plotting to steal all of your secrets? For starters, one could host a server that replies to any LLMNR broadcast on the network, which is easily done with a most excellent tool, Responder. Once that happens, your user’s system will automatically trust the reply and attempt to negotiate a domain session with that server, sending the user’s domain password, in NTLMv2 hash format, straight to me. All that is left is to use a tool such as Hashcat or John the Ripper to crack the hash offline at my leisure. Worse still, I can attempt to downgrade the authentication method to receive the password in NTLMv1 format, making hash cracking even easier. Work smarter, not harder, is what my father always told me as a child.

How the Attack Works

Let’s take a look at how this attack works. First, we set up our malicious server to catch these mistyped resource requests. The “-i” flag specifies the IP address of the system running Responder, and “-d” enables the tool to respond to domain suffix queries.

Figure 1: Server Setup

Joe Blog 1

Now we wait for a user to make a typo when attempting to connect to a NAS. When a user finally makes a mistake, our tool takes over and handles all of the hard work, responding to the victim and negotiating a session.

Figure 2: Resource Request Error

Joe Blog 2

Figure 3: Captured Password Hash

Joe Blog 3

If we take a look at the Wireshark capture of the above events, we can gain a better understanding of what is going on behind the scenes. In Figure 4, we see our DNS server failing to resolve the “Stor-1” hostname.

Figure 4: DNS Hostname Resolution Failure

Joe Blog 4

Once the DNS server fails to resolve the hostname, LLMNR takes over and starts broadcasting requests on the network. In Figure 5, you can see our malicious server at x.x.x.220 responding to our victim’s request at x.x.x.54. Once our malicious server poisons the LLMNR response, the victim starts the SMB session negotiation, resulting in the capture of the victim’s NTLMv2 password hash. The SMB session negotiation is shown in Figure 6.

Figure 5: LLMNR Interaction between Attacker and Victim

Joe Blog 5

Figure 6: SMB Negotiation

Joe Blog 6

How to Mitigate the Attack

Now that we understand the risk associated with LLMNR, we are better equipped to protect our systems against this type of attack. The most effective way to stop LLMNR poisoning is to disable the protocol by enabling the “Turn off Multicast Name Resolution” setting in the Local Group Policy Editor and by disabling the NetBIOS Name Service.
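For machines managed outside of Group Policy, the same setting is commonly documented as a registry value under the DNS Client policy key. A fragment equivalent to enabling the policy above (verify the path against your Windows version before deploying):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\DNSClient]
"EnableMulticast"=dword:00000000
```

Setting EnableMulticast to 0 disables LLMNR; a reboot or Group Policy refresh is typically required for clients to pick up the change.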

Figure 7: Enable “Turn off Multicast Name Resolution” setting

Joe Blog 7

Figure 8: Disable NetBIOS Name Service

Joe Blog 8


The Future of IT Security and Compliance Program Management? It’s In the Cloud…

Madeline Domma, Product Specialist

In recent years, organizations of all types, most notably financial institutions, have started to transition from a reactive, scenario-based form of IT Governance, Risk and Compliance (GRC) management to specialized, regulation-based approaches that create holistic and realistic views of the overall IT security and compliance environment. The antiquated, reactive approach to IT GRC management has proven unsustainable in its focus on the “here and now” instead of developing an ongoing picture of an organization’s IT security and compliance program status. In parallel, market researchers have noticed a growing adoption of Software as a Service (SaaS), or cloud-based, platforms in IT GRC management. These platforms replace decentralized methodologies so that organizations can stay ahead of potential problems using a more focused and agile approach that fully integrates with previously established systems and workflows.

Regulatory compliance and overall risk management are two universal focuses of all organizations; yet not all organizations have wholly integrated compliance and risk management initiatives into their established information security, or IT GRC, programs. Compliance does not imply reduced risk nor does risk management ensure compliance to regulations, so historically, the two have been considered separate challenges for organizations to overcome. A strategic approach considers both factors as part of the organization’s universal information security posture and allows the institution to identify and maximize its assets.

Risk and Compliance Silos are Destined to Fail

In a traditionally reaction-based IT security and compliance management program, compliance with regulating bodies cannot easily be viewed in the context of day-to-day security practices. Often, especially in small to medium-sized organizations, compliance verification efforts are initiated when the organization must become compliant with certain regulations, perhaps after regulators have deemed the organization not in compliance and issued fines. Unless an organization can afford to perform ongoing internal audits or compliance analysis, maintaining compliance is not part of day-to-day operations.

Similarly, a reaction-based approach to overall IT security and compliance management will result in a decentralized compilation of documentation and scenario-specific risk management exercises to plan for various theoretical disasters. Practices and procedures are executed to mitigate hypothetical threats and, depending upon the size or structure of the organization, solutions vary from situation to situation. Moreover, compliance with regulating bodies may not be intentionally considered during the development of these operations.

A Unified Approach for Sustainable Program Management

Analyzing information security risk and compliance management simultaneously will allow your organization to build an information security program that is sustainable, consistent, efficient and agile. Encompassing information security and compliance management requires stakeholders and decision-makers across the institution (from the highest levels of executive management and risk managers to IT operations, internal auditors and compliance officers) to leverage a single set of data across their unique initiatives.  The data collected from this approach can range from policies describing the institution’s overall security posture, to detailed vulnerability information or specific compliance citation attestation, tracking and reporting.

When so many organizations have become accustomed to retaining disjointed documentation and scenario-specific protocols to address company-wide IT GRC challenges, how can a major program reform such as this be accomplished?

Cue “The Cloud”

Cloud-based IT GRC platforms offer dynamic management solutions for organizations of all sizes because, by design, they must be customized and individualized to meet the needs of a variety of IT environments. The benefits of cloud-based IT GRC systems become evident soon after deployment.

Cloud-based applications are designed to quickly and easily build information security programs via a shared workspace which multiple users may authenticate to and work within collaboratively. Since most users simply need access to the web to begin working in a cloud environment, these platforms can be integrated into an organization’s existing environment with little to no change in the company’s infrastructure. The collaborative nature of cloud-based workflow makes way for comprehensive IT GRC programs within organizations of all sizes because employees become equipped to contribute to the centralized, company-wide application.

These emerging platforms eliminate redundancy and gaps in workflow, replacing decentralized security-program efforts. Although organizations may develop very different IT security and compliance management plans based on unique needs, well-maintained cloud-based solutions provide a medium for automating information and fastidiously tracking both day-to-day and grand-scale operations, so that accurate and up-to-date data is available to those who need it, whether auditors, regulators, or internal management. When responsibility for IT security and compliance program development, maintenance, and management is delegated through a centralized user interface to which all employees may contribute, maintaining the program becomes integral to day-to-day operations.

The result of this implementation is increased awareness of the organization’s IT GRC plans and procedures and a secure organization from the inside, out. Cloud-based IT GRC software is fast becoming the future platform of IT security and compliance management because, ultimately, secure and agile IT environments liberate organizations to more intelligently focus company resources towards improving customer services and satisfaction.


TraceSecurity Receives Value Award in IT GRC Management Category from Industry Analyst, GRC 20/20

TraceCSO has been honored with a 2014 GRC Value Award in the IT GRC Management category by GRC analyst firm GRC 20/20. The 2nd annual GRC Value Awards recognized real-world implementations for Governance, Risk Management and Compliance programs and processes that have returned significant and measurable value to an organization. One organization using TraceSecurity’s cloud-based IT GRC solution, TraceCSO, was confirmed to have realized a savings of more than 100 management hours each week on average and $500,000 annually. Click here to read the GRC 20/20 blog.

The Case Study

To validate TraceSecurity’s award, GRC 20/20 Principal Analyst Michael Rasmussen researched one organization that struggled with decentralized processes and documents for managing its IT security, risk, and compliance program. The organization evaluated its options among IT GRC solutions to address this problem. The evaluation led it to deploy TraceCSO from TraceSecurity, a Software as a Service (SaaS) solution that the organization found easy to engage and deploy across the range of its IT GRC needs.

Click here to download the in-depth case study produced by Rasmussen.

The On-Demand Webinar

TraceSecurity and Michael Rasmussen held an interactive webinar that described how TraceCSO gives organizations the ability to measure, identify and remediate issues across their processes and operations more efficiently and at a much lower operational cost. During the webinar, attendees:

  • Explored the complexities that continue to hinder IT GRC within organizations
  • Became familiar with use cases for IT GRC platform adoption
  • Realized the value of a simplified approach to IT GRC management

Click here to view the webinar on-demand.

Calculating the Cost of a Data Breach Today

In the wake of recent high-profile retail breaches, you are likely feeling the pressure to help keep your company’s name out of the headlines. In order to obtain approval and funding for security improvements, technologists often have to make their case by pointing to losses from recent security breaches; however, calculating those losses can be tricky. This article leverages recent statistics to help you best estimate the direct and indirect costs of a data breach.

Filling in the Blanks with Reputable Metrics

According to the annual Ponemon Institute study, it takes an average of 31 days, at a cost of $20,000 per day, to clean up and remediate after a cyber attack. The study analyzed 314 breaches, 61 of which were in the US, across 16 industry sectors including financial, retail, and healthcare. Direct costs include audit and consulting services, legal defense, public relations, and communications with customers, at roughly $66 per record, while indirect costs such as lost business, increased cost to attract new customers, and in-house investigations average $135 per record.

There are several things that can increase the cost of a data breach. Lost or stolen devices increased breach costs by $18 per record, breaches involving third parties increased costs by $25 per record, notifying stakeholders and customers too quickly increased costs by $15 per record, and engaging consultants increased costs by $3 per record this year. Fortunately, there are a few things we can do to decrease the cost of a breach: maintaining a strong security posture, having an incident response plan in place prior to the breach, having a business continuity plan in place prior to the breach, and employing a CISO. Implementing these four controls would have reduced data breach costs this year by $21, $17, $13, and $10 per record, respectively.

You may be asking yourself, what are the common causes of a data breach and what’s really at stake? Common causes include weak and stolen credentials, application vulnerabilities, malware, social engineering, inappropriate access, insider threats, physical attacks and user error. 44% of breaches involve malicious or criminal attacks and cost $246 per record, 31% involve “human error” or negligence by employees and cost $171 per record, and 25% involve system “glitches” and cost $160 per record. The average breach affects 29,087 records, with notification costs of $509,000. The average total cost of a data breach amounts to $5.85 million, which your business certainly cannot afford.
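Putting the study’s per-record figures together gives a back-of-the-envelope estimator. The rates and the record count below are the Ponemon averages cited above, not universal constants, so treat the output as a rough planning number:

```python
# Per-record figures from the 2014 Ponemon study cited above.
DIRECT_COST_PER_RECORD = 66     # audits, legal defense, PR, customer communications
INDIRECT_COST_PER_RECORD = 135  # lost business, customer churn, in-house investigations

def estimate_breach_cost(records: int) -> int:
    """Estimate total breach cost from the number of records exposed."""
    return records * (DIRECT_COST_PER_RECORD + INDIRECT_COST_PER_RECORD)

# The study's average breach of 29,087 records:
cost = estimate_breach_cost(29_087)
print(f"${cost:,}")  # lands within rounding of the study's $5.85 million average
```

Swapping in your own record count, or the per-record rates for your industry sector, turns this into a quick first-pass figure for a budget conversation.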

An Ounce of Prevention is Worth a Pound of Cure

Now that we understand the costs, let’s talk about how to mitigate the risks involved. All devices should be encrypted to protect your sensitive information from being maliciously accessed. Access control, monitoring and regular review should govern your sensitive information to prevent misuse by a third party vendor or negligent employee. Policies, procedures, standards, education and monitoring can help mitigate internal threats; your employees can be your biggest asset or your biggest risk. Lastly, having too much data in too many places can be controlled by data classification and retention policies and regular auditing.

The key takeaways can be summed up in just a few sentences. Designate a security officer or make security someone’s job. Have a strong information security program that includes performing regular risk assessments, having policies and procedures, evangelizing security awareness, knowing your compliance requirements and auditing regularly. Have an incident response and business continuity plan; don’t wait until it’s too late. Too many organizations read case studies with the kinds of powerful statistics mentioned above and still refuse to believe they will ever be affected. After seeing the costs involved, a proactive approach to information security should clearly be your only option.

To download a PDF of key metrics, visit our Slideshare or watch our webinar on-demand.


Evaluate Cyber Liability Insurance in 3 Easy Steps

Brent Hobby, IT GRC Subject Matter Expert

We are often asked about the role that cyber liability insurance plays when an organization is developing a comprehensive information security program. We recommend cyber liability insurance be thought about in the context of an organization’s complete risk management program and as part of a company’s overall insurance package, rather than as part of an organization’s information security and compliance management program.

Step One: A Risk Assessment

Because many general liability policies now exclude “cyber risk,” evaluating the need for additional coverage should begin with a risk assessment. Speak with prospective insurers to make sure your assessment leverages a framework that they recommend. Depending on the size of the desired coverage, you may need to engage an approved third party for your assessment.

Step Two: Risk Remediation or Risk Transference

Once you have a valid assessment, progress through the iterative process of reviewing risk remediation versus risk transfer. Get various quotes from insurers and repeat the review process. When complete, you will have a business-appropriate cyber risk coverage extension to your insurance coverage.

Step Three: Insure Based on Your Unique Business Need

Cyber liability insurance is relatively new, very flexible and costs can vary widely. Many organizations choose not to insure, others purchase coverage for specific breach response items, and some use it as a high-deductible umbrella coverage. Whichever your organization chooses, starting with a risk assessment will allow the business to drive the decision.


What You Should Know about Shellshock as an Ongoing Threat

Madeline Domma, Product Specialist

How Shellshock Stands Up to the Hype

Clever name aside, many industry experts consider Shellshock, disclosed on September 25, 2014, potentially the worst vulnerability yet to hit the Internet. NIST rates it a 10 out of 10 for severity, the US Department of Homeland Security has identified the vulnerability as “Critical”, and it is estimated to affect nearly half of all websites.

Shellshock has proven to be an even worse threat than the heavily reported Heartbleed vulnerability that made its debut earlier this year. Unlike Heartbleed, the Shellshock command sequence is alarmingly simple to execute remotely, yet it can cause virtually incalculable damage to affected systems or networks of systems. The vulnerability, nicknamed the “Bash Bug”, enables even the least skilled of hackers to exploit the extremely popular command line interpreter (or shell) utility GNU Bash. Commonly referred to simply as “Bash”, the utility was originally developed for Unix systems about 25 years ago and was later distributed with Linux and OS X. Shellshock abuses a flaw in how Bash parses environment variables: an attacker can append arbitrary commands to a function definition stored in an environment variable, and a vulnerable Bash executes those commands when it imports the variable. Furthermore, Bash does not require authentication to execute these commands. The exposure affects a staggering number of websites because Bash often processes CGI requests on several types of web servers, including the commonly used Apache. Although patches and updates were widely available soon after the vulnerability was discovered, Shellshock remains a threat to networks everywhere for quite a few reasons.

Breadth and Scope of Shellshock Implications

Worldwide, Shellshock conversations have died down to a dull roar, even though the vulnerability remains an ongoing threat to networks. By design, the sequence is simple to inject into an exploitable operating system. Determining whether or not a system has been exploited can be difficult, too, since the attack consists of so few commands in Bash. And the problems do not end with verifying that a system has not been exploited: the degree to which Shellshock can cause harm is yet to be determined, and experts are still unsure of its full potential. A look at the full scale of this issue, both today and into the future, brings with it a few main points that must be remembered:

  1. The Shellshock vulnerability affects more than Unix- and Linux-based systems. Android devices, OS X devices, a majority of DSL/cable routers, security cameras, standalone webcams, and other easily overlooked IoT (“Internet of Things”) devices (such as “smart” TVs or appliances) most likely run an embedded version of Bash. Many such devices will need to be updated and patched after the systems essential to business operations are secured. Most individuals, even well-informed ones, may not know which of the devices they maintain use Bash, or which version of Bash those devices are currently running.
  2. Speaking of Bash versions, Shellshock affects all versions of Bash up to version 4.3, meaning twenty-five years of Bash releases are exploitable.
  3. Because the vulnerability is a code injection attack, the damage is compounded by the fact that Bash keeps executing commands after the malicious code has been injected; that is, Bash continues to operate exactly as it was designed to. Hijacked systems can be affected in different ways depending upon the commands attackers execute after gaining access. Once a system has been compromised, hackers can execute any commands they choose, and, historically, hackers have proven to be nothing if not creative.
  4. The fundamental design of the command sequence implies that Shellshock will remain an issue for at least the foreseeable future. A system is considered vulnerable if an outdated version of Bash is installed and Bash can be accessed either directly from the web or via another service running on the system that is accessible from the web. Unfortunately, until systems are either taken down completely or patched and secured, the vulnerability remains a threat to networks everywhere.
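The version range in point 2 above can serve as a crude triage aid when inventorying devices. This is a sketch only: distributions back-ported fixes into older version numbers, so a flagged version means “verify patch level”, never “confirmed vulnerable”, and an unflagged one is not proof of safety:

```python
def bash_possibly_vulnerable(version: str) -> bool:
    """Flag any Bash release line through 4.3, the range described in
    point 2 above. Vendors shipped fixed builds under the same
    major.minor numbers, so a True result only means the patch level
    must be checked by hand."""
    major, minor = (int(part) for part in version.split(".")[:2])
    return (major, minor) <= (4, 3)

print(bash_possibly_vulnerable("3.2"))  # True: within the affected range
print(bash_possibly_vulnerable("4.4"))  # False: outside the affected range
```

In practice this is most useful for prioritizing a device inventory; the authoritative check on any given system remains applying the vendor patches and re-testing.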

Best Practices to Proactively Guard Your Information Systems

Shellshock appears about as cataclysmic as a threat can be. Nevertheless, several actions can be taken to guard systems against it. Because Shellshock is a wide-reaching threat, it has commanded proportionate levels of media and expert attention, prompting network administrators and security personnel to move quickly to secure exploitable systems. Determining whether a system is affected is a straightforward process of a few simple commands, and once vulnerable systems are identified, patches, updates, and signatures are readily available for all platforms. Apple Inc. reported that most OS X and iOS users were not at risk despite running an exploitable version of Bash, because other controls are in place on OS X by default; Android reported that its devices are not at risk for similar reasons. The good news continues: while Windows has historically been riddled with weaknesses to serious threats, Windows devices are not immediately affected unless Bash is installed. Since Bash is not a native utility for Windows operating systems, Windows-based systems only become vulnerable when they share a network with, or are serviced by, systems or VMs running exploitable operating systems.

TraceSecurity suggests a number of actions for those who have systems on their networks that are susceptible to the Shellshock vulnerability:

  1. Most importantly, all firmware, operating systems, Bash versions, and security policies in company IPS programs for all exposed devices should be updated immediately.
  2. Management and IT personnel should stay informed on the Shellshock issue as the scope of this vulnerability is yet to be determined and will be a serious threat well after Shellshock is no longer the topic of conversation.
  3. Maintaining a working knowledge of the organization’s IT environment is essential to a secure network. For example, knowing that websites hosted within the network use CGI confirms that the host systems are exposed. Conversely, if none of the company’s websites use CGI, disabling CGI functionality on network devices is a simple action that protects systems from potential attacks that exploit CGI.
  4. Carefully tracking network activity at all times can prove useful if an attacker enters the IT environment, since an intrusion inevitably causes inconsistencies in network traffic.
  5. Firewalls, IDS, IPS, and other controls in place to compensate for open ports in system applications must be verified on a regular basis.
  6. TraceCSO customers with contracts that include network scanning functionality can run a dedicated network scan that will identify all network devices vulnerable to Shellshock. This scan can serve as the first step towards comprehensively patching all affected systems and quickly securing your network against Shellshock.

As always, TraceSecurity is proud to serve as a resource to those who have questions or concerns about how to protect IT environments from this vulnerability as well as any other potential threats. If you have any questions please contact your Delivery Director or your Business Development Manager.


Tools for Your Vulnerability Management Program

Bobby Methvien, Information Security Analyst and Security Services Manager

The largest threats to complex networks are those unknown to IT personnel. As a first line of defense against system and security-related vulnerabilities and as part of an organization’s on-going vulnerability management program, IT must conduct assessments of its information systems. The goal of a vulnerability management program is to reduce risk within an organization by identifying and resolving vulnerabilities to your IT systems and internal/external network.

Bring IT System Vulnerabilities into View

A vulnerability scanner is a tool IT personnel use to check many remote systems against thousands of vulnerability signatures in a short period of time. The results of a scan enable IT to coordinate a resolution for any vulnerabilities identified. Over time, as IT resolves identified vulnerabilities, additional scans will turn up only a handful of new ones. This is the point at which IT personnel become confident in the security of the network and need to put it to the test.

Pen Test Your Internal and External Network  

Once IT personnel have significantly reduced the number of vulnerabilities identified through scans, a penetration test should be performed. The penetration test acts as an additional control and is used to identify system and security-related risks that affect an organization’s internal and external network. Penetration tests work to compromise an organization’s hosts, web applications, network, or sensitive data.

Penetration tests have short- and long-term benefits. In the short term, organizations are able to take action against findings in the assessment, and over the long term, organizations are able to update their processes so that similar risks do not recur.

Penetration tests should be performed by someone who is not responsible for the daily management of the network and its information systems. The reason is that whoever configured a system or group of systems has a ready explanation for why it was configured a particular way. We often hear IT personnel say, “I was told it has to be this way, so that’s the way I configured it.” One common example: “Our software vendor requires that we configure all users as Local System Administrators.” As a result, IT personnel make a key information security mistake and assign the “Domain Users” group to the “Local System Administrators” group.

Conclusion

Vulnerability scanning and penetration testing are both services used to identify risks that may affect an organization’s information systems from its internal and external network. In addition, these services help organizations meet compliance requirements from the FFIEC, PCI DSS, and other regulatory authorities.


Integrating Risk Assessment into Lifecycle Management

Jerry Beasley, CISM, Information Security Analyst and Security Services Manager

Perceptions Today

Working as an information security consultant, I visit many diverse organizations, ranging from government agencies and financial institutions to private corporations, but they all have things in common. For example, they all manage information systems, and they are all subject to regulatory requirements and/or oversight. Given these similarities, the subject of risk assessment often arises.

During one such visit, an executive described the implementation of a new enterprise information system. He was observably proud of their progress to date, and the system was almost online. At the conclusion, the executive stated, as an after-thought, “Once we get online, I guess we’ll need to talk about getting a risk assessment.”

The old “smoke test” metaphor immediately came to mind. This term is sometimes used by engineers when building a new electronic prototype. The builder flips the switch and hopes that the device doesn’t go “up in smoke.” When applied to information security, this can be disastrous, both in terms of business impact, and in terms of legal liability.

Don’t be too surprised at the executive’s thought process. This is a common misconception about risk assessment, in some cases perpetuated by the idea that risk assessment is simply a regulatory requirement. In reality, the most successful enterprises are those that integrate risk assessment, and more broadly risk management, into their lifecycle processes. The drawback of the alternative should be obvious: if a risk assessment is done after a system is developed and tested, many changes may be required after the fact to integrate the required security controls.

Within this article, I’d like to discuss how risk management can be integrated into lifecycle management. To get started, we’ll take a quick look at what’s involved in these processes we call risk management and lifecycle management.

Clearing Up the Confusion

With a simple internet search, you will find many definitions and contexts of risk management. By context, I mean that risk management processes can focus on different aspects of risk in an organization, such as operational risk, financial risk, or as is TraceSecurity’s focus, information security risk.

Risk Management

One definition of risk management states: “Risk Management is the identification, assessment, and prioritization of risks as the effect of uncertainty on objectives followed by coordinated and economical application of resources to minimize, monitor, and control the probability and/or impact of unfortunate events or to maximize the realization of opportunities.” If that sounds a bit esoteric to you, let me provide a simpler definition.

To me, risk management is about anticipating what bad things might happen to your assets, then mitigating the impact of those bad things or reducing the likelihood that they will happen. In the information security context, we are primarily concerned with assuring the confidentiality, integrity, and availability of sensitive personal and business data. We’ll further address the process of doing this later.

Risk Assessment

You will often hear the term risk assessment used interchangeably with risk management. However, risk assessment should be thought of as a “piece” of risk management, albeit a very important one. Risk assessment is the analysis that informs risk management decisions. More specifically, it is the process by which an organization identifies its information and technology assets, determines the negative impact that threats pose to specific assets, evaluates what is currently being done (current controls) to mitigate the impact or likelihood of an occurrence, and identifies what else could be done (prescribed controls) to mitigate it further.

Risk management also includes the prioritization and application of prescribed controls, monitoring the effectiveness of those controls, and ensuring that additional risk assessment is performed as the assets and the threat landscape change. It’s important to note that there are numerous standards and models for risk management and assessment. Some of the more common ones include the National Institute of Standards and Technology (NIST) Risk Management Framework (RMF), which supports the Federal Information Security Management Act (FISMA), and the International Organization for Standardization (ISO) 31000 series, which addresses risk management standards. An illustration of the NIST RMF is available on the NIST web site and is also duplicated below.

Risk Management Framework

Lifecycle Management

“Lifecycle management” is another term that is used in many contexts, but in general applies to managing the development, acquisition, implementation, use, and disposition of an entity.  In information processing, it is often related to the Software/System Development Life Cycle (SDLC) or sometimes the Product Lifecycle (PLC).  In these two examples, the focus is on a particular system or product, but as we will see, lifecycle management often has applications beyond the confines of a “system.”  Depending on the model you follow, lifecycle management generally includes the following phases or activities.

  • Requirements definition / specifications
  • Development / acquisition / testing
  • Implementation / configuration
  • Operations / maintenance
  • Phase out / disposition

Risk Management’s Role in Lifecycle Management

Implementing a system involves more than technology; it also involves procedures, training, and physical controls. The definition of a system can include these controls, as the system may not be effective without them. For example, without physical controls, the technology may be damaged, lost, or stolen. Without personnel controls and training, a system can be misconfigured or misused. Keeping these in mind, let’s think about how risk management supports the lifecycle management process in meeting information security goals.

Requirements and Specifications Development. This is likely to be the most critical phase in any lifecycle management process as it provides the roadmap to either develop or acquire a system that meets the business requirements of the organization. Inaccurate or ill-conceived requirements at this phase can translate into costly changes later in the project. It is equally important for risk management to be established at this point.

Key activities that should occur during this phase include establishing a process and responsibilities for risk management, and documenting the initial known risks. At a minimum, the project managers should identify, document, and prioritize risks to the system. This process should include identifying assets to be protected and assigning their criticality in terms of confidentiality, integrity, and availability; determining the threats and resulting risk to those assets, as well as the existing or planned controls to reduce that risk. Prioritization allows the project managers to focus resources on areas with the highest risk. When necessary, the requirements and specifications should be modified to include new requirements for additional security controls identified during this phase.
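
The identification and prioritization steps above can be sketched as a simple risk register. This is a minimal illustration assuming a qualitative scoring model (risk score = likelihood × highest CIA impact rating); the class names, scales, and example entries are hypothetical, not drawn from any particular standard.

```python
# Minimal risk-register sketch. The 1..5 scales and the scoring rule
# (likelihood x highest CIA impact) are illustrative assumptions only.

from dataclasses import dataclass, field

@dataclass
class Risk:
    asset: str
    threat: str
    likelihood: int                                  # 1 (rare) .. 5 (almost certain)
    impact_cia: dict = field(default_factory=dict)   # per-attribute impact, 1..5
    controls: list = field(default_factory=list)     # existing or planned controls

    @property
    def score(self) -> int:
        # Score against the most severely affected CIA attribute.
        return self.likelihood * max(self.impact_cia.values())

def prioritize(register):
    """Order risks so project managers can focus resources on the highest first."""
    return sorted(register, key=lambda r: r.score, reverse=True)

register = [
    Risk("customer database", "data theft", 3,
         {"confidentiality": 5, "integrity": 3, "availability": 2},
         ["encryption at rest"]),
    Risk("public web server", "denial of service", 4,
         {"confidentiality": 1, "integrity": 2, "availability": 4},
         []),
]

for r in prioritize(register):
    print(r.score, r.asset, r.threat)
```

A register like this also makes it easy to see where prescribed controls are missing, which feeds back into the requirements and specifications.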

System Development, Acquisition and Testing. This phase translates the requirements into solutions, so accurate classification of asset criticality and planned controls are critical to successful development or acquisition.  For example, if the system has a requirement to transmit data across a public network and the criticality rating for the confidentiality of that data is high, then some control, such as application encryption or a virtual private network, may become part of the solution.  As the system is developed, testing of each control is necessary to ensure that the controls perform as designed.
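
The translation from criticality ratings to candidate controls can be sketched as a lookup table. The entries below are hypothetical examples for illustration, not an authoritative control catalog.

```python
# Illustrative sketch: mapping a (CIA attribute, criticality rating) pair
# to candidate controls during development or acquisition. The table
# contents are hypothetical examples, not a prescribed control set.

CONTROL_CANDIDATES = {
    ("confidentiality", "high"): ["application-layer encryption", "VPN tunnel"],
    ("confidentiality", "medium"): ["TLS in transit"],
    ("availability", "high"): ["redundant hosting", "load balancing"],
}

def candidate_controls(attribute: str, rating: str):
    """Return candidate controls for a rating, or a fallback of accept/monitor."""
    return CONTROL_CANDIDATES.get((attribute, rating), ["accept / monitor"])

# Data crossing a public network with high confidentiality criticality:
print(candidate_controls("confidentiality", "high"))
```

Each selected control then becomes a testable item: as the system is developed, every entry chosen from the table should have a corresponding test confirming it performs as designed.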

Implementation and Configuration. During this phase, the system is implemented and configured in the form that it is intended to operate. Testing is equally important in this phase, especially to confirm that the designed security controls are operational in the integrated environment. The system owner will want to ensure that the prescribed controls, including any physical or procedural controls, are in place prior to the system going live.

Operations and Maintenance. Very few systems are static, so changes to a system are expected.  Most organizations acknowledge that a means to control the system configuration is necessary.  A configuration management process helps to ensure that changes to the system hardware, software, or supporting processes are reviewed and approved prior to implementation. The piece that is sometimes missed is the resulting change to the risk posture of the system.

Any change to a system has the potential to reduce the effectiveness of existing controls, or to otherwise have some impact on the confidentiality, availability, or integrity of the system. The solution is to ensure that a risk assessment step is included in evaluating system changes. For organizations that employ a configuration control board, the addition of a risk manager or security specialist to this body can facilitate the integration of risk assessment into configuration management.
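
Folding a risk assessment step into change review might look like the following sketch. The `ChangeRequest` fields and approval rules are illustrative assumptions, not a prescribed configuration management process.

```python
# Hedged sketch of a change-review gate that requires a completed risk
# assessment before a security-relevant change is approved. Field names
# and approval rules are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class ChangeRequest:
    description: str
    affects_security_controls: bool
    risk_assessed: bool = False      # has a risk manager reviewed it?
    residual_risk: str = "unknown"   # "low" / "medium" / "high" / "unknown"

def approve(change: ChangeRequest) -> bool:
    """A change touching security controls needs a completed risk
    assessment with acceptable residual risk before approval."""
    if change.affects_security_controls:
        return change.risk_assessed and change.residual_risk in ("low", "medium")
    return True

routine = ChangeRequest("update help text", affects_security_controls=False)
firewall = ChangeRequest("open inbound port", affects_security_controls=True)
print(approve(routine))    # True
print(approve(firewall))   # False until the risk assessment step is done
```

In practice, the "risk manager on the configuration control board" idea amounts to making sure someone is accountable for flipping that `risk_assessed` flag only after a genuine review.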

We’ve acknowledged that systems change, but unfortunately, threats can change as well. When new threats are identified, new controls may be necessary to bring risk to an acceptable level. This is why periodic risk assessments are important, even when a system changes infrequently. Risk assessment can provide an added benefit in this phase as a means to improve the effectiveness of policies, procedures, and training. When control deficiencies are identified, support personnel and users may need new training or guidance to minimize risk to the system.

Phase Out / Disposition. This phase deals with the replacement and/or disposal of a system. If a risk management plan was developed at project inception, it should have identified the risk to the confidentiality of residual data during this phase, along with the procedures or controls needed to reduce the risk of data theft or retrieval due to improper disposal. Given the dynamic nature of many systems, disposition planning is often overlooked. By identifying the risk early in the project, however, the controls can be documented in advance, ensuring proper disposition.

Taking the Next Step

One might ask, “Well, all these are great ideas, but where do I start?” Fortunately, there are many resources available. Solutions might include simple process descriptions, data gathering tools, or more sophisticated risk analysis and automation tools. Since no two organizations are the same, no model or solution is “one size fits all.” TraceSecurity recommends you become familiar with the available resources and, whether independently or with the assistance of a trusted provider, establish a risk management program that best meets your organization’s needs.

References and Resources:

ISO 31000 Risk Management Standards:  http://www.iso.org/iso/home/standards/iso31000.htm

FISMA: http://csrc.nist.gov/groups/SMA/fisma/index.html

NIST Risk Management Framework:  http://csrc.nist.gov/groups/SMA/fisma/framework.html

NIST SP 800-64, Security Considerations in the System Development Life Cycle: http://csrc.nist.gov/publications/nistpubs/800-64-Rev2/SP800-64-Revision2.pdf

NIST SP 800-30, Guide for Conducting Risk Assessments: http://csrc.nist.gov/publications/nistpubs/800-30-rev1/sp800_30_r1.pdf

FFIEC Information Security Risk Assessment:  http://ithandbook.ffiec.gov/it-booklets/information-security/information-security-risk-assessment.aspx

TraceSecurity Risk Assessment Support:  http://www.tracesecurity.com/services/risk-assessment.stml


Data Breaches Drive Information Security and Compliance into the C-Suite

Due to recent data breaches and exposure of consumer information, Congress is paying special attention to cyber security issues. As a result, regulators must ensure that the organizations they regulate are aware of cyber security issues at the very top of their organizations. To do so, regulators, such as the Federal Financial Institutions Examination Council (FFIEC), are incorporating cyber security risk assessments into their IT examination process and forcing institutions to think strategically about their information security and compliance programs.

Associations and analysts across regulated industries are urging leaders to prepare for more stringent oversight and governance of their information security programs and initiatives. According to a recent article from Bank Info Security, one banking institution executive, who asked not to be named, says regulators are already scheduling cybersecurity-related risk assessment exams to coincide with their regular IT exams, some of which are in the coming days.

Facing this increased scrutiny, organizations must be ready to prove they have strategic plans in place that make information security and compliance part of their everyday business, and that their leadership understands how emerging cyber-attacks could affect the business. With so many organizations outsourcing IT operations, it is important for leadership to remember that they are still responsible for the security of their enterprise and its customers.


Accounting for Internal Threats to Your Network

Bob Yowell, Delivery Director

Late last year, Forrester released a report, “Understand the State of Data Security and Privacy,” which examined the causes of data breaches. The report found that the leading cause of data breaches over the previous 12 months was internal threats, not external threats. This does not necessarily mean that your most damaging security threats come from within, but it does mean internal threats cannot be ignored.

It can be concluded that organizations spend the majority of their budgets protecting against external threats while often ignoring internal ones. Understandably, most IT professionals focus on external threats, and the majority of TraceSecurity customers have external penetration tests and vulnerability scans in place to help guard against data loss to outsiders who are especially interested in financial or customer data.

Internal threats to your network must be addressed too. According to the Forrester report, 36% of breaches over the previous 12 months resulted from inadvertent misuse of data by employees. The study goes on to state that 57% of employees polled were not aware of their organization’s current security policies.

Not only do your employees need to know your security policies; it is also important to minimize the damage that can be done by a rogue employee or a simple mistake. You need the ability to see what is going on inside your network, recognize patterns, and determine who has access to what. If a hacker bypasses your perimeter security, you need to know quickly how many employees have simplistic passwords that can be discovered with password-cracking tools, and what important information those accounts can access.
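
One way to find simplistic passwords before an attacker does is to compare stored password hashes against hashes of common passwords. The sketch below uses SHA-256 and a tiny wordlist purely for illustration; a real audit would target the actual hash format in use (such as NTLM) and a much larger dictionary.

```python
# Hedged sketch of a simple internal password audit: hash a list of
# common passwords and look for matches among stored hashes. The
# SHA-256 format and the wordlist are placeholder assumptions.

import hashlib

COMMON_PASSWORDS = ["password", "123456", "Winter2015", "letmein"]

def audit(stored_hashes):
    """Return {username: matched_password} for accounts using a common password."""
    lookup = {hashlib.sha256(p.encode()).hexdigest(): p for p in COMMON_PASSWORDS}
    return {user: lookup[h] for user, h in stored_hashes.items() if h in lookup}

stored = {
    "alice": hashlib.sha256(b"123456").hexdigest(),
    "bob": hashlib.sha256(b"c0rrect-h0rse-battery").hexdigest(),
}
print(audit(stored))   # flags alice only
```

Accounts flagged this way, combined with a map of what each account can access, give you a concrete picture of what a successful perimeter breach could reach.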

In TraceSecurity’s experience, once given access to an organization’s internal network, analysts succeed in compromising the system the majority of the time. This can happen a variety of ways. Most commonly, TraceSecurity finds improperly secured network shares, default passwords, and incorrectly patched systems. Internal penetration tests also expose flaws in the design and configuration of internal systems. While these flaws are not always exploitable, they can result in excessive traffic that consumes bandwidth.

When you are preparing your budgets for 2015, don’t forget to protect your internal networks too.
