Evaluate Cyber Liability Insurance in 3 Easy Steps

Brent Hobby, IT GRC Subject Matter Expert

We are often asked about the role that cyber liability insurance plays when an organization is developing a comprehensive information security program. We recommend thinking about cyber liability insurance in the context of an organization’s complete risk management program and as part of its overall insurance package, rather than as part of its information security and compliance management program.

Step One: A Risk Assessment

Because many general liability policies now exclude “cyber risk,” evaluating the need for additional coverage should begin with a risk assessment. Speak with prospective insurers to make sure your assessment leverages a framework they recommend. Depending on the size of the desired coverage, you may need to engage an approved third party to perform the assessment.

Step Two: Risk Remediation or Risk Transference

Once you have a valid assessment, work through the iterative process of weighing risk remediation against risk transference. Gather quotes from several insurers and repeat the review. When complete, you will have a business-appropriate cyber risk extension to your insurance coverage.
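
The remediation-versus-transference comparison can be sketched with the standard annualized loss expectancy (ALE) formula. The dollar figures below are purely illustrative assumptions, not real quotes:

```python
# Hedged sketch: weighing risk transference (insurance) against remediation
# using annualized loss expectancy. All numbers are illustrative assumptions.
sle = 250_000      # single loss expectancy: estimated cost of one breach
aro = 0.05         # annual rate of occurrence: expected breaches per year
ale = sle * aro    # annualized loss expectancy = SLE * ARO
premium = 10_000   # hypothetical quoted annual premium

# A premium well below the ALE suggests transference is worth considering;
# otherwise the money may be better spent on remediation.
favor_transference = premium < ale
print(ale, favor_transference)
```

Repeating this calculation for each quote, and for each remediation option that would lower the SLE or ARO, is what makes the review iterative.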

Step Three: Insure Based on Your Unique Business Need

Cyber liability insurance is relatively new, very flexible and costs can vary widely. Many organizations choose not to insure, others purchase coverage for specific breach response items, and some use it as a high-deductible umbrella coverage. Whichever your organization chooses, starting with a risk assessment will allow the business to drive the decision.

Posted in Governance, IT Risk Management and Assessments

All You Need to Know About Shellshock

Madeline Domma, Product Specialist

How Shellshock Stands Up to the Hype

Clever name aside, many industry experts rank Shellshock, publicly disclosed on September 24, 2014, as potentially the worst vulnerability to hit the Internet. NIST rates it a 10 out of 10 for severity, the US Department of Homeland Security has identified the vulnerability as “Critical”, and it is estimated to affect nearly half of all websites.

Shellshock has proven to be an even worse threat than the heavily reported Heartbleed vulnerability that made its debut earlier this year. Unlike Heartbleed, the Shellshock command sequence is alarmingly simple to execute remotely, yet it can cause virtually incalculable damage to affected systems or networks of systems. The vulnerability, nicknamed the “Bash Bug”, enables even the least skilled of hackers to exploit the extremely popular command line interpreter (or shell) GNU Bash. Commonly referred to as “Bash”, the utility was originally developed for Unix systems about 25 years ago and was later adopted by Linux and OS X. Shellshock exploits a weakness in the way Bash parses environment variables: when a variable contains a function definition, vulnerable versions of Bash execute any commands appended after that definition, letting attackers inject malicious code directly onto exposed systems. Furthermore, Bash does not require authentication to execute these commands. The exposure affects a staggering number of websites because Bash operates in conjunction with CGI scripts on several different types of web servers, including the commonly used Apache servers. Although patches and updates were widely available soon after the vulnerability was discovered, Shellshock remains a threat to networks everywhere for quite a few reasons.
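
The widely circulated test for the original issue (CVE-2014-6271) is a one-line Bash command that stores a function definition plus a trailing command in an environment variable. A minimal sketch of that check, wrapped in Python so the result can be inspected programmatically (the variable name and messages are our own):

```python
import subprocess

def check_shellshock(bash_path="bash"):
    # CVE-2014-6271 test: a function definition stored in an environment
    # variable should NOT cause the trailing command to run at shell startup.
    env = {
        "PATH": "/usr/bin:/bin",
        "testvar": "() { :; }; echo VULNERABLE",
    }
    try:
        out = subprocess.run(
            [bash_path, "-c", "echo probe"],
            env=env, capture_output=True, text=True, timeout=10,
        ).stdout
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return "bash not found"
    return "vulnerable" if "VULNERABLE" in out else "patched or not vulnerable"

status = check_shellshock()
print(status)
```

On a patched system the shell prints only `probe`; on a vulnerable one, the injected `echo VULNERABLE` runs first, before the command Bash was actually asked to execute.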

Breadth and Scope of Shellshock Implications

Worldwide, Shellshock conversations have quieted to a dull roar, even though the vulnerability remains an ongoing threat to networks. By design, the sequence is simple to inject into an exploitable operating system. Determining whether a system has already been exploited can also be difficult, since the attack requires so few Bash commands. And the problems do not end with verifying that a system has not been exploited: the full extent of the harm Shellshock can cause is yet to be determined, and experts are still unsure of its full potential. A look at the full scale of this issue, both today and into the future, brings with it a few main points that must be remembered:

  1. The Shellshock vulnerability affects more than Unix and Linux based systems. Android devices, OS X devices, a majority of DSL/cable routers, security cameras, standalone webcams, and other IoT (“Internet of Things”) devices that could get overlooked (such as “smart” TVs or appliances) most likely run an embedded version of Bash. Many of these devices will therefore need to be updated and patched after the systems essential to business operations are secured. Most individuals, even well-informed ones, may not know which of the devices they maintain use Bash or which version of Bash those devices are currently running.
  2. Speaking of Bash versions, Shellshock affects all versions of Bash through 4.3 – meaning twenty-five years of Bash releases are exploitable.
  3. Since the vulnerability operates as a code injection attack, the damage is compounded because Bash keeps executing commands after the malicious code has been injected – exactly as the utility was designed to do. Hijacked systems can be affected in different ways depending on the commands attackers execute after gaining access. Once a system has been compromised, hackers can execute any commands they choose, and, historically, hackers have proven to be nothing if not creative.
  4. The fundamental design of the command sequence means Shellshock will remain an issue for the foreseeable future. A system is considered vulnerable if an outdated version of Bash is installed and Bash can be accessed either directly from the web or via another web-accessible service running on the system. Until vulnerable systems are either taken down completely or patched and secured, the vulnerability remains a threat to networks everywhere.
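
The CGI connection mentioned above is what turns a local Bash bug into a remote one: web servers copy request headers such as User-Agent into environment variables before invoking a CGI script, which is exactly where vulnerable Bash versions mis-parse function definitions. A sketch of how such a payload rides an ordinary HTTP request (the URL, host, and injected command are illustrative only):

```python
# Illustration only, not a working exploit: a Shellshock payload embedded in
# an HTTP header. A Bash-backed CGI server would expose this header to Bash
# as the HTTP_USER_AGENT environment variable.
payload = "() { :; }; /bin/cat /etc/passwd"   # function def + injected command
request = (
    "GET /cgi-bin/status HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "User-Agent: " + payload + "\r\n"
    "\r\n"
)
print(request)
```

No authentication is involved anywhere in this exchange, which is why unauthenticated remote exploitation was possible.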

Best Practices to Proactively Guard Your Information Systems

Shellshock may appear as cataclysmic as a threat can be. Nevertheless, there are several actions that can be taken to guard systems against it. Because Shellshock is such a wide-reaching threat, it has drawn proportionate media and expert attention, prompting network administrators and security personnel to act quickly to secure exploitable systems. Determining whether a system is affected is a straightforward process of simple commands, and once vulnerable systems are identified, patches, updates, and signatures are readily available for all platforms. Apple reported that most OS X and iOS users were not at risk, despite running an exploitable version of Bash, because other controls are in place on OS X by default; Android devices were reported to be protected for similar reasons. The good news continues: while Windows has historically had its share of serious weaknesses, Windows systems are not directly affected because Bash is not a native Windows utility. They become exposed only when they share a network with, or are serviced by, systems or VMs running exploitable operating systems.

TraceSecurity suggests a number of actions for those who have systems on their networks that are susceptible to the Shellshock vulnerability:

  1. Most importantly, all firmware, operating systems, Bash versions, and IPS signatures for all exposed devices should be updated immediately.
  2. Management and IT personnel should stay informed on the Shellshock issue, as the full scope of this vulnerability is yet to be determined and it will remain a serious threat well after Shellshock is no longer the topic of conversation.
  3. Maintaining a working knowledge of the organization’s IT environment is essential to a secure network. For example, knowing that websites hosted within the network use CGI confirms that the host systems are exposed. Conversely, if none of the company websites use CGI, disabling CGI functionality on network devices is a simple action that protects systems from attacks that exploit it.
  4. Closely monitoring network activity at all times can reveal an attacker who enters the IT environment, since an intrusion inevitably causes inconsistencies in network traffic.
  5. Firewalls, IDS, IPS, and other controls in place to compensate for open ports in system applications must be verified on a regular basis.
  6. TraceCSO customers with contracts that include network scanning functionality can run a dedicated network scan that will identify all network devices vulnerable to Shellshock. This scan can serve as the first step towards comprehensively patching all affected systems and quickly securing your network against Shellshock.

As always, TraceSecurity is proud to serve as a resource to those who have questions or concerns about how to protect IT environments from this vulnerability as well as any other potential threats. If you have any questions please contact your Delivery Director or your Business Development Manager.

Posted in Network Protection, Vulnerability Management

Tools for Your Vulnerability Management Program

Bobby Methvien, Information Security Analyst and Security Services Manager

The largest threats to complex networks are those unknown to IT personnel. As a first line of defense against system and security-related vulnerabilities, and as part of an organization’s ongoing vulnerability management program, IT must conduct assessments of its information systems. The goal of a vulnerability management program is to reduce risk by identifying and resolving vulnerabilities in your IT systems and your internal and external networks.

Bring IT System Vulnerabilities into View

Vulnerability scanners let IT personnel check many remote systems against thousands of vulnerability signatures in a short period of time. The results of a scan enable IT to coordinate a resolution for any vulnerabilities identified. Over time, as IT resolves identified vulnerabilities, additional scans will turn up only a handful of new ones. This is the point where IT personnel become confident in the security of the network and need to put it to the test.
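
Coordinating remediation usually means triaging scan findings by severity. A minimal sketch, using made-up hosts and CVSS base scores as the prioritization key:

```python
# Hypothetical scan output: triage findings by CVSS base score so the most
# severe vulnerabilities are remediated first. Hosts and scores are made up.
findings = [
    {"host": "10.0.0.5", "cve": "CVE-2014-6271", "cvss": 10.0},
    {"host": "10.0.0.9", "cve": "CVE-2013-2566", "cvss": 5.9},
    {"host": "10.0.0.7", "cve": "CVE-2012-2110", "cvss": 7.5},
]
triage = sorted(findings, key=lambda f: f["cvss"], reverse=True)
for f in triage:
    print(f"{f['host']}  {f['cve']}  {f['cvss']}")
```

Real programs weigh asset criticality and exploit availability alongside the raw score, but severity-first ordering is the common starting point.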

Pen Test Your Internal and External Network  

Once IT personnel have significantly reduced the number of vulnerabilities identified through scans, a penetration test should be performed. The penetration test acts as an additional control and is used to identify system and security-related risks that affect an organization’s internal and external networks. Penetration tests work to compromise an organization’s hosts, web applications, network, or sensitive data.

Penetration tests have short and long-term benefits. In the short term, organizations are able to take action against findings in the assessment, and over the long term, organizations are able to update their processes so that similar risks do not recur.

Penetration tests should be performed by someone who is not responsible for the daily management of the network and its information systems. Those who manage a system day to day tend to rationalize why it was configured a particular way. We often hear IT personnel say, “I was told it has to be this way, so that’s the way I configured it.” A common example is a software vendor requiring that all users run as local administrators; to satisfy it, IT personnel make a key information security mistake and add the “Domain Users” group to the local “Administrators” group.

Conclusion

Vulnerability scanning and penetration testing are both services used to identify risks that may affect an organization’s information systems from its internal and external networks. In addition, these services help organizations meet requirements from the FFIEC, the PCI DSS, and other regulatory authorities.

Posted in Network Protection, Vulnerability Management

Integrating Risk Assessment into Lifecycle Management

Jerry Beasley, CISM, Information Security Analyst and Security Services Manager

Perceptions Today

Working as an information security consultant, I visit many diverse organizations, ranging from government agencies and financial institutions to private corporations, but they all have things in common. For example, they all manage information systems, and they are all subject to regulatory requirements and/or oversight. Given these similarities, the subject of risk assessment often arises.

During one such visit, an executive described the implementation of a new enterprise information system. He was visibly proud of the progress to date, and the system was almost online. As the conversation concluded, the executive added, as an afterthought, “Once we get online, I guess we’ll need to talk about getting a risk assessment.”

The old “smoke test” metaphor immediately came to mind. Engineers sometimes use this term when building a new electronic prototype: the builder flips the switch and hopes the device doesn’t go “up in smoke.” Applied to information security, this approach can be disastrous, both in terms of business impact and legal liability.

Don’t be too surprised at the executive’s thought process. This is a common misconception about risk assessment, perpetuated in some cases by the idea that risk assessment is simply a regulatory requirement. In reality, the most successful enterprises integrate risk assessment, and more broadly risk management, into their lifecycle processes. The drawback of the alternative should be obvious: if a risk assessment is done after a system is developed and tested, many changes may be required after the fact to integrate the required security controls.

Within this article, I’d like to discuss how risk management can be integrated into lifecycle management. To get started, we’ll take a quick look at what’s involved in these processes we call risk management and lifecycle management.

Clearing Up the Confusion

With a simple internet search, you will find many definitions and contexts of risk management. By context, I mean that risk management processes can focus on different aspects of risk in an organization, such as operational risk, financial risk, or as is TraceSecurity’s focus, information security risk.

Risk Management

One definition of risk management states: “Risk Management is the identification, assessment, and prioritization of risks as the effect of uncertainty on objectives followed by coordinated and economical application of resources to minimize, monitor, and control the probability and/or impact of unfortunate events or to maximize the realization of opportunities.” If that sounds a bit esoteric to you, let me provide a simpler definition.

To me, risk management is about anticipating what bad things might happen to your assets, then mitigating the impact of those bad things, or reducing the likelihood that those bad things will happen. In the information security context, we are primarily concerned with assuring the confidentiality, integrity, and availability of sensitive, personal and business data. We’ll further address the process of doing this later.

Risk Assessment

You will often hear the term risk assessment used interchangeably with risk management. However, risk assessment should be thought of as one piece of risk management, albeit a very important one. Risk assessment is the analysis that informs risk management decisions. More specifically, it is the process in which an organization identifies its information and technology assets and determines the negative impact that threats pose to specific assets, what is currently being done (current controls) to mitigate the impact or likelihood of an occurrence, and what else could be done (prescribed controls) to mitigate them further.
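
The impact-and-likelihood analysis above is often reduced to a simple qualitative score so that findings can be compared and prioritized. A minimal sketch; the assets, the 1-to-5 scales, and the scores are assumptions for illustration:

```python
# Minimal qualitative risk-scoring sketch. Assets, scales (1-5), and scores
# are illustrative assumptions; risk = likelihood x impact.
assets = [
    {"asset": "member database", "likelihood": 4, "impact": 5},
    {"asset": "public website",  "likelihood": 3, "impact": 3},
    {"asset": "internal wiki",   "likelihood": 2, "impact": 2},
]
for a in assets:
    a["risk"] = a["likelihood"] * a["impact"]

# Highest-risk assets get controls (and budget) first.
prioritized = sorted(assets, key=lambda a: a["risk"], reverse=True)
for a in prioritized:
    print(f"{a['asset']}: risk {a['risk']}")
```

Prescribed controls are then evaluated by how much they would lower either factor for the top-ranked assets.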

Risk management also includes the prioritization and application of prescribed controls, monitoring the effectiveness of these controls, and ensuring that additional risk assessment is performed as the assets and the threat landscape change. It’s important to note that there are numerous standards and models for risk management and assessment. Some of the more common ones include the National Institute of Standards and Technology (NIST) Risk Management Framework (RMF), which supports the Federal Information Security Management Act (FISMA), and the International Organization for Standardization (ISO) 31000 series of risk management standards. An illustration of the NIST RMF is available on the NIST web site and also duplicated below.

[Figure: NIST Risk Management Framework]

Lifecycle Management

“Lifecycle management” is another term that is used in many contexts, but in general applies to managing the development, acquisition, implementation, use, and disposition of an entity.  In information processing, it is often related to the Software/System Development Life Cycle (SDLC) or sometimes the Product Lifecycle (PLC).  In these two examples, the focus is on a particular system or product, but as we will see, lifecycle management often has applications beyond the confines of a “system.”  Depending on the model you follow, lifecycle management generally includes the following phases or activities.

  • Requirements definition / specifications
  • Development / acquisition / testing
  • Implementation / configuration
  • Operations / maintenance
  • Phase out / disposition

Risk Management’s Role in Lifecycle Management

Implementing a system involves more than technology; it also involves procedures, training, and physical controls. The definition of a system can include these controls, since the system may not be effective without them. For example, without physical controls, the technology may be damaged, lost, or stolen. Without personnel controls and training, a system can be misconfigured or misused. With these in mind, let’s think about how risk management supports the lifecycle management process in meeting information security goals.

Requirements and Specifications Development. This is likely to be the most critical phase in any lifecycle management process as it provides the roadmap to either develop or acquire a system that meets the business requirements of the organization. Inaccurate or ill-conceived requirements at this phase can translate into costly changes later in the project. It is equally important for risk management to be established at this point.

Key activities that should occur during this phase include establishing a process and responsibilities for risk management, and documenting the initial known risks. At a minimum, the project managers should identify, document, and prioritize risks to the system. This process should include identifying assets to be protected and assigning their criticality in terms of confidentiality, integrity, and availability; determining the threats and resulting risk to those assets, as well as the existing or planned controls to reduce that risk. Prioritization allows the project managers to focus resources on areas with the highest risk. When necessary, the requirements and specifications should be modified to include new requirements for additional security controls identified during this phase.

System Development, Acquisition and Testing. This phase translates the requirements into solutions, so accurate classification of asset criticality and planned controls are critical to successful development or acquisition.  For example, if the system has a requirement to transmit data across a public network and the criticality rating for the confidentiality of that data is high, then some control, such as application encryption or a virtual private network, may become part of the solution.  As the system is developed, testing of each control is necessary to ensure that the controls perform as designed.

Implementation and Configuration. During this phase, the system is implemented and configured in the form that it is intended to operate. Testing is equally important in this phase, especially to confirm that the designed security controls are operational in the integrated environment. The system owner will want to ensure that the prescribed controls, including any physical or procedural controls, are in place prior to the system going live.

Operations and Maintenance. Very few systems are static, so changes to a system are expected.  Most organizations acknowledge that a means to control the system configuration is necessary.  A configuration management process helps to ensure that changes to the system hardware, software, or supporting processes are reviewed and approved prior to implementation. The piece that is sometimes missed is the resulting change to the risk posture of the system.

Any change to a system has the potential to reduce the effectiveness of existing controls, or to otherwise have some impact on the confidentiality, availability, or integrity of the system. The solution is to ensure that a risk assessment step is included in evaluating system changes. For organizations that employ a configuration control board, the addition of a risk manager or security specialist to this body can facilitate the integration of risk assessment into configuration management.

We’ve acknowledged that systems change, but unfortunately, threats can change as well. When new threats are identified, new controls may be necessary to bring risk to an acceptable level. This is why periodic risk assessments are important, even when a system changes infrequently. Risk assessment can provide an added benefit in this phase as a means to improve the effectiveness of policies, procedures, and training. When control deficiencies are identified, support personnel and users may need new training or guidance to minimize risk to the system.

Phase Out / Disposition. This phase deals with the replacement and/or disposal of a system. If a risk management plan was developed at project inception, it will have identified the risk to the confidentiality of residual data during this phase, along with the procedures or controls that reduce the risk of data theft or retrieval due to improper disposal. Given the dynamic nature of many systems, disposition planning is often overlooked; by identifying the risk early in the project, the controls can be documented in advance, ensuring proper disposition.

Taking the Next Step

One might ask, “Well, all these are great ideas, but where do I start?” Fortunately, there are many resources available. Solutions might include simple process descriptions, data gathering tools, or more sophisticated risk analysis and automation tools.  Since no two organizations are the same, no model or solution is “one size fits all”. TraceSecurity recommends you become familiar with the available resources and whether independently, or with the assistance of a trusted provider, establish a risk management program that best meets your organization’s needs.

References and Resources:

ISO 31000 Risk Management Standards:  http://www.iso.org/iso/home/standards/iso31000.htm

FISMA: http://csrc.nist.gov/groups/SMA/fisma/index.html

NIST Risk Management Framework:  http://csrc.nist.gov/groups/SMA/fisma/framework.html

NIST Special Publication (SP) 800-64, Security Considerations in the System Development Life Cycle:

http://csrc.nist.gov/publications/nistpubs/800-64-Rev2/SP800-64-Revision2.pdf

NIST SP 800-30, Guide for Conducting Risk Assessments: http://csrc.nist.gov/publications/nistpubs/800-30-rev1/sp800_30_r1.pdf

FFIEC Information Security Risk Assessment:  http://ithandbook.ffiec.gov/it-booklets/information-security/information-security-risk-assessment.aspx

TraceSecurity Risk Assessment Support:  http://www.tracesecurity.com/services/risk-assessment.stml

Posted in IT Risk Management and Assessments

Data Breaches Drive Information Security and Compliance into the C-Suite

Due to recent data breaches and exposure of consumer information, Congress is paying special attention to cyber security issues. As a result, regulators must ensure that the organizations they regulate are aware of cyber security issues at the very top of their organizations. To do so, regulators, such as the Federal Financial Institutions Examination Council (FFIEC), are incorporating cyber security risk assessments into their IT examination process and forcing institutions to think strategically about their information security and compliance programs.

Associations and analysts across regulated industries are urging leaders to prepare for more stringent oversight and governance of their information security programs and initiatives. According to a recent article from Bank Info Security, one banking institution executive, who asked not to be named, says regulators are already setting times for cybersecurity-related risk assessment exams to coincide with their regular IT exams, some of which are in the coming days.

Facing this increased scrutiny, organizations must be ready to prove they have strategic plans in place that ensure information security and compliance is part of their everyday business and that their leadership understands how emerging cyber-attacks could affect their business. With so many organizations outsourcing IT operations, it is important for leadership to remember that they are still responsible for the security of their enterprise and its customers.

Posted in Compliance and Regulatory Change Management, Governance

Accounting for Internal Threats to Your Network

Bob Yowell, Delivery Director

Late last year, Forrester released a report, “Understand the State of Data Security and Privacy,” which examined the causes of data breaches. The report found that the leading cause of breaches over the previous 12 months was internal threats, not external ones. That does not necessarily mean your most severe threats come from within, but it does mean internal risk cannot be ignored.

It can be concluded that organizations spend the majority of their budgets protecting against external threats while often ignoring internal ones, and understandably so: the majority of IT professionals focus on external threats. The majority of TraceSecurity customers have external penetration tests and vulnerability scans in place to help guard against data loss to outsiders who are especially interested in financial or customer data.

Internal threats to your network must be addressed too. According to the Forrester report, 36% of breaches over the previous 12 months resulted from inadvertent misuse of data by employees, and 57% of employees polled were not aware of their organization’s current security policies.

Not only do your employees need to know your security policies, but it is also important to minimize the damage that can be done by a rogue employee or a simple mistake. You need the ability to see what is going on inside your network, recognize patterns, and determine who has access to what. If a hacker bypasses your perimeter security, you need to know quickly how many employees have simplistic passwords that could be discovered by password-cracking programs and what important information those accounts can access.
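
Finding those simplistic passwords is the same check a password cracker automates: comparing credentials against a list of common passwords. A hedged sketch with made-up accounts (real audits compare hashes rather than plaintext):

```python
# Hedged sketch: flag accounts whose passwords appear in a common-password
# list. Accounts, passwords, and the list itself are made up; real audits
# work against password hashes, not plaintext.
common_passwords = {"password", "123456", "letmein", "Fall2014!"}
accounts = {"jdoe": "123456", "asmith": "f7#kQ9!xL2", "bjones": "letmein"}

weak = sorted(user for user, pw in accounts.items() if pw in common_passwords)
print(weak)
```

Accounts on the resulting list are the ones to force-reset before an attacker runs the equivalent check for real.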

In TraceSecurity’s experience, once given access to an organization’s internal network, analysts succeed in compromising systems the majority of the time. This can happen in a variety of ways; most commonly, TraceSecurity finds improperly secured network shares, default passwords, and incorrectly patched systems. Internal penetration tests also expose flaws in the design and configuration of internal systems. Such flaws are not always exploitable, but they can result in excessive traffic that consumes bandwidth.

When you are preparing your budgets for 2015, don’t forget to protect your internal networks too.

Posted in Network Protection, Vulnerability Management

Meet Compliance Challenges with TraceCSO

Mark Thorburn, Security Services Manager & Kayla Campbell, Delivery Director

Meeting compliance requirements is a challenge for many credit unions.  Not only is it an overwhelming task to sift through compliance documentation, but it is also time-consuming to keep up with the credit union’s compliance posture on a consistent basis.  Given these two hindrances, it is very common for credit unions to put these compliance challenges on the back-burner until they are no longer possible to ignore.   Now, credit unions are faced with the concept of GRC – governance, risk, and compliance – which has thrown compliance even further into the limelight.

TraceSecurity identified the need for a solution that could aid any organization, including credit unions, in meeting its GRC demands, yet be quickly and easily deployed and managed.  This solution is TraceCSO.  TraceCSO is a GRC tool aimed at helping organizations manage their risk-based information security programs more effectively and efficiently.

From a compliance perspective, the introduction of authority documents within TraceCSO provides organizations with citations from hundreds of governing bodies, stored in a centralized repository, eliminating the painstaking task of sifting through compliance documentation to find the actual citation text.  It also allows organizations to assign citations to owners, which provides accountability, and to survey citation owners and collect their answers and supporting documentation.  This information can be reported in multiple ways, including dashboard graphs, Excel spreadsheets, and formal reports with both executive-level and detailed sections.

TraceCSO also makes it easier for organizations to meet compliance challenges on an ongoing basis by introducing process, policy, training, and vendor functionalities that support and automate what may have been manual processes before.

Given that TraceCSO is a GRC solution, an organization’s compliance challenges are only one piece of the puzzle.  In TraceCSO, organizations can view their citations not only in compliance assessments but also in risk assessments and audits, providing a view of the organization’s compliance posture alongside its overall risk posture.  This provides a holistic view of the organization’s security stance at any given point in time.

Overall, TraceCSO is an extremely effective solution for measuring an organization’s compliance status against particular governing bodies and/or industry best practice standards.  When compliance is managed alongside risk, organizations get a holistic view of their posture that, before, may have been spread across multiple solutions or not tracked at all.

Posted in Compliance and Regulatory Change Management, Compliance Audits and Assessments

Eyes on the Industry: Heartbleed and its Impact

Josh Stone, Director of Product Management and Information Security Expert

There’s been much in the news about the recent vulnerability in OpenSSL – the so-called “Heartbleed” bug. This is a landmark vulnerability and deserves the publicity and industry response. TraceSecurity has received its fair share of inquiries about this vulnerability, so here’s our perspective on the bug and its future in the security space.

Does it have a future, you might ask?  It's been patched, after all, and recent indicators suggest that the public exposure has been substantially eliminated. Undoubtedly, there are plenty of sites still vulnerable, but most significant sites are now patched. So, shouldn't the issue be largely a thing of the past?

There’s an important aspect of the vulnerability life-cycle to consider. Bugs are published, immediately rendering many existing installations vulnerable. But, even after the rush to patch all systems, these bugs live on. Other classic vulnerabilities still show up surprisingly often. For example, I recently polled our security analysts and found that MS08-067 – a six-year-old vulnerability – still shows up in about one out of every three internal penetration tests. You may remember it: it was exploited by the Conficker worm.

I predict the same future for the Heartbleed vulnerability. Public Internet exposure will be history quite soon, but the real ramifications of Heartbleed will be felt for years. Most organizations will have a few systems here and there that will remain vulnerable and be exploitable on the internal network.

And Heartbleed is still a big deal. I know this because I recently exploited it in an internal penetration test. The type of information that you get from this vulnerability can be extremely valuable. For example, one can obtain session tokens, usernames and passwords, or internal application data. The vulnerability surrenders new information over time, so prolonged exploitation can yield volumes of very useful data.

We encourage all of our customers to scan for this vulnerability internally and to patch or otherwise compensate for vulnerable hosts. It’s remarkably easy to make use of the information extracted with Heartbleed, and this vulnerability could play a significant role in a future security incident near you.


Posted in Network Protection, Vulnerability Management | Leave a comment

The Heartbleed Bug

As many of you may have heard, a new vulnerability was recently discovered in the OpenSSL cryptographic software library. This vulnerability, known as the Heartbleed Bug, could allow information that is normally protected by SSL/TLS encryption to be stolen.  In simple terms, this means that the majority of websites on the Internet are at risk of leaking confidential information, even if the connection is via an encrypted session (HTTPS).

The following is an excerpt from heartbleed.com, which provides complete details about the vulnerability:

What is leaked: Primary key material and how to recover?

These are the crown jewels, the encryption keys themselves. Leaked secret keys allow the attacker to decrypt any past and future traffic to the protected services and to impersonate the service at will. Any protection given by the encryption and the signatures in the X.509 certificates can be bypassed. Recovery from this leak requires patching the vulnerability, revocation of the compromised keys, and reissuing and redistributing new keys. Even doing all this will still leave any traffic intercepted by the attacker in the past vulnerable to decryption. All this has to be done by the owners of the services.

What is leaked: Secondary key material and how to recover?

These are, for example, the user credentials (user names and passwords) used in the vulnerable services. Recovery from these leaks requires owners of the service first to restore trust to the service according to the steps described above. After this, users can start changing their passwords and possibly encryption keys according to the instructions from the owners of the services that have been compromised. All session keys and session cookies should be invalidated and considered compromised.

What is leaked: Protected content and how to recover?

This is the actual content handled by the vulnerable services. It may be personal or financial details, private communication such as emails or instant messages, documents, or anything seen worth protecting by encryption. Only owners of the services will be able to estimate the likelihood of what has been leaked, and they should notify their users accordingly. The most important thing is to restore trust to the primary and secondary key material as described above. Only this enables safe use of the compromised services in the future.

What is leaked: collateral and how to recover?

Leaked collateral are other details that have been exposed to the attacker in the leaked memory content. These may contain technical details such as memory addresses and security measures such as canaries used to protect against overflow attacks. These have only contemporary value and will lose their value to the attacker when OpenSSL has been upgraded to a fixed version.

According to the documentation regarding this vulnerability, the following versions of OpenSSL are at risk:

  • OpenSSL 1.0.1 through 1.0.1f (inclusive) are vulnerable
  • OpenSSL 1.0.1g is NOT vulnerable
  • OpenSSL 1.0.0 branch is NOT vulnerable
  • OpenSSL 0.9.8 branch is NOT vulnerable

The bug was introduced to OpenSSL in December 2011 and has been out in the wild since the release of OpenSSL 1.0.1 on 14 March 2012. OpenSSL 1.0.1g, released on 7 April 2014, fixes the bug.
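As a quick sanity check against the version list above, the sketch below (the `is_heartbleed_vulnerable` helper is a hypothetical illustration that compares only the 1.0.1 patch letter, not a complete version parser) flags lowercase OpenSSL version strings in the vulnerable range:

```python
def is_heartbleed_vulnerable(version: str) -> bool:
    """True if a lowercase OpenSSL version string (e.g. "1.0.1f")
    falls in the vulnerable 1.0.1 through 1.0.1f range."""
    # Split "1.0.1f" into the numeric base ("1.0.1") and patch letter ("f").
    base = version.rstrip("abcdefghijklmnopqrstuvwxyz")
    letter = version[len(base):]
    if base != "1.0.1":
        return False      # 1.0.0, 0.9.8, and later branches are not affected
    return letter <= "f"  # plain 1.0.1 through 1.0.1f are vulnerable
```

You might run a check like this against the version reported by `openssl version` on each host, remembering that some vendors backport fixes without changing the version string, so a scan remains the more reliable test.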

If your web server is running a version of OpenSSL that is vulnerable, we strongly encourage you to upgrade to a fixed version of OpenSSL today and confirm that your web server's SSL is using the upgraded version.  In addition, other encryption products that use the OpenSSL library could also be at risk and should be addressed.

For complete details about this vulnerability, visit heartbleed.com.

Posted in Network Protection, Vulnerability Management | Leave a comment

Identity Theft Armageddon is Coming

Jim Stickley, Chief Technology Officer

Recently, there has been a lot of press regarding the Target credit card breach, and this has led to many questions about just how vulnerable the entire credit card payment system really is. Now, in case you are unaware of how Target was breached, the basic facts are these: hackers were able to load malware onto the Point of Sale (POS) servers on Target’s network. This malware was specifically designed to monitor the payment processing software loaded on the devices and then capture the card data as it was being processed in plain text in the memory of the server.

How the malware actually ended up on the servers is still up for debate. It appears that a third-party vendor may have been compromised, and that through this vendor the hackers were able to gain access to the Target network. Other security experts say that it’s highly unlikely that a third-party vendor would have had access to the POS servers, and therefore that this cannot be how the attack started. While I am interested to read the final report that gives the actual steps the hackers took, the simple fact is that this type of attack has brought sophisticated malware into mainstream hacking and marks the beginning of a whole new era of targeted malware attacks.

While malware that is designed to target a specific type of application is not new, for the most part it has been used to target the average online banking consumer. In most cases, the malware would end up on a person’s PC and simply wait until they logged into their online banking account. Then, once the person logged in, the malware would begin passing commands to the online account on behalf of the user, without the user’s knowledge. When this attack first came out, it was extremely successful in automatically transferring funds out of unsuspecting victims’ bank accounts.


Of course, financial institutions fought back and implemented additional layers of security to help reduce the risk of these types of attacks. For example, when a person attempts to transfer funds out of their account, an additional security challenge is presented in an attempt to thwart automated malware. And, while hackers do still come up with ingenious ways to bypass these additional layers of security, overall the success rate of these targeted malware attacks has declined.

However, something happened a few years ago that set in motion a new trend in hacking. An extremely specific type of malware was created and ended up on Iranian servers that just so happened to be involved in the country’s nuclear program. The malware was given the name Stuxnet. What made this malware so special was that its whole purpose was to wreak havoc on Iran’s nuclear program.  There have been numerous white papers and even some fantastic YouTube videos released that show exactly how the malware worked. The premise is simple: as engineers entered data into a piece of software, the malware manipulated that data. As far as the engineers were concerned, everything looked the way it was supposed to. In reality, the numbers entered were way out of whack and, when executed, caused devastating consequences.

So how do Iran, Target and a bunch of hacked bank accounts come together to change the entire future of hacking? Hackers have now been given the blueprints to create absolute identity theft Armageddon. Sound a little overblown? Well, maybe, but I will let you be the judge.


Identity thieves have one primary purpose, and that is to make money. The problem for these criminals is that their overall success is often limited to a very short window of time. Take, for example, the Target breach. Sure, an estimated 70 million card numbers were stolen, but within days, Target had sent these numbers to every financial institution in the United States. The card numbers were deactivated, and new cards were issued. Were some of the numbers used before they were deactivated? Absolutely, but the reality is that more money was lost in the cost to financial institutions of having to re-issue new cards than was actually stolen via the cards themselves. So, while the attack itself was both sophisticated and extremely successful, the overall monetary value of the attack was relatively limited.

Now, you have a large number of cyber criminals who have been closely watching this story unfold. They have seen just how successful the attack itself was but, at the same time, realize that even though it was easy for the malware to steal all this information, in the end the payoff was limited due to rapid deactivation. The Target breach made it clear that targeting a large organization with malware designed specifically to attack a particular application is a far faster way to gain access to millions of records than attacking home users and gaining access to one bank account at a time. Remember, malware targeting the home user’s online banking is facing more and more challenges.

So, if you’re a cyber criminal, you have to be thinking to yourself: why are we wasting our time stealing credit card numbers that can simply be deactivated when we can just as easily go after social security numbers? Think about it. When it comes right down to it, each of our identities is nothing more than a simple social security number. Need a loan? You will provide your social. Want a credit card? Again, it’s the social. Dealing with the IRS? Yep, you are nothing more than 9 numeric digits and a couple of dashes. Now, add in the fact that unlike a credit card, you’re stuck with your social security number for life. If your social security number gets stolen, all you can do is set up a fraud watch and hope that’s enough to keep you protected.

Give a man a fish and he eats for a day. Give an identity thief a credit card, and he steals for a day. Give him the social security number of an unsuspecting person, and he can rip that person off numerous times for life. This is because even if the person finds out they have become a victim of identity theft and starts to clean everything up, the criminal can simply put that social security number aside. Then, in five or six years, they can come back and start all over, because the person’s name will probably still be the same and, yes, the social security number will also still be the same. Think I am making this up? Reach out to your local social security office and ask if you can change your number. Unless you just happened to join the witness protection program, it’s not happening.


Sure, financial institutions, health care facilities and accountants are all going to be primary targets, but don’t forget about all those general businesses out there that allow people to set up credit cards or apply for loans. Car dealerships and department stores are great examples of organizations that are just waiting for hackers to start their attack. The list, of course, is endless, and as you read this, I am sure you can think of numerous other organizations that handle social security numbers. The point is that criminals have an unlimited supply of potential targets and can create targeted malware to take each of these companies down one at a time.

As these breaches start happening and organizations are forced to disclose that social security numbers have been stolen, they will do what it takes to defuse the PR nightmare. In most cases, they will offer six months to a year of a free credit watch service. This will give the average person a false sense of security, and people will move on with their lives. Unfortunately, when that year is up, most people will not have the money to pay to keep the credit watch service active, so the service will be discontinued. And what happened to the stolen social security numbers? It’s not like the criminals who stole them just threw them out. In many cases, they will sell them to other criminals who are willing to wait to use them, with the understanding that it’s not a matter of if these numbers will be useful but only a matter of when.

I have spent the past 25 years working in the cyber crime field and have seen many types of attacks come and go. The difference between those attacks of the past and what is coming in the future is that there is little to nothing the average person will be able to do to defend themselves, and this has been proven over the past several months. Even the most secure organizations are still vulnerable to attack.  As targeted malware begins to siphon off millions of social security numbers from organizations all over the United States, the ability to truly distinguish real identities from fake ones will become so blurred that the entire system as we know it will simply fail.

Still believe this is overblown? Only time will tell. In the meantime, I can only hope that all organizations will learn from what has happened with these public breaches and stop simply trying to meet some poorly constructed regulation, and instead properly secure the confidential information they collect. This doesn’t have to end with identity theft Armageddon, but without organizations taking a much more comprehensive look at their security practices, I personally am not overly optimistic about the security of my identity in the future.

Posted in Compliance and Regulatory Change Management, IT Risk Management and Assessments, Network Protection, Social Engineering, User Awareness Training, Vendor Management, Vulnerability Management | Tagged , , , , , | Leave a comment