A Q&A with Winston Krone of Kivu Consulting – Posted by Mark Greisiger on Junto Blog

Oct 2016

There’s no doubt that ransomware attacks are on the rise and they’re becoming more insidious. I spoke with Winston Krone, global managing director of Kivu Consulting, about what the latest version of ransomware looks like and what risk managers should do if it strikes their organization.

What is ransomware?

Ransomware is a type of malware that can infect any device on which it is opened—typically through a link in an email, though we’re seeing variants where it’s seeded on a computer and activated remotely. Either way, it’s designed to infect other devices or hosts, such as servers, that the original device is connected to. Its real danger to organizations lies in its ability to spread across systems, for two reasons:

  • It can compromise vast amounts of data—once it jumps from a desktop to a server, you’re talking terabytes of data compromised rather than gigs.
  • It can jump into backups and destroy the ability to restore the system. This issue has been made worse by the recent trend of synchronized backups—though regulated organizations still require long-term backup capability. If the only backup goes back a day or two and it gets lost, you don’t have earlier versions to rebuild the system.

How does it impact companies?

In the best case scenario, you come back online in several days—the worst case scenario is that you never come back online. Ransomware attacks affect just about every type of organization. While many have already designed systems with multiple backups so they can get back online immediately following an attack, some organizations, particularly law firms, accounting firms and manufacturing companies, haven’t developed systems for safely keeping backups.

Either way, organizations need to decide whether to pay the ransom or to try to rebuild the data themselves from other areas such as employee laptops or old computers that were offline (and thus not hit by the malware). The do-it-yourself approach turns into a significant amount of work—many hundreds of hours of labor and business downtime—and it’s rarely cheaper than the $5,000 to $20,000 ransom. Some organizations have an aversion to paying criminals and that’s a legitimate concern, but there’s a danger in trying to rebuild the data yourself. We have seen situations where organizations try to do this and then realize later that they can’t and want to pay the ransom—in the meantime, they have overwritten the encrypted data, so when they pay the ransom and get the decryption key, it doesn’t do them any good.

Many organizations don’t include ransomware in their incident response plans or they underestimate its significance. The ones that do include it need to update the plan on a quarterly basis, at the very least. Over the last year we have seen major paradigm shifts with new types of ransomware occurring every two weeks, in terms of the attack vector, seriousness of the attacks and how they’re launched.

Can you explain how the negotiations between the perpetrator and the attacked organization work?

In the most basic ransomware, you’re simply steered to a URL and there’s really no way to communicate with the attacker. In this situation, the ransom is usually a relatively small amount, probably less than $5,000. In a second variant, they supply a URL but there’s some degree of communication such as a comment field and some type of handshake where they let you test a small amount of data to prove that they actually have a decryption key and it works. In the third type, there’s direct communication by email, and these are the most expensive ransoms. In these cases, they’re open to negotiation—not about the price but about the time needed to pay the ransom or to figure out how the decryption works.

In larger attacks we see a new variant whereby the basic ransom scales with the number of computers infected. In those cases, you can pay per individual computer affected or buy a blanket global license, upwards of $20,000, and they’ll give you all the keys needed. In those types of attacks, the attacker is incentivized to negotiate with you more. In general, the negotiations are not for the fainthearted—we have negotiated dozens of these cases with foreign-language speakers set up with multiple identities around the world and on the dark web. Our role is to make sure the negotiations go smoothly while masking the identity of our clients to the extent that we can.

Anonymity is important. We highly recommend cloaking the identity of the attacked organization, because attackers who know whom they have hit can increase the ransom. In most cases the criminals don’t know who they’re attacking and they don’t care. However, this is something that we expect to change in the next six months or so—we think attackers will go after regulated businesses or other businesses where data is important, or choose organizations that they know carry insurance so they are more likely to get paid.

How can a company set up a bitcoin wallet in order to actually pay the ransom?

Organizations can set up their own bitcoin wallet but it is very difficult and among the lawyers and risk managers I’ve met who offer advice on this topic, almost none of them have ever actually done it themselves. It’s relatively straightforward to get a small amount of bitcoins but it’s very difficult to get a significant amount of money. Most bitcoin exchanges cap the amount of money you can get within a given time period. You can start an account, and it usually takes a week to get it going and build up enough transactions to call down tens of thousands of dollars’ worth, and it’s expensive—with charges of over 15 percent per transaction. Off the exchanges, you’re dealing with sketchy people and you’re opening yourself up to getting ripped off. Unless you already have an account and a reserve of $10,000-$20,000, you’re not readily prepared to deliver a ransom.

What are some common pitfalls in this situation?

Assuming you have money lined up and you’re ready to pay the ransom, there are still a number of things that can go wrong. You have to make sure you’re paying the right people. We’re seeing increasing examples of serious criminals getting involved in the ransom business. It’s the equivalent of thieves ripping off drug dealers. We’re also seeing organizations that have been hit by multiple attacks at the same time, which can interfere with the remediation process. In some cases, the decryption key doesn’t work or the IT people don’t know how to use it properly. We have also seen instances where the decryption key itself is an attempt to get additional malware on the system.

How might a forensic expert play a role here?

We can help in every step of the process, including assisting the client with the response before paying the ransom, assisting with paying the ransom (we offer the service of paying on behalf of the client with our own bitcoins), making sure all communications are anonymous and verifying that the decryption tools themselves work and don’t contain more malware. We can also determine if the ransomware is actually a cover for a theft of data. In those cases, the $20,000 cost of the ransom is dwarfed by the cost of a data breach. We’ll make sure that the encrypted data isn’t destroyed during remediation. In the newest cases of ransomware that gets set off remotely by a hacker, forensic analysis can be required by state and federal data breach regulations to determine whether confidential data has been compromised, since the hackers clearly obtained some access to the network to plant the ransomware.

What else should risk managers be aware of with regard to the threat of ransomware?

We’re seeing a lot of antivirus companies that claim to be developing tools that can spot ransomware and stop it, or vaccinate computers against it, but we caution people to be very skeptical about these claims. These tools might be able to stop poorly designed ransomware but the fact is, it’s getting more sophisticated all the time—the hackers are figuring out how to outsmart us by masking the malware and the attack vector. What organizations really need to do is go back to the basics: designing a sound infrastructure for computer systems so that if there’s an infection it won’t spread, and preparing for an encounter with ransomware with a detailed incident response plan.

In summary…

We want to thank Winston for his granular insights into this threat, which seems to be impacting cyber liability insurance clients on a weekly basis these days. We also think it’s important for a risk manager to see that there are many challenging and nuanced steps involved in resolving this type of cyber risk. An organization should not undertake resolution without the guidance of a Breach Coach® lawyer and forensic/security expert who has experience with extortion. Mr. Krone is a frequent speaker at NetDiligence® Cyber Liability Conferences. 

 

Testing the Password Encryption Strength of NT LAN Manager and LAN Manager Hash

Security risks associated with weak user-created passwords are well documented. In 2009, for example, cyber security provider Imperva analyzed more than 32 million passwords released in a data breach. More than 50% of the passwords reflected poor user password choices, and 30% contained 6 or fewer characters. Poor password construction habits improve the chances that hackers using password-guessing software will succeed.

Kivu recently participated in an experiment to evaluate the password encryption strength of two Windows Operating System authentication protocols.

The LAN Manager (LM) hash employs a multi-step algorithm to transform a user password into a calculated string value that obfuscates the password’s identity. The resulting LM hash is stored rather than the original password. First, a user’s password is converted to all uppercase letters. Next, the uppercase password is set to a 14-byte length: passwords greater than 14 bytes are truncated after the 14th byte, and passwords less than 14 bytes are null-padded to reach 14 bytes. The 14-byte password is split into two 7-byte segments, and each 7-byte half is expanded into an 8-byte DES key. Each key is used to DES-encrypt the ASCII string “KGS!@#$%”. The two output values are concatenated to create the 16-byte LM hash.
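The preprocessing steps above can be sketched in Python. This is a sketch only: the final step, DES-encrypting “KGS!@#$%” with each half as a key, is omitted because DES is not in the Python standard library.

```python
def lm_preprocess(password: str) -> tuple[bytes, bytes]:
    """LM hash preprocessing: uppercase, fit to 14 bytes, split in half."""
    # Step 1: convert the password to uppercase ASCII.
    pw = password.upper().encode("ascii", errors="replace")
    # Step 2: truncate to 14 bytes, then null-pad out to 14 bytes.
    pw = pw[:14].ljust(14, b"\x00")
    # Step 3: split into two 7-byte halves; each half would then be
    # expanded into an 8-byte DES key and used to encrypt "KGS!@#$%",
    # with the two 8-byte outputs concatenated into the 16-byte LM hash.
    return pw[:7], pw[7:]
```

Note that the two halves are hashed independently, a property that matters for the experimental results below.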

Microsoft’s second encryption method, NT LAN Manager (NTLM), introduced with Microsoft Windows NT 3.1, is an improved algorithm for securing a password’s identity. NTLM passwords differ from LM passwords in that NTLM employs the Unicode character set, differentiates upper and lowercase letters, and permits passwords up to 128 characters in length.

Kivu’s experimental design to compare the relative strength of these two encryption methods employed Cain and Abel password-guessing software and different Windows passwords of increasing complexity.

How Cain and Abel Works

Cain and Abel provides the ability to execute password-guessing schemes using dictionary attacks and brute-force attacks. Both dictionary attacks and brute-force attacks employ guess-based methodologies to identify the plain text password associated with a specific hash-encrypted password.

Dictionary attacks use a pre-defined list of search terms or phrases as the basis for guessing. Each search term is transformed into a hash string value using a specific hash algorithm, such as the LM hash protocol. The resulting hash value is compared to a hash value of interest, and if the hash values match, the plain text password is identified.

Brute force attacks attempt every combination of defined search criteria to identify the plain text password associated with a hashed password. Search criteria settings include the use of character sets, such as ASCII and the number of characters in a password.

Conclusions

Kivu’s experimental results yielded significant insights concerning the strengths of the NTLM password algorithm, which is Microsoft’s replacement for LM.

None of the NTLM-transformed passwords we used was quickly resolved through brute force attack. Our experiment suggested that the passwords established for the test user accounts would take more than 4 years to determine. While our brute-force attacks were limited to less than 3 minutes, the projections indicated that recovering the plain text equivalents of NTLM-hashed passwords would take substantially longer.

LM’s approach to obfuscating plain text passwords, however, was less successful. For three test user passwords, brute force guessing partially recovered the LM passwords, because LM’s hashing algorithm sub-divides the password string into halves that can be attacked independently.

Our results indicated that both NTLM and LM passwords are susceptible to compromise by a well-designed, broad dictionary attack. Overall, NTLM-hashed passwords may hold up better than LM against simple dictionary attacks. Against a substantial dictionary, however, an NTLM password becomes more likely to fall, because every additional dictionary entry is another calculated hash that may match the target. While dictionary attacks are limited to the candidates they contain, larger dictionaries provide more opportunities for a match.

 

Kivu (www.kivuconsulting.com) is a nationwide technology firm specializing in the forensic response to data breaches and proactive IT security compliance. Headquartered in San Francisco with offices in Los Angeles, New York, Washington DC, and Vancouver, Kivu handles assignments throughout the US and Canada, and is a pre-approved cyber forensics vendor for leading North American insurance carriers. Author Megan Bell directs data analysis projects and cyber security investigations at Kivu.

Honored to be interviewed on NPR’s Marketplace on how forensic evidence can identify key employees who are planning to quit after receiving their bonuses (and maybe taking secrets).  We see a big spike in trade secret theft during the bonus season – but interestingly the taking can start months in advance when the star salesperson or software engineer first gets a feeling that the end of year bonus is not going to be so stellar.  Also interesting to watch how other employees quickly delete their secret caches of confidential files when a colleague is caught red-handed.  http://www.marketplace.org/2016/02/04/business/bonus-season

In several forensic investigation cases, Kivu has analyzed iOS backup files as a method of obtaining evidence of text messages or other data from an iOS device, usually when an iOS device is not readily available or as a means of cross-correlating evidence.

These backups are often made to the custodian’s computer when they connect their iOS device to a computer to charge it or sync it with iTunes. When they connect their iPod touch, iPhone, or iPad to their computer, certain files and settings on their device are automatically backed up. As such, they are locally stored on the custodian’s computer and can be extracted and parsed for further analysis.

In a recent case, the backups were extracted from the custodian’s laptop, which was provided to Kivu. The backups pertained to two iPhone devices. Kivu forensically extracted the backups from the custodian’s laptop and was able to parse the backups and uncover text message data that came from both the custodian’s current iPhone and the prior one, which was no longer in her possession.

Here’s how the text messages were retrieved

 

Within the “Backup” directory under MobileSync, there is a subdirectory named for the unique device identifier (UDID) of the device for a full backup. The UDID is a 40-character hexadecimal string that identifies the device [example: 5b8791c14e926cc9220073aefcedd2b831c843b1]. Sometimes, the UDID will have a timestamp appended to it that indicates the date and time that the backup was made. For example, a directory named 5b8791c14e926cc9220073aefcedd2b831c843b1-20150506 122733 indicates that the iOS device was backed up on May 6, 2015 at 12:27:33 PM.

Within the UDID directory, there are numerous files that follow a similar naming convention to the UDID directory, without a file extension. These filenames are actually SHA1 hash values of files from the device. When backing up an iOS device through iTunes, iTunes computes a SHA1 hash value of the file’s path and uses it as the backup filename. Below is a chart detailing several common SHA1 file names for files pulled from an iOS device in the course of an iTunes backup.
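According to the sources listed at the end of this article, the string that iTunes hashes is the file’s backup domain and its on-device path, joined by a hyphen. A minimal sketch:

```python
import hashlib

def backup_filename(domain: str, device_path: str) -> str:
    """SHA1 of "<domain>-<path>", as used to name iTunes backup files."""
    return hashlib.sha1(f"{domain}-{device_path}".encode()).hexdigest()
```

For example, the text message database sms.db lives in the HomeDomain at Library/SMS/sms.db, which is why its backup file always carries the same 40-character name on every device.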

Since text messages are often of interest, it’s important to note the SHA1 hash value assigned to sms.db. This is the database file that holds text message data, including sender, recipient, and content of messages.
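Once the sms.db file is located and copied out of the backup, its contents can be read with any SQLite client. A sketch in Python follows; the column names in the message table vary between iOS versions, so treat the query as illustrative rather than universal.

```python
import sqlite3

def read_messages(db_path: str) -> list[tuple]:
    """Pull basic message fields from a copy of sms.db.

    The columns below follow common iOS schemas; they vary by iOS
    version, so this query is a sketch rather than a universal one.
    """
    con = sqlite3.connect(db_path)
    try:
        return con.execute(
            "SELECT ROWID, is_from_me, text FROM message ORDER BY ROWID"
        ).fetchall()
    finally:
        con.close()
```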

 

Sources:

http://ios-forensics.blogspot.com/2014/07/apple-ios-backup-file-structure.html
http://resources.infosecinstitute.com/ios-5-backups-part-1/
http://www.iphonebackupextractor.com/blog/2012/apr/23/what-are-all-files-iphone-backup/

About Kivu

Kivu is a nationally recognized leader for security assessments and breach response services.  For more information about collecting forensic data from Apple devices, please contact Kivu.

Data quality is not a glamorous subject. It is not the type of topic that headlines a conference or becomes front-page news. It is more typically suited for help guides and reference manuals that few individuals relish reading. However, organizations that acknowledge the importance of data quality and have strong data quality programs significantly reduce privacy and security risks. They also lower the potential costs associated with data breaches, the legal risks, and the potential size of business interruptions.

Data quality issues start when information is created. This includes incorrect information, data entry errors, and inaccurate document conversion such as conversion of text contained within image files (e.g., a screen shot from a patient management system). Data quality issues also arise as data is being processed, transferred or stored.

1. Build a foundation of knowledge and fluency about data.

“Understanding data” means moving deeper than simply understanding that a database stores records or that a file contains information. Knowledge of data means taking the time to understand that data exists in different layers and structures and can be readily transformed. Additionally, data can be defined as discrete elements (e.g., a data element that stores date-time information) and have assigned roles and restrictions. Investment in the language of data can improve control over data and enable better decisions on information security and privacy.

2. Don’t leave data design and quality decisions to the development team or an IT group.

This could place data at significant risk, including possible loss, misuse and insecure handling. Development teams are often provided with high-level requirements such as “design a secure form to collect user data”. While this directive may appear clear, privacy and security risks reside in its implementation. To achieve better security and privacy, more attention must be directed to clarifying how form data is collected, transmitted and stored. Validation should also be performed so that data is corrected before it is stored.

3. Articulate security and privacy concepts in terms that help developers integrate better security.

Regulations and policies concerning privacy and information security often address data from a systems perspective. Terms such as “protect the perimeter” articulate protection of a network and the systems and data within it. But “protect the perimeter” does not translate directly into the design of a more secure system.

Developers and analysts work with data in the context of business and user requirements. Developers also work under tight budget constraints and significant systems complexity where one requirement may consist of several steps. As security and privacy requirements continue to mature, understanding the needs and workflow of developers will facilitate better “baked in” security and privacy.

4. Extend security and privacy requirements to how data is created, changed, stored, transmitted and deleted.

Security requirements typically speak at a high level and leave a substantial gap in clarity with respect to data. As an example, a business may have a requirement where social security numbers (SSNs) are encrypted at rest. At the same time, the company may display SSNs in a web application where the SSNs are partially hidden by form design but otherwise are present and unprotected.

5. Embed security analysis into the QA process.

Security testing is often the purview of InfoSec groups and external consultants who evaluate software that exists in an operations environment (also referred to as DevOps or Production). This includes the use of tools and the knowledge to locate and remediate vulnerabilities. The pitfall with this approach to security testing is that vulnerabilities are not identified before software is released. Using tools such as Seeker (which analyzes software for vulnerabilities during the QA process) can improve overall application security by reducing the number of possible vulnerabilities in software design.

CASE: Data at Risk (by Design)

Organizations are at increased risk of security incidents due to undefined or poorly specified software requirements. One such example is inadequate articulation of secure password storage. Poor design begins when developers or an IT group receive a directive to secure user passwords. However, “securing passwords” can be interpreted in many ways, including:

  • Storing clear text passwords in a secure database.
  • Using well-known mathematical formulae to convert passwords into what are called hash values.
  • Storing the software code or algorithms that secure passwords in the same data file or directory as the password data.
  • Storing password hints with passwords.
  • Forgetting to secure the folders where data is stored (which leaves the door open to the risk of exfiltration).
  • Not requiring strong password rules for the creation of passwords.
  • Not validating passwords prior to storing them.
  • Leaving administrative passwords in the same location as customer data.
  • Creating a backdoor for developers as an easy means to administer the system or perform corrections.
  • Not requiring, or allowing time for, the developers who wrote the password-securing code to create documentation that explains it.
  • Leaving design implementation to a developer who may not be available or reachable after the code is implemented.
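By contrast, a commonly recommended baseline for password storage is to hash each password with a per-user salt and a slow key-derivation function, rather than store the password (or a bare, unsalted hash) itself. A minimal sketch using the Python standard library:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted, slow hash instead of storing the password itself."""
    salt = os.urandom(16)  # unique per user; stored alongside the hash
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the derivation and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

The salt defeats precomputed dictionary lookups, and the iteration count slows brute-force guessing; both choices directly address pitfalls in the list above.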

Accountability for data design, use and quality should exist across an organization. With less of a technical divide, organizations can improve the conversation on how to better protect data with the appropriate use of security to balance risk and cost. Attention to detail at the bottom (the data level) may also deliver secondary benefits such as cleaner customer data, reduction in time to resolve customer issues, or better disaster recovery.

The misnomer of HIPAA compliant software is prevalent in the health care industry. Too often, HIPAA-regulated entities rely on vendor controls and claims of compliance as a substitute for their own HIPAA security programs. While the vendor software itself may meet the requirements of HIPAA compliance for the discrete functions it performs, the truth of the matter is that no software or system that handles Protected Health Information (PHI) is HIPAA compliant until it has undergone a risk assessment by the regulated entity to determine the efficacy of its security controls in the user’s environment.

Adherence to HIPAA-required risk management processes and industry best practices should protect organizations from attacks. HIPAA requires that both covered entities and business associates maintain a security management process to implement policies and procedures to prevent, detect, contain, and correct security violations. The foundational step in the security management process is the risk assessment, which requires regulated entities to conduct an accurate and thorough assessment of the potential risks and vulnerabilities to the confidentiality, integrity, and availability of electronic protected health information held by the entity.

HIPAA compliant risk assessment

NIST Special Publication 800-66 identifies a protocol organizations may use for conducting a HIPAA compliant risk assessment. 800-66 generally identifies nine steps an organization should take in this regard. Significantly, the first two steps of the risk assessment process should be read together to identify all information systems containing PHI and ensure that all PHI created, maintained, or transmitted by the system is being maintained appropriately and that security controls are applied.

In the context of third party software and systems, the risk assessment process should be used to identify hidden repositories of PHI where unintended business functions or improper implementation cause PHI to be located outside of an organization’s secure environment. If third party software and systems are not identified within the scope of a risk assessment, and a disclosure or audit occurs, the government may impose penalties for not conducting a thorough risk assessment. Additionally, there is potential for third party lawsuits if a disclosure results. In a data breach dispute, the argument usually boils down to whether the controls the organization had in place were reasonable to protect PHI. In many cases, the plaintiffs use HIPAA as a standard of care, so that if an organization was not in compliance, the plaintiffs will argue the organization did not take reasonable steps to protect PHI.

While failing to conduct an accurate and thorough risk assessment may result in regulatory enforcement or litigation risk, failing to identify hidden repositories of PHI may also result in other HIPAA violations. If data is stored outside of its intended repository, an appropriate data classification and the associated security controls have probably not been applied to the hidden repository. The HIPAA regulated entity is then unlikely to be meeting the required technical implementation specifications of the HIPAA Security Rule for the information in that repository, since systems that are not intended to store PHI rarely have appropriate access and audit controls in place.

Common vulnerabilities in electronic medical record (EMR) software

Software is developed for a specific purpose, such as managing patient information or insurance billing. Software’s core functionality is created during the development cycle, and security may be incorporated into the development process, or it may be an afterthought. Security is optimal when it exists within a software application and the environment where the application is hosted.

  1. At the device level where the software is installed, software integrates with its host operating system, file system and network environment. The intersection between an application and its host environment could create significant PHI exposure risk.
  2. Software, particularly database software, is often vulnerable due to poor security upgrade practices and loose configurations.
  3. Even when security features are established, those features may be changed to appease users or to simplify IT tasks.
  4. Delayed software upgrades or improper upgrade installation may increase the potential for compromise.
  5. External communication channels are often incorporated into software applications to enable functionality, such as transmitting faxes/emails, or to allow access by outside administrative support. These communication channels are often left unsecured with default configuration settings and administrative credentials.
  6. Audit logs are typically developed to support a specific software application, but use of audit logs may be disabled or ignored.

A recent data breach investigation

In a recent data breach investigation, Kivu encountered an integrated EMR software solution that stored patient records, including social security numbers (“SSNs”), on a Windows server. While the EMR application had protected access with unique credentials assigned to users, the server itself was accessible to all employees with domain credentials. The EMR software offered complete practice management capability in a single offering (such as patient management, prescriptions ordering and tracking, patient communications and billing).

The EMR software and the server housing the EMR software lacked appropriate controls to secure PHI. The presence of EMR login credentials in text-searchable files potentially negated the use of encryption for the EMR database. Unsecured directories provided the opportunity for any user to browse the server and potentially locate files containing patient data.

The audit capabilities of the EMR software were limited to the EMR database. As a result, externally stored files with patient data were outside the reach of the EMR software. PHI could have been exfiltrated without leaving evidence of file activity. For example, on a Windows computer, a hacker could use a Robocopy command to copy files, and the use of this command would leave no evidence of file access.

Using sophisticated search tools employing data pattern recognition, Kivu was able to identify numerous instances of PHI on the compromised server. The client was surprised by the result because they believed the EMR system was secure and HIPAA compliant. This was a painful lesson in the numerous (and dangerous) ways that sensitive data can leak from an otherwise secure system.
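Kivu’s actual search tools are proprietary, but the basic idea of pattern-based PHI discovery can be illustrated with a naive SSN scan. The pattern below will also match phone-number-like digit runs, so real tools apply far more validation; treat this as a sketch.

```python
import re
from pathlib import Path

# SSN-shaped pattern: three, two, then four digits, optionally separated.
SSN_RE = re.compile(r"\b\d{3}[- ]?\d{2}[- ]?\d{4}\b")

def scan_for_ssns(root: str) -> dict[str, int]:
    """Count SSN-shaped strings in each text file under root."""
    hits = {}
    for path in Path(root).rglob("*.txt"):
        matches = SSN_RE.findall(path.read_text(errors="ignore"))
        if matches:
            hits[str(path)] = len(matches)
    return hits
```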


For more information about HIPAA data leakage and HIPAA compliant risk assessments, please read the full paper: Forensic Analysis Reveals Data Leaks in HIPAA Compliant Software or contact Kivu.

Some of the worst and most costly data breaches occur because an organisation doesn’t know what and how much data they have stored, says Winston Krone, managing director of Kivu Consulting. In many cases, businesses have simply been unaware that they hold sensitive data such as healthcare or financial information, and “…haven’t purged data, they haven’t taken it off line; they’ve treated old data…as being necessary to be instantly accessible,” Krone, a computer forensics expert, argues in an interview for Hiscox Global Insight.

What’s in an email?

Part 1

A particular area of exposure, Krone says – and this is particularly the case for professional services companies – is with the storage of unstructured data such as email. “It’s been the driver in many of the most expensive data breaches. The most common is email or a file server where you have attachments, spreadsheets, word documents. In a lot of these cases you don’t know what you’ve got. You may not even know that someone has sent you an attachment with a thousand names, dates of birth, social security.”

Krone adds: “Trying to determine how many mail boxes have been raided [following a breach] can be the work of weeks and then determining what data is inside those mail boxes can take 30-40 days. This pushes up the response time [and] the response costs.”

Part 2

For many businesses, even if an attempted data breach is unsuccessful, the impact can be just as bad as a successful breach, explains Krone. “In most cases the attackers are stopped or seen. But the real problem for us, and it’s probably a problem in half our cases, is that the organisation was not logging or monitoring its own system sufficiently to allow us to disprove the hack. Unless we can prove what they did and what they’ve taken…that will be a de facto data breach with enormous costs and implications to the organisation.”

Given that it’s virtually impossible to protect against a data breach happening, however, Krone says that the best risk management happens well before a breach. “If you haven’t set up in advance your system so it’s recording evidence, so it’s logging evidence, data of who is coming in, where they’re coming in, what they’re doing, what they’re taking out of your system – you can’t go back in time and work that one out. That’s a crucial preparation to put in advance.” A good incident response plan is also important, Krone adds, as well as having a good understanding of what data an organisation holds.
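The kind of evidence trail Krone describes can be as simple as an append-only record of who accessed what, from where, and when. As a minimal sketch (the field names, file name, and `record_access` helper are illustrative, not a prescribed format), one JSON line per event keeps the log searchable after an incident:

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical structured audit logger: each authentication or data-access
# event is written as one JSON line so it can be searched after an incident.
logger = logging.getLogger("access_audit")
handler = logging.FileHandler("access_audit.log")
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def record_access(user, source_ip, action, resource):
    """Append one audit record; timestamps are UTC so logs correlate across hosts."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "source_ip": source_ip,
        "action": action,       # e.g. "login", "download"
        "resource": resource,
    }
    logger.info(json.dumps(event))
    return event

record_access("jsmith", "203.0.113.7", "login", "vpn-gateway")
```

The point is not the specific tooling but that the record exists before the breach, is timestamped consistently, and is stored somewhere an intruder cannot quietly rewrite.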

Insurance sector can drive better risk management

The growth in cyber insurance is also playing a role in improving awareness of the cyber threat. “Just having the discussion about cyber insurance has required organisations to rethink their risk and how they’re mitigating these problems,” says Krone. “We see a huge difference between companies who have a cyber risk policy – or at least have gone down the road in deciding whether they should have one – and those who haven’t thought about it. It’s a huge educator and the more enlightened insurers are asking companies to really answer some deep questions. It’s a great way for disparate groups [in an organisation] – legal, risk management, HR, IT – to come together when they think about cyber insurance.”

Data choice

In such a fast-changing environment, however, where the data breaches hitting the news grow ever larger in scale, Krone says the real differentiator between good and bad businesses, from an information security perspective, will be the way in which they deal with their data. “If you look at the example of financial institutions and healthcare – two [sectors] that are very regulated in the US and have got their act together – [a business] is either going to take [its] data and start heavily encrypting it and segregating it and making sure that nobody can get into it, or they’re going to take their data and say we’re not in the data storage business; we’re going to put it off to security accredited vendors. It’s really a question of whether smaller organisations are going to have the means and the budget to go down those two different roads.”

#1. Anti-virus programs are generally ineffective

#2. Your firewall faces the wrong way

#3. You are the weakest link in the Cloud

#4. Advising your employees not to open emails from “strangers” is counter-productive

#5. Encrypting your company’s portable devices isn’t enough

Many small-to-medium-sized businesses (SMBs) believe that they aren’t important or large enough to be targeted by hackers. Unfortunately, that’s not the case. Smaller companies in general have fewer resources to spend on defending their networks, yet they have substantial assets that hackers can take. As larger organizations adopt better cyber defenses, many hackers specifically target SMBs as easier prey.

If a hacker targets an SMB, the risks are great. When a hacker intrudes into a business network, they may be able to steal and illegally use customer data, lift employee information (including social security numbers and payroll information) and empty the company’s bank account. In addition to these direct losses, a hacker can use the SMB’s network to attack other targets such as the SMB’s business partners and customers. These consequential third party losses can obliterate goodwill and expose the SMB to costly litigation.

Hacking is becoming an increasingly serious threat to every type of business. Computer virus source code is readily available on the Internet, sometimes for free, making it easier for professional cyber criminals and “wannabe” hackers alike to create new malware. New malware is appearing at an estimated rate of 80,000 instances per day.

To learn more read the full white paper.  We’ll talk about the five things hackers don’t want SMBs to know.  We’ll pinpoint what hackers look for when choosing a company to attack. We’ll reveal the damage that they can do. Then, we’ll offer some practical steps that SMBs can take immediately to protect their organizations from outside intrusion.

What is PCI 3.0 and How Does It Differ from PCI 2.0?

The Payment Card Industry Data Security Standard (PCI DSS) applies to companies of any size that accept credit card payments. The effective date of version 3.0 of the standard was January 1, 2014, but existing PCI DSS 2.0 compliant vendors had until January 1, 2015 to move to the new standard. Some of the changes are not required to be in place until June 1, 2015. This blog post from Kivu will explain what the new standards are and review some of the most critical issues involved with compliance.

PCI 3.0 is not a wholesale revision of PCI 2.0. The 12 core principles of PCI compliance remain intact. PCI 3.0 clarifies and revises all 12 principles and is roughly 25% bigger than PCI 2.0, including 98 upgrades. Some of the upgrades are small but others are significant. PCI 3.0 will be harder and more expensive to implement than PCI 2.0. Organizations should expect that the PCI 3.0 assessment will be similar to PCI 2.0 but more transparent and consistent.

A major concern for merchants implementing PCI 3.0 is how they will be able to afford the increased cost of compliance. PCI 3.0 requires additional processes and procedures that many organizations might not be prepared to implement.

New Key Areas for PCI 3.0

Segmentation of Card Data Environment (CDE) – Penetration Testing

PCI 3.0 is a great improvement over PCI 2.0 because it requires segmentation of the Card Data Environment (CDE) from other networks. During the breach at Target, contractors had access to the client network, putting the whole CDE at risk.

The cost of segmenting the CDE will be a burden on the merchant, but it is a significant step towards reducing risk and exposure. Penetration Testing (testing a computer system, network or web application to find vulnerabilities that an attacker could exploit) will be critical. Qualified Security Assessors (QSAs) will have a tough job auditing the new guidelines and results.
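One small piece of segmentation validation is simply confirming that in-scope hosts are unreachable from out-of-scope networks. The sketch below is illustrative only and no substitute for a formal penetration test; the target addresses and ports are placeholders, and the probe would be run from a machine that is supposed to be outside the CDE:

```python
import socket

def port_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, unreachable, or timed out -- segmentation held.
        return False

# Placeholder in-scope targets; run from an out-of-scope host.
# Any reachable port is a segmentation finding to investigate:
# cde_targets = [("10.10.20.5", 443), ("10.10.20.6", 1433)]
# findings = [(h, p) for h, p in cde_targets if port_reachable(h, p)]
```

A real engagement would go much further (all ports, multiple protocols, documented methodology per the testing framework PCI references), but even this level of routine checking catches accidental firewall rule changes between formal tests.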

Key Takeaways

  • PCI 3.0 has to be implemented by June 2015.
  • PCI 3.0 requires all merchants to undergo a penetration test in order to be PCI compliant.
  • Merchants need to ensure that correct methods are used to segment the CDE from the client network.
  • The contractor network must be segmented from the client network.
  • The Best Practice Framework will be based around NIST SP800-115.
  • Merchants must be diligent in their selection of penetration testing services.

System Inventories

Maintaining system inventories is not an easy task, and accurate system inventories have been difficult to accomplish under PCI 2.0. What is different with PCI 3.0?

The inventory list under PCI 3.0 just grew bigger. Now, maintaining an inventory of hardware, software, rules and logs will be an even more difficult task in order to remain in compliance. Documenting components and inventory is time consuming, and inventory changes frequently. Who will be in charge of accomplishing this within an organization, and how reliable will the inventory list be? What happens when virtualization/cloud is thrown into the inventory mix? What about geographic locations?

We at Kivu see maintaining a system inventory as an evolving cycle with constant issues.

Key Takeaways

  • Maintaining a reliable, timely inventory will be extremely difficult.
  • The merchant’s IT & compliance teams will have to spend more time creating inventories.
  • Merchants need to know who will be responsible for maintaining system component inventories that are in scope for PCI DSS (Hardware & Software).
  • Merchants must maintain an inventory of authorized wireless access points, including their business justification.
  • Documenting components and functions will be a continuous cycle.
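Because the inventory is a continuous cycle rather than a one-off document, the practical core of the task is diffing snapshots: what appeared, what disappeared, and what changed since the last review. A minimal sketch (the snapshot format and component names are hypothetical):

```python
# Minimal inventory-tracking sketch: a snapshot is a dict mapping a component
# identifier to its recorded attributes; diffing two snapshots surfaces what
# was added, removed, or changed since the last compliance review.
def diff_inventory(previous, current):
    added   = {k: current[k] for k in current if k not in previous}
    removed = {k: previous[k] for k in previous if k not in current}
    changed = {k: (previous[k], current[k])
               for k in current if k in previous and current[k] != previous[k]}
    return added, removed, changed

yesterday = {"pos-01": {"sw": "v2.1"}, "db-01": {"sw": "v9.4"}}
today     = {"pos-01": {"sw": "v2.2"}, "waf-01": {"sw": "v1.0"}}
added, removed, changed = diff_inventory(yesterday, today)
```

Feeding the snapshots from automated discovery (rather than hand-maintained spreadsheets) is what makes the cycle sustainable once virtualization, cloud and multiple geographic locations enter the mix.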

Vendor Relationships

Explicit documentation of who manages each aspect of PCI DSS compliance is a critical improvement of PCI 3.0 over PCI 2.0. Who owns what, the service provider or the organization? Management of each aspect of PCI DSS compliance should be well documented in every vendor contract agreement.

Kivu recommends a written agreement with service providers verifying that the provider maintains all applicable PCI-DSS requirements. Getting service providers to agree will be a daunting task. Will vendors want to take this responsibility? When PCI reports are disputed, identifying who is at fault is a common problem. If there is a breach, who is liable?

Key Takeaways

  • In PCI 3.0, detailed contractual language and service provider roles and responsibilities are much more of a focus.
  • Merchants should decide who owns each aspect of PCI compliance.
  • PCI compliance has to be written into the vendor contract agreement, with specific language on who owns what.
  • Outline where responsibility lies for control over compliance.
  • Providers must give their customers written documentation stating that they are responsible for the cardholder data in their possession.

Anti-Malware Systems

PCI 3.0 places a new emphasis on identifying and evaluating evolving malware threats targeted at systems NOT commonly considered to be affected by malicious software. Advanced research capabilities or intelligence on malware threats is seen as a proactive measure, but who will provide these proactive services to merchants? How can this be enforced?

Who will be responsible for keeping abreast of threats and making sure anti-malware systems are patched and configured correctly? It is critical for the PCI Standards Council to release a recommended list of anti-malware vendors and provide guidelines for merchants.

Key Takeaways

  • PCI 2.0 only states that antivirus software should be in place. PCI 3.0 takes it to another level.
  • PCI 3.0 states that if malware emerges for PCI systems, the merchant should know about it. There needs to be a process that makes sure this happens.
  • PCI QSAs will need to scrutinize anti-malware controls on all platforms.
  • Technical planning and strategy will involve more paperwork for merchants.
  • Specific authorization from management to disable or alter operations of all antivirus mechanisms should be a policy.
  • An anti-malware system should automatically lock out the user for trying to disable it.
  • Merchants will need to justify why they don’t have anti-malware software running on non-Windows platforms. This is critical because it causes organizations to think carefully about evolving non-Windows threats.

Physical Access and POS System Inventories

PCI 3.0 states that physical access to a merchant’s server room should be restricted, whether the room is in a closet in the back of the store or in a high-end data center. Physical access should be limited to certain personnel, and all others should be escorted and signed in and out of the room. Restricting admission limits the risk of unauthorized access to POS devices and back end systems that could potentially be swapped out by unauthorized individuals.

Maintaining an inventory of POS hardware and conducting frequent spot checks to ensure serial numbers match will be critical to staying compliant under PCI 3.0. POS device inspections should be a best practice, but how many merchants even have a list of their POS devices?
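A spot check boils down to comparing the serial numbers on record against the serials read off the devices during a store walk-through. As a minimal sketch (the `spot_check` helper and serial numbers are hypothetical):

```python
# Hypothetical POS spot check: compare serial numbers recorded in the device
# inventory against serials observed on the floor during an inspection.
def spot_check(recorded_serials, observed_serials):
    recorded, observed = set(recorded_serials), set(observed_serials)
    return {
        "missing":    recorded - observed,  # on the books but not on the floor
        "unexpected": observed - recorded,  # possible swapped-in device
    }

result = spot_check(["SN1001", "SN1002", "SN1003"],
                    ["SN1001", "SN1003", "SN9999"])
```

A device in the "unexpected" bucket is exactly the swapped-out-terminal scenario the standard is worried about, and warrants immediate escalation rather than a quiet inventory correction.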

Key Takeaways

  • Control physical access to the server room for all on-site personnel based on individual job function. Access should be revoked upon termination.
  • Maintain an inventory of all POS devices and implement controls to protect these devices.
  • POS device inspections should be a best practice. Periodically inspect POS devices and check serial numbers to ensure devices have not been swapped out.
  • Procedures for frequently testing POS devices should be implemented.
  • Provide security awareness training to employees that use POS systems to identify suspicious behavior.
  • PCI 3.0 mandates that service providers with remote access to the CDE must use a unique authentication credential for each customer environment.
  • Access needs and privileges for all job functions allowed access to the CDE must be formally defined and documented in advance.

What Other Changes Should We Expect with PCI 3.0?

Following are some moderate changes worth highlighting:

  • Risk assessments are now to be performed annually, as well as whenever significant changes are made to the Card Data Environment. What constitutes a significant change to the environment? There are no guidelines that specifically address this.
  • New password management processes and controls must be put in place and enforced.
  • The CDE must be formally defined, with an up-to-date diagram that shows payment flow across systems.
  • Merchants need to implement file change detection systems and then investigate and respond to all alerts generated by this system. This type of system can generate many alerts every day. Kivu recommends that merchants understand who will monitor these alerts and review and document responses.
  • Daily review of logs is required. Again, who will do this?
  • QSAs will have more responsibility to enforce the new guidelines.
  • PCI 3.0 will increase compliance costs, and those who complain may not fully understand the reasons for the process mandate.
  • There is a recommendation to avoid service providers that are non-compliant.
  • Protecting against memory-scraping malware is now considered a best practice under PCI 3.0.
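The file change detection requirement mentioned above is, at its core, a baseline-and-compare exercise. A minimal sketch of the idea (real deployments use a dedicated file integrity monitoring product with tamper-protected baselines; this is only an illustration of the mechanism):

```python
import hashlib
import os

def hash_tree(root):
    """Map each file under root to its SHA-256 digest (the baseline)."""
    digests = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digests[path] = hashlib.sha256(f.read()).hexdigest()
    return digests

def changed_files(baseline, current):
    """Paths that were added, deleted, or whose digest differs from baseline."""
    return {p for p in set(baseline) | set(current)
            if baseline.get(p) != current.get(p)}
```

Every path this returns is an alert someone must review and document, which is precisely why Kivu's recommendation to decide up front who monitors the alerts matters: an unmonitored detection system satisfies neither the auditor nor the attacker problem.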

Has the Value of PCI Standards Declined?

It is tough to argue against good security and against retailers accepting more responsibility for it. The buck has been passed to the retailer, although banks should also take on more responsibility for security through chip technology or point-to-point encryption. Some retailers are moving ahead with tokenization and point-to-point encryption because they believe that PCI 3.0 compliance is not enough.

What Failures Do We See in PCI 3.0?

The PCI Security Standards Council has missed some key opportunities to clarify the standard and to address compliance as it relates to emerging technologies.

  • One significant issue is the failure of PCI 3.0 to address virtualization, cloud and mobile payment providers. Merchants frequently use these three areas, but PCI 3.0 does not address them in detail or provide merchants with guidelines.
  • PCI 3.0 continues to ignore mobile payment processing and mobile device security, leaving merchants who support mobile payment technology on their own to determine how to be compliant. Card brands are reluctant to put security constraints on mobile technology through fear of stifling the growing revenue expected from mobile payments.
  • Some merchants remain non-compliant with PCI 2.0, yet they are expected to be compliant with PCI 3.0 by June. How will they be able to make all of the changes necessary? Will some merchants be allowed to become PCI 2.0 compliant at first and given additional time by the PCI Security Standards Council to comply with PCI 3.0?

Is PCI 3.0 Worth It?

PCI 3.0 is bigger, and therefore harder and more expensive to implement than PCI 2.0, but it offers additional, critical security benefits. It will take more time and resources from merchants to stay in compliance with PCI 3.0. We at Kivu believe that going forward, it would be best to integrate PCI compliance activities into an organization’s year-round IT Security Management process.

Most computer compromises aren’t discovered until after an attack—sometimes days or weeks later. Shutting down a computer may halt malware activity, but it could have negative and unforeseen consequences. For example, it could become difficult to retrace information infiltrated by a hacker or botnet. This is particularly important if significant time has transpired between an attack and discovery of malware.

During a forensic investigation, there should be a balance between rushing to remove malware and understanding the scope of the malware infestation in order to find a solution that deters future attacks.

What is Malware?

Malware is software that is designed for illicit and potentially illegal purposes. Malware may be a single software program or a collection of programs used to accomplish tasks such as:

  • Obtaining system control—for command and control of a computer
  • Acquiring unauthorized access to system resources—network intrusion
  • Interrupting business operation
  • Gathering information—reconnaissance
  • Holding digital assets hostage—ransomware

How Does Malware Infection Occur?

The Internet has opened the door to broad distribution of malware. It is possible for malware to originate from sources such as email, instant messaging, or infected file downloads. Malware can also spread through USB devices or connectivity to public WiFi hotspots.

The most complex malware tools may use a combination of distribution methods to infiltrate an organization. For example, an email may contain a hyperlink to a website that causes “dropper” software to download. The dropper software performs reconnaissance of its host computer and transmits results out to another computer on the Internet. The second computer analyzes the reconnaissance results and sends back malware that is customized to the host computer.

What are Common Types of Malware?

Virus. A virus is software that inserts malicious code into a computer and has the capability of spreading to other computers. The ability to propagate is a requirement for malware to be classified as a virus or worm.

Worm. Worms are a type of malware that propagate across networks. A worm finds its way by reading network addresses or email contact lists and then copying itself to identified addresses. Worms may have specific capabilities, such as file encryption or installation of certain software, including remote access software.

Trojan Horse. This type of malware enables unauthorized access to a victim computer. Unauthorized access could result in theft of data or a computer that becomes part of a distributed denial-of-service (DDoS) attack. Unlike viruses or worms, Trojan horse software does not spread to other computers.

Rootkit. A rootkit is malware that takes control of a host computer and is designed to evade detection. Rootkits accomplish evasion through tactics such as hiding in protected directories or disguising hidden processes and DLLs (Dynamic Link Libraries) as legitimate files, without the computer or user noticing an abnormality. Rootkits may defend themselves from deletion and may have the ability to re-spawn after deletion. Most notably, rootkits have the potential to operate in stealth mode for extensive periods of time and to communicate with external computers, often transmitting collected data from a victim computer.

Spyware. The purpose of spyware is to collect data from a victim computer. Spyware may exist as malware that is installed on a host computer or embedded within a browser. Spyware may collect data over an extensive time period without the victim ever knowing the extent of the spying activity. Spyware may collect keyboard strokes, take screenshots of user activity, or utilize built-in cameras to record video.

Browser Hijacker. This malware takes control of a user’s browser settings and changes the default home page and search engine. Browser hijacking software may disable search engine removal features and have the ability to re-generate after deletion. There may also be persistent, unwanted toolbars that attach to a browser.

Adware. Adware refers to software that has integrated advertising, particularly freeware software. Adware displays advertisements within the freeware product and transmits collected data back to a controlling party (e.g., an advertising distributor). A software creator may utilize advertisements to earn advertising revenue.

Ransomware. Ransomware is malware that encrypts part or all of a host computer. Encryption locks a victim out of important files or a computer until a ransom demand is paid, possibly in the form of bitcoins. If the ransom is paid, the victim has no guarantee that the ransomware will decrypt the computer.

Investigating Malware

When a malware infection is suspected, care should be taken to investigate and collect evidence where possible while performing remediation to remove the malware infection. The following guidelines should be considered when malware is suspected. If a forensics team is involved with the investigation, the following points will be addressed by forensics examiners.

  1. Assess the implication of powering down the potentially infected computer. Powering down a computer may stop malware in its tracks but result in the loss of potential evidence. In the case of ransomware, a shutdown could result in permanently unrecoverable data. The first response to possible malware infestation should be an evaluation of the victim computer and gathering of key evidence. If the malware is associated with network intrusion or other nefarious activity, evidence gathering may extend across multiple computers and the respective network that hosts the victim computer.
  2. Collect a sample of Random Access Memory (RAM). RAM is temporary memory that exists while a computing device is powered on. RAM is particularly important since malware has the ability to operate (and hide) in RAM. Capturing an image of the infected computer’s RAM, prior to shut down, enables a forensic examiner to assess the potential activity and functionality of the malware. Artifacts that may reside in RAM include:
    • Network artifacts, such as connections, ARP tables, and open interfaces
    • Processes and programs
    • Encryption keys
    • Evidence of code injections
    • Rootkit artifacts
    • DLL and driver information
    • Stored passwords for exfiltration containers
    • Typed commands in the DOS prompt
  3. Identify and preserve log files. Log files record a variety of information about system and application usage, user login events, unusual activity such as a software crash, virus activity, network traffic, etc. In the event of a potential malware infection or network intrusion event, log files should be collected and preserved for further analysis. If logging activity is turned off or log files are set for overwriting, they may be limited in value for an investigation.
  4. Interview users who may have received suspicious emails or observed unusual computer activity. Computer users and IT staff may have important information regarding the origin, timeline and possible activity of the malware. Early in an investigation, interviews should be conducted to assess the potential scope and breadth of an incident. If malware was introduced through user activity, such as a phishing email, the suspect email may still reside in a user’s email. In the case of malware that entered a computer through a software vulnerability (e.g., code injection through an unsecured website), IT staff may have information about unusual events in system logs or data leaving through a firewall at unusual times (e.g., after business hours).
  5. Determine whether to investigate other computers. Malware may spread through computers within the same network segment or a shared file server. Investigation of malware should include scans of potentially connected computers to assess the possibility of further malware infestation. Additionally, if external connections such as Remote Desktop or GoToMyPC exist and are active, then a determination should be made to analyze externally connected computers.
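When preserving log files (step 3), the evidentiary value of a copy depends on being able to show it matches the original. A minimal sketch of that idea, hashing each preserved file and recording a collection timestamp (the `preserve_log` helper and field names are illustrative, not a formal chain-of-custody procedure):

```python
import hashlib
import shutil
from datetime import datetime, timezone

def preserve_log(src, dest):
    """Copy a log file to an evidence location and record its SHA-256 digest
    so the copy can later be shown to match what was collected."""
    shutil.copy2(src, dest)  # copy2 preserves file timestamps as well
    with open(dest, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "source": src,
        "copy": dest,
        "sha256": digest,
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }
```

In practice a forensic examiner would also hash the original before copying and write the records to a log stored off the suspect system, but even this small discipline prevents later disputes about whether evidence was altered after collection.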

For more information about malware infection and forensic investigation, please contact Kivu.