Kivu will be providing the technical forensics support for Endurance’s recently launched Cyber Extortion Response Services.  Working with the law firm Mullen Coughlin, Kivu will guide ransomware victims as they respond to malicious attacks, including arranging for payment in Bitcoin or other cryptocurrency, analyzing and testing decryption keys to ensure they are effectively and safely applied without further compromising the company’s network, and preparing documentation for reporting events to appropriate law enforcement agencies.

 

Kivu’s Cyber Extortion services lead the cyber response and cyber insurance industries.  Kivu has built a reputation combating cyber extortion and responding to cyber-crime, allowing our clients to make informed, cost-effective decisions.  Our experts include analysts fluent in Russian, Chinese, Spanish, and German, trained in negotiation techniques, and highly experienced in hacking techniques and protocols.

 

A Q&A with Winston Krone of Kivu Consulting – Posted by Mark Greisiger on Junto Blog

Oct 2016

There’s no doubt that ransomware attacks are on the rise and they’re becoming more insidious. I spoke with Winston Krone, global managing director of Kivu Consulting, about what the latest version of ransomware looks like and what risk managers should do if it strikes their organization.

What is ransomware?

Ransomware is a type of malware that can infect any device where the malware is opened—typically through a link in an email, but we’re seeing variants where it’s seeded on a computer and activated remotely. Either way, it’s designed to infect other devices or hosts such as servers that the original device is connected to. Its real danger to organizations is its ability to spread across systems for two reasons:

  • It can compromise vast amounts of data—once it jumps from a desktop to a server, you’re talking terabytes of data compromised rather than gigs.
  • It can jump into backups and destroy the ability to restore the system. This issue has been made worse by the recent trend of synchronized backups—though regulated organizations still require long-term backup capability. If the only backup goes back a day or two and it gets lost, you don’t have earlier versions to rebuild the system.

How does it impact companies?

In the best case scenario, you come back online in several days—the worst case scenario is that you never come back online. Ransomware attacks affect just about every type of organization. While many have already designed systems with multiple backups so they can get back online immediately following an attack, some organizations, particularly law firms, accounting firms and manufacturing companies, haven’t developed systems for safely keeping backups.

Either way, organizations need to decide whether to pay the ransom or to try to rebuild the data themselves from other areas such as employee laptops or old computers that were offline (and thus, not hit by the malware). The do-it-yourself approach turns into a significant amount of work—many hundreds of hours of labor and business downtime—and it rarely costs less than the $5,000 to $20,000 ransom. Some organizations have an aversion to paying criminals and that’s a legitimate concern, but there’s a danger in trying to rebuild the data yourself. We have seen situations where organizations try to do this and then realize later that they can’t and want to pay the ransom—in the meantime, they have overwritten the encrypted data, so when they pay the ransom and get the decryption key it doesn’t do them any good.

Many organizations don’t include ransomware in their incident response plans or they underestimate its significance. The ones that do include it need to update the plan on a quarterly basis, at the very least. Over the last year we have seen major paradigm shifts, with new types of ransomware appearing every two weeks: shifts in the attack vector, the seriousness of the attacks, and how they’re launched.

Can you explain how the negotiations between the perpetrator and the attacked organization work?

In the most basic ransomware, you’re simply steered to a URL and there’s really no way to communicate with the attacker. In this situation, it’s usually a relatively small ransom, probably less than $5,000. In a second variant, they supply a URL but there’s some degree of communication, such as a comment field and some type of handshake where they let you test a small amount of data to prove that they actually have a decryption key and it works. In the third type, there’s direct communication by email, and these are the most expensive ransoms. In these cases, they’re open to negotiation—not about the price but about the time needed to pay the ransom or to figure out how the decryption works.

In larger attacks we see a new variant in which the basic ransom scales with the number of computers infected. In those cases, you can pay per individual computer affected or buy a blanket global license, upwards of $20,000, and they’ll give you all the keys needed. In those types of attacks, the attacker is incentivized to negotiate with you more. In general, the negotiations are not for the fainthearted—we have negotiated dozens of these cases with foreign language speakers set up with multiple identities around the world and on the dark web. Our role is to make sure the negotiations go smoothly while masking the identity of our clients to the extent that we can.

Anonymity is important. We highly recommend cloaking the identity of the attacked organization, because attackers who know whom they have hit can raise the ransom. In most cases the criminals don’t know who they’re attacking and they don’t care. However, this is something that we expect to change in the next six months or so—we think attackers will go after regulated businesses or other businesses where data is important, or choose organizations that they know carry insurance so they are more likely to get paid.

How can a company set up a bitcoin wallet in order to actually pay the ransom?

Organizations can set up their own bitcoin wallet, but it is very difficult, and among the lawyers and risk managers I’ve met who offer advice on this topic, almost none have ever actually done it themselves. It’s relatively straightforward to get a small amount of bitcoins but very difficult to get a significant amount of money. Most bitcoin exchanges cap the amount you can get within a given time period. You can start an account, and it usually takes a week to get it going and build up enough transactions to call down tens of thousands of dollars’ worth, and it’s expensive—with charges of over 15 percent per transaction. Off the exchanges, you’re dealing with sketchy people and you’re opening yourself up to getting ripped off. Unless you already have an account and a reserve of $10,000-$20,000, you’re not readily prepared to deliver a ransom.

What are some common pitfalls in this situation?

Assuming you have money lined up and you’re ready to pay the ransom, there are still a number of things that can go wrong. You have to make sure you’re paying the right people. We’re seeing increasing examples of serious criminals getting involved in the ransom business. It’s the equivalent of thieves ripping off drug dealers. We’re also seeing organizations that have been hit by multiple attacks at the same time, which can interfere with the remediation process. In some cases, the decryption key doesn’t work or the IT people don’t know how to use it properly. We have also seen instances where the decryption key itself is an attempt to get additional malware on the system.

How might a forensic expert play a role here?

We can help in every step of the process, including assisting the client with the response before paying the ransom, assisting with paying the ransom (we offer the service of paying on behalf of the client with our own bitcoins), making sure all communications are anonymous, and verifying that the decryption tools themselves work and don’t contain more malware. We can also determine whether the ransomware is actually a cloak or cover for an actual theft of data. In those cases, the $20,000 cost of the ransom is dwarfed by the cost of a data breach. We’ll make sure that the encrypted data isn’t destroyed during remediation. In the newest cases, where ransomware is set off remotely by a hacker, state and federal data breach regulations can require forensic analysis to determine whether confidential data has been compromised, since the hackers clearly obtained some access to the network to plant the ransomware.

What else should risk managers be aware of with regard to the threat of ransomware?

We’re seeing a lot of antivirus companies that claim to be developing tools that can spot ransomware and stop it, or vaccinate computers against it, but we caution people to be very skeptical about these claims. These tools might be able to stop poorly designed ransomware but the fact is, it’s getting more sophisticated all the time—the hackers are figuring out how to outsmart us by masking the malware and the attack vector. What organizations really need to do is go back to the basics: designing a sound infrastructure for computer systems so that an infection won’t spread, and preparing for an encounter with ransomware with a detailed incident response plan.

In summary…

We want to thank Winston for his granular insights into this threat, which seems to be impacting cyber liability insurance clients on a weekly basis these days. We also think it’s important for a risk manager to see that there are many challenging and nuanced steps involved in resolving this type of cyber risk. An organization should not undertake resolution without the guidance of a Breach Coach® lawyer and forensic/security expert who has experience with extortion. Mr. Krone is a frequent speaker at NetDiligence® Cyber Liability Conferences. 

 

Testing the Password Encryption Strength of NT LAN Manager and LAN Manager Hash

Security risks associated with weak user-created passwords are well documented. For example, cyber security provider Imperva analyzed more than 32 million passwords released in a 2009 data breach: more than 50% reflected poor user password choices, and 30% contained 6 or fewer characters. Poor password construction habits improve the chances of successful password determination by hackers who use password-guessing software.

Kivu recently participated in an experiment to evaluate the password encryption strength of two Windows Operating System authentication protocols.

The LAN Manager (LM) hash employs a multi-step algorithm to transform a user password into a calculated string value that obfuscates the password’s identity. The resulting LM hash is stored rather than the original password. First, the user’s password is converted to all uppercase letters. Next, the uppercase password is set to a 14-byte length: passwords longer than 14 bytes are truncated after the 14th byte, and passwords shorter than 14 bytes are null-padded to reach 14 bytes. The 14-byte password is then split into two 7-byte halves, each half is expanded into an 8-byte DES key, and each key is used to DES-encrypt the ASCII string “KGS!@#$%”. The two outputs are concatenated to create the 16-byte LM hash.
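To make the steps concrete, here is a minimal Python sketch of the LM hashing procedure. It assumes the pyCryptodome package for DES; the helper names are ours, not part of any Microsoft API.

```python
from Crypto.Cipher import DES  # pyCryptodome (assumed installed)


def _expand_key(half: bytes) -> bytes:
    """Spread a 7-byte half across 8 bytes so DES can use it as a key
    (the low bit of each output byte is the unused parity bit)."""
    bits = int.from_bytes(half, "big")
    return bytes(((bits >> (49 - 7 * i)) & 0x7F) << 1 for i in range(8))


def lm_hash(password: str) -> bytes:
    # 1. Uppercase, then force the password to exactly 14 bytes (truncate or null-pad).
    pwd = password.upper().encode("ascii", errors="replace")[:14].ljust(14, b"\x00")
    # 2. Split into two 7-byte halves; DES-encrypt the constant "KGS!@#$%" with each.
    magic = b"KGS!@#$%"
    return b"".join(
        DES.new(_expand_key(half), DES.MODE_ECB).encrypt(magic)
        for half in (pwd[:7], pwd[7:])
    )


print(lm_hash("password").hex())  # two independently computed 8-byte halves, 16 bytes total
```

Because each 7-byte half is hashed independently, an attacker can crack the two halves separately, which is the weakness exploited in the experiment described below.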

Microsoft’s second method, NT LAN Manager (NTLM), was introduced with Microsoft Windows NT 3.1 as an improved algorithm for securing a password’s identity. NTLM passwords differ from LM passwords in that NTLM employs the Unicode character set, differentiates upper- and lowercase letters, and permits passwords up to 128 characters in length.
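For comparison, the NT hash reduces to a single step: an MD4 digest of the password encoded as UTF-16LE. A minimal sketch, again assuming pyCryptodome (stock hashlib builds often ship with MD4 disabled):

```python
from Crypto.Hash import MD4  # pyCryptodome (assumed installed)


def ntlm_hash(password: str) -> bytes:
    """NT hash: MD4 over the UTF-16LE password. Unlike LM, case is preserved
    and the password is not uppercased, truncated to 14 bytes, or split in half."""
    return MD4.new(password.encode("utf-16-le")).digest()


print(ntlm_hash("Password123").hex())
```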

Kivu’s experimental design to compare the relative strength of these two encryption methods employed Cain and Abel password-guessing software and different Windows passwords of increasing complexity.

How Cain and Abel Works

Cain and Abel provides the ability to execute password-guessing schemes using dictionary attacks and brute-force attacks. Both dictionary attacks and brute-force attacks employ guess-based methodologies to identify the plain text password associated with a specific hash-encrypted password.

Dictionary attacks use a pre-defined list of search terms or phrases as the basis for guessing. Each search term is transformed into a hash string value using a specific hash algorithm, such as the LM hash protocol. The resulting hash value is compared to a hash value of interest, and if the hash values match, the plain text password is identified.

Brute force attacks attempt every combination of the defined search criteria to identify the plain text password associated with a hashed password. Search criteria include the character set to use (such as ASCII) and the number of characters in the password.
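To make the two guessing strategies concrete, here is a hedged Python sketch against NTLM hashes. The wordlist, character set, and helper names are illustrative stand-ins for what a tool such as Cain and Abel does internally, not its actual implementation.

```python
import itertools
import string

from Crypto.Hash import MD4  # pyCryptodome (assumed installed)


def ntlm_hash(password: str) -> bytes:
    """NT hash of a candidate password (MD4 over UTF-16LE)."""
    return MD4.new(password.encode("utf-16-le")).digest()


def dictionary_attack(target, wordlist):
    """Hash each pre-defined search term and compare it to the captured hash."""
    for word in wordlist:
        if ntlm_hash(word) == target:
            return word
    return None


def brute_force_attack(target, charset=string.ascii_lowercase, max_length=4):
    """Try every combination of the chosen character set up to max_length characters."""
    for length in range(1, max_length + 1):
        for combo in itertools.product(charset, repeat=length):
            candidate = "".join(combo)
            if ntlm_hash(candidate) == target:
                return candidate
    return None


# A weak password falls to a small wordlist; a very short one falls to brute force.
print(dictionary_attack(ntlm_hash("sunshine"), ["letmein", "password", "sunshine"]))
print(brute_force_attack(ntlm_hash("abcd")))
```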

Conclusions

Kivu’s experimental results yielded significant insights concerning the strength of the NTLM password algorithm, Microsoft’s replacement for LM.

None of the NTLM-transformed passwords we used were quickly resolved through brute force attack. Our experiment suggested that the passwords established for the test user accounts would take more than 4 years to determine. While our brute-force attacks were limited to less than 3 minutes, the projections indicated that recovering the plain text equivalents of the NTLM hashes would require substantial lead time.

LM’s approach to obfuscating plain text passwords, however, was less successful. As observed with three test user passwords, brute force password guessing resulted in partial identification of LM passwords, due to LM’s sub-division of the password into two independently hashed halves.

Our results indicated that both NTLM and LM passwords are susceptible to compromise by a well-designed, broad dictionary attack. Overall, NTLM hashes may hold up better than LM against simple dictionary attacks. Against a substantial dictionary, however, an NTLM password can still be identified whenever its calculated hash matches an entry: while dictionary-based attacks are limited to the candidates they contain, larger dictionaries provide more opportunities for a match.

 

Kivu (www.kivuconsulting.com) is a nationwide technology firm specializing in the forensic response to data breaches and proactive IT security compliance. Headquartered in San Francisco with offices in Los Angeles, New York, Washington DC, and Vancouver, Kivu handles assignments throughout the US and Canada, and is a pre-approved cyber forensics vendor for leading North American insurance carriers. Author, Megan Bell, directs data analysis projects and cyber security investigations at Kivu.

In several forensic investigation cases, Kivu has analyzed iOS backup files as a method of obtaining evidence of text messages or other data from an iOS device, usually when an iOS device is not readily available or as a means of cross-correlating evidence.

These backups are often created when the custodian connects an iPod touch, iPhone, or iPad to a computer to charge it or sync it with iTunes; certain files and settings on the device are then automatically backed up. Because the backups are stored locally on the custodian’s computer, they can be extracted and parsed for further analysis.

In a recent case, the backups were extracted from the custodian’s laptop, which was provided to Kivu. The backups pertained to two iPhone devices. Kivu forensically extracted the backups from the custodian’s laptop and was able to parse the backups and uncover text message data that came from both the custodian’s current iPhone and the prior one, which was no longer in her possession.

Here’s how the text messages were retrieved

 

Within the “Backup” directory under MobileSync, there is a subdirectory named for the unique device identifier (UDID) of the device for a full backup. The UDID is a 40-character hexadecimal string that identifies the device [example: 5b8791c14e926cc9220073aefcedd2b831c843b1]. Sometimes, the UDID will have a timestamp appended to it that indicates the date and time that the backup was made. For example, a directory named 5b8791c14e926cc9220073aefcedd2b831c843b1-20150506 122733 indicates that the iOS device was backed up on May 6, 2015 at 12:27:33 PM.

Within the UDID directory, there are numerous files with a similar naming convention as the UDID directory, without a file extension. These filenames are actually SHA1 hash values of files from the device. When backing up an iOS device through iTunes, iTunes computes a SHA1 hash of the file’s backup domain and relative path and uses it as the filename. Below is a chart detailing several common SHA1 file names for files pulled from an iOS device in the course of an iTunes backup.

Since text messages are often of interest, it’s important to note the SHA1 hash value assigned to sms.db. This is the database file that holds text message data, including sender, recipient, and content of messages.
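As a hedged illustration, the sketch below reproduces that SHA1-style filename for sms.db and then queries the message table with Python’s built-in sqlite3 module. The hash input (the file’s backup domain plus relative path, joined with a hyphen) and the column names reflect the commonly documented sms.db layout, which can vary between iOS versions.

```python
import hashlib
import sqlite3

# iTunes names each backed-up file after SHA1("<Domain>-<relative path>").
# For the text-message database the input is "HomeDomain-Library/SMS/sms.db".
backup_name = hashlib.sha1(b"HomeDomain-Library/SMS/sms.db").hexdigest()
print(backup_name)  # the filename to look for inside the UDID directory

# Once that file is copied out of the backup, sms.db is an ordinary SQLite database.
conn = sqlite3.connect(backup_name)  # path to the extracted copy of sms.db
for text, date in conn.execute("SELECT text, date FROM message LIMIT 10"):
    print(date, text)
conn.close()
```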

 

Sources:

http://ios-forensics.blogspot.com/2014/07/apple-ios-backup-file-structure.html
http://resources.infosecinstitute.com/ios-5-backups-part-1/
http://www.iphonebackupextractor.com/blog/2012/apr/23/what-are-all-files-iphone-backup/

About Kivu

Kivu is a nationally recognized leader for security assessments and breach response services.  For more information about collecting forensic data from Apple devices, please contact Kivu.

Airline boarding passes are full of personal data that you might not want total strangers to know. Many travelers simply toss their used boarding passes in the trash, or leave them in the pocket of the seat in front of them when they fly, unaware that the information stored in their boarding pass barcode could leave them open to identity theft. While some airlines, like Southwest, scramble the information on the barcode, others, like United, currently do not.

Recently, Kivu was asked by KPIX-TV in San Francisco to help research the type of information that a data thief could glean from a typical commercial airline boarding pass. Kivu was provided with three sample boarding passes. The specific information available from each boarding pass barcode depended on the airline. Kivu looked at barcodes for three major airlines – United Airlines, Southwest Airlines, and Virgin America. Here’s what we uncovered.

What’s on the Barcode?

Barcodes are technically easy to decipher. With a good scanner app, information that is not available in plain text on a boarding pass can be uncovered. There are several different types of barcodes that one can find on a variety of items. Boarding pass barcodes are encoded as PDF417 barcodes. This barcode type contains multiple modes to represent text, numeric, and binary data.
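For illustration, the mandatory header of a boarding pass barcode follows the IATA Bar Coded Boarding Pass (BCBP) layout of fixed-width fields. The sketch below parses a few of those fields from a decoded string; the sample pass is made up, and real passes typically append conditional fields (where items such as the frequent flyer number live) after this 60-character header.

```python
def parse_bcbp(data: str) -> dict:
    """Pull a few mandatory, fixed-width fields out of a decoded BCBP string."""
    return {
        "passenger_name": data[2:22].strip(),
        "pnr": data[23:30].strip(),       # booking reference / record locator
        "from_airport": data[30:33],
        "to_airport": data[33:36],
        "carrier": data[36:39].strip(),
        "flight_number": data[39:44].strip(),
        "julian_date": data[44:47],       # day of the year the flight departs
        "seat": data[48:52].strip(),
    }


# A made-up pass assembled in the BCBP layout (format code "M", one leg encoded).
sample = (
    "M1"                      # format code + number of legs
    + "DOE/JANE".ljust(20)    # passenger name, 20 characters
    + "E"                     # electronic ticket indicator
    + "ABC123 "               # record locator, 7 characters
    + "SFO" + "ORD" + "UA "   # origin, destination, operating carrier
    + "1234 "                 # flight number, 5 characters
    + "123" + "Y"             # Julian date + compartment code
    + "012A" + "0001 " + "1"  # seat, check-in sequence, passenger status
    + "00"                    # size of the conditional section (hex)
)
print(parse_bcbp(sample))
```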

If a customer purchases a flight using a Frequent Flyer account or with Frequent Flyer miles (depending on the airline), their personal frequent flyer information is exposed when the barcode is scanned. If the customer did not purchase or reserve the flight using Frequent Flyer miles, that information is not available from the barcode.

For example, with her permission we decoded the QR Code and identified the Frequent Flyer number used by a recent traveler on United Airlines. With this information, we were able to log on to the passenger’s United Airlines account. We then knew her address, personal email, and telephone number. Going further, we knew when her next flights were scheduled and had the option to cancel them or change her seat. We also knew her date of birth, middle name, and the username for her account. Lastly, we could access her Miles Rewards and have them transferred to our own personal account in the form of cash.

All of this easily available information leaves travelers open to further data hacks. If we wanted to try to get into her personal bank account, this information would have provided a great start.

Less data is available if a passenger is not using a Frequent Flyer number. Still, a data thief could learn from a boarding pass barcode the passenger’s name, where they flew, the date and the airline.

For airline passengers, this should be a wakeup call. One solution to this problem is to keep your boarding pass on your phone rather than print a copy.

Kivu’s forensic investigators are experienced in protecting organizations against compromise of data, theft of trade secrets and unauthorized access to data. Author, Katherine Delude, is a Digital Forensic Analyst at Kivu Consulting in San Francisco, California. For more information, please contact Kivu.

Data quality is not a glamorous subject. It is not the type of topic that headlines a conference or becomes front-page news. It is more typically suited for help guides and reference manuals that few individuals relish reading. However, organizations that acknowledge the importance of data quality and have strong data quality programs significantly reduce privacy and security risks. They also lower the potential costs associated with data breaches, the legal risks, and the potential size of business interruptions.

Data quality issues start when information is created. This includes incorrect information, data entry errors, and inaccurate document conversion such as conversion of text contained within image files (e.g., a screen shot from a patient management system). Data quality issues also arise as data is being processed, transferred or stored.

1. Build a foundation of knowledge and fluency about data.

“Understanding data” means moving deeper than simply understanding that a database stores records or that a file contains information. Knowledge of data means taking the time to understand that data exists in different layers and structures and can be readily transformed. Additionally, data can be defined as discrete elements (e.g., a data element that stores date-time information) and can have assigned roles and restrictions. Investment in the language of data can improve control over data and enable better decisions on information security and privacy.

2. Don’t leave data design and quality decisions to the development team or an IT group.

This could place data at significant risk, including possible loss, misuse and insecurity. Development teams are often provided with a high-level requirement such as “design a secure form to collect user data”. While this directive may appear clear, privacy and security risks reside in its implementation. To achieve better security and privacy, more attention must be paid to specifying how the form’s data is collected, transmitted and stored. Validation should also be specified so that data is corrected before it is stored.

3. Articulate security and privacy concepts in terms that help developers integrate better security.

Regulations and policies concerning privacy and information security often address data from a systems perspective. A term such as “protect the perimeter” articulates protection of a network and the systems and data within it. “Protect the perimeter,” however, does not translate directly into design guidance for a more secure system.

Developers and analysts work with data in the context of business and user requirements. Developers also work under tight budget constraints and significant systems complexity where one requirement may consist of several steps. As security and privacy requirements continue to mature, understanding the needs and workflow of developers will facilitate better “baked in” security and privacy.

4. Extend security and privacy requirements to how data is created, changed, stored, transmitted and deleted.

Security requirements typically speak at a high level and leave a substantial gap in clarity with respect to data. As an example, a business may have a requirement where social security numbers (SSNs) are encrypted at rest. At the same time, the company may display SSNs in a web application where the SSNs are partially hidden by form design but otherwise are present and unprotected.
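As a hedged sketch of the gap described above, the example below encrypts an SSN for storage but shows that a display layer which merely hides digits still handles the full value in plain text. The Fernet library and the variable names are illustrative choices, not a prescribed design.

```python
from cryptography.fernet import Fernet  # symmetric encryption for data at rest

key = Fernet.generate_key()   # in practice, kept in a key management system
vault = Fernet(key)

ssn = "123-45-6789"
stored_at_rest = vault.encrypt(ssn.encode())        # satisfies "encrypted at rest"
recovered = vault.decrypt(stored_at_rest).decode()  # the web tier decrypts to render the page

# "Partially hidden by form design": the full SSN is still present in the response,
# so masking alone leaves the data unprotected in transit and in the browser.
masked_for_display = "***-**-" + recovered[-4:]
print(masked_for_display)
```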

5. Embed security analysis into the QA process.

Security testing is often the purview of InfoSec groups and external consultants who evaluate software that already exists in an operations environment (also referred to as DevOps or Production). This includes the use of tools and the knowledge to locate and remediate vulnerabilities. The pitfall with this approach is that vulnerabilities are not identified before software is released. Using tools such as Seeker (which analyzes software for vulnerabilities during the QA process) can improve overall application security by reducing the number of possible vulnerabilities in software design.

CASE: Data at Risk (by Design)

Organizations are at increased risk of security incidents due to undefined or poorly specified software requirements. One such example is inadequate articulation of secure password storage. Poor design starts when developers or an IT group receive a directive to secure user passwords, because “securing passwords” can be interpreted in many ways, including (see the sketch after this list for a safer interpretation):

  • Storing clear text passwords in a secure database.
  • Using well-known mathematical formulae to convert passwords into what are called hash values.
  • Storing software code or algorithms to secure passwords in the same data file or directory as the password data.
  • Storing password hints with passwords.
  • Forgetting to secure the folders where data is stored (which leaves the door open to the risk of exfiltration)
  • Not requiring strong password rules for the creation of passwords.
  • Not validating passwords prior to storing passwords.
  • Leaving administrative passwords in the same location as customer data.
  • Creating a backdoor for developers as an easy means to administrate or perform corrections.
  • Not requiring or allowing time for developers who wrote the code for securing passwords to create documentation that explains the code.
  • Leaving design implementation to a developer who may not be available or reachable after code implementation.
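By contrast, a minimal sketch of one widely accepted interpretation of “secure the passwords” stores only a salted, iterated hash and never the password itself. The iteration count and helper names here are illustrative, not a benchmark recommendation.

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative work factor; tune to current guidance


def store_password(password: str):
    """Return (salt, derived key); only these two values are written to the database."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, key


def verify_password(password: str, salt: bytes, key: bytes) -> bool:
    """Recompute the derivation and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, key)


salt, key = store_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, key))  # True
print(verify_password("guess", salt, key))                         # False
```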

Accountability for data design, use and quality should exist across an organization. With less of a technical divide, organizations can improve the conversation on how to better protect data with the appropriate use of security to balance risk and cost. Attention to detail at the bottom (the data level) may also deliver secondary benefits such as cleaner customer data, reduction in time to resolve customer issues, or better disaster recovery.

The misnomer of HIPAA compliant software is prevalent in the health care industry. Too often, HIPAA-regulated entities rely on vendor controls and claims of compliance as a substitute for their own HIPAA security programs. While the vendor software itself may meet the requirements of HIPAA compliance for the discrete functions it performs, the truth of the matter is that no software or system that handles Protected Health Information (PHI) is HIPAA compliant until it has undergone a risk assessment by the regulated entity to determine the efficacy of its security controls in the user’s environment.

Adherence to HIPAA-required risk management processes and industry best practices should protect organizations from attacks. HIPAA requires that both covered entities and business associates maintain a security management process to implement policies and procedures to prevent, detect, contain, and correct security violations. The foundational step in the security management process is the risk assessment, which requires regulated entities to conduct an accurate and thorough assessment of the potential risks and vulnerabilities to the confidentiality, integrity, and availability of electronic protected health information held by the entity.

HIPAA compliant risk assessment

NIST Special Publication 800-66 identifies a protocol organizations may use for conducting a HIPAA compliant risk assessment, outlining nine general steps. Significantly, the first two steps of the risk assessment process should be read together: identify all information systems containing PHI, and ensure that all PHI created, maintained, or transmitted by each system is being maintained appropriately and that security controls are applied.

In the context of third party software and systems, the risk assessment process should be used to identify hidden repositories of PHI where unintended business functions or improper implementation cause PHI to be located outside of an organization’s secure environment. If third party software and systems are not identified within the scope of a risk assessment, and a disclosure or audit occurs, the government may impose penalties for not conducting a thorough risk assessment. Additionally, there is potential for third party lawsuits if a disclosure results. In a data breach dispute, the argument usually boils down to whether the controls the organization had in place were reasonable to protect PHI. In many cases, the plaintiffs use HIPAA as a standard of care, so that if an organization was not in compliance, the plaintiffs will argue the organization did not take reasonable steps to protect PHI.

While not conducting an accurate and thorough risk assessment may result in regulatory enforcement or litigation risk, failing to identify hidden repositories of PHI may also result in other HIPAA violations. If data is stored outside its intended repository, an appropriate data classification and the associated security controls have probably not been applied to the hidden repository. As a result, the regulated entity is unlikely to be meeting the required technical implementation specifications of the HIPAA Security Rule for the information contained in that repository, and is unlikely to have appropriate access and audit controls in place on systems that were never intended to store PHI.

Common vulnerabilities in electronic medical record (EMR) software

Software is developed for a specific purpose, such as managing patient information or insurance billing. Software’s core functionality is created during the development cycle, and security may be incorporated into the development process, or it may be an afterthought. Security is optimal when it exists within a software application and the environment where the application is hosted.

  1. At the device level where the software is installed, software integrates with its host operating system, file system and network environment. The intersection between an application and its host environment could create significant PHI exposure risk.
  2. Software, particularly database software, is often vulnerable due to poor security upgrade practices and loose configurations.
  3. Even when security features are established, those features may be changed to appease users or to simplify IT tasks.
  4. Delayed software upgrades or improper upgrade installation may increase the potential for compromise.
  5. External communication channels are often incorporated into software applications to enable functionality, such as transmitting faxes/emails, or to allow access by outside administrative support. These communication channels are often left unsecured with default configuration settings and administrative credentials.
  6. Audit logs are typically developed to support a specific software application, but use of audit logs may be disabled or ignored.

A recent data breach investigation

In a recent data breach investigation, Kivu encountered an integrated EMR software solution that stored patient records, including social security numbers (“SSNs”), on a Windows server. While the EMR application had protected access with unique credentials assigned to users, the server itself was accessible to all employees with domain credentials. The EMR software offered complete practice management capability in a single offering (such as patient management, prescriptions ordering and tracking, patient communications and billing).

The EMR software and the server housing the EMR software lacked appropriate controls to secure PHI. The presence of EMR login credentials in text-searchable files potentially negated the use of encryption for the EMR database. Unsecured directories provided the opportunity for any user to browse the server and potentially locate files containing patient data.

The audit capabilities of the EMR software were limited to the EMR database. As a result, externally stored files with patient data were outside the reach of the EMR software. PHI could have been exfiltrated without leaving evidence of file activity. For example, on a Windows computer, a hacker could use a Robocopy command to copy files, and the use of this command would leave no evidence of file access.

Using sophisticated search tools employing data pattern recognition, Kivu was able to identify numerous instances of PHI on the compromised server. The client was surprised by the result because they believed the EMR system was secure and HIPAA compliant. This was a painful lesson in the numerous (and dangerous) ways that sensitive data can leak from an otherwise secure system.

Kivu is a nationwide technology firm specializing in the forensic response to data breaches and proactive IT security compliance. Headquartered in San Francisco with offices in Los Angeles, New York and Washington DC, Kivu handles assignments throughout the US and is a pre-approved cyber forensics vendor for leading North American insurance carriers.

For more information about HIPAA data leakage and HIPAA compliant risk assessments, please read the full paper: Forensic Analysis Reveals Data Leaks in HIPAA Compliant Software or contact Kivu.

Some of the worst and most costly data breaches occur because an organisation doesn’t know what and how much data it has stored, says Winston Krone, Managing Director, Kivu Consulting. In many cases, businesses have simply been unaware that they hold sensitive data such as healthcare or financial information, and “…haven’t purged data, they haven’t taken it off line; they’ve treated old data…as being necessary to be instantly accessible,” Krone, a computer forensics expert, argues in an interview for Hiscox Global Insight.

What’s in an email?

Part 1

A particular area of exposure, Krone says – and this is particularly the case for professional services companies – is with the storage of unstructured data such as email. “It’s been the driver in many of the most expensive data breaches. The most common is email or a file server where you have attachments, spreadsheets, word documents. In a lot of these cases you don’t know what you’ve got. You may not even know that someone has sent you an attachment with a thousand names, dates of birth, social security.”

Krone adds: “Trying to determine how many mail boxes have been raided [following a breach] can be the work of weeks and then determining what data is inside those mail boxes can take 30-40 days. This pushes up the response time [and] the response costs.”

Part 2

For many businesses, even if an attempted data breach is unsuccessful, the impact can be just as bad as a successful breach, explains Krone. “In most cases the attackers are stopped or seen. But the real problem for us, and it’s probably a problem in half our cases, is that the organisation was not logging or monitoring its own system sufficiently to allow us to disprove the hack. Unless we can prove what they did and what they’ve taken…that will be a de facto data breach with enormous costs and implications to the organisation.”

Part 3

Given that it’s virtually impossible to protect against a data breach happening, however, Krone says that the best risk management happens well before a breach. “If you haven’t set up in advance your system so it’s recording evidence, so it’s logging evidence, data of who is coming in, where they’re coming in, what they’re doing, what they’re taking out of your system – you can’t go back in time and work that one out. That’s a crucial preparation to put in advance.” A good incident response plan is also important, Krone adds, as well as having a good understanding of what data an organisation holds.

Insurance sector can drive better risk management

Part 4

The growth in cyber insurance is also playing a role in improving awareness of the cyber threat. “Just having the discussion about cyber insurance has required organisations to rethink their risk and how they’re mitigating these problems,” says Krone. “We see a huge difference between companies who have a cyber risk policy – or at least have gone down the road in deciding whether they should have one – and those who haven’t thought about it. It’s a huge educator and the more enlightened insurers are asking companies to really answer some deep questions. It’s a great way for disparate groups [in an organisation] – legal, risk management, HR, IT – to come together when they think about cyber insurance.”

Data choice

Part 5

In such a fast-changing environment, however, where the data breaches hitting the news become ever more significant in scale, Krone says that the real differentiator between good and bad businesses, from an information security perspective, will be the way in which they deal with their data. “If you look at the example of financial institutions and healthcare – two [sectors] that are very regulated in the US and have got their act together – [a business] is either going to take [its] data and start heavily encrypting it and segregating it and making sure that nobody can get into it, or they’re going to take their data and say we’re not in the data storage business; we’re going to put it off to security accredited vendors. It’s really a question of whether smaller organisations are going to have the means and the budget to go down those two different roads.”