Kivu will be providing the technical forensics support for Endurance’s recently launched Cyber Extortion Response Services. Working with the law firm Mullen Coughlin, Kivu will guide ransomware victims as they respond to malicious attacks, including arranging for payment in Bitcoin or other cryptocurrency, analyzing and testing decryption keys to ensure they are effectively and safely applied without further compromising the company’s network, and preparing documentation for reporting events to appropriate law enforcement agencies.
Kivu’s cyber extortion services are at the forefront of the incident response and cyber insurance industry. Kivu has built a reputation for combating cyber extortion and responding to cyber-crime, allowing our clients to make informed, cost-effective decisions. Our experts include analysts fluent in Russian, Chinese, Spanish, and German, trained in negotiation techniques, and highly experienced in hacking techniques and protocols.
A Q&A with Winston Krone of Kivu Consulting – Posted by Mark Greisiger on Junto Blog
There’s no doubt that ransomware attacks are on the rise and they’re becoming more insidious. I spoke with Winston Krone, global managing director of Kivu Consulting, about what the latest version of ransomware looks like and what risk managers should do if it strikes their organization.
What is ransomware?
Ransomware is a type of malware that can infect any device where the malware is opened—typically through a link in an email, but we’re seeing variants where it’s seeded on a computer and activated remotely. Either way, it’s designed to infect other devices or hosts such as servers that the original device is connected to. Its real danger to organizations is its ability to spread across systems for two reasons:
- It can compromise vast amounts of data—once it jumps from a desktop to a server, you’re talking terabytes of data compromised rather than gigs.
- It can jump into backups and destroy the ability to restore the system. This issue has been made worse by the recent trend of synchronized backups—though regulated organizations still require long-term backup capability. If the only backup goes back a day or two and it gets lost, you don’t have earlier versions to rebuild the system.
How does it impact companies?
In the best-case scenario, you come back online in several days; in the worst-case scenario, you never come back online. Ransomware attacks affect just about every type of organization. While many have already designed systems with multiple backups so they can get back online immediately following an attack, some organizations, particularly law firms, accounting firms and manufacturing companies, haven’t developed systems for safely keeping backups.
Either way, organizations need to decide whether to pay the ransom or to try to rebuild the data themselves from other sources such as employee laptops or old computers that were offline (and thus not hit by the malware). The do-it-yourself approach turns into a significant amount of work—many hundreds of hours of labor and business downtime—and it’s rarely cheaper than the $5,000 to $20,000 ransom. Some organizations have an aversion to paying criminals, and that’s a legitimate concern, but there’s a danger in trying to rebuild the data yourself. We have seen situations where organizations try to do this and then realize later that they can’t and want to pay the ransom—in the meantime, they have overwritten the encrypted data, so when they pay the ransom and get the decryption key it doesn’t do them any good.
Many organizations don’t include ransomware in their incident response plans, or they underestimate its significance. The ones that do include it need to update the plan at least quarterly. Over the last year we have seen major paradigm shifts, with new types of ransomware appearing every two weeks: changes in the attack vector, the seriousness of the attacks, and how they’re launched.
Can you explain how the negotiations between the perpetrator and the attacked organization work?
In the most basic ransomware, you’re simply steered to a URL and there’s really no way to communicate with the attacker. In this situation, the ransom is usually a relatively small amount, probably less than $5,000. In a second variant, they supply a URL but there’s some degree of communication, such as a comment field and some type of handshake where they let you test a small amount of data to prove that they actually have a decryption key and it works. In the third type, there’s direct communication by email, and these are the most expensive ransoms. In these cases, they’re open to negotiation—not about the price but about the time needed to pay the ransom or to figure out how the decryption works.
In larger attacks we see a new variant whereby the basic ransom rises with the number of computers infected. In those cases, you can pay per individual computer affected or buy a blanket global license upwards of $20,000, and they’ll give you all the keys needed. In those types of attacks, the attacker is incentivized to negotiate with you more. In general, the negotiations are not for the fainthearted—we have negotiated dozens of these cases with foreign-language speakers set up with multiple identities around the world and on the dark web. Our role is to make sure the negotiations go smoothly while masking the identity of our clients to the extent that we can.
Anonymity is important. We highly recommend cloaking the identity of the attacked organization, because attackers who learn who they’ve hit can raise the ransom accordingly. In most cases the criminals don’t know who they’re attacking and they don’t care. However, this is something we expect to change in the next six months or so—we think attackers will go after regulated businesses or other businesses where data is important, or choose organizations that they know carry insurance and are therefore more likely to pay.
How can a company set up a bitcoin wallet in order to actually pay the ransom?
Organizations can set up their own bitcoin wallet but it is very difficult and among the lawyers and risk managers I’ve met who offer advice on this topic, almost none of them have ever actually done it themselves. It’s relatively straightforward to get a small amount of bitcoins but it’s very difficult to get a significant amount of money. Most bitcoin exchanges cap the amount of money you can get within a given time period. You can start an account and usually it takes a week to get it going and build up enough transactions to call down tens of thousands of dollars’ worth, and it’s expensive—with charges of over 15 percent per transaction. Off the exchanges, you’re dealing with sketchy people and you’re opening yourself up to getting ripped off. Unless you already have an account and a reserve of $10,000-$20,000 you’re not readily prepared to deliver a ransom.
What are some common pitfalls in this situation?
Assuming you have money lined up and you’re ready to pay the ransom, there are still a number of things that can go wrong. You have to make sure you’re paying the right people. We’re seeing increasing examples of serious criminals getting involved in the ransom business. It’s the equivalent of thieves ripping off drug dealers. We’re also seeing organizations that have been hit by multiple attacks at the same time, which can interfere with the remediation process. In some cases, the decryption key doesn’t work or the IT people don’t know how to use it properly. We have also seen instances where the decryption key itself is an attempt to get additional malware onto the system.
How might a forensic expert play a role here?
We can help in every step of the process, including assisting the client with the response before paying the ransom, assisting with paying the ransom (we offer the service of paying on behalf of the client with our own bitcoins), making sure all communications are anonymous, and verifying that the decryption tools themselves work and don’t contain more malware. We can also determine whether the ransomware is actually a cloak for an actual theft of data. In those cases, the $20,000 cost of the ransom is dwarfed by the cost of a data breach. We’ll make sure that the encrypted data isn’t destroyed during remediation. In the newest cases, where ransomware is set off remotely by a hacker, forensic analysis may be required under state and federal data breach regulations to determine whether confidential data has been compromised, since the hackers clearly obtained some access to the network to plant the ransomware.
What else should risk managers be aware of with regard to the threat of ransomware?
We’re seeing a lot of antivirus companies claiming to develop tools that can spot ransomware and stop it, or vaccinate computers against it, but we caution people to be very skeptical about these claims. These tools might be able to stop poorly designed ransomware, but the fact is, it’s getting more sophisticated all the time—the hackers are figuring out how to outsmart us by masking the malware and the attack vector. What organizations really need to do is go back to the basics: designing a sound infrastructure for computer systems so that if there’s an infection it won’t spread, and preparing for an encounter with ransomware with a detailed incident response plan.
We want to thank Winston for his granular insights into this threat, which seems to be impacting cyber liability insurance clients on a weekly basis these days. We also think it’s important for a risk manager to see that there are many challenging and nuanced steps involved in resolving this type of cyber risk. An organization should not undertake resolution without the guidance of a Breach Coach® lawyer and forensic/security expert who has experience with extortion. Mr. Krone is a frequent speaker at NetDiligence® Cyber Liability Conferences.
Testing the Password Encryption Strength of NT LAN Manager and LAN Manager Hash
Security risks associated with weak user-created passwords are well documented. In 2009, for example, cyber security provider Imperva analyzed more than 32 million passwords released in a data breach that year. More than 50% of the passwords reflected poor user choices, and 30% contained 6 or fewer characters. Poor password-construction habits improve the chances of successful password determination by hackers who use password-guessing software.
Kivu recently participated in an experiment to evaluate the password encryption strength of two Windows Operating System authentication protocols.
The LAN Manager (LM) hash employs a multi-step algorithm to transform a user password into a calculated string value that obfuscates the password’s identity. The resulting LM hash is stored rather than the original password. First, the user’s password is converted to all uppercase letters. Next, the uppercase password is fixed to a 14-byte length: passwords longer than 14 bytes are truncated after the 14th byte, and shorter passwords are null-padded to reach 14 bytes. The 14-byte password is then split into two 7-byte halves, and each 7-byte half is expanded into an 8-byte DES key (the 56 bits are spread across 8 bytes, with a parity bit added to each byte). Each key is used to DES-encrypt the constant ASCII string “KGS!@#$%”. The two 8-byte outputs are concatenated to create the 16-byte LM hash.
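The steps above can be sketched in Python. This is a simplified illustration of the preprocessing and key-derivation stages only; the final DES-ECB encryption of the constant string would require a DES implementation, which the Python standard library does not provide:

```python
def lm_halves(password: str) -> tuple[bytes, bytes]:
    """Uppercase, truncate/null-pad to 14 bytes, split into two 7-byte halves."""
    data = password.upper().encode("ascii")[:14].ljust(14, b"\x00")
    return data[:7], data[7:]

def des_key_from_half(half: bytes) -> bytes:
    """Spread the 56 bits of a 7-byte half across 8 bytes; the low bit of
    each output byte is the DES parity bit (ignored by the cipher)."""
    n = int.from_bytes(half, "big")
    return bytes((((n >> (49 - 7 * i)) & 0x7F) << 1) for i in range(8))

# Each 8-byte key would then DES-encrypt the constant b"KGS!@#$%" in ECB
# mode; concatenating the two 8-byte ciphertexts yields the 16-byte LM hash.
left, right = lm_halves("PassWord123")
print(left, right)  # b'PASSWOR' b'D123\x00\x00\x00'
```

Note how the split into independent 7-byte halves means an attacker can crack each half separately, which is the core weakness explored in the experiment below.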
Microsoft’s second encryption method, the NT LAN Manager (NTLM) hash, introduced with Microsoft Windows NT 3.1, is an improved algorithm for securing a password’s identity. NTLM passwords differ from LM passwords in that NTLM employs the Unicode character set, differentiates upper- and lowercase letters, and permits passwords up to 128 characters in length.
Kivu’s experimental design to compare the relative strength of these two encryption methods employed Cain and Abel password-guessing software and different Windows passwords of increasing complexity.
How Cain and Abel Works
Cain and Abel provides the ability to execute password-guessing schemes using dictionary attacks and brute-force attacks. Both dictionary attacks and brute-force attacks employ guess-based methodologies to identify the plain text password associated with a specific hash-encrypted password.
Dictionary attacks use a pre-defined list of search terms or phrases as the basis for guessing. Each search term is transformed into a hash string value using a specific hash algorithm, such as the LM hash protocol. The resulting hash value is compared to a hash value of interest, and if the hash values match, the plain text password is identified.
Brute force attacks attempt every combination of defined search criteria to identify the plain text password associated with a hashed password. Search criteria settings include the use of character sets, such as ASCII and the number of characters in a password.
Kivu’s experimental results yielded significant insights concerning the strengths of the NTLM password algorithm, which is Microsoft’s replacement for LM.
None of the NTLM-transformed passwords we used was quickly resolved through brute-force attack. Our experiment suggested that the passwords established for the test user accounts would take more than 4 years to determine. While we limited our brute-force attacks to less than 3 minutes, the projected times to recover the plain text equivalents of the NTLM hashes were substantial.
LM’s approach to obfuscating plain text passwords, however, was far less successful. For three test user passwords, brute-force guessing partially recovered the LM passwords, because LM’s sub-division of the password string during the hashing process lets each 7-byte half be attacked independently.
Our results indicated that both NTLM and LM passwords are susceptible to compromise by a well-designed, broad dictionary attack. NTLM-hashed passwords may hold up better than LM against simple dictionary attacks. With a substantial dictionary, however, an NTLM password can still be identified whenever a dictionary entry’s calculated hash matches the target: dictionary attacks are limited to the candidates they contain, but larger dictionaries provide more opportunities for a match.
Kivu (www.kivuconsulting.com) is a nationwide technology firm specializing in the forensic response to data breaches and proactive IT security compliance. Headquartered in San Francisco with offices in Los Angeles, New York, Washington DC, and Vancouver, Kivu handles assignments throughout the US and Canada, and is a pre-approved cyber forensics vendor for leading North American insurance carriers. Author, Megan Bell, directs data analysis projects and cyber security investigations at Kivu.
Honored to be interviewed on NPR’s Marketplace on how forensic evidence can identify key employees who are planning to quit after receiving their bonuses (and maybe taking secrets). We see a big spike in trade secret theft during the bonus season – but interestingly the taking can start months in advance when the star salesperson or software engineer first gets a feeling that the end of year bonus is not going to be so stellar. Also interesting to watch how other employees quickly delete their secret caches of confidential files when a colleague is caught red-handed. http://www.marketplace.org/2016/02/04/business/bonus-season
In several forensic investigation cases, Kivu has analyzed iOS backup files as a method of obtaining evidence of text messages or other data from an iOS device, usually when an iOS device is not readily available or as a means of cross-correlating evidence.
These backups are often made to the custodian’s computer when the custodian connects an iOS device to charge it or to sync it with iTunes. When an iPod touch, iPhone, or iPad is connected to the computer, certain files and settings on the device are automatically backed up. These backup files are stored locally on the custodian’s computer and can be extracted and parsed for further analysis.
In a recent case, the backups were extracted from the custodian’s laptop, which was provided to Kivu. The backups pertained to two iPhone devices. Kivu forensically extracted the backups from the custodian’s laptop and was able to parse the backups and uncover text message data that came from both the custodian’s current iPhone and the prior one, which was no longer in her possession.
Here’s how the text messages were retrieved
Within the “Backup” directory under MobileSync, there is a subdirectory named for the unique device identifier (UDID) of the device for a full backup. The UDID is a 40-character hexadecimal string that identifies the device [example: 5b8791c14e926cc9220073aefcedd2b831c843b1]. Sometimes, the UDID will have a timestamp appended to it that indicates the date and time that the backup was made. For example, a directory named 5b8791c14e926cc9220073aefcedd2b831c843b1-20150506 122733 indicates that the iOS device was backed up on May 6, 2015 at 12:27:33 PM.
Within the UDID directory are numerous files with no file extension that follow a naming convention similar to the UDID directory itself. These filenames are actually SHA1 hash values: when backing up an iOS device, iTunes names each file after the SHA1 hash of the file’s backup domain and its path on the device.
Since text messages are often of interest, it’s important to note the SHA1 hash value assigned to sms.db. This is the database file that holds text message data, including sender, recipient, and content of messages.
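Assuming the widely documented convention that the hashed input is the file’s backup domain and its relative path joined with a hyphen, the filename for sms.db can be reproduced in a few lines of Python:

```python
import hashlib

def backup_filename(domain: str, relative_path: str) -> str:
    """iTunes names each backup file after the SHA1 of 'Domain-relative/path'."""
    return hashlib.sha1(f"{domain}-{relative_path}".encode()).hexdigest()

# The SMS database lives in HomeDomain at Library/SMS/sms.db on the device.
print(backup_filename("HomeDomain", "Library/SMS/sms.db"))
# 3d0d7e5fb2ce288813306e4d4636395e047a3d28
```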
Kivu is a nationally recognized leader for security assessments and breach response services. For more information about collecting forensic data from Apple devices, please contact Kivu.
Data quality is not a glamorous subject. It is not the type of topic that headlines a conference or becomes front-page news. It is more typically suited for help guides and reference manuals that few individuals relish reading. However, organizations that acknowledge the importance of data quality and have strong data quality programs significantly reduce privacy and security risks. They also lower the potential costs associated with data breaches, the legal risks, and the potential size of business interruptions.
Data quality issues start when information is created. These include incorrect information, data entry errors, and inaccurate document conversion, such as conversion of text contained within image files (e.g., a screen shot from a patient management system). Data quality issues also arise as data is processed, transferred, or stored. The following practices can reduce these risks:
1. Build a foundation of knowledge and fluency about data.
“Understanding data” means moving deeper than simply knowing that a database stores records or that a file contains information. Knowledge of data means taking the time to understand that data exists in different layers and structures and can be readily transformed. Additionally, data can be defined as discrete elements (e.g., a data element that stores date-time information) with assigned roles and restrictions. Investment in the language of data can improve control over data and enable better decisions on information security and privacy.
2. Don’t leave data design and quality decisions to the development team or an IT group.
Doing so can place data at significant risk, including possible loss, misuse, and exposure. Development teams are often given high-level requirements such as “design a secure form to collect user data.” While this directive may appear clear, privacy and security risks reside in its implementation. To achieve better security and privacy, more attention must be paid to clarifying how the form collects, transmits, and stores data. Validation should also be performed so that data is corrected before it is stored.
3. Articulate security and privacy concepts in terms that help developers integrate better security.
Regulations and policies concerning privacy and information security often address data from a systems perspective. A term such as “protect the perimeter” describes protection of a network and the systems and data within it, but it does not translate clearly into design guidance for building a more secure system.
Developers and analysts work with data in the context of business and user requirements. Developers also work under tight budget constraints and significant systems complexity where one requirement may consist of several steps. As security and privacy requirements continue to mature, understanding the needs and workflow of developers will facilitate better “baked in” security and privacy.
4. Extend security and privacy requirements to how data is created, changed, stored, transmitted and deleted.
Security requirements typically speak at a high level and leave a substantial gap in clarity with respect to data. As an example, a business may require that social security numbers (SSNs) be encrypted at rest. At the same time, the company may display SSNs in a web application where they are partially hidden by form design but otherwise present and unprotected.
5. Embed security analysis into the QA process.
Security testing is often the purview of InfoSec groups and external consultants who evaluate software that already exists in an operations environment (also referred to as DevOps or production). This includes the use of tools and the knowledge to locate and remediate vulnerabilities. The pitfall with this approach is that vulnerabilities are not identified before software is released. Using tools such as Seeker (which analyzes software for vulnerabilities during the QA process) can improve overall application security by reducing the number of possible vulnerabilities in software design.
CASE: Data at Risk (by Design)
Organizations are at increased risk of security incidents due to undefined or poorly specified software requirements. One such example is inadequate articulation of secure password storage. Poor design begins when developers or an IT group receive a directive to “secure user passwords.” Securing passwords, however, can mean many things, including:
- Storing clear text passwords in a secure database.
- Using well-known mathematical formulae to convert passwords into what are called hash values.
- Storing the software code or algorithms used to secure passwords in the same data file or directory as the password data.
- Storing password hints with passwords.
- Forgetting to secure the folders where data is stored (which leaves the door open to the risk of exfiltration).
- Not requiring strong password rules for the creation of passwords.
- Not validating passwords prior to storing them.
- Leaving administrative passwords in the same location as customer data.
- Creating a backdoor for developers as an easy means to administer or perform corrections.
- Not requiring or allowing time for developers who wrote the password-securing code to create documentation that explains it.
- Leaving design implementation to a developer who may not be available or reachable after code implementation.
Accountability for data design, use and quality should exist across an organization. With less of a technical divide, organizations can improve the conversation on how to better protect data with the appropriate use of security to balance risk and cost. Attention to detail at the bottom (the data level) may also deliver secondary benefits such as cleaner customer data, reduction in time to resolve customer issues, or better disaster recovery.