Kivu’s digital forensic professionals are seeing an ever-increasing number of Apple devices being used within organizations. Our forensic professionals have extensive Apple experience and have provided expert testimony on a number of legal cases involving Apple devices.

The Challenges of Collecting Data

Mac computers are known for having a secure delete function built into the operating system. This allows a user to overwrite the computer’s free space 1, 7 or 35 times, making it all but impossible for forensic examiners to recover deleted data.

Mac computers also come with a built-in encryption feature called FileVault. If the user enables FileVault, examiners cannot image or access the contents of the computer until the encryption is bypassed, either with the user’s password or through extensive workarounds involving memory analysis to extract possible passwords. Some vendors claim to be able to crack FileVault passwords, but this method is very expensive and may not produce the needed results.

iOS devices, such as iPhones and iPads, also present imaging challenges. Physical images are bit-for-bit copies of a device, which include deleted data. Physical acquisition of certain iPhone models is not possible due to Apple’s encryption. To bypass the encryption, an examiner would need to jailbreak the device. This is a risky approach, since jailbreaking could destroy existing evidence and render the device unusable and inaccessible.

If physical acquisition of a certain iOS model is not possible and jailbreaking is not feasible, a logical acquisition may suffice. The primary issue with logical acquisition is that certain data cannot be extracted for analysis, including deleted data, emails, cache files and geo-location data. This, of course, poses a major problem for forensic examiners.

Apple Forensic Tools

The digital forensic professionals at Kivu Consulting are experts in forensically imaging and preserving Apple device data. Our forensic analysts are trained and certified in the industry-leading tools used to image and analyze Apple devices, such as MacQuisition, EnCase, Cellebrite, FTK Imager and BlackLight.

For Mac computers, MacQuisition allows for live data acquisitions, targeted data collections, and forensic imaging. This tool can acquire over 185 different Macintosh computer models and provides a built-in write-blocker to maintain data preservation.

Kivu uses tools such as EnCase, FTK Imager and BlackLight to analyze Macintosh forensic images, as well as to image and analyze iOS mobile devices. Our forensic experts hold the EnCase Certified Examiner and Certified BlackLight Examiner certifications, offered by Guidance Software and BlackBag Technologies.

Selected Kivu Engagements and Expert Testimony

Kivu Consulting has worked on and testified in various nationwide cases involving Macintosh computers and iOS mobile devices:
  • A construction company was investigating a sexual harassment claim. The client was using an iPhone and iPad. These devices were collected, imaged, and analyzed for evidence of communication between the user making the claim and the client, as well as any inappropriate photos that may have been taken using the devices.
  • Kivu assisted multiple law firms with cases involving theft of Intellectual Property. These law firms reached out to Kivu to assist with iPhone acquisition and forensic analysis to determine device activity, such as applications used, browsing, text messages and calls within a specific timeframe.
  • Kivu investigated and analyzed multiple MacBook Pro devices for an accounting firm, to determine if unauthorized users gained access to the devices and exfiltrated data.
  • Kivu has testified in a federal class action suit involving Apple. Multiple people claimed that Apple billed them twice for the same iTunes songs. They said that the songs they originally downloaded were not accessible in iTunes, so they downloaded the songs again and were billed a second time. Kivu conducted forensic analysis on all Apple devices provided in the case to determine if multiple instances of the same songs were present on the computers and if the originally downloaded songs were, in fact, inaccessible to the users.
  • Kivu investigated multiple Mac devices for educational institutions to determine if students hacked the schools’ computer systems to acquire better grades.

About Kivu

Kivu Consulting combines technical and legal expertise to deliver investigative, discovery and forensic solutions worldwide. Author, Thomas Langer, EnCE, CEH, is an Associate Director in Kivu’s Washington DC office. For more information about Apple device forensics and how Kivu can assist your organization, please contact Kivu.

What is PCI 3.0 and How Does It Differ from PCI 2.0?

The Payment Card Industry Data Security Standard (PCI DSS) applies to companies of any size that accept credit card payments. The effective date of version 3.0 of the standard was January 1, 2014, but existing PCI DSS 2.0 compliant vendors had until January 1, 2015 to move to the new standard. Some of the changes are not required to be in place until June 1, 2015. This blog post from Kivu will explain what the new standards are and review some of the most critical issues involved with compliance.

PCI 3.0 is not a wholesale revision of PCI 2.0. The 12 core principles of PCI compliance remain intact. PCI 3.0 clarifies and revises all 12 principles and is roughly 25% larger than PCI 2.0, comprising 98 changes. Some of the changes are small, but others are significant. PCI 3.0 will be harder and more expensive to implement than PCI 2.0. Organizations should expect the PCI 3.0 assessment to be similar to PCI 2.0 but more transparent and consistent.

A major concern for merchants implementing PCI 3.0 is how they will be able to afford the increased cost of compliance. PCI 3.0 requires additional processes and procedures that many organizations might not be prepared to implement.

New Key Areas for PCI 3.0

Segmentation of Card Data Environment (CDE) – Penetration Testing

PCI 3.0 is a great improvement over PCI 2.0 because it requires that the Card Data Environment (CDE) be segmented from other networks. During the breach at Target, contractors had access to the corporate network, putting the whole CDE at risk.

The cost of segmenting the CDE will be a burden on the merchant, but it is a significant step towards reducing risk and exposure. Penetration testing (testing a computer system, network or web application to find vulnerabilities that an attacker could exploit) will be critical. Qualified Security Assessors (QSAs) will have a tough job auditing the new guidelines and results.
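As an illustration of what segmentation testing checks for, below is a minimal Python sketch of the kind of reachability probe a tester might run from a host outside the CDE. The addresses and ports are hypothetical placeholders, and a real penetration test would follow a full methodology such as NIST SP 800-115 rather than a single script:

import socket

# Hypothetical CDE addresses and ports -- substitute the actual scoped systems.
CDE_HOSTS = ["10.10.20.5", "10.10.20.6"]
PORTS = [22, 443, 1433, 3389]

# Run from a host OUTSIDE the CDE segment; any successful connection
# indicates a potential segmentation gap to investigate.
for host in CDE_HOSTS:
    for port in PORTS:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(2)
        try:
            reachable = s.connect_ex((host, port)) == 0
            print(f"{host}:{port} {'REACHABLE - possible gap' if reachable else 'blocked'}")
        finally:
            s.close()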

Key Takeaways

  • PCI 3.0 has to be implemented by June 2015.
  • PCI 3.0 requires all merchants to undergo penetration testing in order to remain PCI compliant.
  • Merchants need to ensure that correct methods are used to segment the CDE from the client network.
  • The contractor network must be segmented from the client network.
  • The Best Practice Framework will be based around NIST SP 800-115.
  • Merchants must be diligent in their selection of penetration testing services.

System Inventories

Maintaining system inventories is not an easy task, and accurate system inventories have been difficult to accomplish under PCI 2.0. What is different with PCI 3.0?

The inventory list under PCI 3.0 just grew bigger. Now, maintaining an inventory of hardware, software, rules and logs will be an even more difficult task in order to remain in compliance. Documenting components and inventory is time consuming, and inventory changes frequently. Who will be in charge of accomplishing this within an organization, and how reliable will the inventory list be? What happens when virtualization/cloud is thrown into the inventory mix? What about geographic locations?

We at Kivu see maintaining a system inventory as an evolving cycle with constant issues.

Key Takeaways

  • Maintaining a reliable, timely inventory will be difficult, if not impossible.
  • The merchant’s IT & compliance teams will have to spend more time creating inventories.
  • Merchants need to know who will be responsible for maintaining system component inventories that are in scope for PCI DSS (Hardware & Software).
  • Merchants must maintain an inventory of authorized wireless access points, including their business justification.
  • Documenting components and functions will be a continuous cycle.

Vendor Relationships

Explicit documentation of who manages each aspect of PCI DSS compliance is a critical improvement of PCI 3.0 over PCI 2.0. Who owns what, the service provider or the organization? Management of each aspect of PCI DSS compliance should be well documented in every vendor contract agreement.

Kivu recommends a written agreement with service providers verifying that the provider maintains all applicable PCI DSS requirements. Getting service providers to agree will be a daunting task. Will vendors want to take this responsibility? When disputing PCI reports, identifying who is at fault is a common problem. If there is a breach, who is liable?

Key Takeaways

  • In PCI 3.0, detailed contractual language and service provider roles and responsibilities are much more of a focus.
  • Merchants should decide who owns each aspect of PCI compliance.
  • PCI compliance has to be written into the vendor contract agreement, with specific language on who owns what.
  • Outline where responsibility lies for control over compliance.
  • Providers must give their customers written documentation acknowledging that the provider is responsible for the cardholder data it possesses.

Anti-Malware Systems

PCI 3.0 places a new emphasis on identifying and evaluating evolving malware threats targeted at systems NOT commonly considered to be affected by malicious software. Advanced research capabilities or intelligence on malware threats is seen as a proactive measure, but who will provide these proactive services to merchants? How can this be enforced?

Who will be responsible for keeping abreast of threats and making sure anti-malware systems are patched and configured correctly? It is critical for the PCI Standards Council to release a recommended list of anti-malware vendors and provide guidelines for merchants.

Key Takeaways

  • PCI 2.0 only states that antivirus software should be in place. PCI 3.0 goes much further.
  • PCI 3.0 states that if malware emerges for PCI systems, the merchant should know about it. There needs to be a process that makes sure this happens.
  • PCI QSAs will need to scrutinize anti-malware controls on all platforms.
  • Technical planning and strategy will involve more paperwork for merchants.
  • Specific authorization from management to disable or alter operations of all antivirus mechanisms should be a policy.
  • An anti-malware system should automatically lock out a user who tries to disable it.
  • Merchants will need to justify why they don’t have anti-malware software running on non-Windows platforms. This is critical because it causes organizations to think carefully about evolving non-Windows threats.

Physical Access and POS System Inventories

PCI 3.0 states that physical access to a merchant’s server room should be restricted, whether the room is in a closet in the back of the store or in a high-end data center. Physical access should be limited to certain personnel, and all others should be escorted and signed in and out of the room. Restricting admission limits the risk of unauthorized access to POS devices and back end systems that could potentially be swapped out by unauthorized individuals.

Maintaining an inventory of POS hardware and conducting frequent spot checks to ensure serial numbers match will be critical to staying compliant under PCI 3.0. POS device inspections should be a best practice, but how many merchants even have a list of their POS devices?

Key Takeaways

  • Control physical access to the server room for all on-site personnel based on individual job function. Access should be revoked upon termination.
  • Maintain an inventory of all POS devices and implement controls to protect these devices.
  • POS device inspections should be a best practice. Periodically inspect POS devices and check serial numbers to ensure devices have not been swapped out.
  • Procedures for frequently testing POS devices should be implemented.
  • Provide security awareness training to employees that use POS systems to identify suspicious behavior.
  • PCI 3.0 mandates that service providers with remote access to the CDE must use a unique authentication credential for each customer environment.
  • Access needs and privileges for all job functions allowed access to the CDE must be formally defined and documented in advance.

What Other Changes Should We Expect with PCI 3.0?

Following are some moderate changes worth highlighting:

  • Risk assessments are now to be performed annually, as well as whenever significant changes are made to the Card Data Environment. What constitutes a significant change to the environment? There are no guidelines that specifically address this.
  • New password management processes and controls must be enforced and verified.
  • The CDE must be formally defined, with an up-to-date diagram that shows payment flow across systems.
  • Merchants need to implement file change detection systems and then investigate and respond to all alerts generated by these systems. This type of system can generate many alerts every day. Kivu recommends that merchants understand who will monitor these alerts and review and document responses. (A minimal detection sketch follows this list.)
  • Daily review of logs is required. Again, who will do this?
  • QSAs will have more responsibility to enforce the new guidelines.
  • PCI 3.0 will increase compliance costs, and those who complain may not fully understand the reasons for the process mandate.
  • There is a recommendation to avoid service providers that are non-compliant.
  • Protection against memory-scraping malware is now addressed as a best practice in PCI 3.0.
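To illustrate the file change detection requirement mentioned above, here is a minimal Python sketch (with hypothetical paths) of the baseline-and-compare approach such systems use. It is a conceptual illustration, not a substitute for a dedicated file integrity monitoring product:

import hashlib, json, os

WATCHED_DIR = "/etc"          # hypothetical directory in PCI scope
BASELINE = "baseline.json"    # hypothetical baseline location

def snapshot(root):
    # Hash every readable file under the watched directory.
    hashes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    hashes[path] = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                continue  # skip unreadable files
    return hashes

current = snapshot(WATCHED_DIR)
if os.path.exists(BASELINE):
    with open(BASELINE) as f:
        previous = json.load(f)
    for path, digest in current.items():
        if previous.get(path) not in (None, digest):
            print("CHANGED:", path)  # an alert to investigate and document
with open(BASELINE, "w") as f:
    json.dump(current, f)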

Has the Value of PCI Standards Declined?

It is tough to argue against good security and retailers accepting more responsibility for it. The buck has been passed to the retailer, although banks should take more responsibility to provide more security as well through chip technology or point-to-point encryption. Some retailers are moving ahead with tokenization and point-to-point encryption because they believe that PCI 3.0 compliance is not enough.

What Failures Do We See in PCI 3.0?

The PCI Security Standards Council has missed some key opportunities to clarify the standard and to address compliance as it relates to emerging technologies.

  • One significant issue is the failure of PCI 3.0 to address virtualization, cloud and mobile payment providers. Merchants frequently use all three, but PCI 3.0 neither addresses them in detail nor provides merchants with guidelines.
  • PCI 3.0 continues to ignore mobile payment processing and mobile device security, leaving merchants who support mobile payment technology on their own to determine how to be compliant. Card brands are reluctant to put security constraints on mobile technology for fear of stifling the growing revenue expected from mobile payments.
  • Some merchants remain non-compliant with PCI 2.0, yet they are expected to be compliant with PCI 3.0 by June. How will they be able to make all of the changes necessary? Will some merchants be allowed to become PCI 2.0 compliant at first and given additional time by the PCI Security Standards Council to comply with PCI 3.0?

Is PCI 3.0 Worth It?

PCI 3.0 is bigger, and therefore harder and more expensive to implement than PCI 2.0, but it offers additional, critical security benefits. It will take more time and resources from merchants to stay in compliance with PCI 3.0. We at Kivu believe that, going forward, it would be best to integrate PCI compliance activities into an organization’s year-round IT Security Management process.

Most computer compromises aren’t discovered until after an attack—sometimes days or weeks later. Shutting down a computer may halt malware activity, but it could have negative and unforeseen consequences. For example, it could become difficult to retrace information exfiltrated by a hacker or botnet. This is particularly important if significant time has elapsed between an attack and discovery of the malware.

During a forensic investigation, there should be a balance between rushing to remove malware and understanding the scope of the malware infestation in order to find a solution that deters future attacks.

What is Malware?

Malware is software that is designed for illicit and potentially illegal purposes. Malware may be a single software program or a collection of programs used to accomplish tasks such as:

  • Obtaining system control—for command and control of a computer
  • Acquiring unauthorized access to system resources—network intrusion
  • Interrupting business operation
  • Gathering information—reconnaissance
  • Holding digital assets hostage—ransomware

How Does Malware Infection Occur?

The Internet has opened the door to broad distribution of malware. It is possible for malware to originate from sources such as email, instant messaging, or infected file downloads. Malware can also spread through USB devices or connectivity to public WiFi hotspots.

The most complex malware tools may use a combination of distribution methods to infiltrate an organization. For example, an email may contain a hyperlink to a website that causes “dropper” software to download. The dropper software performs reconnaissance of its host computer and transmits results out to another computer on the Internet. The second computer analyzes the reconnaissance results and sends back malware that is customized to the host computer.

What are Common Types of Malware?

Virus. A virus is software that inserts malicious code into a computer and has the capability of spreading to other computers. The ability to propagate is a requirement for malware to be classified as a virus or worm.

Worm. Worms are a type of malware that propagate across networks. A worm finds its way by reading network addresses or email contact lists and then copying itself to identified addresses. Worms may have specific capabilities, such as file encryption or installation of certain software, including remote access software.

Trojan Horse. This type of malware enables unauthorized access to a victim computer. Unauthorized access could result in theft of data or a computer that becomes part of a distributed denial-of-service (DDoS) attack. Unlike viruses or worms, Trojan horse software does not spread to other computers.

Rootkits. A rootkit is malware that takes control of a host computer and is designed to evade detection. Rootkits accomplish evasion through tactics such as hiding in protected directories or disguising their processes and DLLs (Dynamic Link Libraries) as legitimate files, without the computer or user noticing an abnormality. Rootkits may defend themselves from deletion and may have the ability to re-spawn after deletion. Most notably, rootkits have the potential to operate in stealth mode for extensive periods of time and to communicate with external computers, often transmitting collected data from a victim computer.

Spyware. The purpose of spyware is to collect data from a victim computer. Spyware may exist as malware that is installed on a host computer or embedded within a browser. Spyware may collect data over an extensive time period without the victim ever knowing the extent of the spying activity. Spyware may collect keyboard strokes, take screenshots of user activity, or utilize built-in cameras to record video.

Browser Hijacker. This malware takes control of a user’s browser settings and changes the default home page and search engine. Browser hijacking software may disable search engine removal features and have the ability to re-generate after deletion. There may also be persistent, unwanted toolbars that attach to a browser.

Adware. Adware refers to software that has integrated advertising, particularly freeware software. Adware displays advertisements within the freeware product and transmits collected data back to a controlling party (e.g., an advertising distributor). A software creator may utilize advertisements to earn advertising revenue.

Ransomware. Ransomware is malware that encrypts part or all of a host computer. Encryption locks a victim out of important files or a computer until a ransom demand is paid, possibly in the form of bitcoins. Even if the ransom is paid, the victim has no guarantee that the ransomware will decrypt the computer.

Investigating Malware

When a malware infection is suspected, care should be taken to investigate and collect evidence where possible while performing remediation to remove the malware infection. The following guidelines should be considered when malware is suspected. If a forensics team is involved with the investigation, these points will be addressed by its examiners.

  1. Assess the implications of powering down the potentially infected computer. Powering down a computer may stop malware in its tracks and result in the loss of potential evidence. In the case of ransomware, a shutdown could result in permanently unrecoverable data. The first response to a possible malware infestation should be an evaluation of the victim computer and gathering of key evidence. If the malware is associated with network intrusion or other nefarious activity, evidence gathering may extend across multiple computers and the respective network that hosts the victim computer.
  2. Collect a sample of Random Access Memory (RAM). RAM is temporary memory that exists while a computing device is powered on. RAM is particularly important since malware has the ability to operate (and hide) in RAM. Capturing an image of the infected computer’s RAM, prior to shut down, enables a forensic examiner to assess the potential activity and functionality of the malware. Artifacts that may reside in RAM include:
    • Network artifacts, such as connections, ARP tables, and open interfaces
    • Processes and programs
    • Encryption keys
    • Evidence of code injections
    • Rootkit artifacts
    • DLL and driver information
    • Stored passwords for exfiltration containers
    • Commands typed at the command prompt
  3. Identify and preserve log files. Log files record a variety of information about system and application usage, user login events, unusual activity such as a software crash, virus activity, network traffic, etc. In the event of a potential malware infection or network intrusion event, log files should be collected and preserved for further analysis. If logging activity is turned off or log files are set to be overwritten, they may be of limited value to an investigation. (A minimal preservation sketch follows this list.)
  4. Interview users who may have received suspicious emails or observed unusual computer activity. Computer users and IT staff may have important information regarding the origin, timeline and possible activity of the malware. Early in an investigation, interviews should be conducted to assess the potential scope and breadth of an incident. If malware was introduced through user activity, such as a phishing email, the suspect email may still reside in a user’s email. In the case of malware that entered a computer through a software vulnerability (e.g., code injection through an unsecured website), IT staff may have information about unusual events in system logs or data leaving through a firewall at unusual times (e.g., after business hours).
  5. Determine whether to investigate other computers. Malware may spread through computers within the same network segment or a shared file server. Investigation of malware should include scans of potentially connected computers to assess the possibility of further malware infestation. Additionally, if external connections such as Remote Desktop or GoToMyPC exist and are active, then a determination should be made to analyze externally connected computers.
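As an illustration of step 3, the following minimal Python sketch (with hypothetical paths) copies log files to evidence storage and records a SHA-256 hash of each file so that the preserved copies can later be verified:

import hashlib, os, shutil

LOG_DIR = "/var/log"                  # hypothetical source of log files
EVIDENCE_DIR = "/mnt/evidence/logs"   # hypothetical evidence destination
os.makedirs(EVIDENCE_DIR, exist_ok=True)

with open(os.path.join(EVIDENCE_DIR, "manifest.txt"), "w") as manifest:
    for name in sorted(os.listdir(LOG_DIR)):
        src = os.path.join(LOG_DIR, name)
        if not os.path.isfile(src):
            continue
        with open(src, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        shutil.copy2(src, os.path.join(EVIDENCE_DIR, name))  # copy2 preserves timestamps
        manifest.write(f"{digest}  {name}\n")  # hash recorded for later verification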

For more information about malware infection and forensic investigation, please contact Kivu.

One of the most popular email programs used today is Gmail.  Kivu initiated a project to determine the most efficient and defensible process to collect Gmail account information. This blog post is the second in a series of articles that evaluate Gmail collection options for computer forensic purposes.

A common email client that can be incorporated into a forensic email collection is (shock horror) Microsoft Outlook. Outlook is included in the Microsoft Office package, and for many years it was king of email clients for the business environment. As the popularity of mobile phones and web-based clients increased, however, Microsoft Outlook’s use has declined.

We will be using the latest version, Outlook 2013, for our collection of forensic data. While not usually seen as part of the forensic investigator’s tool kit, Microsoft Outlook has some useful attributes, and its behavior and output can be verified and tested. You just need to know what you’re doing and (as in all forensic work) be able to confirm the veracity of the data.

Outlook has an option for IMAP setup that automatically tests account credentials. Outlook sends an email from the account to itself to ensure that the credentials are correct. Outlook 2010 has the ability to disable this test, but in Outlook 2013 the option is greyed out, and the test email is sent automatically. If intrusion into the account needs to be kept to a minimum, it is good to keep this in mind.

How to Use Microsoft Outlook for Gmail Collection, Step-by-Step

Change Microsoft Outlook Settings

To start your Gmail collection, check that the settings in the target Gmail account are set to IMAP. Then, open up the email account settings, either through Outlook File>Info>Account Settings or through the Control Panel>Mail>Email accounts. Selecting New… in the Email tab will prompt you for the service you wish to set up. Check E-mail Account, click on Next, and then select Manual Setup. Click Next again.

Unlike Gmvault, which we evaluated in the first article in this series, Outlook requires a bit more work to ensure a smooth email collection. In addition to User Name and Password, Outlook requests both the incoming and outgoing servers for the IMAP account.

User Information
  Your Name: (Top Level Email Name)
  Email Address: (Collection Gmail address)
Server Information
  Account Type: IMAP
  Incoming mail server: imap.gmail.com
  Outgoing mail server (SMTP): smtp.gmail.com
Logon Information
  User Name: (Collection Gmail address)
  Password: (Collection Gmail password)

Click on More Settings to open up Internet email settings. Under the Outgoing Server tab, check the box for My outgoing server (SMTP) requires authentication and select Use same settings as my incoming mail server. Click on the Advanced tab and change the server port numbers to 993 for incoming and 465 for outgoing. Select SSL as the encryption type for both, and set the server timeout to 5 minutes. These are Google’s recommended settings for using the Outlook client with Gmail accounts.

Start Gmail Collection

Go to the Send/Receive tab, click on the drop-down list for Send/Receive Groups and select Define Send/Receive Groups…. In the pop-up window, select All Accounts and click Edit on the right-hand side of the window. Check all boxes except Send mail items and select Download complete items…. If you want to collect only specific folders, use the custom behavior option to select the folders you want to collect. Click OK and click OK again. Then you can either select the Group to Send/Receive drop-down menu or use the shortcut key (F9).

Track Gmail Collection

Once the collection has started, there are a few options and settings that can help minimize intrusion and track the collection – again, crucial steps if you are hoping to achieve a forensically sound collection. Outlook’s default setting marks an email as “Read” – whenever you select a new email, the previous email is marked as read. To change this setting, go into reading pane options either via File>Options>Mail>Outlook panes>Reading Pane… or via the View tab, clicking on the Reading Pane drop-down menu. In the options screen, uncheck all of the boxes. Now, Outlook will not mark the emails you view as read when you look through them.

For tracking, to ensure that you have reviewed the correct number of emails, you’ll need to tell Outlook to show all items in a folder rather than just the unread items. Unfortunately, this can only be done folder by folder. Right-click on a folder and select Properties. Select the option Show total number of items, then click OK. Repeat with all of the folders that you are collecting. If a folder does not show a number, there are 0 emails in the folder. Compare the folder numbers with the counts you can view online at mail.google.com. Once all of the folder counts match, the collection is finished.
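For examiners comfortable with scripting, the folder counts can also be cross-checked over IMAP directly. The following minimal Python sketch uses placeholder credentials and selects each folder read-only, so no messages are marked as read; it is an independent verification step, not part of the Outlook workflow itself:

import imaplib

# Placeholder credentials -- a Gmail app password may be required.
conn = imaplib.IMAP4_SSL("imap.gmail.com", 993)  # Google's recommended IMAP settings
conn.login("collection.account@gmail.com", "app-password")

typ, folders = conn.list()
for raw in folders:
    # Folder lines look like: (\HasNoChildren) "/" "INBOX"
    name = raw.decode().rsplit(' "/" ', 1)[-1]
    typ, data = conn.select(name, readonly=True)  # read-only: no flags are changed
    if typ == "OK":
        print(name, data[0].decode(), "messages")
conn.logout()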

Working with Offline Email Storage

Outlook uses an Offline Storage Table (OST) format to store emails from POP, IMAP and other web-based email accounts offline when the Internet is not available. When server access is restored, the accounts are synced with the cloud storage. Outlook also uses Personal Storage Table (PST) files to back up and transfer email files and accounts. While some forensic processing tools can extract data from OST files, almost all of them can extract data from PST files. PST files can also be opened on any computer with Outlook.

To export the collected email to a PST file, select File>Open & Export>Import/Export, choose Export to a file, and then select Outlook Data File (.pst). Browse to where you want the file to be saved. Select Allow duplicate items to be created so that all items will be exported. Once the PST has been backed up and you have verified that the item count is correct, you can remove the account from the account settings and undo any options changed in the Gmail account. Then, inform your client that they can now access their email and should consider changing their password.

Following are the Pros and Cons of Using Microsoft Outlook for Forensic Investigation:

Pros

• The wide availability of Outlook
• Once all options are set, processing is simple and quick
• Native PST export

Cons

• Options are expansive and sometimes unintuitive
• Can be intrusive – Outlook sends test emails during setup and may mark unread mail as read

About Kivu

Kivu is a licensed California private investigations firm, which combines technical and legal expertise to deliver investigative, discovery and forensic solutions worldwide. Author, Thomas Larsen, is a data analyst in Kivu’s San Francisco office. For more information about how to retrieve and store Gmail messages for forensic investigation, please contact Kivu.

In yet another laptop data breach incident, Riverside County Regional Medical Center in Riverside, California reported that a laptop containing Personally Identifiable Information (“PII”) and Protected Health Information (“PHI”) for about 7,900 patients went missing in December 2014. According to a letter filed with the California State Attorney General, potentially exposed PII and PHI may have included Social Security Numbers, demographic information (such as name or date of birth), medical record numbers, diagnoses, treatment, and other medical information. Ironically, breaches involving laptops are highly preventable with the use of encryption technology.

Encryption is the conversion of electronic data into another form, called ciphertext, which cannot be easily understood by anyone except authorized parties. To read the data, you need a key or password to decrypt it. Crucially, under the California Breach Notification Law SB 1386, and most other state breach notification laws, the fact that lost data was properly encrypted will avoid the need for public notification.
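As a simple illustration of the concept, the following Python sketch (which relies on the third-party cryptography package and uses sample data only) turns a readable record into ciphertext and then restores it with the key:

from cryptography.fernet import Fernet

key = Fernet.generate_key()        # the secret needed to decrypt the data
cipher = Fernet(key)
ciphertext = cipher.encrypt(b"Patient: John Doe, MRN 12345")
print(ciphertext)                  # unreadable without the key
print(cipher.decrypt(ciphertext))  # original data restored with the key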

It’s therefore highly important to confirm that any device in use by an organization is actually encrypted.

Encryption typically operates in the background

On laptops or desktops, installed encryption products typically function in the background. For example, a billing analyst using an encrypted desktop may interact with billing software, Microsoft Excel and email throughout a business day to complete work. This analyst may only encounter encryption while logging in at the beginning of a day and may not realize encryption is present. While some products such as Microsoft BitLocker employ a lock symbol next to a drive icon to indicate the presence of active encryption, most encryption products bury the status of encryption in an operating system menu or within software. Determining whether encryption is present and active are two distinct steps that require knowledge about a computer’s operating system and the ability to search a computer.

BitLocker Enabled in Microsoft Windows

How to Tell Whether Encryption is Present?

Ideally, encryption should be installed so that it protects an entire hard drive—“whole disk encryption”—and not just specific folders or email—“file-level encryption”. In newer computers, encryption is often integrated into the operating system (such as the encryption products built into Apple’s new operating system Yosemite or Microsoft’s Windows 7 and up). Encryption may be set up for default installation (i.e., a user has to de-select encryption during computer setup).

1. Determine the version of operating system (“OS”).

OS Type: Microsoft Windows 8.1

OS Type: Apple OSX Versions

2. If native OS encryption is available, locate built-in encryption and review status.

  • Windows. In computers running Microsoft Windows 7 Ultimate and Enterprise (as well as Windows 8 versions), BitLocker encryption is installed and provides whole disk encryption capability. There are caveats to the use of BitLocker (such as configuration with or without hardware-level encryption), but the presence of BitLocker can be confirmed by searching for BitLocker in the Control Panel. More details are available at http://windows.microsoft.com/en-US/windows7/products/features/bitlocker.

Windows with BitLocker Activated

  • Apple. In Apple computers, FileVault 2 provides whole disk encryption capability. To determine the status of FileVault 2 whole disk encryption in Apple Yosemite, go to the Security & Privacy pane of System Preferences. For older Apple OSX versions with the original FileVault, encryption is limited to a user’s home folder rather than the whole disk. More details are available at http://support.apple.com/en-us/HT4790.


Apple OSX FileVault 2 Menu
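The status of these native OS tools can also be queried from the command line – manage-bde -status on Windows and fdesetup status on OSX. The following minimal Python sketch simply wraps both commands and must be run with administrative privileges:

import platform
import subprocess

if platform.system() == "Windows":
    # manage-bde reports BitLocker status for each volume
    print(subprocess.run(["manage-bde", "-status"],
                         capture_output=True, text=True).stdout)
elif platform.system() == "Darwin":
    # fdesetup reports whether FileVault is On or Off
    print(subprocess.run(["fdesetup", "status"],
                         capture_output=True, text=True).stdout)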

3. Look for a third-party application.

There are several third-party software applications that provide whole disk encryption (examples listed below). These applications can be found by searching a computer’s installed applications. To determine whether encryption is active, the application will need to be opened and reviewed. Many encryption applications will use a visual symbol or term such as “active” to indicate that encryption is functioning. (For a comparison of encryption products, review the following discussion: http://en.wikipedia.org/wiki/Comparison_of_disk_encryption_software.)

Software availability by platform:

  1. Built into the Operating System (“OS”): BitLocker (Windows); FileVault 2 (Mac OSX)
  2. Third-Party Software Products:
    • Symantec PGP (Windows and Mac OSX)
    • Dell Data Protection Encryption (DDPE) (Windows and Mac OSX)
    • Check Point Full Disk Encryption Software Blade (Windows and Mac OSX)
    • Pointsec (Check Point) (Windows)
    • DriveCrypt (Windows)

  • Finding third-party software on a Windows computer.

i. Locate and open the Control Panel by clicking on the Start menu (not available in Windows 8) or using Windows search. (To learn more about the Control Panel, refer to the link http://support.microsoft.com/search?query=control%20panel.)

Windows Search

ii. Navigate to the Programs section of the Control Panel.

Windows Select Programs Section

iii. Click on Programs and Features.

Windows Select Programs and Features

iv. Scroll through the installed software applications to determine whether third-party encryption software is installed.


Windows Review Installed Programs

  • Finding third-party software on an Apple computer.

i. Apple computers are configured with Spotlight — an Apple-native search utility that catalogues and organizes content. (See the following URL for information on Spotlight: http://support.apple.com/en-us/HT204014.)

ii. Spotlight can be found by clicking on the magnifying glass symbol in the upper right-hand corner of Apple’s menu bar.

iii. Enter the name of the third-party software into the Spotlight search box and review search results. (See the “quicktime” search example in the screenshot below.)


Apple Spotlight Search

Caution with the Use of Encryption

  1. User Versus IT (Information Technology department) Installation.

    In Apple FileVault 2 user guidance, three scenarios are identified for the installation of encryption — IT only, user with IT support, or user only. These scenarios apply to the installation of any encryption software product. While it is less expensive to have end users configure devices, encryption is the type of technology that can render a laptop useless if improperly deployed. As a rule of thumb, IT should direct the installation and configuration of encryption to protect corporate assets.

  2. Properly Set Up Users.

    When encryption is deployed, there is often a requirement to set up “approved” users for access. If a user is not set up, then access is denied. If IT does not have user-level access, then IT may be locked out.

  3. Key Control.

    IT should maintain control of encryption keys and should have keys for each device with deployed encryption. Further, all encryption keys should be backed up to a secure source outside of IT’s sole control. With tight control of and access to encryption keys, an organization minimizes the chance that encryption will lock it out of corporate assets. Providing IT with access to each computer’s encryption keys also prevents a disgruntled employee from locking an organization out of its own computers.

  4. Fully Document the Encryption of Devices.

    If a device is lost or stolen, it may be crucial to prove that the device was encrypted in order to avoid the need for a costly notification of any persons whose PII has been compromised. Make sure that IT has fully documented the encryption process and specific serial numbers of devices so protected.

  5. Don’t Forget Other Sources Such as Cloud Applications.

    Document and control cloud data storage of corporate assets. For each computer where cloud-based applications are running (including email), digital assets should be evaluated as to whether encryption is required locally and in the cloud. Many cloud storage applications offer encryption for stored data and data being transmitted.

Within the past year, Kivu has seen several malware trends emerging, including the exploitation of vulnerabilities in widely used software (Heartbleed in OpenSSL, Shellshock in Bash), cycles of ransomware and destructive malware (master boot record wipers, hard drive wipers), and an increase in rootkits, botnets and traditional drive-by malware. In 2015, we expect to see new malware trends, including an increase in social engineering (attacking the weakest link), exploitation of identified security flaws in newly developed mobile payment applications, exploitation of cloud SharePoint systems, and continued exploitation of traditional Point of Sale (POS) credit card systems. Kivu also expects an increase in exploit kits for all types of mobile and traditional devices that contain diverse functionality.

Following is what Kivu recommends that companies do to help secure their systems and data.

Protecting Your Computer Environment Against Malware

To protect your environment, Kivu recommends a defense-in-depth approach, coupled with segmentation of sensitive data. Segmenting your network environment adds an additional security layer by separating your sensitive traffic from other regular network traffic. Servers holding PHI, PII or PCI data should be segmented from the backbone and WAN. A separate firewall should protect this segmented data.

Ensure that your firewall is fine-tuned and hardened and that vital security logs are maintained for at least 2-3 months. Conduct regular external and internal network vulnerability scans to test your security perimeter and detect vulnerabilities. Remediate these security flaws in a timely manner.

Perimeter protection devices require regular maintenance and monitoring. Ensure that your ingress/egress protection devices (IDS/IPS) monitor in real time to detect malicious network traffic.

Be sure to maintain and update your software and system applications on a regular basis to eliminate security flaws and loopholes. Verify that all security applications within your environment are fine-tuned and hardened and that security logs are maintained. Review your security logs on a regular basis to ensure that logging is enabled and that valid data is being captured and preserved for an extended time period without being overwritten.

Remote Access Considerations

Kivu recommends limiting and controlling remote access within your environment with two-factor authentication. Create a strong password policy that includes changing passwords frequently and eliminating default passwords for systems and software applications that are public facing.

For outsourced IT services, make sure your data security is in compliance with the latest standards and policies. Maintain and verify on a regular basis that all 3rd party vendors follow outlined security policies and procedures. Eliminate account and password sharing and ensure that all 3rd party vendors use defined and unique accounts for remote access.

Securing Vulnerable Data

Protecting your data is not only the responsibility of Information Security; everyone must do their part to keep your environment safe and secure. Encrypt, protect and maintain your critical data. Upgrade older systems when possible and verify that sensitive data is encrypted during transmission and storage. Manage and verify data protection with all 3rd party vendors.

About Kivu

Kivu is a licensed California private investigations firm, which combines technical and legal expertise to deliver investigative, discovery and forensic solutions worldwide. Author, Thomas Langer, EnCE, CEH, is an Associate Director in Kivu’s Washington DC office. For more information about malware trends and what your company can do to better protect its environment and data, please contact Kivu.

Social media has become a notable source of potential forensic evidence, with social media giant Facebook being a primary source of interest. With over 1.35 billion monthly active users as of September 30, 2014 [1], Facebook is considered the largest social networking platform.

Kivu is finding that forensic collection of Facebook (and other sources of social media evidence) can be a significant challenge because of these factors:

1. Facebook content is not a set of static files, but rather a collection of rendered database content and active programmatic scripts. It’s an interactive application delivered to users via a web browser. Each page of delivered Facebook content is uniquely created for a user on a specific device and browser. Ignoring the authentication and legal evidentiary issues, screen prints or PDF printouts of Facebook web pages often do not suffice for collecting this type of information – they simply miss parts of what would have been visible to the user, including, interestingly, the unique ads that were tailored to the specific user because of their preferences and prior viewing habits.

2. Most forensic collection tools have limitations in the capture of active Internet content, and this includes Facebook. Specialized tools, such as X1 Social Discovery and PageFreezer, can record and preserve Internet content, but gaps remain in the use of such tools. The forensic collection process must adapt to address the gaps (e.g., X1 Social Discovery does not capture all forms of video).

Below are guidelines that we at Kivu have developed for collecting Facebook account content as forensic evidence:

1. Identify the account or accounts that will be collected – Determine whether or not the custodian has provided their Facebook account credentials. If no credentials have been provided, the investigation is a “public collection” – that is, the collection needs to be based on what a Facebook user who is not “friends” with the target individual (or friends with any of the target individual’s friends, depending on how the target individual has set up their privacy settings) can access. If credentials have been provided, it is considered a “private collection,” and the investigator will need to confirm the scope of the collection with attorneys or the client, including what content to collect.

2. Verify the ownership of the account – Verifying an online presence through a collection tool as well as a web browser is a good way to validate the presence of the target account.

3. Identify whether friends’ details will be collected.

4. Determine the scope of collection – (e.g. the entire account or just photos).

5. Determine how to perform the collection – which tool or combination of tools will be most effective? Make sure that your tool of choice can access and view the target profile. The tool X1 Social Discovery, for example, uses the Facebook API to collect information from Facebook. The Facebook API is documented and provides a foundation for consistent collection, versus a custom-built application that may not be entirely validated. Further, Facebook collections from other sources, such as cached Google pages, provide a method of cross-validating the data targeted for collection.

6. Identify gaps in the collection methodology.

a. If photos are of importance and there is a large volume of photos to be collected, a batch script that can export all photos of interest can speed up the collection process. One method of doing so is a mouse-recording tool; a minimal download sketch follows this list.

b. Videos do not render properly while being downloaded for preservation, even when using forensic capture tools such as X1 Social Discovery. If videos are an integral part of an investigation, the investigator will need to capture videos in their native format in addition to testing any forensic collection tool. It should be noted that there are tools such as downvids.net to download the videos, and these tools in combination with forensic collection tools such as X1 Social Discovery provide the capability to authenticate and preserve video-based evidence.

7. Define the best method to deliver the collection – If there are several hundred photos to collect, determine whether all photos can be collected. Identify whether an automated screen capture method is needed.

8. If the collection is ongoing (e.g., once a week), define the recurring collection parameters.
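As referenced in item 6a, the following minimal Python sketch batch-downloads a list of photo URLs and records a SHA-256 hash of each file for later authentication. The URLs are hypothetical placeholders and would come from the collection tool’s export:

import hashlib
import os
import urllib.request

photo_urls = [
    "https://example.com/photo1.jpg",  # placeholder URLs
    "https://example.com/photo2.jpg",
]
os.makedirs("photos", exist_ok=True)

with open("photo_manifest.txt", "w") as manifest:
    for i, url in enumerate(photo_urls):
        data = urllib.request.urlopen(url).read()
        name = os.path.join("photos", f"photo_{i:04d}.jpg")
        with open(name, "wb") as f:
            f.write(data)
        # record the source URL and hash for later authentication
        manifest.write(f"{hashlib.sha256(data).hexdigest()}  {url}\n")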

Kivu is a licensed California private investigations firm, which combines technical and legal expertise to deliver investigative, discovery and forensic solutions worldwide. Author Katherine Delude is a Digital Forensic Analyst in Kivu’s San Francisco office. To learn more about forensically preserving Facebook content, please contact Kivu.

[1] http://newsroom.fb.com/company-info/ Accessed 11 December 2014.

Internet technology provides a substantial challenge to the collection and preservation of data, metadata (data that describes and gives information about other data) in particular. This blog post from Kivu will explain the factors to consider in using web pages in forensics investigations.

The challenge stems from the complexity of source-to-endpoint content distribution. Originating content for a single website may be stored on one or more servers and then collectively called and transmitted to an endpoint such as a laptop or tablet. For example, a mobile phone in Germany may receive different content from the same website than a phone in the United States. As content is served (e.g., sent to a tablet), it may be routed through different channels and re-packaged before reaching its final destination (e.g., an online magazine delivered as an iPhone application).

From a forensics perspective, this dynamic Internet technology increases the difficulty of identifying and preserving content that is presented to a user through a browser or mobile application. To comprehend the issues concerning forensics and Internet technology, we need to understand what web pages are and the differences between the two types of web pages: fixed content (static web pages) and web pages with changing content (dynamic web pages).

What is a Web Page?

A web page is a file that contains content (e.g., a blog article) and links to other files (e.g., an image file). The content within the web page is structured with Hypertext Markup Language (HTML), a formatting protocol that was developed to standardize the display of content in an Internet browser. To illustrate HTML, let’s look at the following example. The web page’s title, “Web Page Example,” is identified by an HTML <title> label and the page content “Hello World” is bolded using a <b> label.
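The graphic that accompanied this example is not reproduced here, but the HTML it illustrated would look like this:

<html>
  <head>
    <title>Web Page Example</title>
  </head>
  <body>
    <b>Hello World</b>
  </body>
</html>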

Web pages that are accessible on the Internet reside on a web server and are accessible through a website address known as a Uniform Resource Locator, or URL (e.g., http://kivuconsulting.com/). The web server distributes web pages to a user as the user navigates through a website. Most visitors reach a website by entering the domain in a URL bar or by typing keywords into a search engine.

Static versus Dynamic Web Pages

Web pages may be classified as static or dynamic. The difference between static and dynamic web pages stems from the level of interactivity within a web page.

A static web page is an HTML page that is “delivered exactly as it is stored,” meaning that the content stored within the HTML page on the source server is the same content that is delivered to an end-user. A static web page may:

• Contain image(s)
• Link to other web pages
• Have some user interactivity such as a form page used to request information
• Employ formatting files, known as Cascading Style Sheets (CSS)

A dynamic web page is an HTML page that is generated on demand as a user visits a web page. A dynamic page is derived from a combination of:

• Programmatic code file(s)
• Files that define formatting
• Static files such as image files
• Data source(s) such as a database

A dynamic web page has the behavior of a software application delivered in a web browser. Dynamic web page content can vary by numerous factors, including user, device, geographic location or account type (e.g., paid versus free). The underlying software code may exist on the client side (stored on a user’s device), the server side (stored on a remote server) or both. From a user’s perspective, a single dynamic web page is a hidden combination of complex software code, content, images and other files. Finally, the website delivering dynamic content can manage multiple concurrent activities on the same device, or multiple dynamically generated pages during a single user session. This behind-the-scenes management hides the underlying complexity of a single user session.

Web Pages Stored on a User Device as Forensics Evidence

To a forensic examiner, web page artifacts that are stored on a user device may have significant value as evidence in an investigation. Web page artifacts are one type of Internet browser artifact. Other Internet artifacts include: Internet browser history, downloaded files and cookie files. If the device of interest is a mobile device, evidence may also reside in database files such as SQLite files.

Forensic examiners review Internet artifacts to answer specific questions such as, “Was web-mail in use?” or “Is there evidence of file transfer?” Forensic analysis may be used to create a timeline of user activity, locate web-based email communications, identify an individual’s geographic location based on Internet use, or establish theft of corporate data using cloud-based storage such as Dropbox.

Web Content Stored on a Server as Forensics Evidence

Depending on the type of investigation (e.g., a computer hacking investigation), a forensic examiner may search for evidence on servers. Server-side content may be composed of stored files such as log files, software code, style sheets and data sources (e.g., databases).

Server-side content may directly or indirectly relate to web pages or files on a user device. If a user downloaded an Adobe PDF file, for example, the file on the server is likely to match the downloaded file on the user’s device. If the evidence on a user device is a dynamic web page, however, there may be a number of individual files that collectively relate as evidence, including: images, scripts, style sheets and log files.

The individual server-side files are component parts of a web page. A forensic examiner would analyze server-side files by investigating the relationship between the web page content on a user device and related server-side files. A forensic examiner may also review server logs for artifacts such as IP address and user account activity.

Factors to Consider in Web Page Forensics Investigations

1. Analyze the domain associated with web page content. Collect information on:

a. Owner of the domain – WHOIS database lookup (a minimal query sketch follows this list).
b. Domain registry company – e.g., GoDaddy.
c. Location of domain – IP address and location of web server.

2. Conduct a search using a search engine such as Google, Yahoo or Bing. Review the first page of search results and then review an additional 2 to 10 pages.

a. Depending on the scope of the content, it may be worth filtering search results by date or other criteria.
b. It may be worth using specialty search tools that focus on blogs or social media.
c. Consider searching sites that track plagiarism.

3. Examine the impact of geo-location filtering. Many companies filter individuals by location in order to provide targeted content.

a. Searches may need to be carried out in different countries.
b. Consider using a proxy server account to facilitate international searches.

4. Use caution when examining server-side metadata. Website files are frequently updated, and the updates change file metadata. A limited number of file types, such as image files, may provide some degree of historical metadata.

5. There is a small possibility that archival sites, such as The Wayback Machine, may contain web page content. However, archival sites may be limited in the number of historical records, unless a paid archiving service is used.
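As an illustration of the WHOIS lookup in item 1, the following minimal Python sketch sends a raw query to the public IANA WHOIS server over TCP port 43. The domain is a placeholder, and in practice an examiner would follow the referral in the response to the registrar’s own WHOIS server:

import socket

def whois(domain, server="whois.iana.org"):
    # The WHOIS protocol: connect on TCP 43, send the query, read to EOF.
    s = socket.create_connection((server, 43), timeout=10)
    s.sendall(domain.encode() + b"\r\n")
    response = b""
    while True:
        chunk = s.recv(4096)
        if not chunk:
            break
        response += chunk
    s.close()
    return response.decode(errors="replace")

print(whois("example.com"))  # placeholder domain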

Kivu is a licensed California private investigations firm, which combines technical and legal expertise to deliver investigative, discovery and forensic solutions worldwide. Author, Megan Bell, directs data analysis projects and manages business development initiatives at Kivu. For more information about using web pages in forensics investigations, please contact Kivu.

The cloud is becoming an ever-increasing repository for email storage. One of the more popular email programs is Gmail, with its 15 GB of free storage and easy access anywhere for users with an Internet connection. Because of the huge number of email accounts, the potential for large amounts of data, and the absence of direct revenue, Google throttles bulk retrieval to lessen the burden on its servers worldwide.

This blog post is the start of a series of articles that will review Gmail collection options for computer forensic purposes. Kivu initiated a project to find the most efficient and defensible process to collect Gmail account information. The methods tested were Microsoft Outlook, Gmvault, X1 Social Discovery and Google scripts.

All four programs were run through two Gmail collection processes, with a focus on:

  • Discovering how the program stores emails.
  • Identifying whether the program encounters throttling and, if so, how it deals with it.
  • Determining if current forensic tools can process the emails collected.
  • Measuring how long the program takes to process the email, and the level of examiner involvement necessary.

Kivu employees created two Google email accounts for this analysis. Each email account had over 30,000 individual emails, which is a sufficient amount for Google throttling to occur and differences in speed to become apparent. The data included attachments as well as multi-recipient emails to incorporate a wide range of options and test how the programs collect and sort variations in emails. Our first blog post focuses on Gmvault.

What is Gmvault and How Does It Work?

Gmvault is a third-party Gmail backup application that can be downloaded at Gmvault.org. Gmvault uses the IMAP protocol to retrieve and store Gmail messages for backup and onsite storage, and it has built-in safeguards that work around most of the common issues with retrieving email from Google. The process is scriptable to run on a set schedule, ensuring a constant backup in case of disaster. The file-system database created by Gmvault can also be uploaded to any other Gmail account for consolidation or migration.
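Because a sync is a single command, scheduling it requires nothing more than a standard task scheduler. As a minimal sketch, a nightly collection on a Unix-like system could be set up with one cron entry (the destination path and email address below are placeholders):

# Run a Gmvault sync every night at 2:00 a.m.
0 2 * * * gmvault sync -d /evidence/gmvault-db --no-compression custodian@gmail.com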

During a forensic investigation, Gmvault can be used to collect Gmail account data with minimal examiner contact with the collected messages. The program requires user interaction with the account only twice – once to grant the application access to the account and again at the end to remove that access. Individual emails can be viewed without worrying about changing metadata, such as read status or folders/labels, because this information is stored in a separate file with a .meta extension.
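As an illustration only – the exact fields vary by Gmvault version, so treat this as a sketch rather than an authoritative layout – each .meta file is a small JSON record stored alongside its .eml counterpart:

$> cat 1404838451774.meta
{"labels": ["\\Inbox"], "flags": ["\\Seen"], "subject": "Quarterly report", "msg_id": "<hypothetical-id@mail.gmail.com>", "internal_date": 1404838451}

Because labels and read status live in this sidecar file rather than in the message itself, opening an .eml for review does not alter the recorded metadata.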

How to Use Gmvault for Forensic Investigation

Gmvault needs very little user input and can be initiated with this command:

$> gmvault sync [email address]

We suggest using the following options:

$> gmvault sync -d [Destination Directory] --no-compression [email address]

"-d" enables the user to change where the download will go, allowing the data extraction to go directly to an evidence drive (default: a gmvault-db directory in the user's home folder).

"--no-compression" downloads plain .eml files rather than the gzip-compressed default. Compression comes with a rare chance of data corruption during both the compression and decompression processes, so unless storage size is an issue, it is better to collect uncompressed. Download speed is unaffected by compression, although compressed files are roughly 50% of the uncompressed size.

Next, sign in to the Gmail account to authorize Gmvault access. The program will create three folders in the destination you set, and emails will be stored by month. The process is largely automated, and Gmvault manages Google throttling by disconnecting from Google, waiting a predetermined number of seconds, and retrying. If this fails four times, the email is skipped and Gmvault moves on to the next set of emails. When finished with the email backup, Gmvault checks for chats and downloads them as well.

When Gmvault is finished, a summary of the sync is displayed in the command shell. Gmvault performs a check to see whether any emails were deleted from the account and removes them from the database. This should not be a problem for an initial collection, but it needs to be noted for subsequent syncs of the same account. The summary shows the total time for the sync, the number of emails quarantined, the number of reconnects, the number of emails that could not be fetched, and the emails returned by Gmail as blank.

To obtain the emails that could not be fetched, simply run the same command again:

$> gmvault sync -d [Destination Directory] --no-compression [email address]

Gmvault will check whether emails are already in the database, skip those, and then download the items skipped during the previous sync. It may take up to 10 passes to recover all skipped emails, but the process can usually be completed within 5 minutes.
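If examiner time is limited, the re-runs can themselves be scripted. A minimal sketch for a Unix-like shell (the destination path and email address are placeholders):

# Re-run the sync up to 10 times to pick up previously skipped emails
for i in $(seq 1 10); do
    gmvault sync -d /evidence/gmvault-db --no-compression custodian@gmail.com
done

Each pass downloads only the messages missing from the database, so later iterations complete quickly.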

Be sure to remove authorization once the collection is complete.

Now you should have all of the emails from the account in .eml format, stored by date in multiple folders. Gmvault can then be used to export these files into a more usable format. The database can be exported as offlineimap, dovecot, maildir or mbox (default). Here's how:

gmvault-shell>gmvault export -d [Destination Directory] [Export Directory]
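For example, to export the collected database to the default mbox format for loading into a review platform (both paths below are placeholders):

gmvault-shell>gmvault export -d /evidence/gmvault-db /evidence/export-mbox

The resulting mbox files can then be ingested by most forensic and e-discovery tools that accept standard mailbox formats.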

Following are the Pros and Cons of Using Gmvault:

Pros:

  • Easy to set up and run
  • Counts total vs. collected emails, making it quick to spot missing items
  • Optional compression reduces storage to roughly 50% of the original size
  • Can be scripted to collect multiple accounts

Cons:

  • No graphical user interface (command line only)
  • Needs further processing to produce a user-friendly deliverable
  • Will sometimes fail to retrieve the last few emails

The enduring onslaught of data breach events – such as the theft of 4.5 million health records from Community Health Systems or the recent staggering loss of information on 76 million JPMorgan Chase accounts – continues to highlight the need for robust information security and the ability to proactively prevent and redress potential security incidents. In response, organizations have increased investment in better information security programs and supporting technologies. However, while more organizations may be better positioned to cope with data breach events, information security continues to lack appropriate coverage of cloud and mobile device technology risks.

Lags in InfoSec Deployment

According to the 2014 Global State of Information Security® Survey of information executives and security practitioners, organizational leaders expressed confidence in their information security activities (nearly three-quarters of respondents reported being somewhat or very confident). However, the survey reveals gaps in the application of information security to cloud and mobile technologies. Nearly half of respondents reported that their organizations used cloud computing services, but only 18% reported having governance policies for cloud services. Furthermore, less than half of respondents reported having a mobile security strategy or mobile device security measures, such as protections for email/calendaring on employee-owned devices.

Real Issue is Lack of Knowledge

Gaps in cloud and mobile information security reflect a broader trend that extends even into regulated industries. For example, in the 2013 Ponemon report, "The Risk of Regulated Data on Mobile Devices & in the Cloud," 80% of IT professionals could not define the proportion of regulated data stored in the cloud and on mobile devices. The gap in information security does not appear to be limited to the deployment of policies and controls. Instead, the potential issues with cloud and mobile information security stem from a lack of knowledge concerning the storage and use of data. As noted in the study "Data Breach: The Cloud Multiplier Effect," many respondents rated their organizations as having low effectiveness in securing data and applications in the cloud.

Reducing Cloud and Mobile Technology Risks

Developing an appropriate security posture for cloud and mobile technologies should begin with the realization that information security requirements for these technologies differ from traditional IT infrastructure. For example, the responsibility for storage and use of data in the cloud is shared by a greater number of parties—organization, employees, external vendors, etc. Additionally, contracts and written policies for cloud applications must specify more granular coverage for access, use, tracking and management of data. In the event of a potential security incident, possible sources of evidence, such as security logs, are stored externally and may require the assistance of specific employees or service providers.

The following considerations provide a starting point for the development of information security practices that are relevant to cloud and mobile technologies.

1. Identify security measures that are commensurate with cloud and mobile technologies.

a. Use security features that are built into cloud and mobile technologies. This includes access controls and encryption. Frequently, security features that would have prevented major cloud-based breaches (such as multi-factor authentication and text-to-cellphone warnings of suspicious activity) are already made available by cloud service providers. However, users of these services, whether individuals or large corporate clients, are frequently delaying full implementation of available security options due to cost or organizational concerns.

b. Implement additional security tools or services to address gaps in specific cloud and mobile technologies. For example, software-based firewalls to manage traffic flow may also provide logging capability that is missing from a cloud service provider’s capabilities.

2. If possible, use comprehensive solutions for user, device, account, and data management.

a. Manage mobile devices and their contents. Mobile device management (MDM) solutions enable organizations to coordinate the use of applications and control organizational data across multiple users and mobile devices.

b. Use available tools in the cloud. Cloud service providers such as Google Apps provide tools for IT administration to manage users, data and specific services such as Google Drive data storage. Unfortunately, many organizations do not utilize these tools and take risks such as losing control over email account access and content.

3. Maintain control over organizational data.

a. IT should control applications used for file-sharing and collaboration. Cloud-based tools such as Dropbox provide a robust method of sharing data. Unfortunately, Dropbox accounts often belong to the employee and not the organization. In the case of a security incident, IT may be locked out of an employee's personal account.

b. Users should not be responsible for security. Organizations often entrust employees and business partners with sensitive data. This includes maintaining security requirements such as use of encryption and strong passwords. The organization that owns the data (usually its IT department) should have responsibility for security, and this includes organizational data stored outside of an organization’s internal IT infrastructure.

c. Encryption keys should be secured and available to IT in the case of a potential incident. With the advent of malware such as ransomware that holds data captive, and the risk that employees could destroy encryption keys, securing encryption keys has become a vital step in the potential recovery of data. If IT does not maintain master control over encryption keys, important organizational data could be rendered inaccessible during a security incident.

4. Actively evaluate InfoSec response and readiness in the cloud.

a. IT should have a means to access potential sources of organizational data. If data is stored on an employee’s tablet or at a third-party data storage provider, IT should have a vetted plan for access and retrieval of organizational data. Testing should not occur when a potential security incident arises.

b. Important digital assets should be accessible from more than one source and should be available within hours and not days. IT should have backup repositories of corporate data, in particular for data stored in cloud environments. This may include using a combination of cloud providers to store data and having an explicit agreement on the timing and costs required to retrieve data (in the event of an incident).

c. Audit systems should be turned on and used. Cloud providers often have built-in auditing capability that ranges from data field tracking (e.g., a phone number) to file revision history. The responsibility for setting up audit capability belongs to the organization. As part of using a cloud provider’s technology, the use of auditing should be defined, documented and implemented.

d. IT staff should have the knowledge and skills to access and review log files. The diversity and complexity of log files have grown with the number of technologies in use by an organization. Cross-correlating log files across differing technology platforms requires specialized knowledge and advanced training (a trivial illustration follows this list). If an organization lacks the skill to analyze log files, the ability to detect and investigate potential security events may be severely compromised.

5. Incident response plans and investigation practices should cover scenarios where data is stored in the cloud or on mobile devices.
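As a trivial illustration of the cross-correlation described in item 4d, an examiner might pull every entry for a single suspect IP address across several log sources; the file paths and IP address below are placeholders:

$> grep "203.0.113.55" /var/log/auth.log /var/log/nginx/access.log vpn_export.csv

Real investigations typically require normalizing timestamps and formats across platforms before events can be correlated, which is where specialized tooling and training come in.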

Hackers have become more aggressive in seeking out data repositories. As organizations continue to adopt cloud and mobile technologies, information security must keep pace and extend the same internal focus on information security to external sources of organizational data. In particular, incident response plans should cover an increasing phenomenon—where attackers infiltrate an organization’s physical network solely to gain the keys to its cloud data repository.

The financial industry has long been known for “repackaging risk” – slicing and dicing investments to lessen their aggregate risk. During the 2008 subprime mortgage crisis, the repackaging process eventually reached the point where no one knew the real financial risk, who exactly was exposed to it, and where and how the risk was concentrated.

A similar process is happening today with cyber risk. In a phenomenon known as "Cyberization," organizations are unknowingly exposed to cyber risk outside of their own walls because they have outsourced, interconnected or otherwise exposed themselves to an increasingly complex network of networks. Their cyber risk starts with their internal corporate network and security practices and expands outward to their counterparties and affiliates, their supply chain and their outsourcing partners. This blog post from Kivu will help explain what Cyberization is and the aggregate risk that organizations face.

How Leveraging Technology Leads to Increased Cyber Risk

Organizations today are relying more and more on technology to increase efficiencies and lower costs, making it possible to be more profitable while deploying fewer resources. This trend makes global cyberization more likely, because the Internet is a tightly coupled system extensively interwoven with societies and economies. With so much interdependency, any disruption in the system is likely to have a cascading effect.

Cyber risk management often assumes that risk is simply the aggregation of local technology and procedures within an organization. In general, risk managers focus mostly on what is going on inside their own walls. Today’s cyber risk managers need to understand, however, that cyber risk is not self-contained within individual enterprises. They must expand their horizons and look far beyond their boundary walls.

Factors to Consider in Cyber Risk Management

  • Internal IT Enterprise – risk associated with an organization's own IT. Examples: hardware, software, people and processes.
  • Counterparties & Partners – risk from dependence on, or direct interconnection with, outside organizations. Examples: partnerships, vendors, associations.
  • Outsourcing – risk from contractual relationships with external suppliers of services. Examples: IT and cloud providers, HR, legal, accounting and consultancy.
  • Supply Chain – risk to the IT sector and to traditional supply chain and logistics functions. Examples: exposure to country risk, counterfeit or tampered products.
  • Disruptive Technologies – risk from the unseen effects of, or disruptions from, new technologies, both existing and imminent. Examples: driverless cars, automated digital appliances, embedded medical devices.
  • Upstream Infrastructure – risk from disruptions to infrastructure relied upon by economies and societies, such as electric, oil and gas infrastructure, financial systems and telecommunications. Examples: Internet infrastructure, Internet governance.
  • External Shocks – risk from incidents outside the control of an organization that are likely to have cascading effects. Examples: international conflicts, malware pandemics, natural disasters.

About Kivu

Kivu is a licensed California private investigations firm, which combines technical and legal expertise to deliver investigative, discovery and forensic solutions worldwide. The author, Elgan Jones, is the Director of Cyber Investigations at Kivu Consulting in Washington DC. For more information about cyber risk management and mitigating the effects of cyberization, please contact Kivu.