What is PCI 3.0 and How Does It Differ from PCI 2.0?

The Payment Card Industry Data Security Standard (PCI DSS) applies to companies of any size that accept credit card payments. The effective date of version 3.0 of the standard was January 1, 2014, but existing PCI DSS 2.0 compliant vendors had until January 1, 2015 to move to the new standard. Some of the changes are not required to be in place until June 1, 2015. This blog post from Kivu will explain what the new standards are and review some of the most critical issues involved with compliance.

PCI 3.0 is not a wholesale revision of PCI 2.0. The 12 core principles of PCI compliance remain intact. PCI 3.0 clarifies and revises all 12 principles and is roughly 25% bigger than PCI 2.0, including 98 upgrades. Some of the upgrades are small, but others are significant. PCI 3.0 will be harder and more expensive to implement than PCI 2.0. Organizations should expect that the PCI 3.0 assessment will be similar to PCI 2.0 but more transparent and consistent.

A major concern for merchants implementing PCI 3.0 is how they will be able to afford the increased cost of compliance. PCI 3.0 requires additional processes and procedures that many organizations might not be prepared to implement.

New Key Areas for PCI 3.0

Segmentation of Card Data Environment (CDE) – Penetration Testing

PCI 3.0 is a significant improvement over PCI 2.0 because of its emphasis on segmenting the Card Data Environment (CDE) from other networks. During the breach at Target, contractors had access to the client network, putting the whole CDE at risk.

The cost of segmenting the CDE will be a burden on the merchant, but it is a significant step towards reducing risk and exposure. Penetration Testing (testing a computer system, network or web application to find vulnerabilities that an attacker could exploit) will be critical. Qualified Security Assessors (QSAs) will have a tough job auditing the new guidelines and results.
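
Before a formal penetration test, a simple sanity check is to confirm that hosts outside the CDE cannot reach CDE systems on common service ports. The Python sketch below is a minimal illustration of that idea only; the addresses and ports are placeholders, and it is not a substitute for the formal penetration testing that PCI 3.0 requires.

# segmentation_check.py - minimal sketch: confirm a host outside the CDE
# cannot reach CDE systems on common service ports. IPs/ports are hypothetical.
import socket

CDE_HOSTS = ["10.10.50.10", "10.10.50.11"]      # placeholder CDE addresses
PORTS = [22, 443, 1433, 3389]                   # example service ports

def is_reachable(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host in CDE_HOSTS:
        for port in PORTS:
            status = "OPEN (segmentation gap?)" if is_reachable(host, port) else "blocked"
            print(f"{host}:{port} -> {status}")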

Key Takeaways

  • PCI 3.0 has to be implemented by June 2015.
  • PCI 3.0 requires that all merchants be PCI compliant to undergo a Penetration Test.
  • Merchants need to ensure that correct methods are used to segment the CDE from the client network.
  • The contractor network must be segmented from the client network.
  • The Best Practice Framework will be based around NIST SP800-115.
  • Merchants must be diligent in their selection of penetration testing services.

System Inventories

Maintaining system inventories is not an easy task, and accurate system inventories have been difficult to accomplish under PCI 2.0. What is different with PCI 3.0?

The inventory list under PCI 3.0 just grew bigger. Now, maintaining an inventory of hardware, software, rules and logs will be an even more difficult task in order to remain in compliance. Documenting components and inventory is time consuming, and inventory changes frequently. Who will be in charge of accomplishing this within an organization, and how reliable will the inventory list be? What happens when virtualization/cloud is thrown into the inventory mix? What about geographic locations?

We at Kivu see maintaining a system inventory as an evolving cycle with constant issues.
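
As a starting point, even a lightweight script that snapshots a host's basic details into a dated CSV can make the inventory cycle more repeatable. The sketch below is a minimal illustration only; the fields and output file name are assumptions, and a real PCI inventory would also need to cover network gear, virtual machines, wireless access points, and cloud assets.

# inventory_snapshot.py - minimal sketch: record basic host details to CSV.
import csv
import datetime
import platform
import socket

OUTPUT = "system_inventory.csv"   # hypothetical output location

def host_record():
    """Collect a few basic facts about the local host."""
    return {
        "collected_at": datetime.datetime.utcnow().isoformat(),
        "hostname": socket.gethostname(),
        "os": platform.platform(),
        "architecture": platform.machine(),
        "python_version": platform.python_version(),
    }

if __name__ == "__main__":
    record = host_record()
    with open(OUTPUT, "a", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=record.keys())
        if fh.tell() == 0:          # write a header only for a new file
            writer.writeheader()
        writer.writerow(record)
    print(f"Recorded {record['hostname']} to {OUTPUT}")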

Key Takeaways

  • Maintaining a reliable, timely inventory will be extremely difficult, if not impossible.
  • The merchant’s IT & compliance teams will have to spend more time creating inventories.
  • Merchants need to know who will be responsible for maintaining system component inventories that are in scope for PCI DSS (Hardware & Software).
  • Merchants must maintain an inventory of authorized wireless access points, including their business justification.
  • Documenting components and functions will be a continuous cycle.

Vendor Relationships

Explicit documentation of who manages each aspect of PCI DSS compliance is a critical improvement of PCI 3.0 over PCI 2.0. Who owns what, the service provider or the organization? Management of each aspect of PCI DSS compliance should be well documented in every vendor contract agreement.

Kivu recommends a written agreement with service providers verifying that the provider maintains all applicable PCI DSS requirements. Getting service providers to agree will be a daunting task. Will vendors want to take this responsibility? When PCI reports are disputed, identifying who is at fault is a common problem. If there is a breach, who is liable?

Key Takeaways

  • In PCI 3.0, detailed contractual language and service provider roles and responsibilities are much more of a focus.
  • Merchants should decide who owns each aspect of PCI compliance.
  • PCI compliance has to be written into the vendor contract agreement, with specific language on who owns what.
  • Outline where responsibility lies for control over compliance.
  • Providers must give their customers written documentation stating that the provider is responsible for the cardholder data in its possession.

Anti-Malware Systems

PCI 3.0 places a new emphasis on identifying and evaluating evolving malware threats targeted at systems NOT commonly considered to be affected by malicious software. Advanced research capabilities or intelligence on malware threats are seen as proactive measures, but who will provide these proactive services to merchants? How can this be enforced?

Who will be responsible for keeping abreast of threats and making sure anti-malware systems are patched and configured correctly? It is critical for the PCI Standards Council to release a recommended list of anti-malware vendors and provide guidelines for merchants.

Key Takeaways

  • PCI 2.0 only states that antivirus software should be in place. PCI 3.0 takes it to another level.
  • PCI 3.0 states that if malware emerges for PCI systems, the merchant should know about it. There needs to be a process that makes sure this happens.
  • PCI QSAs will need to scrutinize anti-malware controls on all platforms.
  • Technical planning and strategy will involve more paperwork for merchants.
  • Specific authorization from management to disable or alter operations of all antivirus mechanisms should be a policy.
  • An anti-malware system should automatically lock out a user who tries to disable it.
  • Merchants will need to justify why they don’t have anti-malware software running on non-Windows platforms. This is critical because it causes organizations to think carefully about evolving non-Windows threats.

Physical Access and POS System Inventories

PCI 3.0 states that physical access to a merchant’s server room should be restricted, whether the room is in a closet in the back of the store or in a high-end data center. Physical access should be limited to certain personnel, and all others should be escorted and signed in and out of the room. Restricting admission limits the risk of unauthorized access to POS devices and back end systems that could potentially be swapped out by unauthorized individuals.

Maintaining an inventory of POS hardware and conducting frequent spot checks to ensure serial numbers match will be critical to staying compliant under PCI 3.0. POS device inspections should be a best practice, but how many merchants even have a list of their POS devices?
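
One low-tech way to support those spot checks is to keep the authorized serial numbers in a file and compare them against the serials recorded during a walk-through. The Python sketch below illustrates the comparison; the file names and CSV layout are assumptions.

# pos_spot_check.py - minimal sketch: flag POS devices whose serial numbers
# do not match the authorized inventory. File names and layout are hypothetical.
import csv

INVENTORY_FILE = "authorized_pos.csv"   # columns: location,serial
OBSERVED_FILE = "observed_pos.csv"      # columns: location,serial

def load_serials(path):
    with open(path, newline="") as fh:
        return {(row["location"], row["serial"]) for row in csv.DictReader(fh)}

if __name__ == "__main__":
    authorized = load_serials(INVENTORY_FILE)
    observed = load_serials(OBSERVED_FILE)

    missing = authorized - observed      # devices expected but not seen
    unknown = observed - authorized      # devices seen but not on record

    for location, serial in sorted(missing):
        print(f"MISSING: {serial} expected at {location}")
    for location, serial in sorted(unknown):
        print(f"UNKNOWN: {serial} found at {location} - possible swap-out")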

Key Takeaways

  • Control physical access to the server room for all on-site personnel based on individual job function. Access should be revoked upon termination.
  • Maintain an inventory of all POS devices and implement controls to protect these devices.
  • POS device inspections should be a best practice. Periodically inspect POS devices and check serial numbers to ensure devices have not been swapped out.
  • Procedures for frequently testing POS devices should be implemented.
  • Provide security awareness training to employees that use POS systems to identify suspicious behavior.
  • PCI 3.0 mandates that service providers with remote access to the CDE must use a unique authentication credential for each customer environment.
  • Access needs and privileges for all job functions allowed access to the CDE must be formally defined and documented in advance.

What Other Changes Should We Expect with PCI 3.0?

Following are some moderate changes worth highlighting:

  • Risk assessments are now to be performed annually, as well as whenever significant changes are made to the Card Data Environment. What constitutes a significant change to the environment? There are no guidelines that specifically address this.
  • New password management processes and controls must be implemented and enforced.
  • The CDE must be formally defined, with an up-to-date diagram that shows payment flow across systems.
  • Merchants need to implement file change detection systems and then investigate and respond to all alerts generated by these systems. This type of system can generate many alerts every day. Kivu recommends that merchants determine who will monitor these alerts and review and document responses (a minimal sketch of file change detection follows this list).
  • Daily review of logs is required. Again, who will do this?
  • QSAs will have more responsibility to enforce the new guidelines.
  • PCI 3.0 will increase compliance costs, and those who complain may not fully understand the reasons for the process mandate.
  • There is a recommendation to avoid service providers that are non-compliant.
  • Protecting against memory scraping is now treated as a best practice under PCI 3.0.
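
To illustrate the file change detection point above, the sketch below hashes a set of monitored files and compares the result against a stored baseline. It is a bare-bones illustration rather than a replacement for a commercial file integrity monitoring product; the paths and baseline file name are assumptions.

# file_change_check.py - minimal sketch of file integrity monitoring:
# hash monitored files and compare against a saved baseline.
import hashlib
import json
import os

MONITORED = ["/etc/passwd", "/etc/hosts"]   # placeholder paths
BASELINE_FILE = "fim_baseline.json"         # hypothetical baseline store

def sha256(path):
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def current_state():
    return {p: sha256(p) for p in MONITORED if os.path.exists(p)}

if __name__ == "__main__":
    state = current_state()
    if not os.path.exists(BASELINE_FILE):
        with open(BASELINE_FILE, "w") as fh:
            json.dump(state, fh, indent=2)
        print("Baseline created; re-run to check for changes.")
    else:
        with open(BASELINE_FILE) as fh:
            baseline = json.load(fh)
        for path, digest in state.items():
            if baseline.get(path) != digest:
                print(f"ALERT: {path} has changed since the baseline")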

Has the Value of PCI Standards Declined?

It is tough to argue against good security and retailers accepting more responsibility for it. The buck has been passed to the retailer, although banks should take more responsibility to provide more security as well through chip technology or point-to-point encryption. Some retailers are moving ahead with tokenization and point-to-point encryption because they believe that PCI 3.0 compliance is not enough.

What Failures Do We See in PCI 3.0?

The PCI Security Standards Council has missed some key opportunities to clarify the standard and to address compliance as it relates to emerging technologies.

  • One significant issue is the failure of PCI 3.0 to address virtualization, cloud and mobile payment providers. Merchants frequently use these three areas, but PCI 3.0 neither addresses them in detail nor provides merchants with guidelines.
  • PCI 3.0 continues to ignore mobile payment processing and mobile device security, leaving merchants who support mobile payment technology on their own to determine how to be compliant. Card brands are reluctant to put security constraints on mobile technology through fear of stifling the growing revenue expected from mobile payments.
  • Some merchants remain non-compliant with PCI 2.0, yet they are expected to be compliant with PCI 3.0 by June. How will they be able to make all of the changes necessary? Will some merchants be allowed to become PCI 2.0 compliant at first and given additional time by the PCI Security Standards Council to comply with PCI 3.0?

Is PCI 3.0 Worth It?

PCI 3.0 is bigger, and therefore harder and more expensive to implement than PCI 2.0, but it offers additional, critical security benefits. It will take more time and resources for merchants to stay in compliance with PCI 3.0. We at Kivu believe that, going forward, it would be best to integrate PCI compliance activities into an organization’s year-round IT Security Management process.

Most computer compromises aren’t discovered until after an attack—sometimes days or weeks later. Shutting down a computer may halt malware activity, but it could have negative and unforeseen consequences. For example, it could become difficult to retrace information exfiltrated by a hacker or botnet. This is particularly important if significant time has passed between an attack and discovery of the malware.

During a forensic investigation, there should be a balance between rushing to remove malware and understanding the scope of the malware infestation in order to find a solution that deters future attacks.

What is Malware?

Malware is software that is designed for illicit and potentially illegal purposes. Malware may be a single software program or a collection of programs used to accomplish tasks such as:

  • Obtaining system control—for command and control of a computer
  • Acquiring unauthorized access to system resources—network intrusion
  • Interrupting business operation
  • Gathering information—reconnaissance
  • Holding digital assets hostage—ransomware

How Does Malware Infection Occur?

The Internet has opened the door to broad distribution of malware. It is possible for malware to originate from sources such as email, instant messaging, or infected file downloads. Malware can also spread through USB devices or connectivity to public WiFi hotspots.

The most complex malware tools may use a combination of distribution methods to infiltrate an organization. For example, an email may contain a hyperlink to a website that causes “dropper” software to download. The dropper software performs reconnaissance of its host computer and transmits results out to another computer on the Internet. The second computer analyzes the reconnaissance results and sends back malware that is customized to the host computer.

What are Common Types of Malware?

Virus. A virus is software that inserts malicious code into a computer and has the capability of spreading to other computers. The ability to propagate is a requirement for malware to be classified as a virus or worm.

Worm. Worms are a type of malware that propagate across networks. A worm finds its way by reading network addresses or email contact lists and then copying itself to identified addresses. Worms may have specific capabilities, such as file encryption or installation of certain software, including remote access software.

Trojan Horse. This type of malware enables unauthorized access to a victim computer. Unauthorized access could result in theft of data or a computer that becomes part of a distributed denial-of-service (DDoS) attack. Unlike viruses or worms, Trojan horse software does not spread to other computers.

Rootkits. Rootkit refers to malware that takes control of a host computer and is designed to evade detection. Rootkits accomplish evasion through tactics such as hiding in protected directories or disguising hidden processes and DLLs (Dynamic Link Libraries) as legitimate files, without the computer or user noticing an abnormality. Rootkits may defend themselves from deletion and may have the ability to re-spawn after deletion. Most notably, rootkits have the potential to operate in stealth mode for extensive periods of time and to communicate with external computers, often transmitting collected data from a victim computer.

Spyware. The purpose of spyware is to collect data from a victim computer. Spyware may exist as malware that is installed on a host computer or embedded within a browser. Spyware may collect data over an extensive time period without the victim ever knowing the extent of the spying activity. Spyware may collect keyboard strokes, take screenshots of user activity, or utilize built-in cameras to record video.

Browser Hijacker. This malware takes control of a user’s browser settings and changes the default home page and search engine. Browser hijacking software may disable search engine removal features and have the ability to re-generate after deletion. There may also be persistent, unwanted toolbars that attach to a browser.

Adware. Adware refers to software that has integrated advertising, particularly freeware software. Adware displays advertisements within the freeware product and transmits collected data back to a controlling party (e.g., an advertising distributor). A software creator may utilize advertisements to earn advertising revenue.

Ransomware. Ransomware is malware that encrypts part or all of a host computer. Encryption locks a victim out of important files or a computer until a ransom demand is paid, possibly in the form of bitcoins. If the ransom is paid, the victim has no guarantee that the ransomware will decrypt the computer.

Investigating Malware

When a malware infection is suspected, care should be taken to investigate and collect evidence where possible while performing remediation to remove the malware infection. The following guidelines should be considered when malware is suspected. If a forensics team is involved with the investigation, the following points will be addressed by forensic examiners.

  1. Assess the implications of powering down the potentially infected computer. Powering down a computer may stop malware in its tracks and result in the loss of potential evidence. In the case of ransomware, a shutdown could result in permanently unrecoverable data. The first response to possible malware infestation should be an evaluation of the victim computer and gathering of key evidence. If the malware is associated with network intrusion or other nefarious activity, evidence gathering may extend across multiple computers and the respective network that hosts the victim computer.
  2. Collect a sample of Random Access Memory (RAM). RAM is temporary memory that exists while a computing device is powered on. RAM is particularly important since malware has the ability to operate (and hide) in RAM. Capturing an image of the infected computer’s RAM, prior to shut down, enables a forensic examiner to assess the potential activity and functionality of the malware. Artifacts that may reside in RAM include:
    • Network artifacts, such as connections, ARP tables, and open interfaces
    • Processes and programs
    • Encryption keys
    • Evidence of code injections
    • Rootkit artifacts
    • DLL and driver information
    • Stored passwords for exfiltration containers
    • Commands typed at the command prompt
  3. Identify and preserve log files. Log files record a variety of information about system and application usage, user login events, unusual activity such as a software crash, virus activity, network traffic, etc. In the event of a potential malware infection or network intrusion event, log files should be collected and preserved for further analysis (a minimal preservation sketch follows this list). If logging activity is turned off or log files are set to be overwritten, they may be of limited value for an investigation.
  4. Interview users who may have received suspicious emails or observed unusual computer activity. Computer users and IT staff may have important information regarding the origin, timeline and possible activity of the malware. Early in an investigation, interviews should be conducted to assess the potential scope and breadth of an incident. If malware was introduced through user activity, such as a phishing email, the suspect email may still reside in a user’s email. In the case of malware that entered a computer through a software vulnerability (e.g., code injection through an unsecured website), IT staff may have information about unusual events in system logs or data leaving through a firewall at unusual times (e.g., after business hours).
  5. Determine whether to investigate other computers. Malware may spread through computers within the same network segment or a shared file server. Investigation of malware should include scans of potentially connected computers to assess the possibility of further malware infestation. Additionally, if external connections such as Remote Desktop or GoToMyPC exist and are active, then a determination should be made to analyze externally connected computers.
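
For the log preservation step above, recording a cryptographic hash of each file at collection time helps demonstrate later that the evidence was not altered. The sketch below is a minimal illustration only; the directory paths are placeholders, and a real collection should follow the examiner's own chain-of-custody procedures.

# preserve_logs.py - minimal sketch: copy log files to an evidence folder
# and record SHA-256 hashes for later integrity verification.
import hashlib
import shutil
from pathlib import Path

LOG_DIR = Path("/var/log")            # placeholder source directory
EVIDENCE_DIR = Path("./evidence")     # placeholder destination

def sha256(path):
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    EVIDENCE_DIR.mkdir(exist_ok=True)
    manifest = EVIDENCE_DIR / "hash_manifest.txt"
    with open(manifest, "w") as out:
        for log in sorted(LOG_DIR.glob("*.log")):
            dest = EVIDENCE_DIR / log.name
            shutil.copy2(log, dest)               # copy2 preserves timestamps
            out.write(f"{sha256(dest)}  {log.name}\n")
    print(f"Hashes written to {manifest}")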

For more information about malware infection and forensic investigation, please contact Kivu.

One of the most popular email programs used today is Gmail.  Kivu initiated a project to determine the most efficient and defensible process to collect Gmail account information. This blog post is the second in a series of articles that evaluate Gmail collection options for computer forensic purposes.

A common email client that can be incorporated into a forensic email collection is (shock horror) Microsoft Outlook. Outlook is included in the Microsoft Office package, and for many years it was king of email clients for the business environment. As the popularity of mobile phones and web-based clients increased, however, Microsoft Outlook’s use has declined.

We will be using the latest version, Outlook 2013, for our collection of forensic data. While not usually seen as a part of the forensic investigator’s tool kit, Microsoft Outlook has some interesting attributes that can be verified in use, and tested as to its output. You just need to know what you’re doing and (as in all forensic work) be able to confirm the veracity of the data.

Outlook has an option for IMAP setup that allows automatic testing of account credentials. Outlook will send an email from the account to the account to ensure that the account credentials are correct. Outlook 2010 has the ability to disable this test, but in Outlook 2013 the option is greyed out, and the test email is sent automatically. If account intrusion needs to be kept to a minimum, it is good to keep this in mind.

How to Use Microsoft Outlook for Gmail Collection, Step-by-Step

Change Microsoft Outlook Settings

To start your Gmail collection, check that the settings in the target Gmail account are set to IMAP. Then, open up the email account settings, either through Outlook File>Info>Account Settings or through the Control Panel>Mail>Email accounts. Selecting New… in the Email tab will prompt you for the service you wish to set up. Check E-mail Account, click on Next, and then select Manual Setup. Click Next again.

Unlike Gmvault, which we evaluated in the first article on this topic, Outlook requires a bit more work to ensure a smooth email collection. In addition to the user name and password, Outlook requests both the incoming and outgoing servers for the IMAP account.

User Information
  Your Name: (Top Level Email Name)
  Email Address: (Collection Gmail address)
Server Information
  Account Type: IMAP
  Incoming mail server: imap.gmail.com
  Outgoing mail server (SMTP): smtp.gmail.com
Logon Information
  User Name: (Collection Gmail address)
  Password: (Collection Gmail password)

Click on More Settings to open up Internet email settings. Under Outgoing Server, check the box for Outgoing server requires authentication and use the same settings as your incoming mail server. Click on the Advanced tab and change the server port numbers to 993 for incoming and 465 for outgoing. Select SSL as the encryption type for both, and set the server timeout to 5 minutes. These are Google’s recommended settings for using the Outlook client with Gmail accounts.

Start Gmail Collection

Go to the Send/Receive tab, click on the drop-down list for Send/Receive Groups, and select Define Send/Receive Groups…. In the pop-up window, select All Accounts and click Edit on the right-hand side of the window. Check all boxes except Send mail items and select Download complete items… If you want to collect only specific folders, use the custom behavior option to select the folders you want to collect. Click OK, and click OK again. Then you can either select the Group to Send/Receive from the drop-down menu or use the shortcut key (F9).


Track Gmail Collection

Once the collection has started, there are a few options and settings that can help minimize intrusion and track the collection – again, crucial steps if you are hoping to achieve a forensically sound collection. Outlook’s default setting marks an email as “Read” – whenever you select a new email, the previous email is marked as read. To change this setting, go into reading pane options either via the File>options>Mail>Outlook panes>Reading Pane… or the View tab and click on the Reading Pane drop down menu. In the options screen uncheck all of the boxes. Now, Outlook will not mark the emails you view as read when you look through them.

For tracking, to ensure that you have reviewed the correct number of emails, you’ll need to tell Outlook to show all items in a folder rather than just the unread items. Unfortunately, this can only be done folder by folder. Right click on a folder and select Properties. Select the option Show Total Numbers of Items then click OK. Repeat with all of the folders that you are collecting. If a folder does not show a number, there are 0 emails in the folder. Compare the folder numbers with the counts you can view online at: www.gmail.google.com. Once all of the folder counts match, the collection is finished.
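
If you want to cross-check those folder counts outside of Outlook, Python's standard imaplib module can connect to Gmail over the same IMAP settings and report the message count per folder. The sketch below is a minimal illustration; the credentials are placeholders, and depending on the account's security settings an app password or OAuth may be required instead of the regular password. Selecting folders read-only avoids marking messages as read.

# gmail_folder_counts.py - minimal sketch: list Gmail folders and message
# counts over IMAP to cross-check a collection. Credentials are placeholders.
import imaplib

ACCOUNT = "collection.account@gmail.com"   # hypothetical account
PASSWORD = "app-password-here"             # app password or OAuth likely required

if __name__ == "__main__":
    conn = imaplib.IMAP4_SSL("imap.gmail.com", 993)
    conn.login(ACCOUNT, PASSWORD)
    status, folders = conn.list()
    for raw in folders:
        # Folder lines look like: (\HasNoChildren) "/" "INBOX"
        name = raw.decode().split(' "/" ')[-1].strip('"')
        status, data = conn.select(f'"{name}"', readonly=True)  # read-only select
        if status == "OK":
            print(f"{name}: {data[0].decode()} messages")
    conn.logout()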

Working with Offline Email Storage

Outlook uses an Offline Storage Table (OST) format to store emails from POP, IMAP and other web-based email accounts offline when the Internet is not available. When server access is resumed, the accounts are synced to the cloud storage. Outlook also uses Personal Storage Table (PST) files to back up and transfer email files and accounts. While some forensic processing tools can extract data from OST files, almost all of them can extract data from PST files. PST files can also be opened on any computer with Outlook.

To export the collected PST files, select File>Open>Import, Export to File, and then select Outlook Data File (.pst). Browse to where you want the file to be saved. Select Allow duplicate items to be created so all items will be exported. Once the PST has been backed up and you have verified that the item count is correct, you can remove the account from the account settings and undo any options changed in the Gmail account. Then, inform your client that they can now access their email and should consider changing their password.

Following are the Pros and Cons of Using Microsoft Outlook for Forensic Investigation:

Pros

• The wide availability of Outlook
• Once all options are set, processing is simple and quick
• Native PST export

Cons

• Options are expansive and sometimes unintuitive
• Can be intrusive – Outlook sends test emails during setup and may mark unread mail as read

About Kivu

Kivu is a licensed California private investigations firm, which combines technical and legal expertise to deliver investigative, discovery and forensic solutions worldwide. Author, Thomas Larsen, is a data analyst in Kivu’s San Francisco office. For more information about how to retrieve and store Gmail messages for forensic investigation, please contact Kivu.

In yet another laptop data breach incident, Riverside County Regional Medical Center in Riverside, California reported that a laptop containing Personally Identifiable Information (“PII”) and Protected Health Information (“PHI”) for about 7,900 patients went missing in December 2014. According to a letter filed with the California State Attorney General, potentially exposed PII and PHI information may have included Social Security Numbers, demographic information (such as name or date of birth), medical record number, diagnosis, treatment, and other medical information. Ironically, breaches involving laptops are highly preventable with the use of encryption technology.

Encryption is the conversion of electronic data into another form, called ciphertext, which cannot be easily understood by anyone except authorized parties. To read the data, you need a key or password to decrypt it. Crucially, under the California Breach Notification Law SB 1386, and most other state breach notification laws, the fact that lost data was properly encrypted will avoid the need for public notification.
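
To make the idea concrete, the short Python sketch below uses the third-party cryptography package (an assumption on our part; any reputable library would illustrate the same point) to turn plaintext into ciphertext and back with a key. Without the key, the ciphertext is unreadable, which is exactly why properly encrypted data on a lost laptop generally falls outside notification requirements.

# encryption_demo.py - minimal sketch of symmetric encryption with the
# third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

if __name__ == "__main__":
    key = Fernet.generate_key()          # keep this key secret and backed up
    cipher = Fernet(key)

    plaintext = b"Patient: Jane Doe, MRN 000000"   # fabricated example record
    ciphertext = cipher.encrypt(plaintext)
    print("Ciphertext:", ciphertext[:40], b"...")

    # Only a holder of the key can recover the original data.
    print("Decrypted:", cipher.decrypt(ciphertext))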

It’s therefore highly important to confirm that any device in use by an organization is actually encrypted.

Encryption typically operates in the background

On laptops or desktops, installed encryption products typically function in the background. For example, a billing analyst using an encrypted desktop may interact with billing software, Microsoft Excel and email throughout a business day to complete work. This analyst may only encounter encryption while logging in at the beginning of the day and may not realize encryption is present. While some products such as Microsoft BitLocker employ a lock symbol next to a drive icon to indicate the presence of active encryption, most encryption products bury the status of encryption in an operating system menu or within software. Determining whether encryption is present and whether it is active are two distinct steps that require knowledge about a computer’s operating system and the ability to search a computer.

BitLocker Enabled in Microsoft Windows

How to Tell Whether Encryption is Present?

Ideally, encryption should be installed so that it protects an entire hard drive—“whole disk encryption” — and not just specific folders or email — “file-level encryption”. In newer computers, encryption is often integrated in the operating system (such as the encryption products built into Apple’s new operating system Yosemite or Microsoft’s Windows 7 and up). Encryption may be set-up for default installation (i.e., a user has to de-select encryption during computer set-up).

1. Determine the version of operating system (“OS”).

OS Type: Microsoft Windows 8.1

OS Type: Apple OSX Versions

2. If native OS encryption is available, locate built-in encryption and review status.

  • Windows. In computers running Microsoft Windows 7 Ultimate and Enterprise (as well as Windows 8 versions), BitLocker encryption is installed and provides whole disk encryption capability. There are caveats to the use of BitLocker (such as configuration with or without hardware-level encryption), but the presence of BitLocker can be confirmed by searching for BitLocker in the Control Panel. More details are available at http://windows.microsoft.com/en-US/windows7/products/features/bitlocker.

Windows with BitLocker Activated

  • Apple. In Apple computers, FileVault 2 provides whole disk encryption capability. To determine the status of FileVault 2 whole disk encryption in Apple Yosemite, go to the Security & Privacy pane of System Preferences. For older Apple OSX versions with FileVault, encryption is limited to a user’s home folder rather than whole disk encryption. More details are available at http://support.apple.com/en-us/HT4790. (A scripted status check for both platforms is sketched below.)


Apple OSX FileVault 2 Menu
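
For a quick scripted check of the built-in products above, both platforms expose a command-line status query: manage-bde -status on Windows and fdesetup status on OS X. The Python sketch below simply wraps whichever one applies; it is a convenience illustration only, and the Windows command requires an elevated (administrator) prompt.

# encryption_status.py - minimal sketch: query native whole disk encryption
# status (BitLocker on Windows, FileVault on OS X) via built-in commands.
import platform
import subprocess

def check_status():
    system = platform.system()
    if system == "Windows":
        cmd = ["manage-bde", "-status"]      # requires an administrator prompt
    elif system == "Darwin":
        cmd = ["fdesetup", "status"]
    else:
        raise RuntimeError(f"No native check implemented for {system}")
    return subprocess.run(cmd, capture_output=True, text=True).stdout

if __name__ == "__main__":
    print(check_status())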

3. Look for a third-party application.

There are several third-party software applications that provide whole disk encryption (examples listed below). These applications can be found by searching a computer’s installed applications. To determine whether encryption is active, the application will need to be opened and reviewed. Many encryption applications will use a visual symbol or term such as “active” to indicate that encryption is functioning. (For a comparison of encryption products, review the following discussion: http://en.wikipedia.org/wiki/Comparison_of_disk_encryption_software.)

Software                                             Windows      Mac OSX
1. Built into Operating System (“OS”)                BitLocker    FileVault 2
2. Third-Party Software Products
   Symantec PGP                                       X            X
   Dell Data Protection Encryption (DDPE)             X            X
   Check Point Full Disk Encryption Software Blade    X            X
   Pointsec (Check Point)                             X
   DriveCrypt                                         X

  • Finding third-party software on a Windows computer.

i. Locate and open the Control Panel by clicking on the Start menu (not available in Windows 8) or using Windows search. (To learn more about the Control Panel, refer to the link http://support.microsoft.com/search?query=control%20panel.)

Windows Search

ii. Navigate to the Programs section of the Control Panel.

Windows Select Programs Section

iii. Click on Programs and Features.

Windows Select Programs and Features

iv. Scroll through the installed software applications to determine whether third-party encryption software is installed. (A scripted version of this check is sketched after the Apple steps below.)


Windows Review Installed Programs

  • Finding third-party software on an Apple computer.

i. Apple computers are configured with Spotlight — an Apple-native search utility that catalogues and organizes content. (See the following URL for information on Spotlight: http://support.apple.com/en-us/HT204014.)

ii. Spotlight can be found by clicking on the magnifying glass symbol in the upper right-hand corner of Apple’s menu bar.

iii. Enter the name of the third-party software into the Spotlight search box and review search results. (See the “quicktime” search example in the screenshot below.)


Apple Spotlight Search
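
The manual review in the steps above can also be scripted. On Windows, installed applications are listed under the registry's Uninstall keys, which Python's standard winreg module can read; the sketch below searches those entries for common encryption product names. The keyword list is illustrative, not exhaustive, and a hit only indicates that a product is installed, not that encryption is active.

# find_encryption_software.py - minimal sketch (Windows only): search the
# registry's Uninstall keys for known third-party encryption products.
import winreg

KEYWORDS = ["PGP", "Data Protection", "Check Point", "Pointsec", "DriveCrypt"]
UNINSTALL_PATHS = [
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall",
    r"SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall",
]

def installed_programs():
    for path in UNINSTALL_PATHS:
        try:
            root = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path)
        except OSError:
            continue
        for i in range(winreg.QueryInfoKey(root)[0]):
            try:
                sub = winreg.OpenKey(root, winreg.EnumKey(root, i))
                name, _ = winreg.QueryValueEx(sub, "DisplayName")
                yield name
            except OSError:
                continue

if __name__ == "__main__":
    for name in installed_programs():
        if any(k.lower() in name.lower() for k in KEYWORDS):
            print("Possible encryption product installed:", name)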

Caution with the Use of Encryption

  1. User Versus IT (Information Technology department) Installation.

    In Apple FileVault 2 user guidance, three scenarios are identified for the installation of encryption — IT only, user with IT support or user only. These scenarios apply to the installation of any encryption and software product. While it is less expensive to have end users configure devices, encryption is the type of activity that can render a laptop useless if improperly deployed. As a rule of thumb, IT should direct installation and configuration of encryption to protect corporate assets.

  2. Properly Set Up Users.

    When encryption is deployed, there is often a requirement to set up “approved” users for access. If a user is not set up, then access is denied. If IT does not have user-level access, then IT may be locked out.

  3. Key Control.

    IT should maintain control of encryption keys. IT should have keys for each device with deployed encryption. Further, all encryption keys should be backed up to a source NOT controlled by IT. With tight control and access over encryption keys, an organization minimizes the chance that encryption will lock an organization out of corporate assets. Providing IT with access to each computer’s encryption keys also prevents a disgruntled employee from locking an organization out of their own computers.

  4. Fully Document IT Encrypting Devices.

    If a device is lost or stolen, it may be crucial to prove that the device was encrypted in order to avoid the need for a costly notification of any persons whose PII has been compromised. Make sure that IT has fully documented the encryption process and specific serial numbers of devices so protected.

  5. Don’t Forget Other Sources Such as Cloud Applications.

    Document and control cloud data storage of corporate assets. For each computer where cloud-based applications are running (including email), digital assets should be evaluated as to whether encryption is required locally and in the cloud. Many cloud storage applications offer encryption for stored data and data being transmitted.

Within the past year, Kivu has seen several malware trends emerging, including exploitation of vulnerabilities in widely used software (such as Heartbleed and the Shellshock flaw in Bash), cycles of ransomware and destructive malware (master boot record wipers, hard drive wipers), and an increase in rootkits, botnets and traditional drive-by malware. In 2015, we expect to see new malware trends, including an increase in social engineering (attacking the weakest link), exploitation of identified security flaws in newly developed mobile payment applications, exploitation of cloud SharePoint systems, and continued exploitation of traditional Point of Sale (POS) credit card systems. Kivu also expects an increase in exploit kits for all types of mobile and traditional devices that contain diverse functionality.

Following is what Kivu recommends that companies do to help secure their systems and data.

Protecting Your Computer Environment Against Malware

To protect your environment, Kivu recommends a defense-in-depth approach, coupled with segmentation of sensitive data. Segmenting your network environment adds an additional security layer by separating your sensitive traffic from other regular network traffic. Servers holding PHI, PII or payment card (PCI) data should be segmented from the backbone and WAN. A separate firewall should protect this segmented data.

Ensure that your firewall is fine-tuned and hardened and that vital security logs are maintained for at least 2-3 months. Conduct regular external and internal vulnerability network scans to test your security perimeters and detect vulnerabilities. Remediate these security flaws in a timely manner.
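
One simple way to verify the log retention point is to check how far back the oldest log files actually go. The sketch below looks at file modification times in a log directory and warns if coverage falls short of a 90-day target; the path and threshold are assumptions, and a real check should also confirm the logs' contents rather than just their timestamps.

# log_retention_check.py - minimal sketch: warn if the oldest file in a log
# directory is newer than the retention target (i.e., history is too short).
import datetime
from pathlib import Path

LOG_DIR = Path("/var/log")        # placeholder log location
RETENTION_DAYS = 90               # example target of roughly three months

if __name__ == "__main__":
    files = [p for p in LOG_DIR.iterdir() if p.is_file()]
    if not files:
        raise SystemExit(f"No files found in {LOG_DIR}")
    oldest = min(files, key=lambda p: p.stat().st_mtime)
    age = datetime.datetime.now() - datetime.datetime.fromtimestamp(
        oldest.stat().st_mtime)
    if age.days < RETENTION_DAYS:
        print(f"WARNING: oldest log ({oldest.name}) is only {age.days} days old")
    else:
        print(f"OK: log history spans at least {age.days} days")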

Perimeter protection devices require regular maintenance and monitoring. Ensure that your ingress/egress protection devices (IDS/IPS) are monitoring in real time to detect malicious network traffic.

Be sure to maintain and update your software and system applications on a regular basis to eliminate security flaws and loopholes. Verify that all security applications within your environment are fine-tuned and hardened and that security logs are maintained. Review your security logs on a regular basis to ensure that logging is enabled and that valid data is being captured and preserved for an extended time period without being overwritten.

Remote Access Considerations

Kivu recommends limiting and controlling remote access within your environment with two-factor authentication. Create a strong password policy that includes changing passwords frequently and eliminating default passwords for systems and software applications that are public facing.

For outsourced IT services, make sure your data security is in compliance with the latest standards and policies. Maintain and verify on a regular basis that all 3rd party vendors follow outlined security policies and procedures. Eliminate account and password sharing and ensure that all 3rd party vendors use defined and unique accounts for remote access.

Securing Vulnerable Data

Protecting your data is not only the responsibility of Information Security; it is everyone’s responsibility to do their part to keep your environment safe and secure. Encrypt, protect and maintain your critical data. Upgrade older systems when possible and verify that sensitive data is encrypted during transmission and data storage. Manage and verify data protection with all 3rd party vendors.

About Kivu

Kivu is a licensed California private investigations firm, which combines technical and legal expertise to deliver investigative, discovery and forensic solutions worldwide. Author, Thomas Langer, EnCE, CEH, is an Associate Director in Kivu’s Washington DC office. For more information about malware trends and what your company can do to better protect its environment and data, please contact Kivu.

Despite the blizzards that hit the East Coast, I had the pleasure this week of presenting to the Business Law & Corporate Counsel Sections at the New York State Bar Annual Conference in a very cold Manhattan. The presentation was on the legal and privacy issues both before and after data breaches – especially the liability issues arising from the (almost) inevitable plaintiffs’ class actions, employee suits, and regulatory proceedings.

I continued to beat my drum about:

• the danger of relying on the wrong “reasonable standards” (given the different, and sometimes conflicting standards from different regulators and AG opinions);

• proving that you have identified and documented the relevant security standard and what your peers are doing BEFORE the breach – not as an after-thought when preparing for litigation;

• the very real danger of claiming false levels of security, particularly if you rely on third-party vendors whom you don’t actually audit;

• and the increased granularity of regulatory scrutiny (e.g. under the new NY State Dept. of Financial Services examination procedure, where they want a copy of your CISO’s CV – which will be scary for those small financial institutions who have simply appointed the most tech-savvy executive as the de facto CISO – see the New Cyber Security Examination Process).

Other take-aways from my great panel members:

Yanai Z. Siegel, Esq. (Co-Chair, Cyber Liability and Data Privacy Practice Group at Your House Counsel / Shafer Glazer, LLP):

1. In the event of a data breach, your computer system becomes a crime scene. Preserve the evidence for IT forensics, so any recourse and prosecution options remain available.

2. Personal information is like toxic waste. You don’t want to spill it. Check your statutes and regulations to find out what is on the hazardous materials list, and then find out if you are keeping any and where you’re keeping it on your computer system.

Patricia Harman (Editor-in-Chief, Claims Magazine):

No company, no matter how large or small, is immune to a cyberattack. It is not a matter of if a firm will be breached, but when. Companies need to develop an incident response team and an incident response plan before there is a breach. After the event will be too late.

Bruce Raymond, Esq. CIPP/US (Raymond Law Group LLC):

Privacy programs can be daunting for medium and small businesses, but all well-managed companies need this protection. In today’s risk environment, it’s not a “nice to have”, it’s a “need to have.”

Social media has become a notable source of potential forensic evidence, with social media giant Facebook being a primary source of interest. With over 1.35 billion monthly active users as of September 30, 2014 [1], Facebook is considered the largest social networking platform.

Kivu is finding that forensic collection of Facebook (and other sources of social media evidence) can be a significant challenge because of these factors:

1. Facebook content is not a set of static files, but rather a collection of rendered database content and active programmatic scripts. It’s an interactive application delivered to users via a web browser. Each page of delivered Facebook content is uniquely created for a user on a specific device and browser. Ignoring the authentication and legal evidentiary issues, screen prints or PDF printouts of Facebook web pages often do not suffice for collecting this type of information – they simply miss parts of what would have been visible to the user – including, interestingly, the unique ads that were tailored to the specific user because of their preferences and prior viewing habits.

2. Most forensic collection tools have limitations in the capture of active Internet content, and this includes Facebook. Specialized tools, such as X1 Social Discovery and PageFreezer, can record and preserve Internet content, but gaps remain in the use of such tools. The forensic collection process must adapt to address the gaps (e.g., X1 Social Discovery does not capture all forms of video).

Below are guidelines that we at Kivu have developed for collecting Facebook account content as forensic evidence:

1. Identify the account or accounts that will be collected – Determine whether or not the custodian has provided their Facebook account credentials. If no credentials have been provided, the investigation is a “public collection” – that is, the collection needs to be based on what a Facebook user who is not “friends” with the target individual (or friends with any of the target individual’s friends, depending on how the target individual has set up their privacy settings) can access. If credentials have been provided, it is considered a “private collection,” and the investigator will need to confirm the scope of the collection with attorneys or the client, including what content to collect.

2. Verify the ownership of the account – Verifying an online presence through a collection tool as well as a web browser is a good way to validate the presence of the target account.

3. Identify whether friends’ details will be collected.

4. Determine the scope of collection – (e.g. the entire account or just photos).

5. Determine how to perform the collection – which tool or combination of tools will be most effective? Make sure that your tool of choice can access and view the target profile. The tool X1 Social Discovery, for example, uses the Facebook API to collect information from Facebook. The Facebook API is documented and provides a foundation for consistent collection versus a custom-built application that may not be entirely validated. Further, Facebook collections from other sources such as cached Google pages provide a method of cross-validating the data targeted for collection.

6. Identify gaps in the collection methodology.

a. If photos are of importance and there is a large volume of photos to be collected, a batch script that can export all photos of interest can speed up the collection process. One method of doing so is a mouse recording tool.

b. Videos do not render properly while being downloaded for preservation, even when using forensic capture tools such as X1 Social Discovery. If videos are an integral part of an investigation, the investigator will need to capture videos in their native format in addition to testing any forensic collection tool. It should be noted that there are tools such as downvids.net to download the videos, and these tools in combination with forensic collection tools such as X1 Social Discovery provide the capability to authenticate and preserve video-based evidence.

7. Define the best method to deliver the collection – If there are several hundred photos to collect, determine whether all photos can be collected. Identify whether an automated screen capture method is needed.

8. If the collection is ongoing (e.g., once a week), define the recurring collection parameters.

Kivu is a licensed California private investigations firm, which combines technical and legal expertise to deliver investigative, discovery and forensic solutions worldwide. Author Katherine Delude is a Digital Forensic Analyst in Kivu’s San Francisco office. To learn more about forensically preserving Facebook content, please contact Kivu.

[1] http://newsroom.fb.com/company-info/ Accessed 11 December 2014.

The cloud is becoming an ever-increasing repository for email storage. One of the more popular email programs is Gmail, with its 15 GB of free storage and easy access anywhere for users with an Internet connection. Because of the enormous number of email accounts, the potential for large amounts of data, and the lack of direct revenue from the service, Google throttles large-scale retrieval to lessen the burden on its servers worldwide.

This blog post is the start of a series of articles that will review Gmail collection options for computer forensic purposes. Kivu initiated a project to find the most efficient and defensible process to collect Gmail account information. The methods tested were Microsoft Outlook, Gmvault, X1 Social Discovery and Google scripts.

All four programs were run through two Gmail collection processes, with a focus on:

  • Discovering how the program stores emails.
  • Identifying whether the program encounters throttling and, if so, how it deals with it.
  • Determining if current forensic tools can process the emails collected.
  • Measuring how long the program takes to process the email, and the level of examiner involvement necessary.

Kivu employees created two Google email accounts for this analysis. Each email account had over 30,000 individual emails, which is a sufficient amount for Google throttling to occur and differences in speed to become apparent. The data included attachments as well as multi-recipient emails to incorporate a wide range of options and test how the programs collect and sort variations in emails. Our first blog post focuses on Gmvault.

What is Gmvault and How Does It Work?

Gmvault is a third party Gmail backup application that can be downloaded at Gmvault.org. Gmvault uses the IMAP protocol to retrieve and store Gmail messages for backup and onsite storage. Gmvault has built-in protocols that help bypass most of the common issues with retrieving email from Google. The process is scriptable to run on a set schedule to ensure a constant backup in case disaster should happen. The file system database created by Gmvault can be uploaded to any other Gmail account for either consolidation or migration.

During forensic investigation, Gmvault can be used to collect Gmail account data with minimal examiner contact with the collected messages. The program requires user interaction with the account twice – once to allow application access to the account and again at the end to remove the access previously granted. Individual emails can be viewed without worrying about changing metadata, such as Read Status, and/or Folders/Labels because this information is stored in a separate file with a .meta file extension.
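
Because the messages are stored as individual .eml files with a separate metadata sidecar, standard Python tooling can review headers without touching the live account. The sketch below parses an .eml file with the standard email module and, assuming the companion .meta file is JSON (which it appears to be in current Gmvault versions, but verify for your build), prints the stored labels alongside the message headers. The path and field names are assumptions.

# review_gmvault_item.py - minimal sketch: read one Gmvault .eml message and
# its .meta sidecar (assumed to be JSON) without touching the live account.
import json
from email import policy
from email.parser import BytesParser
from pathlib import Path

EML_PATH = Path("gmvault-db/db/2015-01/1234567890.eml")   # hypothetical path

if __name__ == "__main__":
    with open(EML_PATH, "rb") as fh:
        msg = BytesParser(policy=policy.default).parse(fh)
    print("From:   ", msg["From"])
    print("To:     ", msg["To"])
    print("Date:   ", msg["Date"])
    print("Subject:", msg["Subject"])

    meta_path = EML_PATH.with_suffix(".meta")
    if meta_path.exists():
        meta = json.loads(meta_path.read_text())
        # Field names depend on the Gmvault version; "labels" and "flags"
        # are assumptions for illustration.
        print("Labels: ", meta.get("labels"))
        print("Flags:  ", meta.get("flags"))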

How to Use Gmvault for Forensic Investigation

Gmvault needs very little user input and can be initiated with this command:

$> gmvault sync [email address]

We suggest using the following options:

$> gmvault sync -d [Destination Directory] --no-compression [email address]

“-d” enables the user to change where the download will go, allowing the data extraction to go directly to an evidence drive (default: a gmvault-db folder in the user’s home directory).

“--no-compression” downloads .eml files rather than the gzip-compressed default. Compression comes with a rare chance of data corruption during both the compression and decompression processes, so unless size is an issue, it is better to use the “--no-compression” option. Download speed is unaffected by compression, although compressed files are roughly 50% of the uncompressed size.

Next, sign in to the Gmail account to authorize Gmvault access. The program will create 3 folders in the destination drive you set, and emails will be stored by month. The process is largely automated, and Gmvault manages Google throttling. It accomplishes this by disconnecting from Google, waiting a predetermined number of seconds and retrying. If this fails 4 times, the email is skipped, and Gmvault moves on to the next set of emails. When finished with the email backup, Gmvault checks for chats and downloads them as well.
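
The disconnect, wait and retry behavior described above is a standard pattern for working around provider throttling. The Python sketch below shows the general idea in isolation (it is not Gmvault's actual code): retry a fetch a fixed number of times with an increasing delay, then skip the item and move on.

# retry_pattern.py - minimal sketch of the retry/back-off pattern used when a
# provider throttles requests. fetch_message() is a stand-in for a real call.
import time

MAX_RETRIES = 4

def fetch_message(message_id):
    """Placeholder for a real IMAP fetch that may fail when throttled."""
    raise ConnectionError("simulated throttling")

def fetch_with_retries(message_id):
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            return fetch_message(message_id)
        except ConnectionError:
            wait = 2 ** attempt            # back off: 2, 4, 8, 16 seconds
            print(f"Throttled on {message_id}; retry {attempt} in {wait}s")
            time.sleep(wait)
    print(f"Skipping {message_id} after {MAX_RETRIES} failed attempts")
    return None

if __name__ == "__main__":
    fetch_with_retries("msg-0001")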

When Gmvault is finished, a summary of the sync is displayed in the cmd shell. Gmvault performs a check to see if any of the emails were deleted from the account and removes them from the database. This should not be a problem for initial email collections, but it will need to be noted on further syncs for the same account. The summary shows the total time for the sync, number of emails quarantined, number of reconnects, number of emails that could not be fetched, and emails returned by Gmail as blank.

To obtain the emails that could not be fetched by Gmvault, simply run the same cmd line again:

$> gmvault sync -d [Destination Directory] --no-compression [email address]

Gmvault will check whether emails are already in the database, skip those that are, and then download the items skipped in the previous sync. It may take up to 10 runs to recover all skipped emails, but the process can usually be completed within 5 minutes.

Be sure to remove authorization once the collection is complete.

Now you should have all of the emails from the account in .eml format, stored by date in multiple folders. Gmvault can then be used to export these files into a more useable storage system. The database can be exported as offlineimap, dovecot, maildir or mbox (default). Here’s how:

gmvault-shell>gmvault export -d [Destination Directory] [Export Directory]

Following are the Pros and Cons of Using Gmvault:

Pros:

  • Easy to setup and run
  • Counts total emails/collected emails to quickly know if emails are missing
  • 50% compression
  • Can be scripted to collect multiple accounts

Cons:

  • No friendly UI
  • Needs further processing to get to a user friendly deliverable
  • Will sometimes not retrieve the last few emails

The enduring onslaught of data breach events such as the theft of 4.5 million health records from Community Health Systems or the recent staggering loss of information for 76 million JPMorgan accounts continues to highlight the need for robust information security and the ability to proactively prevent and redress potential security incidents. In response, organizations have increased investment in better information security programs and supporting technologies. However, while more organizations may be better positioned to cope with data breach events, information security continues to lack appropriate coverage of cloud and mobile device technology risks.

Lags in InfoSec Deployment:

According to the 2014 Global State of Information Security® Survey of information executives and security practitioners, organizational leaders expressed confidence in their information security activities (nearly three-quarters of study respondents reported being somewhat or very confident). However, the survey reveals gaps in the application of information security for cloud and mobile technologies. Nearly half of respondents reported that their organizations used cloud computing services, but only 18% reported having governance policies for cloud services. Furthermore, less than half of respondents reported having a mobile security strategy or mobile device security measures such as protections for email/calendaring on employee-owned devices.

Real Issue is Lack of Knowledge

Gaps in cloud and mobile information security represent a broader trend that exists even in regulated industries. For example, in the 2013 Ponemon report, “The Risk of Regulated Data on Mobile Devices & in the Cloud”, 80% of IT professionals could not define the proportion of regulated data stored in the cloud and on mobile devices. The gap in information security does not appear to be limited to the deployment of policies and controls. Instead, the potential issues with cloud and mobile information security stem from a lack of knowledge concerning storage and use of data. As noted in the study “Data Breach: The Cloud Multiplier Effect,” many respondents rated their organizations as having low effectiveness in securing data and applications in the cloud.

Reducing Cloud and Mobile Technology Risks

Developing an appropriate security posture for cloud and mobile technologies should begin with the realization that information security requirements for these technologies differ from traditional IT infrastructure. For example, the responsibility for storage and use of data in the cloud is shared by a greater number of parties—organization, employees, external vendors, etc. Additionally, contracts and written policies for cloud applications must specify more granular coverage for access, use, tracking and management of data. In the event of a potential security incident, possible sources of evidence, such as security logs, are stored externally and may require the assistance of specific employees or service providers.

The following considerations provide a starting point for the development of information security practices that are relevant to cloud and mobile technologies.

1. Identify security measures that are commensurate with cloud and mobile technologies.

a. Use security features that are built into cloud and mobile technologies. This includes access controls and encryption. Frequently, security features that would have prevented major cloud-based breaches (such as multi-factor authentication and text-to-cellphone warnings of suspicious activity) are already made available by cloud service providers. However, users of these services, whether individuals or large corporate clients, are frequently delaying full implementation of available security options due to cost or organizational concerns.

b. Implement additional security tools or services to address gaps in specific cloud and mobile technologies. For example, software-based firewalls to manage traffic flow may also provide logging capability that is missing from a cloud service provider’s capabilities.

2. If possible, use comprehensive solutions for user, device, account, and data management.

a. Manage mobile devices and their contents. Mobile device management (MDM) solutions enable organizations to coordinate the use of applications and control organizational data across multiple users and mobile devices.

b. Use available tools in the cloud. Cloud service providers such as Google Apps provide tools for IT administration to manage users, data and specific services such as Google Drive data storage. Unfortunately, many organizations do not utilize these tools and take risks such as losing control over email account access and content.

3. Maintain control over organizational data.

a. IT should control applications used for file-sharing and collaboration. Cloud-based tools such as Dropbox provide a robust method of sharing data. Unfortunately, Dropbox accounts often belong to the employee and not the organization. In the case of a security incident, IT may be locked out of an employee’s personal account.

b. Users should not be responsible for security. Organizations often entrust employees and business partners with sensitive data. This includes maintaining security requirements such as use of encryption and strong passwords. The organization that owns the data (usually its IT department) should have responsibility for security, and this includes organizational data stored outside of an organization’s internal IT infrastructure.

c. Encryption keys should be secured and available to IT in the case of a potential incident. With the advent of malware such as ransomware that holds data captive, and the risk that employees could destroy encryption keys, securing encryption keys has become a vital step in the potential recovery of data. If IT does not maintain master control over encryption keys, important organizational data could be rendered inaccessible during a security incident.
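As a minimal illustration of the key-escrow idea above, the sketch below generates a data-encryption key, places a copy in an IT-controlled escrow location, and only then encrypts a file. It assumes the third-party Python "cryptography" package is installed; the file paths are hypothetical examples, not a prescribed design.

    # Minimal key-escrow sketch: IT retains a copy of the encryption key
    # before any data is encrypted with it. Requires the third-party
    # "cryptography" package; the paths below are hypothetical examples.
    from pathlib import Path
    from cryptography.fernet import Fernet

    ESCROW_DIR = Path("/secure/it-key-escrow")   # IT-controlled location (assumed)
    DATA_FILE = Path("customer-data.csv")        # example file to protect

    def encrypt_with_escrow(data_file: Path, escrow_dir: Path) -> Path:
        key = Fernet.generate_key()

        # Escrow the key first; if this step fails, nothing is encrypted.
        escrow_dir.mkdir(parents=True, exist_ok=True)
        (escrow_dir / (data_file.name + ".key")).write_bytes(key)

        # Encrypt the file only after the key is safely escrowed.
        out_file = data_file.with_name(data_file.name + ".enc")
        out_file.write_bytes(Fernet(key).encrypt(data_file.read_bytes()))
        return out_file

    if __name__ == "__main__":
        print("Encrypted copy written to", encrypt_with_escrow(DATA_FILE, ESCROW_DIR))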

4. Actively evaluate InfoSec response and readiness in the cloud.

a. IT should have a means to access potential sources of organizational data. If data is stored on an employee’s tablet or at a third-party data storage provider, IT should have a vetted plan for access and retrieval of organizational data. That plan should be tested in advance, not for the first time when a potential security incident arises.

b. Important digital assets should be accessible from more than one source and should be available within hours and not days. IT should have backup repositories of corporate data, in particular for data stored in cloud environments. This may include using a combination of cloud providers to store data and having an explicit agreement on the timing and costs required to retrieve data (in the event of an incident).

c. Audit systems should be turned on and used. Cloud providers often have built-in auditing capability that ranges from data field tracking (e.g., a phone number) to file revision history. The responsibility for setting up audit capability belongs to the organization. As part of using a cloud provider’s technology, the use of auditing should be defined, documented and implemented.

d. IT staff should have the knowledge and skills to access and review log files. The diversity and complexity of log files have grown with the number of technologies in use by an organization. Cross-correlating log files across differing technology platforms requires specialized knowledge and advanced training. If an organization lacks the skills to analyze log files, its ability to detect and investigate potential security events may be severely compromised.
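To make the cross-correlation point concrete, the sketch below (plain Python standard library; the file names and timestamp formats are hypothetical examples) merges events from two differently formatted log sources into a single chronological timeline, which is typically the first step in an investigation.

    # Merge two differently formatted log files into one chronological timeline.
    # File names and timestamp formats are hypothetical examples.
    from datetime import datetime

    def parse_vpn_line(line):
        # e.g. "2015-03-01 14:22:05 user=jsmith action=login src=203.0.113.7"
        ts = datetime.strptime(line[:19], "%Y-%m-%d %H:%M:%S")
        return ts, "vpn", line[20:].strip()

    def parse_cloud_line(line):
        # e.g. "01/Mar/2015:14:23:10 GET /admin 200"
        ts = datetime.strptime(line[:20], "%d/%b/%Y:%H:%M:%S")
        return ts, "cloud", line[21:].strip()

    def build_timeline(vpn_path, cloud_path):
        events = []
        with open(vpn_path) as f:
            events.extend(parse_vpn_line(l) for l in f if l.strip())
        with open(cloud_path) as f:
            events.extend(parse_cloud_line(l) for l in f if l.strip())
        return sorted(events)   # chronological order across both sources

    if __name__ == "__main__":
        for ts, source, detail in build_timeline("vpn.log", "cloud_access.log"):
            print(ts.isoformat(), source, detail, sep="  ")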

5. Incident response plans and investigation practices should cover scenarios where data is stored in the cloud or on mobile devices.

Hackers have become more aggressive in seeking out data repositories. As organizations continue to adopt cloud and mobile technologies, information security must keep pace and extend the same internal focus on information security to external sources of organizational data. In particular, incident response plans should cover an increasing phenomenon—where attackers infiltrate an organization’s physical network solely to gain the keys to its cloud data repository.

The financial industry has long been known for “repackaging risk” – slicing and dicing investments to lessen their aggregate risk. During the 2008 subprime mortgage crisis, the repackaging process eventually reached the point where no one knew the real financial risk, who exactly was exposed to it, and where and how the risk was concentrated.

A similar process is happening today for cyber risk. In a process known as “Cyberization,” organizations are unknowingly exposed to cyber risk outside their own walls because they have outsourced, interconnected or otherwise exposed themselves to an increasingly complex network of networks. Their cyber risk starts with their internal corporate network and security practices and expands outward to their counterparties and affiliates, their supply chain and their outsourcing partners. This blog post from Kivu will help explain what Cyberization is and the aggregate risk that organizations face.

How Leveraging Technology Leads to Increased Cyber Risk

Organizations today rely more and more on technology to increase efficiency and lower costs, making it possible to be more profitable while deploying fewer resources. This trend makes global cyberization more likely because the Internet is a tightly coupled system that interconnects organizations, societies and economies. With so much interdependency, any disruption in the system is likely to have a cascading effect.

Cyber risk management often assumes that risk is simply the aggregation of local technology and procedures within an organization. In general, risk managers focus mostly on what is going on inside their own walls. Today’s cyber risk managers need to understand, however, that cyber risk is not self-contained within individual enterprises. They must expand their horizons and look far beyond their boundary walls.

Factors to Consider in Cyber Risk Management

- Internal IT Enterprise: risk associated with an organization’s own IT. Examples: hardware, software, people and processes.
- Counterparties & Partners: risk from dependence on, or direct interconnection with, outside organizations. Examples: partnerships, vendors, associations.
- Outsourcing: risk from contractual relationships with external suppliers of services. Examples: IT and cloud providers, HR, legal, accounting and consultancy.
- Supply Chain: risk to the IT sector and to traditional supply chain and logistics functions. Examples: exposure to a single country, counterfeit or tampered products.
- Disruptive Technologies: risk from the unseen effects of, or disruptions from, new technologies, both those already in existence and those due soon. Examples: driverless cars, automated digital appliances, embedded medical devices.
- Upstream Infrastructure: risk from disruptions to infrastructure relied upon by economies and societies, such as electric, oil or gas infrastructure, financial systems and telecom. Examples: Internet infrastructure, Internet governance.
- External Shocks: risk from incidents outside the control of an organization that are likely to have cascading effects. Examples: international conflicts, malware pandemics, natural disasters.

About Kivu

Kivu is a licensed California private investigations firm, which combines technical and legal expertise to deliver investigative, discovery and forensic solutions worldwide. The author, Elgan Jones, is the Director of Cyber Investigations at Kivu Consulting in Washington, DC. For more information about cyber risk management and mitigating the effects of cyberization, please contact Kivu.

The Wayback Machine is a digital archive of Internet content, consisting of snapshots of web pages across time. The frequency of web page snapshots is variable, so not all web site updates are recorded. There are sometimes intervals of several weeks or even years between snapshots. Web page snapshots usually become available and searchable on the Internet more than six months after they are archived. Kivu uses information archived in the Wayback Machine in its computer forensics investigations.

The Wayback Machine was founded in 1996 by Brewster Kahle and Bruce Gilliat, who were also the founders of a company known as Alexa Internet, now an Amazon company. Alexa is a search engine and analytics company that serves as a primary aggregator of Internet content sources (domains) for the Wayback Machine. Individuals may also upload and publish a web page to the Wayback Machine for archiving.

Content accumulated within the Wayback Machine’s repository is collected using spidering or web-crawling software. The Wayback Machine’s spidering software identifies a domain, often derived from Alexa, and then follows a series of rules to catalog and retrieve content. The content is captured and stored as web pages.

The snapshots available for a specific domain can be viewed by entering a Uniform Resource Locator (URL) of the form http://web.archive.org/web/*/DOMAIN.COM into a browser’s address field, where DOMAIN.COM is replaced with the domain name of interest.
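For scripted checks, the Internet Archive also publishes a simple availability API. The sketch below (plain Python standard library; example.com is a placeholder domain) asks for the archived snapshot closest to a given date. The response fields shown are those documented by the Internet Archive at the time of writing and may change.

    # Query the Wayback Machine availability API for the snapshot closest
    # to a target date. "example.com" is a placeholder domain.
    import json
    import urllib.parse
    import urllib.request

    def closest_snapshot(url, timestamp=None):
        params = {"url": url}
        if timestamp:                       # optional YYYYMMDD target date
            params["timestamp"] = timestamp
        api = "https://archive.org/wayback/available?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(api) as resp:
            data = json.load(resp)
        return data.get("archived_snapshots", {}).get("closest")

    if __name__ == "__main__":
        snap = closest_snapshot("example.com", timestamp="20140101")
        if snap:
            print("Closest snapshot:", snap.get("timestamp"), snap.get("url"))
        else:
            print("No archived snapshot found.")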


The Wayback Machine does not record everything on the Internet

A web site’s robots.txt file identifies rules for spidering its content. If a domain does not permit crawling, the Wayback Machine does not index the domain’s content. In place of content, the Wayback Machine records a “no crawl” message in its archive snapshot for the domain.
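The sketch below (plain Python standard library) shows how such a robots.txt check works in practice. The "ia_archiver" user-agent is the one historically associated with Alexa/Internet Archive crawling; treat it as an illustrative assumption rather than a definitive identifier.

    # Check whether a site's robots.txt would allow an archiving crawler.
    # "ia_archiver" is the user-agent historically associated with
    # Alexa/Internet Archive crawling; adjust as needed.
    from urllib.robotparser import RobotFileParser

    def crawl_permitted(domain, user_agent="ia_archiver"):
        rp = RobotFileParser()
        rp.set_url(f"https://{domain}/robots.txt")
        rp.read()                             # fetch and parse the rules
        return rp.can_fetch(user_agent, f"https://{domain}/")

    if __name__ == "__main__":
        for domain in ("example.com", "archive.org"):
            print(domain, "crawl permitted:", crawl_permitted(domain))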

The Wayback Machine does not capture content as a user would see it in a browser. Instead, the Wayback Machine extracts content from where it is stored on a server, often HTML files. For each web page of content, the Wayback Machine captures content that is directly stored in the web page and, if possible, content that is stored in related external files (e.g., image files).

The Wayback Machine searches web pages in a domain by following hyperlinks to other content within the same domain. Hyperlinks to content outside of the domain are not indexed. The Wayback Machine may not capture all content within the same domain. In particular, dynamic web pages may contain missing content, as spidering may not be able to retrieve all software code, images, or other files.

The Wayback Machine works best at cataloging standard HTML pages. However, there are many cases where it does not catalog all content within a web page, and a web page may appear incomplete. Images that are restricted by a robots.txt file appear gray. Dynamic content such as flash applications or content that is reliant on server-side computer code may not be collected.

The Wayback Machine may attempt to compensate for the missing content by linking to other sources (originating from the same domain). One method to substitute missing content is linking to similar content in other Wayback Machine snapshots. A second method is linking to web pages on the “live” web, currently available web pages at the source domain. There are also cases where the Wayback Machine displays an “X”, such as for missing images, or presents what appears to be a blank web page.

HTML or other source code is also archived

The Wayback Machine may capture the links associated with the page content but not acquire all of the content to fully re-create a web page. In the case of a blank archived web page, for example, HTML and other software code can be examined to determine the contents of the page. A review of the underlying HTML code might reveal that the page content is a movie or a flash application. (Underlying software code can be examined using the “View Source” functionality within a browser.)
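As a rough illustration of that source-code review, the snippet below (plain Python standard library; the snapshot URL is a hypothetical placeholder) downloads an archived page and lists tags that typically indicate embedded media such as Flash or video, which is a common reason a snapshot renders as a blank page.

    # Download an archived page and list tags that usually indicate embedded
    # media (Flash, video, iframes). The snapshot URL is a placeholder.
    import re
    import urllib.request

    SNAPSHOT_URL = "https://web.archive.org/web/20140101000000/http://example.com/"

    def embedded_media_tags(url):
        with urllib.request.urlopen(url) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        # Look for tags commonly used to embed non-HTML content.
        return re.findall(r"<(object|embed|video|iframe)\b[^>]*>", html, re.IGNORECASE)

    if __name__ == "__main__":
        tags = embedded_media_tags(SNAPSHOT_URL)
        print(f"Found {len(tags)} embedded-media tag(s):",
              sorted(set(t.lower() for t in tags)))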

Wayback Machine data is archived in the United States

The Wayback Machine archives are stored in a Santa Clara, California data center. For disaster recovery purposes, a copy of the Wayback Machine is mirrored to Bibliotheca Alexandrina in Alexandria, Egypt.

Kivu is a licensed California private investigations firm, which combines technical and legal expertise to deliver investigative, discovery and forensic solutions worldwide. The author, Megan Bell, directs data analysis projects and manages business development initiatives at Kivu.

For more information about The Wayback Machine and how it is used in computer forensics investigations, please contact Kivu.

Cyber incidents and data breaches are often the result of computer security misconfigurations in a system’s network or software. We have found at Kivu Consulting that many of the same misconfigurations have allowed an intrusion to happen, an exploit to be executed or data to be extracted from a particular system. Security misconfigurations can also hamper an incident analysis by limiting the availability of important artifacts needed for a data breach investigation.

Listed below are the top 10 common computer security misconfigurations and how to avoid them:

1. Logging left at default or turned off

Many system logs, especially those found in the Windows operating system, have a default size limit or a limit on the number of days that historical logs are kept. Many times, due to budget or storage constraints, standard system logging is left at the default settings or is disabled. The affected records include account logins/logouts, failed login attempts, software installations and cleared logs. Unfortunately, when logs are disabled from collecting data, there is no record of what is happening on a computer system.

Without system logs, a business has no way of knowing whether an intruder is guessing passwords or account names, or whether it is or was under attack. If an intrusion isn’t detected until several months later, important system records may no longer be available. Kivu recommends that every organization review its system logging procedures and ensure that critical information is stored for a sufficient amount of time.

Also, companies often record only failed login attempts. Logging failed attempts is a great way to detect whether a computer system has been attacked, but what happens if the intruder actually gets in? If a company is not tracking successful logins, it might not know whether an attack succeeded. Tracking all logins is particularly important for determining whether a security breach has occurred from an unrecognized IP address (e.g., an IP address in China).
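As a minimal example of why successful logins matter, the sketch below (plain Python; the log path, line format and allow-list are hypothetical examples) scans a Linux-style authentication log for accepted SSH logins and flags source IP addresses that are not on a known list.

    # Flag successful SSH logins from IP addresses outside a known allow-list.
    # The log path, line format and allow-list are hypothetical examples.
    import re

    AUTH_LOG = "/var/log/auth.log"                  # assumed location
    KNOWN_IPS = {"192.0.2.10", "192.0.2.11"}        # office / VPN addresses (example)
    ACCEPTED = re.compile(r"Accepted \w+ for (\S+) from (\S+)")

    def unexpected_logins(log_path):
        hits = []
        with open(log_path, errors="replace") as f:
            for line in f:
                m = ACCEPTED.search(line)
                if m and m.group(2) not in KNOWN_IPS:
                    hits.append((m.group(1), m.group(2), line.strip()))
        return hits

    if __name__ == "__main__":
        for user, ip, line in unexpected_logins(AUTH_LOG):
            print(f"Successful login for {user!r} from unrecognized IP {ip}: {line}")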

2. 50 servers, 50 log locations!

In today’s environment of virtualized and cloud based computing, a system administrator may have to monitor dozens of servers across the globe. To simplify this task, Kivu recommends that companies collect logs from all of their servers into a single, centralized logging system, preferably one that indexes their logs, scans them for security events and alerts the appropriate staff member if an event is detected.

A centralized logging system that provides easy search and retrieval of historical log data is crucial for an incident investigation. Kivu has sometimes lost days while investigating a security incident, when every minute is critical, because important log data was stored on as many as 50 individual servers.
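One lightweight way to get application events into a central collector is standard syslog forwarding. The sketch below (Python standard library; the host name loghost.example.com is a placeholder for your own log server) sends events to a central syslog listener over UDP port 514.

    # Forward application events to a central syslog collector.
    # "loghost.example.com" is a placeholder for your log server.
    import logging
    import logging.handlers

    def build_central_logger(host="loghost.example.com", port=514):
        logger = logging.getLogger("app")
        logger.setLevel(logging.INFO)
        handler = logging.handlers.SysLogHandler(address=(host, port))  # UDP syslog
        handler.setFormatter(logging.Formatter("%(name)s: %(levelname)s %(message)s"))
        logger.addHandler(handler)
        return logger

    if __name__ == "__main__":
        log = build_central_logger()
        log.warning("test event: failed login for account 'admin' from 203.0.113.7")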

3. Former employee accounts not disabled or deleted

When an employee leaves an organization and has security credentials that allow remote connection or login from a workstation on a trusted internal network, the ex-employee’s accounts should be disabled immediately. Kivu has seen many cases in which an old, still-enabled VPN or administrative account was used for an intrusion.
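A simple periodic cross-check helps catch accounts that should have been disabled. The sketch below (plain Python; the file names and CSV column names are hypothetical examples) compares an export of enabled accounts against the current HR roster and lists accounts with no matching active employee.

    # Report enabled accounts that have no matching active employee.
    # File names and CSV column names are hypothetical examples.
    import csv

    def load_column(path, column):
        with open(path, newline="") as f:
            return {row[column].strip().lower() for row in csv.DictReader(f)}

    def orphaned_accounts(accounts_csv="enabled_accounts.csv", roster_csv="hr_roster.csv"):
        accounts = load_column(accounts_csv, "username")
        employees = load_column(roster_csv, "username")
        return sorted(accounts - employees)

    if __name__ == "__main__":
        for account in orphaned_accounts():
            print("Enabled account with no active employee:", account)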

4. Same root or local administrator password for all public facing computers

We see this system misconfiguration more often than any other problem. Many organizations set the same password for the root (on Linux), Administrator, or super-user account across all of their systems, including web servers, cloud-based servers and servers in the DMZ. If an intruder compromises that password, they may be able to log in to all of a company’s servers, including the server that may be acting as an identity manager (e.g., an SSH key master or domain controller).

Kivu recommends that organizations follow the simple practice of treating their public facing (untrusted) servers with the mindset that they will be compromised. We advise creating a different set of account credentials for the servers that reside on their trusted internal networks.

5. Root or administrator accounts can connect from the Internet or DMZ

The convenience of being able to troubleshoot and perform system and network administration remotely often comes with a cost. SSH, by default, does not allow the super user account root to log in remotely. Yet in many security incident investigations, Kivu has found that the system administrators have been ONLY logging in as root and have enabled root login from remote locations. This convenience also allows anyone from outside the organization to brute force the root password.

We recommend requiring system administrators to log in to a VPN before connecting to perform administrative or systems work. With cloud-hosted servers, a VPN may not be an option. In that case, companies can lock down administrative access to only a few IP addresses. They can combine this with a security appliance, or with Snort running on the host, to detect and drop IP address spoofing. They can also consider an RSA certificate solution.
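A quick configuration audit can catch remote root exposure before an attacker does. The sketch below (plain Python; /etc/ssh/sshd_config is the usual OpenSSH location but may differ on your systems) reads sshd_config and warns if PermitRootLogin is not explicitly disabled.

    # Warn if PermitRootLogin is not explicitly disabled in sshd_config.
    # The path below is the usual OpenSSH location; adjust for your systems.
    from pathlib import Path

    SSHD_CONFIG = Path("/etc/ssh/sshd_config")

    def permit_root_login(config_path=SSHD_CONFIG):
        for raw in config_path.read_text().splitlines():
            line = raw.strip()
            if not line or line.startswith("#"):
                continue
            parts = line.split(None, 1)
            if parts[0].lower() == "permitrootlogin" and len(parts) > 1:
                return parts[1].strip().lower()   # first value wins in sshd_config
        return None

    if __name__ == "__main__":
        value = permit_root_login()
        if value != "no":
            print(f"WARNING: PermitRootLogin is {value!r}; consider setting it to 'no' "
                  "and requiring administrators to connect through a VPN first.")
        else:
            print("PermitRootLogin is disabled.")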

6. Default password on [insert network device name here]

A simple search on the Internet for “default password on insert network device vendor name here” will return all known default passwords for the admin or manager accounts on an organization’s network firewalls, routers and wireless access points. Any device setup manuals available online will also have the default passwords listed. Kivu recommends that companies change these defaults at configuration time and before deployment to avoid security incidents.

7. Administrative accounts using simple passwords

We continue to see easily guessed passwords used for administrative accounts. Dictionary words can be brute forced, even when vowels are swapped out with symbols, for example: “honeybadger” becomes “H0neyB@dger.” We have found that using a randomly generated 16-character password for root and other administrative accounts is beneficial for reducing an organization’s attack surface.
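The snippet below (Python standard library) shows one straightforward way to generate such a password, drawing 16 characters at random from letters, digits and symbols.

    # Generate a random 16-character password from letters, digits and symbols.
    import secrets
    import string

    def random_password(length=16):
        alphabet = string.ascii_letters + string.digits + "!@#$%^&*()-_=+"
        return "".join(secrets.choice(alphabet) for _ in range(length))

    if __name__ == "__main__":
        print(random_password())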

8. Remote desktop, public facing, default ports, no firewall or VPN

There are numerous exploits and vulnerabilities for many popular remote access software services. Kivu often sees no firewall or VPN between the computer offering remote access and the Internet. To reduce an organization’s risk, we recommend that companies implement remote access with multiple layers of security, preferably in a DMZ, where remote traffic is forced through an intrusion detection system.

9. No access control lists – EVERYONE group is granted access to everything

This issue is common in smaller companies, non-profits and the education sector. Everyone in the organization has full access to all of the data. If an employee’s account is compromised, that account may have access to HR and financial information, even though the employee does not work in those departments. Kivu recommends that organizations classify their data by level of confidentiality or access. Once data is classified, access can be controlled with security groups.

10. Absence of a regular software patching routine

Many security exploits that lead to an intrusion or data breach can be avoided by simply keeping up on software updates and vulnerability patches. If your company is not keeping up with software vulnerability patching, your public webserver or your customer database server is a security breach waiting to happen. We recommend that organizations have procedures in place to ensure that timely updates are performed.

Conclusion

While many of the above computer security misconfigurations are well known, they continue to occur on a regular basis. Kivu recommends that organizations regularly monitor their system logs and check with their software vendors for security recommendations particular to their computer environment. We also recommend that companies keep up-to-date by reading security blogs and checking in with the SANS Internet Storm Center.

For more information about Common Computer Security Misconfigurations, please contact Kivu Consulting.