
CCPA and GDPR UPDATE: Unstructured Enterprise Data in Scope of Compliance Requirements

An earlier version of this article appeared on Legaltech News

By John Patzakis

A core requirement of both the GDPR and the similar California Consumer Privacy Act (CCPA), which becomes enforceable on July 1, is the ability to demonstrate and prove that personal data is being protected. This requires information governance capabilities that allow companies to efficiently identify and remediate the personal data of EU and California residents. For instance, the UK Information Commissioner’s Office (ICO) advises that “The GDPR places a high expectation on you to provide information in response to a SAR (Subject Access Request). Whilst it may be challenging, you should make extensive efforts to find and retrieve the requested information.”

However, recent Gartner research notes that approximately 80% of the information stored by companies is “dark data”: unstructured, distributed data that can pose significant legal and operational risks. With much of the global workforce now working remotely, this is of special concern, as nearly all of the company data maintained and used by remote employees is unstructured. Unstructured enterprise data generally refers to searchable data such as emails, spreadsheets and documents on laptops, file servers, and social media.

The GDPR

An organization’s GDPR compliance efforts need to address any personal data contained within unstructured electronic data throughout the enterprise, as well as the structured data found in CRM, ERP and various centralized records management systems. Personal data is defined in the GDPR as: “any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.”

Under the GDPR, there is no distinction between structured and unstructured electronic data in terms of the regulation’s scope. (There is separate guidance regarding “structured” paper records, discussed below.) The key consideration is whether a data controller or processor has control over personal data, regardless of where it is located in the organization. Nonetheless, there is some confusion about the scope of the GDPR’s coverage across structured as well as unstructured electronic data systems.

The UK ICO, a key government regulator that interprets and enforces the GDPR, has recently issued important draft guidance on the scope of GDPR data subject access rights, including as it relates to unstructured electronic information. Notably, the ICO observes that large data sets, including data analytics outputs and unstructured data volumes, “could make it more difficult for you to meet your obligations under the right of access. However, these are not classed as exemptions, and are not excuses for you to disregard those obligations.”

Additionally, the ICO guidance advises that “emails stored on your computer are a form of electronic record to which the general principles (under the GDPR) apply.” In fact, the ICO notes that employees’ home computers and personal email accounts are subject to the GDPR if they contain personal data originating from the employer’s networks or processing activities. This is especially notable under the new normal of social distancing, where much of a company’s data (and associated personal information) is being stored on remote employee laptops.

The ICO also provides guidance on several related subjects that shed light on its stance regarding unstructured data:

Archived Data: According to the ICO, data stored in electronic archives is generally subject to the GDPR, noting that there is no “technology exemption” from the right of access. Enterprises “should have procedures in place to find and retrieve personal data that has been electronically archived or backed up.” Further, enterprises “should use the same effort to find information to respond to a SAR as you would to find archived or backed-up data for your own purposes.”

Deleted Data: The ICO’s view is that deleted data is generally within the scope of GDPR compliance, provided there is no intent, or systematic ability, to readily recover it. The ICO says it “will not seek to take enforcement action against an organisation that has failed to use extreme measures to recreate previously ‘deleted’ personal data held in electronic form. We do not require organisations to use time and effort reconstituting information that they have deleted as part of their general records management.”

Under this guidance, however, organizations that invest in and deploy re-purposed computer forensic tools featuring automated undelete capabilities may be held to a higher standard. Deploying such systems can reflect both the intent and the systematic technical ability to recover deleted data.

Paper Records: Paper records that are part of a “structured filing system” are subject to the GDPR. Specifically, if an enterprise holds “information about the requester in non-electronic form (e.g. in paper files or on microfiche records),” then such hard-copy records are considered personal data accessible via the right of access, provided they are “held in a ‘filing system.’” This segment of the guidance reflects that references to “unstructured data” in European parlance usually pertain to paper records. The ICO notes in separate guidance that “the manual processing of unstructured personal data, such as unfiled handwritten notes on paper” is outside the scope of the GDPR.

GDPR Article 4 defines a “filing system” as meaning “any structured set of personal data which are accessible according to specific criteria, whether centralized, decentralized or dispersed on a functional or geographical basis.” The only form of “unstructured data” that would not be subject to GDPR would be unfiled paper records like handwritten notes or legacy microfiche.

The CCPA  

The California Attorney General (AG) released a second and presumably final round of draft regulations under the California Consumer Privacy Act (CCPA) that reflect how unstructured electronic data will be treated under the Act. The proposed rules outline how the California AG is interpreting and will be enforcing the CCPA. Under § 999.313(d)(2), data from archived or backup systems are—unlike the GDPR—exempt from the CCPA’s scope, unless those archives are restored and become active. Additional guidance from the Attorney General states: “Allowing businesses to delete the consumer’s personal information on archived or backup systems at the time that they are accessed or used balances the interests of consumers with the potentially burdensome costs of deleting information from backup systems that may never be utilized.”

What is very notable is that the only technical exception to the CCPA is unrestored archived and back-up data. Like the GDPR, there is no distinction between unstructured and structured electronic data. In the first round of public comments, an insurance industry lobbying group argued that unstructured data be exempted from the CCPA. As reflected by revised guidance, that suggestion was rejected by the California AG.

For the GDPR, the UK ICO correctly advises that enterprises “should ensure that your information management systems are well-designed and maintained, so you can efficiently locate and extract information requested by the data subjects whose personal data you process and redact third party data where it is deemed necessary.” This is why Forrester Research notes that “Data Discovery and Classification are the foundation for GDPR compliance.”

Establish and Enforce Data Privacy Policies

To achieve GDPR and CCPA compliance, organizations must first ensure that explicit policies and procedures are in place for handling personal information. Once they are established, it is important to demonstrate to regulators that such policies and procedures are being followed and operationally enforced. A key first step is to build a data map of where and how personal data is stored in the enterprise. This exercise is required under the GDPR Article 30 documentation provisions.

An operational data audit and discovery capability across unstructured data sources allows enterprises to efficiently map, identify, and remediate personal information in order to respond to regulators and data subject access requests from EU and California citizens. This capability must be able to search and report across several thousand endpoints and other unstructured data sources, and return results within minutes instead of weeks or months as is the case with traditional crawling tools. This includes laptops of employees working from home.

These processes and capabilities are not only required for data privacy compliance but are also needed for broader information governance and security requirements, anti-fraud compliance, and e-discovery.

Implementing these measures proactively, with routine and consistent enforcement using solutions such as X1 Distributed GRC, will go a long way toward mitigating risk, responding efficiently to data subject access requests, and improving overall operational effectiveness.


Remote ESI Collection and Data Audits in the Time of Social Distancing

By John Patzakis

The vital global effort to contain the COVID-19 pandemic will likely disrupt our lives and workflows for some time. While our personal and business lives will hopefully return to normal soon, the trend of an increasingly remote and distributed workforce is here to stay. This “new normal” will necessitate relying on the latest technology and updated workflows to comply with legal, privacy, and information governance requirements.

From an eDiscovery perspective, the legacy manual collection workflow of travel, physical access, and one-time mass collection of custodian laptops, file servers and email accounts is a non-starter under current travel ban and social distancing policies, and it does not scale for the new era of remote and distributed workforces. In addition to the public health constraints, manual collection efforts are expensive, disruptive and time-consuming, as an “overkill” forensic image collection process is often employed, substantially driving up eDiscovery costs.

When it comes to technical approaches, endpoint forensic crawling methods are likewise a non-starter. Network bandwidth constraints, coupled with the requirement to migrate all endpoint data back to the forensic crawling tool, render the approach ineffective, especially with remote workers needing to VPN into a corporate network. Right now, corporate network bandwidth is at a premium, and the last thing a company needs is its network shut down by inefficient remote forensic tools.

For example, with a forensic crawling tool, to search a custodian’s laptop with 10 gigabytes of email and documents, all 10 gigabytes must be copied and transmitted over the network, where the data is then searched, a process that takes at least several hours per computer. So most organizations choose to force-collect all 10 gigabytes. The case of U.S. ex rel. McBride v. Halliburton Co., 272 F.R.D. 235 (2011), illustrates this specific pain point well. In McBride, Magistrate Judge John Facciola’s instructive opinion outlines Halliburton’s eDiscovery struggles to collect and process data from remote locations:

“Since the defendants employ persons overseas, this data collection may have to be shipped to the United States, or sent by network connections with finite capacity, which may require several days just to copy and transmit the data from a single custodian . . . (Halliburton) estimates that each custodian averages 15–20 gigabytes of data, and collection can take two to ten days per custodian. The data must then be processed to be rendered searchable by the review tool being used, a process that can overwhelm the computer’s capacity and require that the data be processed by batch, as opposed to all at once.”

Halliburton represented to the court that it spent hundreds of thousands of dollars on eDiscovery for only a few dozen remotely located custodians. The need to force-collect the remote custodians’ entire data sets and then sort them out in the expensive eDiscovery processing phase, instead of culling, filtering and searching the data at the point of collection, drove up these costs.
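The transfer arithmetic behind this pain point can be roughed out with simple numbers. The sketch below uses Halliburton’s own 15–20 GB per-custodian estimate, but the custodian count, link speed, and responsiveness rate are illustrative assumptions, not figures from the case:

```python
# Back-of-the-envelope comparison: force-collecting everything over the
# network versus collecting only the responsive subset after culling in
# place. All parameters below are assumptions for illustration.

def transfer_hours(gigabytes: float, mbps: float) -> float:
    """Hours needed to move `gigabytes` over a link with `mbps` throughput."""
    megabits = gigabytes * 8 * 1000  # GB -> megabits (decimal units)
    return megabits / mbps / 3600

CUSTODIANS = 30          # "a few dozen" remote custodians
GB_PER_CUSTODIAN = 17.5  # midpoint of the 15-20 GB estimate in McBride
LINK_MBPS = 10           # assumed effective per-custodian VPN throughput

full_collection = transfer_hours(CUSTODIANS * GB_PER_CUSTODIAN, LINK_MBPS)

# If iterative search-in-place culls the set so only ~5% is responsive:
targeted = transfer_hours(CUSTODIANS * GB_PER_CUSTODIAN * 0.05, LINK_MBPS)

print(f"Force-collect everything: ~{full_collection:.0f} hours of transfer")
print(f"Collect responsive 5% only: ~{targeted:.1f} hours of transfer")
```

Even before processing and review costs are counted, the raw transfer time scales linearly with the volume collected, which is why culling at the point of collection dominates the economics.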

X1 Distributed Discovery (X1DD) solves this collection challenge and is specially designed to address the demands of remote and distributed workforces. X1DD enables enterprises to quickly and easily search across up to thousands of distributed endpoints and data servers from a central location. Legal and compliance teams can easily perform unified complex searches across both unstructured content and metadata, obtaining statistical insight into the data in minutes, and full results with completed collection in hours instead of days or weeks. The key to X1’s scalability is its unique ability to index and search data in place, enabling highly detailed and iterative search and analysis, and then collecting only the data responsive to those steps.

X1DD operates on-demand where your data currently resides — on desktops, laptops, servers, or even the cloud — without disruption to business operations and without requiring extensive or complex hardware configurations. After indexing of systems has completed (typically a few hours to a day depending on data volumes), clients and their outside counsel or service provider may then:

  • Conduct Boolean and keyword searches of relevant custodial data sources for ESI, returning search results within minutes by custodian, file type and location.
  • Preview any document in-place, before collection, including any or all documents with search hits.
  • Remotely collect and export responsive ESI from each system directly into a Relativity® or RelativityOne® workspace for processing, analysis and review, or into any other processing or review platform via a standard load file. Export text and metadata only, or full native files.
  • Export responsive ESI directly into other analytics engines, e.g. Brainspace®, H5® or any other platform that accepts a standard load file.
  • Conduct iterative “search/analyze/export-into-Relativity” processes as frequently and as many times as desired.
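The search-in-place pattern underlying the workflow above can be reduced to a simple idea: each endpoint evaluates the query against its own local data and returns only lightweight hit metadata, while full content crosses the network only for items actually selected for collection. The toy sketch below is a conceptual illustration of that pattern (hypothetical structures; not X1’s actual protocol or API):

```python
# Conceptual sketch of "search in place, collect on demand". Each Endpoint
# stands in for an agent running on a custodian machine; federated_search
# stands in for the central console fanning a query out to all endpoints.
from dataclasses import dataclass

@dataclass
class Hit:
    custodian: str
    path: str
    size_bytes: int

class Endpoint:
    def __init__(self, custodian: str, documents: dict[str, str]):
        # documents maps path -> text; a real agent would keep a full-text index
        self.custodian = custodian
        self.documents = documents

    def search(self, term: str) -> list[Hit]:
        """Runs locally on the endpoint; only hit metadata crosses the wire."""
        return [Hit(self.custodian, path, len(text))
                for path, text in self.documents.items()
                if term.lower() in text.lower()]

    def collect(self, paths: list[str]) -> dict[str, str]:
        """Full content is transferred only for selected, responsive items."""
        return {p: self.documents[p] for p in paths}

def federated_search(endpoints: list[Endpoint], term: str) -> list[Hit]:
    # The console aggregates metadata-only results from every endpoint.
    return [hit for ep in endpoints for hit in ep.search(term)]

laptops = [
    Endpoint("alice", {"q3_report.docx": "Contract amendment draft",
                       "notes.txt": "lunch plans"}),
    Endpoint("bob", {"inbox.eml": "Re: contract pricing",
                     "draft.docx": "holiday schedule"}),
]
hits = federated_search(laptops, "contract")
print([(h.custodian, h.path) for h in hits])

# Collection is deferred until the search strings are finalized:
collected = laptops[0].collect([h.path for h in hits if h.custodian == "alice"])
```

The design point is that the expensive operation (moving bytes) is postponed until after the cheap operation (searching local indexes) has narrowed the target set, which is what makes iterative search across thousands of endpoints tractable.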

To learn more about this capability purpose-built for remote eDiscovery collection and data audits, please contact us.


Incident Reporting Requirements Under GDPR and CCPA Require Effective Incident Response

By John Patzakis

The European General Data Protection Regulation (GDPR) is now in effect, but many organizations have not fully implemented compliance programs. For many organizations, one of the top challenges is complying with the GDPR’s tight 72-hour data breach notification window. Under GDPR Article 33, breach notification is mandatory where a data breach is likely to “result in a risk for the rights and freedoms of individuals.” This must be done within 72 hours of first becoming aware of the breach. Data processors are also required to notify their customers, the controllers, “without undue delay” after first becoming aware of a data breach.

In order to comply, organizations must accelerate their incident response times to quickly detect and identify a breach within their networks, systems, or applications, and must also improve their overall privacy and security processes. Being able to follow the GDPR’s mandate for data breach reporting is as important as being able to act quickly when the breach hits. Proper incident response planning and practice are essential for any privacy and security team, but the GDPR’s harsh penalties amplify the need to be prepared.

It is important, however, to note that the GDPR does not mandate reporting for every network security breach. It only requires reporting for breaches impacting the “personal data” of EU subjects. And Article 33 specifically notes that reporting is not required where “the personal data breach is unlikely to result in a risk to the rights and freedoms of natural persons.”

The California Consumer Privacy Act contains similar provisions. Notification is only required if a California resident’s data is actually compromised.

So after a network breach is identified, determining whether the personal data of an EU or California resident was actually compromised is critical, not only to comply where a breach actually occurred, but also to limit unnecessary reporting or over-reporting where an effective response analysis can rule out an actual personal data breach.

These breaches are perpetrated by outside hackers as well as insiders. An insider is any individual who has authorized access to corporate networks, systems or data. This may include employees, contractors, or others with permission to access an organization’s systems. With the increased volume of data and the increased sophistication and determination of attackers looking to exploit unwitting insiders or recruit malicious insiders, businesses are more susceptible to insider threats than ever before.

Much of the evidence of the scope of computer security incidents, and of whether subject personal data was actually compromised, is not found in firewall logs and typically cannot be flagged or blocked by intrusion detection or intrusion prevention systems. Instead, much of that information is found in the emails and locally stored documents of end users spread throughout the enterprise on file servers and laptops. To detect, identify and effectively report on data breaches, organizations need to be able to search across this data in an effective and scalable manner. Additionally, proactive search efforts can identify potential security violations such as misplaced sensitive IP, personal customer data, or even password “cheat sheets” stored in local documents.

To date, organizations have employed limited technical approaches to identify unstructured, distributed data stored across the enterprise, and have endured many struggles. For instance, forensic software agent-based crawling methods are commonly attempted, but they cause high computer resource utilization for each search initiated and push network bandwidth to its limits, rendering the approach ineffective and preventing compliance within tight reporting deadlines. Searching and auditing across even several hundred distributed endpoints in a repeatable and expedient fashion is effectively impossible under this approach.

What is needed is immediate visibility into unstructured, distributed data across the enterprise, through the ability to search and report across several thousand endpoints and other unstructured data sources, returning results within minutes instead of days or weeks. None of the traditional approaches comes close to meeting this requirement. It can, however, be met by the latest innovations in enterprise eDiscovery software.

X1 Distributed GRC represents a unique approach, enabling enterprises to quickly and easily search across multiple distributed endpoints from a central location. Legal, cybersecurity, and compliance teams can easily perform unified complex searches across both unstructured content and metadata, and obtain statistical insight into the data in minutes instead of days or weeks. With X1 Distributed GRC, organizations can proactively or reactively search for confidential data leakage as well as keyword signatures of personal data breach attacks, such as customized spear phishing attacks. X1 is the first product to offer true and massively scalable distributed searching that is executed in its entirety on the end-node computers for data audits across an organization. This game-changing capability vastly reduces costs and quickens response times while greatly mitigating risk and disruption to operations.


USDOJ Expects Companies to Proactively Employ Data Analytics to Detect Fraud

By John Patzakis and Craig Carpenter

In corporate fraud enforcement actions, the US Department of Justice considers the effectiveness of a company’s compliance program as a key factor when deciding whether to bring charges and the severity of any resulting penalties. Recently, prosecutors increased their emphasis on this policy with new evaluation guidelines about what they expect from companies under investigation.

The USDOJ manual features a dedicated section on assessing the effectiveness of corporate compliance programs in corporate fraud prosecutions, including FCPA matters. This section is a must read for any corporate compliance professional, as it provides detailed guidance on what the USDOJ looks for in assessing whether a corporation is committed to good-faith self-policing or is merely making hollow pronouncements and going through the motions.

The USDOJ manual advises prosecutors to determine if the corporate compliance program “is adequately designed for maximum effectiveness in preventing and detecting wrongdoing by employees and whether corporate management is enforcing the program or is tacitly encouraging or pressuring employees to engage in misconduct to achieve business objectives,” and that “[p]rosecutors should therefore attempt to determine whether a corporation’s compliance program is merely a ‘paper program’ or whether it was designed, implemented, reviewed, and revised, as appropriate, in an effective manner.”

Recently, Deputy Assistant Attorney General Matthew Miner provided important additional guidance through official public comments establishing that the USDOJ will be assessing whether compliance officers proactively employ data analytics technology in their reviews of companies that are under investigation.

Miner noted that the Justice Department has had success in spotting corporate fraud by relying on data analytics, and said that prosecutors expect compliance officers to do the same: “This use of data analytics has allowed for greater efficiency in identifying investigation targets, which expedites case development, saves resources, makes the overall program of enforcement more targeted and effective.” Miner further noted that he “believes the same data can tell companies where to look for potential misconduct.” Ultimately, the federal government wants “companies to invest in robust and effective compliance programs in advance of misconduct, as well as in a prompt remedial response to any misconduct that is discovered.”

Finally, “if misconduct does occur, our prosecutors are going to inquire about what the company has done to analyze or track its own data resources—both at the time of the misconduct, as well as at the time we are considering a potential resolution,” Miner said. In other words, companies must demonstrate a sincere commitment to identifying and investigating internal fraud with proper resources employing cutting edge technologies, instead of going through the motions with empty “check the box” processes.

With these mandates from government regulators for actual and effective monitoring and enforcement through internal investigations, organizations need effective and operational mechanisms for doing so. In particular, any anti-fraud and internal compliance program must have the ability to search and analyze unstructured electronic data, which is where much of the evidence of fraud and other policy violations can be best detected.

But to utilize data analytics platforms proactively, rather than in a much more limited reactive manner, the process needs to be moved “upstream,” to where unstructured data resides. This capability is best enabled by a process that extracts text from unstructured, distributed data in place and systematically sends that data, at massive scale, to an analytics platform along with the associated metadata and a globally unique identifier for each item.

One of the many challenges with traditional workflows is the massive data transfer associated with ongoing migration of electronic files and emails, the latter of which must be sent in whole containers such as PST files. This process alone can take weeks, chokes network bandwidth, and is highly disruptive to operations. By contrast, the load associated with text and metadata only is less than 1 percent of the full native item, so the possibilities here are very compelling. This architecture enables very scalable and proactive solutions to compliance, information security, and information governance use cases. The upload to AI engines would take hours instead of weeks, enabling continual machine learning to improve processes and accuracy over time, and enabling immediate action to be taken on identified threats or otherwise relevant information.
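The “less than 1 percent” figure is easy to sanity-check with rough numbers. The per-item sizes below are illustrative assumptions chosen for the sketch, not measurements of any particular corpus:

```python
# Rough sanity check: extracted text plus metadata versus the full native
# item. All per-item sizes here are assumptions for illustration.

NATIVE_MB = {                        # assumed average native item sizes
    "email_in_pst_container": 5.0,   # emails often travel inside whole PSTs
    "office_document": 2.0,
}
EXTRACTED_KB = 15.0  # assumed extracted text + metadata per item, in KB

for kind, native_mb in NATIVE_MB.items():
    ratio = (EXTRACTED_KB / 1024) / native_mb
    print(f"{kind}: extracted payload is {ratio:.2%} of the native item")

# Scale it up: a 100,000-item corpus at ~2 MB native vs ~15 KB extracted.
native_gb = 100_000 * 2.0 / 1024
extracted_gb = 100_000 * EXTRACTED_KB / 1024 / 1024
print(f"Corpus: {native_gb:.0f} GB native vs {extracted_gb:.1f} GB extracted")
```

Under these assumptions the extracted payload lands well under 1 percent of the native volume, which is why a text-only pipeline can feed an analytics engine in hours where whole-container migration takes weeks.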

The only solution that we are aware of that fulfills this vision is X1 Enterprise Distributed GRC. X1’s unique distributed architecture upends the traditional collection process by indexing at the distributed endpoints, enabling a direct pipeline of extracted text to the analytics platform. This innovative technology and workflow results in far faster and more precise collections and a more informed strategy in any matter.

Deployed at each endpoint or centrally in virtualized environments, X1 Enterprise allows practitioners to query many thousands of devices simultaneously, to utilize analytics before collecting, and to process while collecting directly into myriad review and analytics applications such as RelativityOne and Brainspace. X1 Enterprise empowers corporate eDiscovery, compliance, investigative, cybersecurity and privacy staff with the ability to find, analyze, collect and/or delete virtually any piece of unstructured user data wherever it resides, instantly and iteratively, all in a legally defensible fashion.

X1 displayed these powerful capabilities with Compliance DS in a recent webinar with a brief but substantive demo of our X1 Distributed GRC solution, emphasizing our innovative support of analytics engines through our game-changing ability to extract text in place with a direct feed into AI solutions.

Here is a link to the recording with a direct link to the 5 minute demo portion.

In addition to saving time and money, these capabilities are important to demonstrate a sincere organizational commitment to compliance versus maintaining a mere “paper program” – which the USDOJ has just said can provide critical mitigation in the event of an investigation or prosecution.


Government Regulators Reject “Paper” Corporate Compliance Programs Lacking Actual Enforcement

By John Patzakis

Recently, US Government regulators fined Stanley Black & Decker $1.8m after its subsidiary illegally exported finished power tools and spare parts to Iran, in violation of sanctions. The Government found that the tool maker failed to “implement procedures to monitor or audit [its subsidiary] operations to ensure that its Iran-related sales did not recur.”

Notably, the employees of the subsidiary concealed their activities by creating bogus bills of lading that misidentified delivery locations, and told customers to avoid writing “Iran” on business documents. This conduct underscores the importance of having a diligent internal monitoring and investigation capability that goes beyond mere review of standard transactional records in structured databases such as CRM systems. This type of conduct is best detected on employees’ laptops and other sources of unstructured data through effective internal investigation processes.

The Treasury Department stated that the Stanley Black & Decker case “highlights the importance of U.S. companies to conduct sanctions-related due diligence both prior and subsequent to mergers and acquisitions, and to take appropriate steps to audit, monitor and verify newly acquired subsidiaries and affiliates for … compliance.”

Further to this point, the US Department of Justice Manual features a dedicated section on assessing the effectiveness of corporate compliance programs in corporate fraud prosecutions, including FCPA matters. This section is a must read for any corporate compliance professional, as it provides detailed guidance on what the USDOJ looks for in assessing whether a corporation is committed to good-faith self-policing or is merely making hollow pronouncements and going through the motions.

The USDOJ cites United States v. Potter, 463 F.3d 9 (1st Cir. 2006), which provides that a corporation cannot “avoid liability by adopting abstract rules” that forbid its agents from engaging in illegal acts, because “[e]ven a specific directive to an agent or employee or honest efforts to police such rules do not automatically free the company for the wrongful acts of agents.” Id. at 25-26. See also United States v. Hilton Hotels Corp., 467 F.2d 1000, 1007 (9th Cir. 1972) (noting that a corporation “could not gain exculpation by issuing general instructions without undertaking to enforce those instructions by means commensurate with the obvious risks”).

The USDOJ manual advises prosecutors to determine if the corporate compliance program “is adequately designed for maximum effectiveness in preventing and detecting wrongdoing by employees and whether corporate management is enforcing the program or is tacitly encouraging or pressuring employees to engage in misconduct to achieve business objectives,” and that “[p]rosecutors should therefore attempt to determine whether a corporation’s compliance program is merely a ‘paper program’ or whether it was designed, implemented, reviewed, and revised, as appropriate, in an effective manner.”

With these mandates from government regulators for actual and effective monitoring and enforcement through internal investigations, organizations need effective and operational mechanisms for doing so. In particular, any anti-fraud and internal compliance program must have the ability to search and analyze unstructured electronic data, which is where much of the evidence of fraud and other policy violations can be best detected.

To help meet the “actual enforcement” requirements of government regulators, X1 Distributed Discovery (X1DD) enables enterprises to quickly and easily search across up to thousands of distributed endpoints and data servers from a central location.  Legal and compliance teams can easily perform unified complex searches across both unstructured content and metadata, obtaining statistical insight into the data in minutes, and full results with completed collection in hours, instead of days or weeks. Built on our award-winning and patented X1 Search technology, X1DD is the first product to offer true and massively scalable distributed data discovery across an organization. X1DD replaces expensive, cumbersome and highly disruptive approaches to meet enterprise investigation, compliance, and eDiscovery requirements.

Once the legal team is satisfied with a specific search string, after sufficient iteration, the data can then be collected by X1DD by simply hitting the ‘collect’ button. The responsive data is “containerized” at each end point and automatically transmitted to either a central location, or uploaded directly to Relativity, using Relativity’s import API where all data is seamlessly ready for review. Importantly, all results are tied back to a specific custodian, with full chain of custody and preservation of all file metadata. Here is a recording of a live public demo with Relativity, showing the very fast direct upload from X1DD straight into RelativityOne.

The effort described above, from iterative, distributed search through collection and transmittal straight into Relativity from hundreds of endpoints, can be accomplished in a single day. Using manual consulting services, the same project would require several weeks and hundreds of thousands of dollars in collection costs alone, not to mention significant disruption to business operations. Substantial costs associated with over-collection of data would mount as well, and could even dwarf collection costs through unnecessary attorney review time.

In addition to saving time and money, these capabilities are important to demonstrate a sincere organizational commitment to compliance versus maintaining a mere “paper program.”
