Category Archives: Information Management

Architecting a New Paradigm in Legal Governance

By Michael Rasmussen

Editor’s note: Today we are featuring a guest blog post from Michael Rasmussen, the GRC Pundit & Analyst at GRC 20/20 Research, LLC.

Exponential growth and change in business strategy, risks, regulations, globalization, distributed operations, competitive velocity, technology, and business data encumber organizations of all sizes. Gone are the years of simplicity in business operations.

Managing the complexity of business from a legal and privacy perspective, governing information that is pervasive throughout the organization, and keeping continuous business and legal change in sync is a significant challenge for boards, executives, and the professionals in the legal department alike. Organizations need an integrated strategy, process, information, and technology architecture to govern the legal function, meet legal commitments, and manage legal uncertainty and risk in a way that is efficient, effective, and agile, and that extends into the broader enterprise GRC architecture.

In my previous blog, Operationalizing GRC in Context of Legal & Privacy: The Last Mile of GRC, I began this discussion, and here I aim to expound on it further in a legal context.

Legal today is more than legal matters, actions, and contracts. Today's legal organization has to respond to incident/breach reporting and notification laws in a timely and compliant manner, respond to Data Subject Access Requests (DSARs), harmonize and monitor retention obligations, conduct eDiscovery, manage legal holds on data, and continuously monitor regulations and legislation and apply them to a business context.

In today's global business environment, a broad spectrum of economic, political, social, legal, and regulatory changes are continually bombarding the organization. The organization continues to see exponential growth of regulatory requirements and legal obligations (often conflicting and overlapping) that must be met, which multiply as the organization expands global operations, products, and services. This requires an integrated approach to legal governance, risk management, and compliance (GRC) with the goal to reliably achieve objectives, address uncertainty, and act with integrity.[1] This includes adherence to mandatory legal requirements and voluntary organizational values and the boundaries each organization establishes. The legal department, with responsibility for understanding matter management, issue identification, investigations, policy management, reporting and filing, legal risk, and the regulatory obligations faced by the organization, is a critical player in GRC (what is understood as Enterprise or Integrated GRC), as well as in improving GRC within the legal function itself.

A successful legal management information architecture will be able to connect information across risk management and business systems. This requires a robust and adaptable legal information architecture that can model the complexity of legal information, discovery, transactions, interactions, relationships, cause and effect, and the analysis of information, and that can integrate and manage a range of business systems and external data. Key to this information architecture is a clear data inventory and map of information that informs the organization of what data it has, who in the organization owns it, what regulatory retention obligations are attached to it, and what third parties have access to it. This is a fundamental requirement for applying process and effectively operationalizing an organization's GRC activities, as detailed in the previous blog.
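As a rough illustration, the sketch below models one entry in such a data inventory. The field names and example values are assumptions made for illustration only, not a prescribed schema or any particular product's data model.

```python
# Illustrative sketch only: one way to model a single data-inventory/data-map entry.
# Field names and example values are assumptions, not a prescribed schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataInventoryEntry:
    asset_name: str                  # e.g., an HR system or a finance file share
    location: str                    # system, share, or repository where the data lives
    business_owner: str              # who in the organization owns the data
    data_categories: List[str]       # e.g., ["employee PII", "payroll"]
    retention_obligation: str        # regulatory retention rule attached to the data
    retention_period_years: int
    third_parties_with_access: List[str] = field(default_factory=list)

# Example entry for a hypothetical HR system
entry = DataInventoryEntry(
    asset_name="HR-PeopleSoft",
    location="hr-db.internal.example.com",
    business_owner="VP Human Resources",
    data_categories=["employee PII", "payroll"],
    retention_obligation="Payroll records retention rule",
    retention_period_years=3,
    third_parties_with_access=["Payroll processor"],
)
print(entry.asset_name, entry.third_parties_with_access)
```

An inventory built from entries like this answers the four questions above (what data, who owns it, what retention rules apply, and which third parties can reach it) and gives downstream GRC processes something concrete to operate on.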

There can and should be an integrated technology architecture that extends GRC technology and operationalizes it in a legal and privacy context. This connects the fabric of the legal processes, information, discovery, and other technologies together across the organization. It is a hub for operationalizing GRC, and it must be able to integrate and connect with a variety of other business systems, such as specialized legal discovery solutions, as well as with broader enterprise GRC technology.

The right technology architecture choice for an organization involves the integration of several components into a core enterprise GRC and Legal GRC architecture, which can facilitate the integration and correlation of legal information, discovery, analytics, and reporting. Organizations suffer when they take a myopic view of GRC technology that fails to connect all the dots and provide context to discovery, business analytics, objectives, and strategy in the real time in which a business operates.

Extending and operationalizing GRC processes and technology in context of legal and privacy enables the organization to use its resources wisely to prevent undesirable outcomes and maximize advantages while striving to achieve its objectives. A key focus is to provide legal assurance that processes are designed to mitigate the most significant legal issues and are operating as designed. Effective management of legal risk and exposure is critical to the board and executive management, who need a reliable way to provide assurance to stakeholders that the enterprise plans to both preserve and create value. Mature GRC enables the organization to weigh multiple inputs from both internal and external contexts and use a variety of methods to analyze legal risk and provide analytics and modeling.


[1] This is the OCEG definition of GRC.



Operationalizing GRC in Context of Legal & Privacy: The Last Mile of GRC

By Michael Rasmussen

Editor’s note: Today we are featuring a guest blog post from Michael Rasmussen, the GRC Pundit & Analyst at GRC 20/20 Research, LLC.

At its core, GRC is the capability to reliably achieve objectives [GOVERNANCE], address uncertainty [RISK MANAGEMENT], and act with integrity [COMPLIANCE]. GRC is something organizations do, not something they purchase. They govern, they manage risk, and they comply with obligations. However, there is technology to enable GRC-related processes, such as legal and privacy, to be more efficient, effective, and agile.

However, too often the focus on GRC technology is limited to the process management of forms, workflow, tasks, and reporting. These are critical and important elements, but the role of technology in GRC is much broader: operationalizing GRC activities that are labor-intensive, particularly in the context of legal and privacy. Simply managing forms, workflow, and tasks is no longer enough. Organizations need to start thinking about how they can integrate eDiscovery and data/information governance solutions within their core GRC architecture.

What is needed is the ability to search, find, monitor, interact with, and control data throughout the business environment. GRC platforms are excellent at managing forms, workflow, tasks, analytics, and reporting. But behind the scenes there are still labor-intensive tasks or disconnected solutions that actually find, control, and assess the disposition of sensitive data in the enterprise. eDiscovery and information governance solutions have been disconnected and not strategically leveraged for GRC purposes. Together, a core GRC platform integrated with eDiscovery and information governance technologies builds exponential economies in efficiency, effectiveness, and agility.

Specifically, an integrated GRC solution that weds the core GRC platform with eDiscovery and information governance technology delivers full value to the organization. Such a solution:

  • Discovers the attributes and metadata of data no matter where it lives within the environment as a key component of GRC processes for legal and privacy compliance.
  • Enables 360° awareness in assessments by discovering the information needed to conduct and deliver assessments effectively into the core GRC platform.
  • Delivers a centralized console to interact with data/information and metadata of files on devices across the organization (such as network file shares, OneDrive, and Dropbox data).
  • Automates interaction with downstream endpoints/systems to search the content of records for keywords and perform analysis using regular expressions and classifiers.
  • Controls data wherever it is with the ability to get to the data and analyze it from a centralized console.

An integrated approach that brings together the core GRC platform with eDiscovery and information governance technology enables the organization to discover, manage, monitor, and control data right from the central GRC platform console. It enables the organization to get centralized and accessible insight into where sensitive information is, how it is being used, and what can be done with it.

  • For example, within the GRC platform I can initiate a search based on keywords or patterns (e.g., Social Security numbers). The eDiscovery/information governance solution then finds where that information is throughout the enterprise and delivers a list of records back to the GRC platform for analysis and monitoring.
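As a rough illustration of that pattern-search step, the sketch below walks a file share for SSN-like patterns and returns the matching paths, roughly the kind of record list an eDiscovery/information governance layer would hand back to the GRC platform. It is a simplified, hypothetical example in Python, not the API of X1 or any other product, and the share path is a placeholder.

```python
# Minimal sketch of the kind of pattern search described above: scan a folder of
# text-based records for SSN-like patterns and return the matching file paths.
# Illustrative only; not the API of any particular GRC or eDiscovery product.
import os
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # simple SSN-like pattern

def find_files_with_ssns(root_dir):
    """Walk root_dir and return paths of files whose text contains an SSN-like pattern."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root_dir):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", encoding="utf-8", errors="ignore") as f:
                    if SSN_PATTERN.search(f.read()):
                        hits.append(path)
            except OSError:
                continue  # skip files that cannot be read
    return hits

if __name__ == "__main__":
    for match in find_files_with_ssns("/path/to/file/share"):  # placeholder path
        print(match)
```

In an integrated architecture, the resulting list of records would be returned to the central GRC console for analysis, monitoring, and remediation tasking rather than reviewed by hand.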

This enables an integrated GRC architecture that brings 360° contextual awareness into information across the enterprise. It delivers enhanced efficiency in the time and money saved chasing information through disconnected solutions and processes; it provides greater effectiveness through insight into, and control of, information; and it enables greater agility in a dynamic environment, making the organization more responsive to information governance issues. Together, a GRC platform with eDiscovery/information governance capabilities delivers more complete and accurate data governance and privacy assessments and integrated findings, with the ability to manage remediation tasks from one central place.



CCPA and GDPR UPDATE: Unstructured Enterprise Data in Scope of Compliance Requirements

An earlier version of this article appeared on Legaltech News

By John Patzakis

A core requirement of both the GDPR and the similar California Consumer Privacy Act (CCPA), which becomes enforceable on July 1, is the ability to demonstrate and prove that personal data is being protected. This requires information governance capabilities that allow companies to efficiently identify and remediate personal data of EU and California residents. For instance, the UK Information Commissioner's Office (ICO) provides that "The GDPR places a high expectation on you to provide information in response to a SAR (Subject Access Request). Whilst it may be challenging, you should make extensive efforts to find and retrieve the requested information."

However, recent Gartner research notes that approximately 80% of information stored by companies is "dark data" in the form of unstructured, distributed data that can pose significant legal and operational risks. With much of the global workforce now working remotely, this is of special concern, as nearly all of the company data maintained and utilized by remote employees is unstructured. Unstructured enterprise data generally refers to searchable data such as emails, spreadsheets, and documents on laptops, file servers, and social media.

The GDPR

An organization’s GDPR compliance efforts need to address any personal data contained within unstructured electronic data throughout the enterprise, as well as the structured data found in CRM, ERP and various centralized records management systems. Personal data is defined in the GDPR as: “any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.”

Under the GDPR, there is no distinction between structured and unstructured electronic data in terms of the regulation's scope. There is separate guidance regarding "structured" paper records (more on that below). The key consideration is whether a data controller or processor has control over personal data, regardless of where it is located in the organization. Nonetheless, there is some confusion about the scope of the GDPR's coverage across structured as well as unstructured electronic data systems.

The UK ICO is a key government regulator that interprets and enforces the GDPR, and it has recently issued important draft guidance on the scope of GDPR data subject access rights, including as it relates to unstructured electronic information. Notably, the ICO states that large data sets, including data analytics outputs and unstructured data volumes, "could make it more difficult for you to meet your obligations under the right of access. However, these are not classed as exemptions, and are not excuses for you to disregard those obligations."

Additionally, the ICO guidance advises that "emails stored on your computer are a form of electronic record to which the general principles (under the GDPR) apply." In fact, the ICO notes that home computers and personal email accounts of employees are subject to the GDPR if they contain personal data originating from the employer's networks or processing activities. This is especially notable under the new normal of social distancing, where much of a company's data (and associated personal information) is being stored on remote employee laptops.

The ICO also provides guidance on several related subjects that shed light on its stance regarding unstructured data:

Archived Data: According to the ICO, data stored in electronic archives is generally subject to the GDPR, noting that there is no “technology exemption” from the right of access. Enterprises “should have procedures in place to find and retrieve personal data that has been electronically archived or backed up.” Further, enterprises “should use the same effort to find information to respond to a SAR as you would to find archived or backed-up data for your own purposes.”

Deleted Data: The ICO's view on deleted data is that it is generally within the scope of GDPR compliance, provided that there is no intent, or systematic ability, to readily recover that data. The ICO says it "will not seek to take enforcement action against an organisation that has failed to use extreme measures to recreate previously 'deleted' personal data held in electronic form. We do not require organisations to use time and effort reconstituting information that they have deleted as part of their general records management."

However, under this guidance, organizations that invest in and deploy re-purposed computer forensic tools featuring automated un-delete capabilities may be held to a higher standard. Deploying such systems can reflect both the intent and the systematic technical ability to recover deleted data.

Paper Records: Paper records that are part of a "structured filing system" are subject to the GDPR. Specifically, if an enterprise holds "information about the requester in non-electronic form (e.g. in paper files or on microfiche records)," then such hard-copy records are considered personal data accessible via the right of access if such records are "held in a 'filing system.'" This segment of the guidance reflects that references to "unstructured data" in European parlance usually pertain to paper records. The ICO notes in separate guidance that "the manual processing of unstructured personal data, such as unfiled handwritten notes on paper" is outside the scope of the GDPR.

GDPR Article 4 defines a “filing system” as meaning “any structured set of personal data which are accessible according to specific criteria, whether centralized, decentralized or dispersed on a functional or geographical basis.” The only form of “unstructured data” that would not be subject to GDPR would be unfiled paper records like handwritten notes or legacy microfiche.

The CCPA  

The California Attorney General (AG) released a second and presumably final round of draft regulations under the California Consumer Privacy Act (CCPA) that reflect how unstructured electronic data will be treated under the Act. The proposed rules outline how the California AG is interpreting and will be enforcing the CCPA. Under § 999.313(d)(2), data from archived or backup systems is, unlike under the GDPR, exempt from the CCPA's scope unless those archives are restored and become active. Additional guidance from the Attorney General states: "Allowing businesses to delete the consumer's personal information on archived or backup systems at the time that they are accessed or used balances the interests of consumers with the potentially burdensome costs of deleting information from backup systems that may never be utilized."

What is very notable is that the only technical exception to the CCPA is unrestored archived and back-up data. Like the GDPR, there is no distinction between unstructured and structured electronic data. In the first round of public comments, an insurance industry lobbying group argued that unstructured data be exempted from the CCPA. As reflected by revised guidance, that suggestion was rejected by the California AG.

For the GDPR, the UK ICO correctly advises that enterprises “should ensure that your information management systems are well-designed and maintained, so you can efficiently locate and extract information requested by the data subjects whose personal data you process and redact third party data where it is deemed necessary.” This is why Forrester Research notes that “Data Discovery and Classification are the foundation for GDPR compliance.”

Establish and Enforce Data Privacy Policies

To achieve GDPR and CCPA compliance, organizations must first ensure that explicit policies and procedures are in place for handling personal information. Once established, it is important to demonstrate to regulators that such policies and procedures are being followed and operationally enforced. A key first step is to establish a data map of where and how personal data is stored in the enterprise. This exercise is actually required under the GDPR Article 30 documentation provisions.

An operational data audit and discovery capability across unstructured data sources allows enterprises to efficiently map, identify, and remediate personal information in order to respond to regulators and data subject access requests from EU and California citizens. This capability must be able to search and report across several thousand endpoints and other unstructured data sources, and return results within minutes instead of weeks or months as is the case with traditional crawling tools. This includes laptops of employees working from home.

These processes and capabilities are not only required for data privacy compliance but are also needed for broader information governance and security requirements, anti-fraud compliance, and e-discovery.

Implementing these measures proactively, with routine and consistent enforcement using solutions such as X1 Distributed GRC, will go a long way toward mitigating risk, responding efficiently to data subject access requests, and improving operational effectiveness through such information governance improvements.



Three Key eDiscovery Preservation Lessons from Small v. University Medical Center

Small v. University Medical Center is a recent 123-page decision focused exclusively on issues and challenges related to preservation of electronically stored information in a large enterprise. It is an important ESI preservation case with some very instructive takeaways for organizations and their counsel. In Small, Plaintiffs brought an employment wage & hour class action against University Medical Center of Southern Nevada (UMC). Such wage & hour employment matters invariably involve intensive eDiscovery, and this case was no exception. When it became evident that UMC was struggling mightily with its ESI preservation and collection obligations, the Nevada District Court appointed a special master, who proved to be tech-savvy with a solid understanding of eDiscovery issues.

In August 2014, the special master issued a report, finding that UMC's destruction of relevant information "shock[ed] the conscience." Among other things, the special master recommended that the court impose a terminating sanction in favor of the class action plaintiffs. The findings of the special master included the following:

  • UMC had no policy for issuing litigation holds, and no such hold was issued for at least the first eight months of this litigation.
  • UMC executives were unaware of their preservation duties, ignoring them altogether, or at best addressing them “in a hallway in passing.”
  • Relevant ESI from laptops, desktops, and local drives was not preserved until some 18 months into this litigation.
  • ESI on file servers containing policies and procedures regarding meal breaks and compensation was not preserved.
  • These issues could have been avoided if best practices had been followed and chain-of-custody paperwork had been completed.
  • All of UMC's multiple ESI vendors repeatedly failed to follow best practices.

After several years of considering and reviewing the special master’s detailed report and recommendations, the court finally issued its final discovery order last month. The court concurred with the special master’s findings, holding that UMC and its counsel failed to take reasonable efforts to identify, preserve, collect, and produce relevant information. The court imposed monetary sanctions against UMC, including the attorney fees and costs incurred by opposing counsel. Additionally, the court ordered that should the matter proceed to trial, the jury would be instructed that “the court has found UMC failed to comply with its legal duty to preserve discoverable information… and failed to comply with a number of the court’s orders,” and that “these failures resulted in the loss or destruction of some ESI relevant to the parties’ claims and defenses and responsive to plaintiffs’ discovery requests, and that the jury may consider these findings with all other evidence in the case for whatever value it deems appropriate.” Such adverse inference instructions are invariably highly impactful if not effectively dispositive in a jury trial.

There are three key takeaways from Small:

  1. UMC’s Main Failing was Lacking an Established Process

UMC’s challenges all centered on its complete lack of an existing process to address eDiscovery preservation. UMC and their counsel could not identify the locations of potentially relevant ESI because there was no data map. ESI was not timely preserved because no litigation hold process existed. And when the collection did finally occur under the special master’s order, it was highly reactive and very haphazard because UMC had no enterprise-capable collection capability.

When an organization does not have a systematic and repeatable process in place, the risks and costs associated with eDiscovery increase exponentially. Such a failure also puts outside counsel in a very difficult situation, as reflected by this statement from the Small Court: “One of the most astonishing assertions UMC made in its objection to the special master’s R & R is that UMC did not know what to preserve. UMC and its counsel had a legal duty to figure this out. Collection and preservation of ESI is often an iterative process between the attorney and the client.”

Some commentators have focused on the need to conduct custodian questionnaires, but a good process will obviate or at least reduce your reliance on often unreliable custodians to locate potentially relevant ESI.

  2. UMC's Claims of Burden Did Not Help Its Cause

UMC tried arguing that it was too burdensome and costly to collect ESI from hundreds of custodians, claiming that it took its IT staff six hours merely to search the email account of a single custodian. Here at X1, I wear a couple of hats, including compliance and eDiscovery counsel. In response to a recent GDPR audit, we searched dozens of our email accounts in seconds. This capability not only dramatically reduces our costs, but also our risk, by allowing us to demonstrate diligent compliance.

In the eDiscovery context, the ability to quickly pinpoint potentially responsive data enables corporate counsel to better represent their client. For instance, they are then able to intelligently negotiate keywords and overall preservation scope with opposing counsel, instead of flying blind. Also, with their eDiscovery house in order, they can focus on more strategic priorities in the case, including pressing the adversary on its discovery compliance, with the confidence that their own client does not live in a glass house.

Conversely, the Small opinion documents several meet and confer meetings and discovery hearings where UMC’s counsel was clearly at a significant disadvantage, and progressively lost credibility with the court because they didn’t know what they didn’t know.

  3. Retaining Computer Forensics Consultants Late in the Game Did Not Save the Day

Eventually, UMC retained forensic collection consultants several months after the duty to preserve kicked in. This reflects an old-school, reactive, "drag the feet" approach some organizations still take, where they try to deflect preservation obligations and then, once opposing counsel or the court forces the issue, scramble and retain forensic consultants to parachute in. In this situation it was already too late, as much of the data had already been spoliated. And because of the lack of a process, including a data map, the collection efforts were disjointed and haphazard. The opinion also reflects that this reactive fire drill resulted in significant data over-collection, at significant cost to UMC.

In sum, Small v. University Medical Center is a 123-page illustration of what often happens when an organization does not have a systematic eDiscovery process in place. An effective process is established through the right people, processes, and technology, such as the capabilities of the X1 Distributed Discovery platform. A complete copy of the court opinion can be accessed here: Small v. University Medical Center



When your “Compliance” and eDiscovery Processes Violate the GDPR

Time to reevaluate tools that rely on systemic data duplication

The European Union (EU) General Data Protection Regulation (GDPR) became effective in May 2018. To briefly review, the GDPR applies to the processing of "personal data" of EU citizens and residents (a.k.a. "data subjects"). "Personal data" is broadly defined to include "any information relating to an identified or identifiable natural person." That could include email addresses and transactional business communications that are tied to a unique individual. The GDPR is applicable to any organization that provides goods and services to individuals located in the EU on a regular enough basis, or that maintains electronic records of its employees who are EU residents.

In addition to an overall framework of updated privacy policies and procedures, the GDPR requires the ability to demonstrate and prove that personal data is being protected. Essential components of such compliance are data audit and discovery capabilities that allow companies to efficiently search and identify the necessary information, both proactively and reactively, to respond to regulators and EU citizens' requests. As such, any GDPR compliance program is ultimately hollow without consistent, operational execution and enforcement through an effective eDiscovery and information governance platform.

However, some content management and archiving tool providers are repurposing their messaging around GDPR compliance. For example, an industry executive contact recently recounted a meeting with such a vendor, whose tool involved duplicating all of the emails and documents in the enterprise and then migrating all those copies to a central server cluster. That way, the tool could theoretically manage all the documents and emails centrally. Putting aside the difficulty of scaling that process up to manage and sync hundreds of terabytes of data in a medium-sized company (and petabytes in a Fortune 500), this anecdote underscores a fundamental flaw in tools that require systemic data duplication in order to search and manage content.

Under the GDPR, data needs to be minimized, not systematically duplicated en masse. It would be extremely difficult under such an architecture to sync up and remediate non-compliant documents and emails back at their original location. So at the end of the day, this proposed solution would actually violate the GDPR by making duplicate copies of data sets that would inevitably include non-compliant information, without any real means to sync up remediation.

The same is true for much of the traditional eDiscovery workflow, which requires numerous steps involving data duplication at every turn. For instance, data collection is often accomplished through misapplied forensic tools that operate by broadly collecting copies, resulting in over-collection. As the court said in In re Ford Motor Company, 345 F.3d 1315 (11th Cir. 2003): "[E]xamination of a hard drive inevitably results in the production of massive amounts of irrelevant, and perhaps privileged, information…" Even worse, the collected data is then re-duplicated one or often two more times by the examiner for archival purposes. And then the data is sent downstream for processing, which results in even more data duplication. Load files are created for further transfers, which are also duplicated.

Chad Jones of D4 explains in a recent webinar, and in his follow-on blog post, how such manual and inefficient handoffs throughout the discovery process greatly increase risk as well as cost. Like antiquated factories spewing tons of pollution, outdated eDiscovery processes spin out a lot of superfluous data duplication. Much of that data likely contains non-compliant information, thus "polluting" your organization, including through your eDiscovery services vendors, with increased GDPR and other regulatory risk.

In light of the above, when evaluating compliance and eDiscovery software, organizations should keep in mind these five key requirements for staying in line with the GDPR and good overall information governance:

  1. Search data in place. Data on laptops and file servers needs to be searched in place. Tools that require copying and migration to central locations in order to search and manage data are part of the problem, not the solution.
  2. Delete data in place. The GDPR requires that non-compliant data be deleted on demand. Purging data on managed archives does not suffice if other copies remain on laptops, unmanaged servers, and other unstructured sources. Your search-in-place solution should also delete in place (a minimal sketch of these first two requirements follows this list).
  3. Data minimization. The GDPR requires that organizations minimize data, as opposed to exploding data through mass duplication.
  4. Targeted and efficient data collection. Only potentially relevant data should be collected for eDiscovery and data audits. Over-collection leads to much greater cost and risk.
  5. Seamless integration with attorney review platforms, to bypass the processing steps that require manual handoffs and load files.
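As referenced in the list above, here is a minimal sketch of what search-in-place and delete-in-place might look like against a local file share. It assumes plain-text files and a placeholder share path, and it is not the implementation of X1 or any other product; a production tool would also record each remediation action for defensibility.

```python
# Minimal illustrative sketch of search-in-place and delete-in-place against a
# local file share. Assumes plain-text files and a placeholder share path; it is
# not the implementation of X1 or any other product.
import os
import re

def remediate_in_place(root_dir, pattern, dry_run=True):
    """Search files under root_dir in place; delete (or just report) those that match."""
    regex = re.compile(pattern)
    remediated = []
    for dirpath, _dirnames, filenames in os.walk(root_dir):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", encoding="utf-8", errors="ignore") as f:
                    if not regex.search(f.read()):
                        continue
            except OSError:
                continue  # skip unreadable files rather than copying them anywhere
            if not dry_run:
                os.remove(path)  # delete in place; no central copy is ever made
            remediated.append(path)
    return remediated

# Example: dry-run remediation of files containing a hypothetical data subject's email
if __name__ == "__main__":
    hits = remediate_in_place("/path/to/file/share", r"jane\.doe@example\.com", dry_run=True)
    print(f"{len(hits)} files would be remediated")
```

The key point of the sketch is that the data is searched and remediated where it lives, which keeps the workflow consistent with the data minimization principle in requirement 3.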

X1 Data Audit & Compliance is a ground-breaking platform that meets these criteria while enabling system-wide data discovery supporting the GDPR and many other information governance requirements. Please visit here to learn more.

