Category Archives: Information Management

Three Key eDiscovery Preservation Lessons from Small v. University Medical Center

Small v. University Medical Center is a recent 123-page decision focused exclusively on the issues and challenges related to preservation of electronically stored information (ESI) in a large enterprise. It’s an important ESI preservation case with some very instructive takeaways for organizations and their counsel. In Small, the plaintiffs brought an employment wage & hour class action against University Medical Center of Southern Nevada (UMC). Such wage & hour employment matters invariably involve intensive eDiscovery, and this case was no exception. When it became evident that UMC was struggling mightily with its ESI preservation and collection obligations, the Nevada District Court appointed a special master, who proved to be tech-savvy with a solid understanding of eDiscovery issues.

In August 2014, the special master issued a report, finding that UMC’s destruction of relevant information “shock[ed] the conscience.” Among other things, the special master recommended that the court impose a terminating sanction in favor of the class action plaintiffs. The special master’s findings included the following:

  • UMC had no policy for issuing litigation holds, and no such hold was issued for at least the first eight months of this litigation.
  • UMC executives were unaware of their preservation duties, ignored them altogether, or at best addressed them “in a hallway in passing.”
  • Relevant ESI from laptops, desktops and local drives was not preserved until some 18 months into the litigation.
  • ESI on file servers containing policies and procedures regarding meal breaks and compensation was not preserved.
  • These issues could have been avoided if best practices had been followed and chain-of-custody paperwork had been completed.
  • All of UMC’s multiple ESI vendors repeatedly failed to follow best practices.

After several years of considering and reviewing the special master’s detailed report and recommendations, the court issued its final discovery order last month. The court concurred with the special master’s findings, holding that UMC and its counsel failed to take reasonable steps to identify, preserve, collect, and produce relevant information. The court imposed monetary sanctions against UMC, including the attorney fees and costs incurred by opposing counsel. Additionally, the court ordered that should the matter proceed to trial, the jury would be instructed that “the court has found UMC failed to comply with its legal duty to preserve discoverable information… and failed to comply with a number of the court’s orders,” and that “these failures resulted in the loss or destruction of some ESI relevant to the parties’ claims and defenses and responsive to plaintiffs’ discovery requests, and that the jury may consider these findings with all other evidence in the case for whatever value it deems appropriate.” Such adverse inference instructions are invariably highly impactful, if not effectively dispositive, in a jury trial.

There are three key takeaways from Small:

  1. UMC’s Main Failing Was Lacking an Established Process

UMC’s challenges all centered on its complete lack of an existing process to address eDiscovery preservation. UMC and its counsel could not identify the locations of potentially relevant ESI because there was no data map. ESI was not timely preserved because no litigation hold process existed. And when the collection did finally occur under the special master’s order, it was highly reactive and haphazard because UMC had no enterprise-capable collection capability.

When an organization does not have a systematic and repeatable process in place, the risks and costs associated with eDiscovery increase exponentially. Such a failure also puts outside counsel in a very difficult situation, as reflected by this statement from the Small Court: “One of the most astonishing assertions UMC made in its objection to the special master’s R & R is that UMC did not know what to preserve. UMC and its counsel had a legal duty to figure this out. Collection and preservation of ESI is often an iterative process between the attorney and the client.”

Some commentators have focused on the need to conduct custodian questionnaires, but a good process will obviate or at least reduce your reliance on often unreliable custodians to locate potentially relevant ESI.

  2. UMC’s Claims of Burden Did Not Help Its Cause

UMC tried arguing that it was too burdensome and costly to collect ESI from hundreds of custodians, claiming that it took its IT department six hours merely to search the email account of a single custodian. Here at X1, I wear a couple of hats, including compliance and eDiscovery counsel. In response to a recent GDPR audit, we searched dozens of our email accounts in seconds. This capability not only dramatically reduces our costs, but also reduces our risk by allowing us to demonstrate diligent compliance.

In the eDiscovery context, the ability to quickly pinpoint potentially responsive data enables corporate counsel to better represent their client. For instance, they can intelligently negotiate keywords and overall preservation scope with opposing counsel, instead of flying blind. And with their eDiscovery house in order, they can focus on more strategic priorities in the case, including pressing the adversary on its discovery compliance, confident that their own client does not live in a glass house.

Conversely, the Small opinion documents several meet and confer meetings and discovery hearings where UMC’s counsel was clearly at a significant disadvantage, and progressively lost credibility with the court because they didn’t know what they didn’t know.

  3. Retaining Computer Forensics Consultants Late in the Game Did Not Save the Day

UMC eventually retained forensic collection consultants, but only several months after the duty to preserve kicked in. This reflects the old-school, reactive, “drag the feet” approach some organizations still take: deflect preservation obligations and then, once opposing counsel or the court forces the issue, scramble to retain forensic consultants to parachute in. In this situation it was already too late, as much of the data had already been spoliated. And because of the lack of a process, including a data map, the collection efforts were disjointed and haphazard. The opinion also reflects that this reactive fire drill resulted in significant data over-collection, at significant cost to UMC.

In sum, Small v. University Medical Center is a 123-page illustration of what often happens when an organization does not have a systematic eDiscovery process in place. An effective process is established through the right people, processes and technology, such as the capabilities of the X1 Distributed Discovery platform. A complete copy of the court opinion can be accessed here: Small v. University Medical Center

Filed under Best Practices, Case Law, compliance, Corporations, eDiscovery, eDiscovery & Compliance, Enterprise eDiscovery, GDPR, Information Governance, Information Management, Preservation & Collection

When your “Compliance” and eDiscovery Processes Violate the GDPR

Time to reevaluate tools that rely on systemic data duplication

The European Union (EU) General Data Protection Regulation (GDPR) became effective in May 2018. To briefly review, the GDPR applies to the processing of “personal data” of EU citizens and residents (a.k.a. “data subjects”). “Personal data” is broadly defined to include “any information relating to an identified or identifiable natural person,” which can include email addresses and transactional business communications tied to a unique individual. The GDPR applies to any organization that provides goods and services to individuals located in the EU on a regular basis, or that maintains electronic records of its employees who are EU residents.

In addition to an overall framework of updated privacy policies and procedures, the GDPR requires the ability to demonstrate and prove that personal data is being protected. Essential components of such compliance are data audit and discovery capabilities that allow companies to efficiently search for and identify the necessary information, both proactively and reactively, in response to regulators and EU citizens’ requests. As such, any GDPR compliance program is ultimately hollow without consistent, operational execution and enforcement through an effective eDiscovery and information governance platform.
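
To make the audit capability concrete, below is a minimal sketch, in Python, of the simplest possible proactive personal-data scan: walking a file share and flagging documents that contain email addresses, one basic class of GDPR “personal data.” The root path and the single-identifier scope are illustrative assumptions; a production platform would cover many more identifier types, file formats, and data sources.

import os
import re

# One simple class of personal data: email addresses tied to an individual.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def audit_for_personal_data(root):
    """Yield (path, hit_count) for readable files containing email addresses."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", encoding="utf-8", errors="ignore") as f:
                    hits = EMAIL_RE.findall(f.read())
            except OSError:
                continue  # unreadable or locked file; skip and move on
            if hits:
                yield path, len(hits)

if __name__ == "__main__":
    # "/srv/fileshare" is a hypothetical data source for this example.
    for path, count in audit_for_personal_data("/srv/fileshare"):
        print(f"{path}: {count} email address(es) found")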

However, some content management and archiving tool providers are repurposing their messaging around GDPR compliance. For example, an industry executive contact recently recounted a meeting with such a vendor, whose tool involved duplicating all of the emails and documents in the enterprise and then migrating all those copies to a central server cluster. That way, the tool could theoretically manage all the documents and emails centrally. Putting aside the difficulty of scaling that process up to manage and sync hundreds of terabytes of data in a medium-sized company (and petabytes in a Fortune 500), this anecdote underscores a fundamental flaw in tools that require systemic data duplication in order to search and manage content.

Under the GDPR, data needs to be minimized, not systematically duplicated en masse. It would be extremely difficult under such an architecture to sync up and remediate non-compliant documents and emails back at the original location. So at the end of the day, this proposed solution would actually violate the GDPR by making duplicate copies of data sets that would inevitably include non-compliant information, without any real means to sync up remediation.

The same is true for much of the traditional eDiscovery workflow, which requires numerous steps involving data duplication at every turn. For instance, data collection is often accomplished through misapplied forensic tools that operate by broadly copying data, resulting in over-collection. As the court said in In re Ford Motor Company, 345 F.3d 1315 (11th Cir. 2003): “[E]xamination of a hard drive inevitably results in the production of massive amounts of irrelevant, and perhaps privileged, information…” Even worse, the collected data is then re-duplicated one or often two more times by the examiner for archival purposes. The data is then sent downstream for processing, which results in even more duplication. Load files are created for further transfers, and those are duplicated as well.
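
Some back-of-the-envelope arithmetic shows how quickly these copies compound. The stage multipliers and the 100 GB starting size below are illustrative assumptions, not figures from the Ford opinion:

# Illustrative arithmetic: copies made of the same data set as it moves
# through a traditional eDiscovery workflow. All numbers are assumptions.
source_gb = 100  # data originally gathered from custodians

stages = {
    "forensic collection copy": 1.0,        # full copy taken at collection
    "examiner archival copies": 2.0,        # re-duplicated twice for archive
    "processing copy": 1.0,                 # staged again for processing
    "load files for review transfer": 1.0,  # duplicated again for handoff
}

total_gb = source_gb  # the original data still sits at its source
for stage, multiplier in stages.items():
    copy_gb = source_gb * multiplier
    total_gb += copy_gb
    print(f"{stage}: +{copy_gb:.0f} GB")

print(f"Total footprint: {total_gb:.0f} GB, "
      f"or {total_gb / source_gb:.0f}x the original data")

Every one of those copies is another place where non-compliant personal data can reside, beyond the reach of in-place remediation.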

Chad Jones of D4 explained in a recent webinar, and in his follow-on blog post, how such manual and inefficient handoffs throughout the discovery process greatly increase risk as well as cost. Like antiquated factories spewing tons of pollution, outdated eDiscovery processes spin out a lot of superfluous data duplication. Much of that data likely contains non-compliant information, thus “polluting” your organization, including through your eDiscovery services vendors, with increased GDPR and other regulatory risk.

In light of the above, organizations evaluating their compliance and eDiscovery software should keep in mind these five key requirements for staying in line with the GDPR and good overall information governance:

  1. Search data in place. Data on laptops and file servers needs to be searched in place. Tools that require copying and migration to central locations in order to search and manage content are part of the problem, not the solution. (See the sketch following this list.)
  2. Delete data in place. The GDPR requires that non-compliant data be deleted on demand. Purging data from managed archives does not suffice if other copies remain on laptops, unmanaged servers and other unstructured sources. Your search-in-place solution should also delete in place.
  3. Data minimization. The GDPR requires that organizations minimize data, as opposed to exploding data through mass duplication.
  4. Targeted and efficient data collection. Only potentially relevant data should be collected for eDiscovery and data audits. Over-collection leads to much greater cost and risk.
  5. Seamless integration with attorney review platforms, bypassing the processing steps that require manual handoffs and load files.
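
To illustrate the first two requirements, here is a minimal conceptual sketch of an endpoint-side routine that searches files where they reside, returns only metadata about the hits rather than copies of the data, and can delete non-compliant files in place. It is a simplified illustration of the pattern, not the X1 implementation:

import os
import re

def search_in_place(root, pattern):
    """Search files under root on the endpoint itself; return hit metadata only."""
    regex = re.compile(pattern)
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    if regex.search(f.read()):
                        hits.append({"path": path, "size": os.path.getsize(path)})
            except OSError:
                continue
    return hits  # only metadata leaves the endpoint, so no data duplication

def delete_in_place(hits):
    """Remediate non-compliant files at their original location (requirement 2)."""
    for hit in hits:
        os.remove(hit["path"])  # deletion happens on the endpoint itself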

X1 Data Audit & Compliance is a ground-breaking platform that meets these criteria while enabling system-wide data discovery in support of the GDPR and many other information governance requirements. Please visit here to learn more.

Filed under Best Practices, compliance, eDiscovery, eDiscovery & Compliance, Enterprise eDiscovery, GDPR, Information Governance, Information Management, Uncategorized

Effective Information Governance Requires Effective Enterprise Technology

Information governance is the compilation of policies, processes, and controls enforced and executed with effective technology to manage electronically stored information throughout the enterprise. Leading IT industry research firm Gartner states that “the goal of information governance is to ensure compliance with laws and regulations, mitigate risks and protect the confidentiality of sensitive company and customer data.” A strong, proactive information governance strategy that strikes the balance between under-retention and over-retention of information can provide dramatic cost savings while significantly reducing risk.

However, while policies, procedures and documentation are important, information governance programs are ultimately hollow without consistent, operational execution and enforcement. CIOs and legal and compliance executives often aspire to implement information governance programs such as defensible deletion, data migration, and data audits to detect risks and remediate non-compliance. But without an actual, scalable technology platform to effectuate these goals, those aspirations remain just that. For instance, recent IDG research suggests that approximately 70% of the information stored by companies is “dark data”: unstructured, distributed data that can pose significant legal and operational risk and cost.

To date, organizations have employed limited technical approaches to try to execute on their information governance initiatives, and have endured many struggles. For instance, software agent-based crawling methods are commonly attempted, but each search drives heavy resource utilization on users’ computers and pushes network bandwidth to its limits, rendering the approach ineffective. Searching and auditing across even several hundred distributed endpoints in a quick, repeatable fashion is effectively impossible under this approach.

Another tactic attempted by some CIOs to address this daunting challenge is to periodically migrate disparate data from around the global enterprise into a central location. Executing this strategy still leaves end users’ computers needing to be scanned, as there is never a moment when every user in the enterprise has just finished the migration with no new data created. That means both the central repository and the endpoints must be searched, increasing the complexity and management burden of the job. Boiling the ocean through data migration and centralization is extremely expensive, highly disruptive, and frankly unworkable, as it never removes the need for constant local computer searching, again through problematic crawling methods.

What has always been needed is immediate visibility into unstructured, distributed data across the enterprise: the ability to search and report across several thousand endpoints and other unstructured data sources, and to return results within minutes instead of days or weeks. None of the approaches outlined above comes close to meeting this requirement; in fact, they actually perpetuate information governance failures.

X1 Distributed Discovery (X1DD) represents a unique approach, enabling enterprises to quickly and easily search across multiple distributed endpoints and data servers from a central location. Legal and compliance teams can easily perform unified complex searches across both unstructured content and metadata, obtaining statistical insight into the data in minutes instead of days or weeks. With X1DD, organizations can also automatically migrate, collect, or take other action on the data as a result of the search parameters. Built on our award-winning and patented X1 Search technology, X1DD is the first product to offer true and massively scalable distributed searching, executed in its entirety on the end-node computers, for data audits across an organization. This game-changing capability vastly reduces costs while greatly mitigating risk and disruption to operations.
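
Conceptually, the distributed pattern looks something like the sketch below: a central coordinator fans a query out to agents on each endpoint, each agent searches its own machine, and only summary statistics travel back over the network. The "query_endpoint" stub stands in for whatever authenticated transport a real product would use; this illustrates the pattern, not X1DD’s actual internals.

from concurrent.futures import ThreadPoolExecutor

def query_endpoint(host, query):
    """Stub: ask the agent on `host` to run `query` locally and return stats.

    A real implementation would make an authenticated network call; the
    stubbed result keeps this sketch runnable.
    """
    return {"host": host, "hits": 0}

def distributed_search(endpoints, query, max_workers=50):
    """Fan the query out to every endpoint concurrently; aggregate the stats."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(lambda host: query_endpoint(host, query), endpoints))

# Hypothetical fleet of a thousand laptops, queried in parallel.
stats = distributed_search([f"laptop-{i:04d}" for i in range(1000)],
                           '"meal break" AND compensation')
print(f"Aggregated hit statistics from {len(stats)} endpoints")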

X1DD operates on demand where your data currently resides — on desktops, laptops, servers, or even the Cloud — without disruption to business operations and without requiring extensive or complex hardware configurations. Beyond enterprise eDiscovery and information governance functionality, organizations can at the same time offer employees the award-winning X1 Search, improving productivity while effectuating that all-too-elusive goal: actual compliance with information governance programs.

Filed under Best Practices, eDiscovery & Compliance, Information Governance, Information Management, Records Management, SharePoint, X1 Search 8

True Enterprise-Wide eDiscovery Collection is Finally Here

My previous post discussed the inability of any software provider to solve a critical need by delivering a truly scalable eDiscovery preservation and collection solution that can search across thousands of enterprise endpoints in a short period of time. In the absence of such a “holy grail” solution, eDiscovery collection remains dominated by either unsupervised custodian self-collection or manual services, driving up costs while increasing risk and disruption to business operations.

So today, we at X1 are excited to announce the release of X1 Distributed Discovery. X1 Distributed Discovery (X1DD) enables enterprises to quickly and easily search across up to tens of thousands of distributed endpoints and data servers from a central location.  Legal and compliance teams can easily perform unified complex searches across both unstructured content and metadata, obtaining statistical insight into the data in minutes, and full results with completed collection in hours, instead of days or weeks. Built on our award-winning and patented X1 Search technology, X1DD is the first product to offer true and massively scalable distributed data discovery across an organization. X1DD replaces expensive, cumbersome and highly disruptive approaches to meet enterprise discovery, preservation, and collection needs.

[Diagram: X1 Distributed Discovery (X1DD)]

Enterprise eDiscovery collection remains a significant pain point, subjecting organizations to both substantial cost and risk. X1DD addresses this challenge by beginning to show results from distributed data across global enterprises within minutes, instead of today’s standard of weeks or even months. This game-changing capability vastly reduces costs while greatly mitigating risk and disruption to operations.

Targeted and iterative endpoint search is a quantum leap in early data assessment, which is critical to legal counsel at the outset of any legal matter. Under today’s industry standard, however, the legal team is typically kept in the dark for weeks, if not months, as the manual identification and collection of distributed, unstructured data runs its expensive and inefficient course. To illustrate the power and capabilities of X1DD, imagine being able to perform multiple detailed Boolean keyword phrase searches with metadata filters across the targeted endpoints of your global enterprise. The results start returning in minutes, with granular statistical data about the responsive documents and emails associated with specific custodians or groups of custodians.
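
For illustration, such a search might pair a Boolean content expression with metadata filters along these lines. The structure and field names below are invented for the example; X1DD’s actual query interface may differ:

# Hypothetical unified query combining content and metadata criteria.
query = {
    "content": '("meal break" OR "rest period") AND compensation',
    "metadata": {
        "custodians": ["jdoe", "mroe"],           # specific users of interest
        "file_types": [".msg", ".docx", ".pdf"],  # email and documents
        "modified_after": "2015-01-01",           # date-range filter
    },
    "action": "report_statistics",  # see counts per custodian before collecting
}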

Once the legal team is satisfied with a specific search string, after sufficient iteration, the data can be collected by X1DD simply by hitting the “collect” button. The responsive data is “containerized” at each endpoint and automatically transmitted to a central location, where all of it is seamlessly indexed and ready for further culling and first-pass review. Importantly, all results are tied back to a specific custodian, with full chain of custody and preservation of all file metadata.
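
A highly simplified sketch of that “containerize and preserve” step appears below: hash each responsive file, record its custodian and filesystem metadata in a manifest, and pack everything into an archive for transmittal. A defensible collection tool does considerably more; the names and manifest fields here are illustrative only:

import hashlib
import json
import os
import zipfile

def containerize(paths, custodian, out_path):
    """Package responsive files with a chain-of-custody manifest."""
    manifest = []
    with zipfile.ZipFile(out_path, "w") as container:
        for path in paths:
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            stat = os.stat(path)
            manifest.append({
                "custodian": custodian,     # ties every result to a custodian
                "path": path,
                "sha256": digest,           # fixes the file content at collection
                "modified": stat.st_mtime,  # preserved file metadata
                "size": stat.st_size,
            })
            container.write(path)
        container.writestr("manifest.json", json.dumps(manifest, indent=2))
    return manifest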

The effort described above — from iterative distributed search through collection, transmittal to a central location, and indexing of data from thousands of endpoints — can be accomplished in a single day. Using manual consulting services, the same project would require several weeks and hundreds of thousands of dollars in collection costs alone, not to mention significant disruption to business operations. Substantial costs associated with over-collection of data would mount as well.

X1DD operates on demand where your data currently resides — on desktops, laptops, servers, or even the Cloud — without disruption to business operations and without requiring extensive or complex hardware configurations. Beyond enterprise eDiscovery and investigation functionality, organizations can offer employees the award-winning X1 Search, improving productivity while maintaining compliance.

X1DD will be featured in an April 19 webinar with eDiscovery expert Erik Laykin of Duff & Phelps. Watch a full briefing and technical demo of X1DD and find out for yourself why X1 Distributed Discovery is a game-changing solution. Or please contact us to arrange for a private demo.

Filed under Best Practices, Corporations, Desktop Search, eDiscovery, eDiscovery & Compliance, Enterprise eDiscovery, Information Governance, Information Management, Preservation & Collection, X1 Search 8

Enterprise eDiscovery Collection Remains Costly and Inefficient

2016 marks my sixteenth year as a senior executive in the eDiscovery business. I began my career as a co-founder at Guidance Software (EnCase), serving as General Counsel, CEO and then Vice Chairman and Chief Strategy Officer from 1999 through 2009. After becoming the dominant solution for computer forensics in the early part of the last decade, Guidance set out to define a new field — enterprise discovery collection. Despite a good foundational concept, a truly scalable solution that could search across hundreds, or even thousands, of enterprise endpoints in a short period of time never came to fruition. To date, no other eDiscovery vendor has delivered on the promise of such a “holy grail” solution either. As a result, eDiscovery collection remains dominated by either unsupervised custodian self-collection, or manual services.

Organizations employ limited technical approaches in an effort to get by, and thus enterprise eDiscovery collection remains a significant pain point, subjecting organizations to both substantial cost and risk. This post is the first in a two-part series on the broken state of the enterprise eDiscovery collection process. Part two will outline a proposed solution.

Currently, enterprises employ four general approaches to eDiscovery collection: two involving mostly manual methodologies, and two that are predominantly technology-based. Each of the four methods is fraught with inefficiencies and challenges.

The first, and likely most common, approach is custodian self-collection, where custodians are sent manual instructions to search, review and upload data that they subjectively determine to be responsive to a matter. This method is plagued with severe defensibility concerns, with several courts disapproving of the practice due to poor compliance, altered metadata, and inconsistent results. See Green v. Blitz U.S.A., Inc., 2011 WL 806011 (E.D. Tex. Mar. 1, 2011); Nat’l Day Laborer Org. v. U.S. Immigration and Customs Enforcement Agency, 2012 WL 2878130 (S.D.N.Y. July 13, 2012).

The second approach is manual services, usually performed by eDiscovery consultants. This method is expensive, disruptive and time-consuming, as an “overkill” forensic imaging process is often employed. It also often results in over-collection, since the collector typically gets only one bite at the apple, which drives up eDiscovery costs further. While attorney review and processing represent the bulk of eDiscovery costs, much of that expense stems from over-collection, and thus can be mitigated with a smarter and more efficient process.

When it comes to technical approaches, endpoint forensic crawling methods are employed on a limited basis. While this can be feasible for a small number of custodians, network bandwidth constraints, coupled with the requirement to migrate all endpoint data back to the forensic crawling tool, render the approach ineffective. For example, to search a custodian’s laptop with 10 gigabytes of email and documents, all 10 gigabytes must be copied and transmitted over the network, where the data is then searched, all of which takes at least several hours per computer. So most organizations choose to force-collect all 10 gigabytes. The case of U.S. ex rel. McBride v. Halliburton Co., 272 F.R.D. 235 (D.D.C. 2011), illustrates this specific pain point well. In McBride, Magistrate Judge John Facciola’s instructive opinion outlines Halliburton’s eDiscovery struggles to collect and process data from remote locations:

“Since the defendants employ persons overseas, this data collection may have to be shipped to the United States, or sent by network connections with finite capacity, which may require several days just to copy and transmit the data from a single custodian . . . (Halliburton) estimates that each custodian averages 15–20 gigabytes of data, and collection can take two to ten days per custodian. The data must then be processed to be rendered searchable by the review tool being used, a process that can overwhelm the computer’s capacity and require that the data be processed by batch, as opposed to all at once.”

Halliburton represented to the court that it had spent hundreds of thousands of dollars on eDiscovery for only a few dozen remotely located custodians. Those costs were driven by the need to force-collect each remote custodian’s entire data set and then sort it out in the expensive processing phase, instead of culling, filtering and searching the data at the point of collection.
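
The arithmetic behind that pain point is simple. Assuming the opinion’s midpoint of roughly 17.5 gigabytes per custodian and a hypothetical 2 Mbps of effective overseas link capacity (an assumption for illustration, not a figure from the case), raw transfer time alone approaches a full day per custodian:

# Rough transfer-time arithmetic for remote collection. Both inputs are
# illustrative: 17.5 GB is the midpoint of the 15-20 GB per-custodian
# estimate in McBride; the 2 Mbps effective link speed is an assumption.
custodian_gb = 17.5
link_mbps = 2

megabits = custodian_gb * 8 * 1000
hours = megabits / link_mbps / 3600
print(f"~{hours:.1f} hours to transmit one custodian's data at {link_mbps} Mbps")
# Roughly 19 hours of continuous transfer, before processing, retries, or
# link contention, which makes the two-to-ten-days-per-custodian figure
# in the opinion easy to believe.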

And finally, another tactic attempted by some CIOs to address this daunting challenge is to periodically migrate disparate data from around the global enterprise into a central location. This quixotic endeavor is perceived as necessary because traditional information management and electronic discovery tools are not architected to address large and disparate volumes of data located in hundreds of offices and work sites across the globe. But boiling the ocean through data migration and centralization is extremely expensive, disruptive and frankly unworkable.

What has always been needed is immediate visibility into unstructured, distributed data across the enterprise: the ability to search and collect across several hundred endpoints and other unstructured data sources, such as file shares and SharePoint, and to return results within minutes instead of days or weeks. None of the four approaches outlined above comes close to meeting this requirement; in fact, they actually perpetuate eDiscovery pain.

Is there a fifth option? Stay tuned for my next post coming soon.

Filed under Best Practices, Case Law, eDiscovery, eDiscovery & Compliance, Enterprise eDiscovery, Information Governance, Information Management, Preservation & Collection