
Changing the Game for Rule 26(f) Meet and Confer Efforts with Pre-Collection Early Data Assessment

One of the most important provisions of the Federal Rules of Civil Procedure that impact eDiscovery is Rule 26(f), which requires the parties’ counsel to “meet and confer” in advance of the pre-trial scheduling conference on key discovery matters, including the preservation, disclosure and exchange of potentially relevant electronically stored information (ESI). With the risks and costs associated with eDiscovery, this early meeting of counsel is a critically important means to manage and control the cost of eDiscovery, and to prevent the failure to preserve relevant ESI.

A key authority on the Rule 26(f) eDiscovery topics to be addressed is the “Suggested Protocol for Discovery of Electronically Stored Information,” developed by Magistrate Judge Paul W. Grimm and his joint bar-court committee. Under Section 8 of the protocol, the topics to be discussed at the Rule 26(f) conference include “search methodologies for retrieving or reviewing ESI such as identification of the systems to be searched”; “the use of key word searches, with an agreement on the words or terms to be searched”; and “limitations on the time frame of ESI to be searched; limitations on the fields or document types to be searched.”

However, Rule 26(f) conferences occur early in the litigation, typically within weeks of the case’s filing. As such, attorneys representing enterprises are essentially flying blind at this pre-collection stage, without any real visibility into the potentially relevant ESI across an organization. This is especially true of unstructured, distributed data, which invariably constitutes the majority of ESI ultimately collected in a given matter.

Ideally, an effective early data assessment (EDA) capability can enable counsel to set reasonable discovery limits and ultimately process, host, review and produce less ESI.  Counsel can further use EDA to gather key information, develop a litigation budget, and better manage litigation deadlines. EDA also can foster cooperation and proportionality in discovery by informing the parties early in the process about where relevant ESI is located and what ESI is significant to the case.

The problem is that any keyword protocol is mostly guesswork at this early stage of litigation, because under current eDiscovery practices the costly and time-consuming step of actual data collection must occur before pre-processing EDA can take place. When eDiscovery practitioners talk about EDA, they are invariably describing a post-collection, pre-review process. Without the requisite pre-collection visibility into distributed ESI, counsel typically resort to directing broad collection efforts, resulting in much greater cost, burden and delay.

What is clearly needed is the ability to perform early data assessment before, rather than after, the costly, time-consuming and disruptive collection phase. X1 Distributed Discovery (X1DD) offers a game-changing new approach to the traditional eDiscovery model. X1DD enables enterprises to quickly and easily search across thousands of distributed endpoints from a central location. Organizations can run unified, complex searches across content, metadata, or both, and obtain full results in minutes, enabling true pre-collection EDA with live keyword analysis while distributed processing and collection proceed in parallel at the custodian level. This shortens the identification and collection process by weeks if not months, curtails processing and review costs by avoiding over-collection, and gives the legal team confidence through a highly transparent, consistent and systemized process.

A recent webinar featuring Duff & Phelps Managing Director and 20-year eDiscovery and computer forensics veteran Erik Laykin included a live demonstration of X1DD searching across 20 distributed endpoints in a matter of seconds. In reaction to this demonstration, Laykin commented: “the ability to instantaneously search for keywords across the enterprise for a small or large group of custodians is in its own right a killer application. This particular feature gives you instantaneous answers to one of the key questions folks have been wrestling with for quite some time.”

You can now view a recording of last month’s webinar: eDiscovery Collection: Existing Challenges and a Game Changing Solution, which features an overview of the existing broken state of enterprise eDiscovery collection, culminating with a demonstration of X1 Distributed Discovery. The recorded demo will help illustrate how pre-collection EDA can greatly strengthen counsel’s approach to eDiscovery collection and meet and confer processes.



The Challenge of Defensible Deletion of Distributed Legacy Data

According to industry studies, it is common for companies to preserve over 250,000 pages and manually review over 1,000 pages for every page produced in discovery. However, when companies cull down their information through systematic execution of a defensible retention schedule, they dramatically reduce the costs and risks of discovery and greatly improve operational effectiveness. The challenge is to operationalize existing information retention and management policies in an automated, scalable and accurate manner, especially for legacy data that exists in many different information silos across larger organizations that face frequent litigation.

This is much easier said than done. Nearly all archiving and information management systems are built on a centralization model, in which all the data to be searched, categorized and managed must be migrated to a central location. This works for some email archives and traditional business records, but it does not address the huge challenge of legacy data and other information “in the wild.” As leading information management consulting firm Jordan Lawrence pointed out in our recent webinar, organizations cannot be expected to radically change how they conduct business by centralizing their data in order to meet information governance requirements. Knowledge workers typically create, collaborate on and access information in their group and department silos, which are decentralized across large enterprises. Forcing centralization on these many pockets of productivity is highly disruptive and rarely effective, due to scalability, network bandwidth and other logistical challenges.

This leaves the reality that for any information remediation process to be effective, it must be executed within these departmentalized information silos. This past week, X1 Discovery, in conjunction with our partner Jordan Lawrence, presented a live webinar offering a compelling solution to this challenge. Jordan Lawrence has over 25 years’ experience in the records management field, providing best practices, metrics and deep insight into the location, movement, access and retention of sensitive and personal information within the enterprise to over 1,000 clients.

In the webinar, we presented a comprehensive approach that companies can implement in a non-disruptive fashion to reduce the storage costs and legal risks associated with the retention of electronically stored information (ESI). Guest speaker attorney and former Halliburton senior counsel Ron Perkowski noted that organizations can avoid court sanctions while at the same time eliminating ESI that has little or no business value through a systematic and defensible process, citing Federal Rule of Civil Procedure 37(e) (the so-called “Safe Harbor” rule) and the case of FTC v. Lights of America (C.D. Cal. Jan. 2012).

Both Ron Perkowski and Jordan Lawrence EVP Marty Provin commented that X1 Rapid Discovery represents game-changing technology for remediating distributed legacy data, due to its ability to install on demand virtually anywhere in the enterprise, including remote data silos, its light-footprint web browser access, and its intuitive interface. X1 Rapid Discovery enables effective assessment, reporting, categorization, migration and remediation of distributed information assets by accessing, searching and managing the subject data in place, without the need for migration to an appliance or a central repository.

The recording of the free webinar is now available here.

