Category Archives: eDiscovery & Compliance

New Federal Rule of Evidence to Directly Impact Computer Forensics and eDiscovery Preservation Best Practices

At X1, an essential component of our mission is to develop and support exceptional technology for collecting electronic evidence to meet eDiscovery, investigative and compliance requirements. It is also our goal to keep you abreast of important developments in the industry that could ultimately impact collection strategies in the future and, consequently, your business.  To that end, I recently learned about a crucial new legal development scheduled to take place on December 1, 2017, which we believe will have a very significant impact on the practices of our customers and partners.

In a nutshell, the new development is a significant planned amendment to Federal Rule of Evidence 902 that will go into effect one year from now. This amendment, in the form of new subsection (14), is anticipated by the legal community to significantly impact eDiscovery and computer forensics software and its use by establishing that electronic data recovered “by a process of digital identification” is to be self-authenticating, thereby not routinely necessitating the trial testimony of a forensic or technical expert where best practices are employed, as certified through a written affidavit by a “qualified person.” Notably, the accompanying official Advisory Committee notes specifically reference the importance of both generating “hash values” and verifying them post-collection as a means to meet this standard for self-authentication. This digital identification and verification process can only be achieved with purpose-built computer forensics or eDiscovery collection and preservation tools.

Rule 902, in its current form, enumerates a variety of documents that are presumed to be self-authenticating without other evidence of authenticity. These include public records and other government documents, notarized documents, newspapers and periodicals, and records kept in the ordinary course of business. New subpart (14) will now include electronic data collected via a process of digital identification as a key addition to this important rule.

Amended Rule 902, in pertinent part, reads as follows:

Rule 902. Evidence That Is Self-Authenticating
The following items of evidence are self-authenticating; they require no extrinsic evidence of authenticity in order to be admitted:
* * *
(14) Certified Data Copied from an Electronic Device, Storage Medium, or File.
Data copied from an electronic device, storage medium, or file, if authenticated by a process of digital identification, as shown by a certification of a qualified person that complies with the certification requirements of Rule 902(11) or (12).

The reference to the “certification requirements of Rule 902(11) or (12)” is a process by which a proponent seeking to introduce electronic data into evidence must present a certification in the form of a written affidavit that would be sufficient to establish authenticity were that information provided by a witness at trial. This affidavit must be provided by a “qualified person,” which generally would be a computer forensics, eDiscovery or information technology practitioner, who collected the evidence and can attest to the requisite process of digital identification utilized.

In applying Rule 902(14), the courts will rely heavily on the accompanying Judicial Conference Advisory Committee notes, which provide guidance and insight into the intent of the rules and how they should be applied. The Advisory Committee notes are published alongside the rule and are essentially considered an extension of it. The second paragraph of the committee note to Rule 902(14) states, in its entirety, as follows:

“Today, data copied from electronic devices, storage media, and electronic files are ordinarily authenticated by ‘hash value.’ A hash value is a number that is often represented as a sequence of characters and is produced by an algorithm based upon the digital contents of a drive, medium, or file. If the hash values for the original and copy are different, then the copy is not identical to the original. If the hash values for the original and copy are the same, it is highly improbable that the original and copy are not identical. Thus, identical hash values for the original and copy reliably attest to the fact that they are exact duplicates. This amendment allows self-authentication by a certification of a qualified person that she checked the hash value of the proffered item and that it was identical to the original. The rule is flexible enough to allow certifications through processes other than comparison of hash value, including by other reliable means of identification provided by future technology.”
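
To make the hash-value comparison concrete, here is a minimal sketch, in Python, of generating a SHA-256 hash at the time of collection and re-computing it on the copy to confirm the two are exact duplicates. The file paths are hypothetical and the snippet is illustrative only; it is not a depiction of how any particular forensic or eDiscovery tool implements the process.

```python
import hashlib

def sha256_of_file(path, chunk_size=1024 * 1024):
    """Compute the SHA-256 hash of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical paths: hash the original at collection time, then
# re-hash the working copy to verify it is an exact duplicate.
original_hash = sha256_of_file("/evidence/source/custodian_mailbox.pst")
copy_hash = sha256_of_file("/evidence/copies/custodian_mailbox.pst")

if original_hash == copy_hash:
    print("Hash values match -- the copy is an exact duplicate:", copy_hash)
else:
    print("Hash mismatch -- the copy is not identical to the original.")
```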

The Advisory Committee notes further state that Rule 902(14) is designed to streamline the admission of electronic evidence where its foundation is not at issue, while providing a notice procedure where “the parties can determine in advance of trial whether a real challenge to authenticity will be made, and can then plan accordingly.” While this rule provides that properly certified electronic data is now afforded a strong presumption of authenticity, the opponent may still lodge an objection, but the opponent now has the burden to overcome that presumption.  Additionally, the opponent remains free to object to admissibility on other grounds, such as relevance or hearsay.

Significant Impact Expected

While Rule 902(14) applies to the federal courts, the rules of evidence in most states either mirror or closely resemble the Federal Rules of Evidence, and it is thus expected that most if not all 50 states will soon adopt this amendment.

Rule 902(14) will most certainly and significantly impact computer forensics and eDiscovery practitioners by reinforcing best practices. The written certification required by Rule 902(14) must be provided by a “qualified person” who utilized best practices for the collection, preservation and verification of the digital evidence sought to be admitted. At the same time, this rule will in effect call into question electronic evidence collection methods that do not enable a defensible “digital identification” and verification process. In fact, the Advisory Committee notes specifically reference the importance of computer forensics experts, noting that a “challenge to the authenticity of electronic evidence may require technical information about the system or process at issue, including possibly retaining a forensic technical expert.”

In the eDiscovery context, I have previously highlighted the perils of both custodian self-collection for enterprise ESI collection and “print screen” methods for social media and website preservation. Rule 902(14) should provide the final nail in the coffin for those practices. For instance, if key social media evidence is collected through a manual print screen, which is not a “process of digital identification” under Rule 902(14), then not only will the proponent of that evidence fail to take advantage of the efficiencies and cost savings provided by the rule, it will also invite heightened scrutiny for not preserving the evidence using best practices. The same is true for custodian self-collection in the enterprise. Many of the emails and other electronic documents preserved and disclosed by a producing party are favorable to its own case. Without best practices for enterprise data collection, such as with X1 Distributed Discovery, that information may not be deemed self-authenticating under this new rule.

In the law enforcement field, untrained patrol officers or field investigators too often collect electronic evidence in a manual and haphazard fashion, without the right tools that qualify as a “process of digital identification.” For example, if an untrained investigator captures a web page via the computer’s print screen function, that printout will not be deemed self-authenticating under Rule 902(14) and will face significant evidentiary hurdles compared to a web page properly collected with a solution such as X1 Social Discovery.
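
To illustrate what a “process of digital identification” adds over a screenshot, the sketch below captures a page’s raw content and records a hash of that content along with the collection time. This is a simplified illustration under stated assumptions (the Python requests library and an example URL); it is not how X1 Social Discovery itself works.

```python
import hashlib
import json
from datetime import datetime, timezone

import requests  # assumed available (pip install requests)

def capture_page(url):
    """Fetch a web page and record its content hash and collection time."""
    response = requests.get(url, timeout=30)
    content = response.content
    record = {
        "url": url,
        "collected_at_utc": datetime.now(timezone.utc).isoformat(),
        "http_status": response.status_code,
        "sha256": hashlib.sha256(content).hexdigest(),
        "size_bytes": len(content),
    }
    return content, record

# The URL below is hypothetical.
html, record = capture_page("https://example.com/profile/post/12345")
print(json.dumps(record, indent=2))
```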

Also being added to Federal Rule of Evidence 902 is subpart (13), which provides that “a record generated by an electronic process or system that produces an accurate result” is similarly self-authenticating. This subpart will also have a beneficial impact on the computer forensics and eDiscovery field, but to a lesser degree than subpart (14). I will be addressing Rule 902(13) in a future post. The public comment period on amendments (13) and (14) is now closed and the Judicial Conference of the United States has issued its final approval. The amendments are currently under review by the US Supreme Court. If the Supreme Court approves these amendments as expected, they will become effective on December 1, 2017 absent Congressional intervention.

To learn more about Rule 902(14) and related topics, we invite you to watch this 45-minute webinar discussion led by David Cohen, Partner and Chair of the Records & eDiscovery Group at Reed Smith LLP. The webinar includes a Q&A session following the discussion. We look forward to your participation.

Watch now > 

Filed under Authentication, Best Practices, eDiscovery, eDiscovery & Compliance, Enterprise eDiscovery, Information Governance, Social Media Investigations

Effective Information Governance Requires Effective Enterprise Technology

Information governance is the compilation of policies, processes, and controls enforced and executed with effective technology to manage electronically stored information throughout the enterprise. Leading IT industry research firm Gartner states that “the goal of information governance is to ensure compliance with laws and regulations, mitigate risks and protect the confidentiality of sensitive company and customer data.” A strong, proactive information governance strategy that strikes the balance between under-retention and over-retention of information can provide dramatic cost savings while significantly reducing risk.

However, while policies, procedures and documentation are important, information governance programs are ultimately hollow without consistent, operational execution and enforcement. CIOs and legal and compliance executives often aspire to implement information governance programs such as defensible deletion, data migration, and data audits to detect risks and remediate non-compliance. Without an actual and scalable technology platform to effectuate these goals, however, those aspirations remain just that. For instance, recent IDG research suggests that approximately 70% of the information stored by companies is “dark data”: unstructured, distributed data that can pose significant legal and operational risk and cost.

To date, organizations have employed limited technical approaches to execute on their information governance initiatives, and have struggled as a result. For instance, software agent-based crawling methods are commonly attempted, but every search drives up resource utilization on users’ computers and pushes network bandwidth to its limits, rendering the approach ineffective. Searching and auditing across even several hundred distributed endpoints in a quick, repeatable fashion is effectively impossible under this approach.

Another tactic some CIOs attempt in order to address this daunting challenge is to periodically migrate disparate data from around the global enterprise into a central location. Even then, end users’ computers still need to be scanned, because there is never a moment when every user in the enterprise has just finished migrating and no new data has been created. Both the central repository and the endpoints must therefore be searched, increasing the complexity and management burden of the job. Boiling the ocean through data migration and centralization is extremely expensive, highly disruptive, and frankly unworkable, as it never removes the need for constant local computer searching, again through problematic crawling methods.

What has always been needed is immediate visibility into unstructured, distributed data across the enterprise: the ability to search and report across several thousand endpoints and other unstructured data sources and return results within minutes instead of days or weeks. None of the approaches outlined above comes close to meeting this requirement; in fact, they actually perpetuate information governance failures.

X1 Distributed Discovery (X1DD) represents a unique approach, by enabling enterprises to quickly and easily search across multiple distributed endpoints and data servers from a central location.  Legal and compliance teams can easily perform unified complex searches across both unstructured content and metadata, obtaining statistical insight into the data in minutes, instead of days or weeks. With X1DD, organizations can also automatically migrate, collect, or take other action on the data as a result of the search parameters.  Built on our award-winning and patented X1 Search technology, X1DD is the first product to offer true and massively scalable distributed searching that is executed in its entirety on the end-node computers for data audits across an organization. This game-changing capability vastly reduces costs while greatly mitigating risk and disruption to operations.

X1DD operates on-demand where your data currently resides — on desktops, laptops, servers, or even the Cloud — without disrupting business operations and without requiring extensive or complex hardware configurations. Beyond enterprise eDiscovery and information governance functionality, organizations can at the same time offer employees the award-winning X1 Search, improving productivity while achieving the all too elusive goal of actual compliance with information governance programs.

Filed under Best Practices, eDiscovery & Compliance, Information Governance, Information Management, Records Management, SharePoint, X1 Search 8

True Enterprise-Wide eDiscovery Collection is Finally Here

My previous post discussed the inability of any software provider to solve a critical need by delivering a truly scalable eDiscovery preservation and collection solution that can search across thousands of enterprise endpoints in a short period of time. In the absence of such a “holy grail” solution, eDiscovery collection remains dominated by either unsupervised custodian self-collection or manual services, driving up costs while increasing risk and disruption to business operations.

So today, we at X1 are excited to announce the release of X1 Distributed Discovery. X1 Distributed Discovery (X1DD) enables enterprises to quickly and easily search across up to tens of thousands of distributed endpoints and data servers from a central location.  Legal and compliance teams can easily perform unified complex searches across both unstructured content and metadata, obtaining statistical insight into the data in minutes, and full results with completed collection in hours, instead of days or weeks. Built on our award-winning and patented X1 Search technology, X1DD is the first product to offer true and massively scalable distributed data discovery across an organization. X1DD replaces expensive, cumbersome and highly disruptive approaches to meet enterprise discovery, preservation, and collection needs.

[Diagram: X1 Distributed Discovery]

Enterprise eDiscovery collection remains a significant pain point, subjecting organizations to both substantial cost and risk. X1DD addresses this challenge by starting to show results from distributed data across global enterprises within minutes instead of today’s standard of weeks, and even months. This game-changing capability vastly reduces costs while greatly mitigating risk and disruption to operations.

Targeted and iterative end point search is a quantum leap in early data assessment, which is critical to legal counsel at the outset of any legal matter. However, under today’s industry standard, the legal team is typically kept in the dark for weeks, if not months, as the manual identification and collection process of distributed, unstructured data runs its expensive and inefficient course.  To illustrate the power and capabilities of X1DD, imagine being able to perform multiple detailed Boolean keyword phrase searches with metadata filters across the targeted end points of your global enterprise. The results start returning in minutes, with granular statistical data about the responsive documents and emails associated with specific custodians or groups of custodians.

Once the legal team is satisfied with a specific search string, after sufficient iteration, the data can then be collected by X1DD by simply hitting the “collect” button. The responsive data is “containerized” at each end point and automatically transmitted to a central location, where all data is seamlessly indexed and ready for further culling and first pass review. Importantly, all results are tied back to a specific custodian, with full chain of custody and preservation of all file metadata.
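
As a rough illustration of the kind of record that ties collected items back to a custodian with verifiable integrity, the sketch below packages responsive files into a container and writes a manifest of per-file hashes and metadata. The field names, paths, and container format are hypothetical; this is not a description of X1DD’s actual implementation.

```python
import hashlib
import json
import os
import zipfile
from datetime import datetime, timezone

def collect_to_container(custodian, file_paths, container_path):
    """Zip responsive files and record a manifest of hashes and metadata."""
    manifest = {
        "custodian": custodian,
        "collected_at_utc": datetime.now(timezone.utc).isoformat(),
        "items": [],
    }
    with zipfile.ZipFile(container_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in file_paths:
            with open(path, "rb") as f:
                data = f.read()
            stat = os.stat(path)
            manifest["items"].append({
                "source_path": path,
                "sha256": hashlib.sha256(data).hexdigest(),
                "size_bytes": stat.st_size,
                "modified_utc": datetime.fromtimestamp(
                    stat.st_mtime, tz=timezone.utc).isoformat(),
            })
            zf.writestr(os.path.basename(path), data)
        zf.writestr("manifest.json", json.dumps(manifest, indent=2))
    return manifest

# Hypothetical custodian and file list.
collect_to_container("jsmith", ["/data/jsmith/report.docx"], "jsmith_collection.zip")
```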

The effort described above — from iterative distributed search through collection, transmittal to a central location, and indexing of data from thousands of endpoints — can be accomplished in a single day. Using manual consulting services, the same project would require several weeks and hundreds of thousands of dollars in collection costs alone, not to mention significant disruption to business operations. Substantial costs associated with over-collection of data would mount as well.

X1DD operates on-demand where your data currently resides — on desktops, laptops, servers, or even the Cloud — without disruption to business operations and without requiring extensive or complex hardware configurations. Beyond enterprise eDiscovery and investigation functionality, organizations can offer employees the award-winning X1 Search, improving productivity while maintaining compliance.

X1DD will be featured in an April 19 webinar with eDiscovery expert Erik Laykin of Duff & Phelps. Watch a full briefing and technical demo of X1DD and find out for yourself why X1 Distributed Discovery is a game-changing solution. Or please contact us to arrange for a private demo.

Filed under Best Practices, Corporations, Desktop Search, eDiscovery, eDiscovery & Compliance, Enterprise eDiscovery, Information Governance, Information Management, Preservation & Collection, X1 Search 8

Enterprise eDiscovery Collection Remains Costly and Inefficient

2016 marks my sixteenth year as a senior executive in the eDiscovery business. I began my career as a co-founder at Guidance Software (EnCase), serving as General Counsel, CEO and then Vice Chairman and Chief Strategy Officer from 1999 through 2009. After becoming the dominant solution for computer forensics in the early part of the last decade, Guidance set out to define a new field — enterprise discovery collection. Despite a good foundational concept, a truly scalable solution that could search across hundreds, or even thousands, of enterprise endpoints in a short period of time never came to fruition. To date, no other eDiscovery vendor has delivered on the promise of such a “holy grail” solution either. As a result, eDiscovery collection remains dominated by either unsupervised custodian self-collection, or manual services.

Organizations employ limited technical approaches in an effort to get by, and thus enterprise eDiscovery collection remains a significant pain point, subjecting organizations to both significant cost and risk. This post is the first in a two-part series on the broken state of enterprise eDiscovery collection. Part two will outline a proposed solution.

Currently, enterprises employ four general approaches to eDiscovery collection, two of them mostly manual methodologies and the other two predominantly technology-based. Each of the four methods is fraught with inefficiencies and challenges.

The first, and likely most common, approach is custodian self-collection, where custodians are sent manual instructions to search, review and upload data that they subjectively determine to be responsive to a matter. This method is plagued by severe defensibility concerns, with several courts disapproving of the practice due to poor compliance, altered metadata, and inconsistent results. See Green v. Blitz, 2011 WL 806011 (E.D. Tex. Mar. 1, 2011); Nat’l Day Laborer Org. v. U.S. Immigration and Customs Enforcement Agency, 2012 WL 2878130 (S.D.N.Y. July 13, 2012).

The second approach is manual services, usually performed by eDiscovery consultants. This method is expensive, disruptive and time-consuming, as an “overkill” forensic imaging process is often employed. It also frequently results in over-collection, since the collector typically gets only one bite at the apple, thereby driving up eDiscovery costs. While attorney review and processing represent the bulk of eDiscovery costs, much of that expense stems from over-collection and can therefore be mitigated with a smarter, more efficient process.

When it comes to technical approaches, endpoint forensic crawling methods are employed on a limited basis. While this can be feasible for a small number of custodians, network bandwidth constraints, coupled with the requirement to migrate all endpoint data back to the forensic crawling tool, render the approach ineffective. For example, to search a custodian’s laptop containing 10 gigabytes of email and documents, all 10 gigabytes must be copied and transmitted over the network, where the data is then searched, a process that takes at least several hours per computer. So most organizations choose to force-collect all 10 gigabytes. U.S. ex rel. McBride v. Halliburton Co., 272 F.R.D. 235 (2011), illustrates this specific pain point well. In McBride, Magistrate Judge John Facciola’s instructive opinion outlines Halliburton’s eDiscovery struggles to collect and process data from remote locations:

“Since the defendants employ persons overseas, this data collection may have to be shipped to the United States, or sent by network connections with finite capacity, which may require several days just to copy and transmit the data from a single custodian . . . (Halliburton) estimates that each custodian averages 15–20 gigabytes of data, and collection can take two to ten days per custodian. The data must then be processed to be rendered searchable by the review tool being used, a process that can overwhelm the computer’s capacity and require that the data be processed by batch, as opposed to all at once.”

Halliburton represented to the court that it spent hundreds of thousands of dollars on eDiscovery for only a few dozen remotely located custodians. The need to force-collect each remote custodian’s entire data set and then sort it out during the expensive eDiscovery processing phase, instead of culling, filtering and searching the data at the point of collection, drove up the costs.
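
A rough back-of-the-envelope calculation shows why copying everything over the wire is so slow. The 10-gigabyte figure comes from the laptop example above; the assumed effective WAN throughput of 10 Mbps is illustrative only and will vary widely by site.

```python
# Rough estimate: time to copy a 10 GB custodian data set over a WAN link.
# The 10 Mbps effective throughput is an assumption for illustration only.
size_gb = 10
effective_mbps = 10

size_megabits = size_gb * 8 * 1000           # 10 GB ~= 80,000 megabits
seconds = size_megabits / effective_mbps     # ~8,000 seconds
print(f"~{seconds / 3600:.1f} hours per custodian")  # roughly 2.2 hours
```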

Finally, another tactic some CIOs attempt in order to address this daunting challenge is to periodically migrate disparate data from around the global enterprise into a central location. This quixotic endeavor is perceived as necessary because traditional information management and electronic discovery tools are not architected to address large and disparate volumes of data located in hundreds of offices and work sites across the globe. But boiling the ocean through data migration and centralization is extremely expensive, disruptive and frankly unworkable.

What has always been needed is immediate visibility into unstructured, distributed data across the enterprise: the ability to search and collect across several hundred endpoints and other unstructured data sources, such as file shares and SharePoint, and return results within minutes instead of days or weeks. None of the four approaches outlined above comes close to meeting this requirement; in fact, they actually perpetuate eDiscovery pain.

Is there a fifth option? Stay tuned for my next post coming soon.

Filed under Best Practices, Case Law, eDiscovery, eDiscovery & Compliance, Enterprise eDiscovery, Information Governance, Information Management, Preservation & Collection

Amazon Re:Invent – With the Cloud, Avoid Mistakes of the Past

Last week, I had the opportunity to attend the Amazon Re:Invent conference in Las Vegas. Over 13,000 people took over the Palazzo for deep dive technical sessions to learn how to harness the power of Amazon Web Services (AWS). This show had a much different energy than other enterprise software conferences, such as VMworld. Whereas most conferences feature a great deal of selling and marketing by the host, Amazon Re:Invent was truly more of a training show. Cloud architects spent a lot of time in technical bootcamps learning how AWS works and getting certified as administrators.

That is not to say that there was no selling or marketing going on; the exhibition hall featured myriad vendors that augment or assist with AWS deployments and solutions. The focus on the deep technical details, though, does point out the fact that we are still in the very early days of the cloud. Most of the focus of the keynotes was about getting compute workloads to the cloud – there was not a lot of mention of moving actual data to the cloud, even though that is certainly beginning to happen.  But, that is how the evolution goes. IT departments need to be comfortable moving workloads to the cloud as they begin to leverage the cloud. Building this foundation is also important to Amazon – the goal would be for many companies to completely outsource the IT data center.

It is important, however, to proactively plan for information management as more workloads and, importantly, more data move to the cloud. As the internet first emerged, companies dove into new technologies like email and network file shares only to create eDiscovery nightmares and make it virtually impossible to find information within digital landfills. It is key to learn from those mistakes rather than repeat them when leveraging cloud-based technologies, in order to ensure both that end users are happy with their search experience on data in the cloud and that Legal can do what it needs to do from an eDiscovery standpoint. This means providing business workers with unified access to email, files, and SharePoint information regardless of where the data lives. It also means giving Legal teams fast search queries and collections. But cloud search is often slow, because indexes live far from the information, and the result is frustrated workers and Legal teams afraid that eDiscovery cannot be completed in time.

If a customer wanted to speed up search, it would essentially have to attach an appliance to a hot-air balloon and send it up to the Cloud provider, so that the customer’s index could live on that appliance (or farm of appliances) in the Cloud provider’s data center, physically near the data. There are many reasons, however, why a Cloud provider would not allow a customer to do that:

  • Long installation process
  • Challenging prerequisites
  • Third-party installation concerns
  • Physical access restrictions
  • Specific hardware requirements
  • Appliances scale only vertically

The solution to faster search is a cloud-deployable search application, such as X1 Rapid Discovery. This creates a win-win for Cloud providers and customers alike. As enterprises move more and more information to the Cloud, it will be important to think about workers’ experiences with Cloud systems – and search is one of those user experiences that, when it is a bad one, can seriously undermine a project and cause user revolt. eDiscovery is also a major concern – I’ve worked with organizations that moved data to the cloud before planning how they would handle eDiscovery, leaving Legal teams to clean up the messes or, more realistically, just deal with them. By thinking about these issues before moving data to the cloud, it is possible to avoid these painful outcomes and leverage the cloud without headaches. At X1, we look forward to working closely with Amazon to help customers have the search and eDiscovery solutions they need as more and more data goes to AWS.

Filed under Cloud Data, eDiscovery & Compliance, Enterprise eDiscovery, Enterprise Search, Hybrid Search, Information Access, Information Governance, Information Management