
The Global De-Centralized Enterprise: An Un-Met eDiscovery Challenge

Enterprises with data spread across a multitude of segmented networks in North America and the rest of the world face unique eDiscovery and compliance-investigation challenges. In particular, the wide area networks of large project engineering, oil & gas, and systems integration firms typically contain terabytes of geographically disparate information assets, often in harsh operating environments with very limited network bandwidth. Information management and eDiscovery tools that require data centralization, or that run on expensive and inflexible hardware appliances, cannot by their very nature address critical project information in places like Saudi Arabia, China, or the Alaskan North Slope.

Despite vendor marketing hype, network bandwidth constraints, coupled with the requirement to migrate data to a single repository, render traditional information management and eDiscovery tools ineffective at addressing decentralized global enterprise data. As such, the globally decentralized enterprise represents a major gap for in-house eDiscovery processes, resulting in significant expense and inefficiency. The case of U.S. ex rel. McBride v. Halliburton Co. [1] illustrates this pain point well. In McBride, Magistrate Judge John Facciola’s instructive opinion outlines Halliburton’s eDiscovery struggles to collect and process data from remote locations:

Since the defendants employ persons overseas, this data collection may have to be shipped to the United States, or sent by network connections with finite capacity, which may require several days just to copy and transmit the data from a single custodian . . . (Halliburton) estimates that each custodian averages 15–20 gigabytes of data, and collection can take two to ten days per custodian. The data must then be processed to be rendered searchable by the review tool being used, a process that can overwhelm the computer’s capacity and require that the data be processed by batch, as opposed to all at once. [2]

Halliburton represented to the court that it spent hundreds of thousands of dollars on eDiscovery for only a few dozen remotely located custodians. Costs were driven up by the need to collect each remote custodian’s entire data set and then sort it out in the expensive eDiscovery processing phase, instead of culling, filtering and searching the data at the point of collection.

Despite the burdens associated with the electronic discovery of data distributed across the four corners of the earth, such data is considered accessible under the Federal Rules of Civil Procedure and thus must be preserved and collected if relevant to a legal matter. The good news, however, is that preservation and collection efforts can and should be targeted to potentially relevant information, limited to custodians and sources with a demonstrated potential connection to the litigation matter in question.

This is important because the biggest expense associated with eDiscovery is the cost of overly inclusive preservation and collection. Properly targeted preservation initiatives are permitted by the courts and can be enabled by adroit software that can quickly and effectively access and search these data sources throughout the enterprise. The value of targeted preservation is recognized in the Committee Notes to the FRCP amendments, which urge the parties to reach agreement on the preservation of data and on the key words, date ranges and other metadata that identify responsive materials. [3] And in In re Genetically Modified Rice Litigation, the court noted that “[p]reservation efforts can become unduly burdensome and unreasonably costly unless those efforts are targeted to those documents reasonably likely to be relevant or lead to the discovery of relevant evidence.” [4]
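The targeted approach described above, negotiated keywords plus an agreed date range applied at the point of collection, can be sketched in a few lines. This is a hypothetical illustration only (the keywords, dates, and function names are invented for the example, not drawn from any actual product or protocol):

```python
import re
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical negotiated criteria: keywords and a date range the
# parties might agree to under the FRCP meet-and-confer process.
KEYWORDS = re.compile(r"\b(wellhead|invoice|convoy)\b", re.IGNORECASE)
DATE_FROM = datetime(2004, 1, 1, tzinfo=timezone.utc)
DATE_TO = datetime(2006, 12, 31, tzinfo=timezone.utc)

def is_potentially_responsive(path: Path) -> bool:
    """Keep a file only if its modification date falls within the agreed
    range AND its text matches at least one negotiated keyword."""
    mtime = datetime.fromtimestamp(path.stat().st_mtime, tz=timezone.utc)
    if not (DATE_FROM <= mtime <= DATE_TO):
        return False
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return False
    return bool(KEYWORDS.search(text))

def cull(root: Path) -> set[Path]:
    """Walk a custodian's share and return only candidate files, so that
    only this subset, not the full data set, is collected and shipped."""
    return {p for p in root.rglob("*")
            if p.is_file() and is_potentially_responsive(p)}
```

The point of the sketch is the order of operations: filtering happens where the data lives, so only the (much smaller) responsive subset ever crosses the constrained network link.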

However, such targeted collection and early case assessment (ECA) in place are not feasible in the decentralized global enterprise with current eDiscovery and information management tools. What is needed is a field-deployable search and eDiscovery solution that operates on demand in distributed and virtualized environments, within the global locations where the data resides. To meet this challenge, the solution must install rapidly and operate efficiently in a localized virtualized environment, including public or private cloud deployments, where the site data is located, without rigid hardware requirements or on-site physical access.

This is impossible if the solution is fused to hardware appliances or otherwise requires a complex on-site installation process. After installation, the solution must be able to index documents and other data locally and serve those documents up for secure remote access, search and review through a web browser. Because the “heavy lifting” (indexing, search, and document filtering) is all performed locally, such a solution can operate effectively in some of the harshest local environments with limited network bandwidth. The data is not only collected and culled within the local area network, but is also served up for full early case assessment and first-pass review on site, so that only a much smaller set of potentially relevant data is ultimately transmitted to a central location.

This groundbreaking capability is what X1 Rapid Discovery provides. Its unique ability to deploy and operate in the IaaS cloud also means that the solution can install anywhere within the wide area network, remotely and on demand. This enables globally decentralized enterprises to finally address their overseas data in an efficient, expedient, defensible and highly cost-effective manner.

If you have any thoughts or experiences with the unique eDiscovery challenges of the de-centralized global enterprise, feel free to email me. I welcome the collaboration.

___________________________________________

[1] 272 F.R.D. 235 (D.D.C. 2011)

[2] Id. at 240.

[3] Citing the Manual for Complex Litigation (4th) § 40.25(2).

[4] 2007 WL 1655757 (E.D. Mo. June 5, 2007)


Filed under eDiscovery & Compliance, Enterprise eDiscovery

Authenticating Internet Web Pages as Evidence: a New Approach

By John Patzakis and Brent Botta

In recent posts, we have addressed the issue of evidentiary authentication of social media data. (See previous entries here and here). General Internet site data available through standard web browsing, as opposed to social media data obtained through APIs or user credentials, presents slightly different but equally compelling challenges.

The Internet provides torrential amounts of evidence potentially relevant to litigation matters, with courts routinely facing proffers of data preserved from various websites. This evidence must be authenticated in all cases, and the authentication standard is no different for website data or chat room evidence than for any other. Under Federal Rule of Evidence 901(a), “The requirement of authentication … is satisfied by evidence sufficient to support a finding that the matter in question is what its proponent claims.” United States v. Simpson, 152 F.3d 1241, 1249 (10th Cir. 1998).

Ideally, a proponent of the evidence can rely on uncontroverted direct testimony from the creator of the web page in question. In many cases, however, that option is not available. In such situations, the testimony of the viewer/collector of the Internet evidence “in combination with circumstantial indicia of authenticity (such as the dates and web addresses), would support a finding” that the website documents are what the proponent asserts. Perfect 10, Inc. v. Cybernet Ventures, Inc. (C.D.Cal.2002) 213 F.Supp.2d 1146, 1154. (emphasis added) (See also, Lorraine v. Markel American Insurance Company, 241 F.R.D. 534, 546 (D.Md. May 4, 2007) (citing Perfect 10, and referencing MD5 hash values as an additional element of potential “circumstantial indicia” for authentication of electronic evidence).

One of the many benefits of X1 Social Discovery is its ability to preserve and display all the available “circumstantial indicia” (to borrow the Perfect 10 court’s term) in order to present the best case possible for the authenticity of Internet-based evidence collected with the software. This includes collecting all available metadata and generating an MD5 checksum, or “hash value,” of the preserved data.

But HTML web pages pose unique authentication challenges: merely generating an MD5 checksum of the entire web page, or of just the web page’s source file, provides limited value because web pages are fluid and dynamic by nature. In fact, the same web page collected twice in immediate succession would very likely yield two different MD5 checksums. This is because web pages typically link to many external items that are dynamically loaded on each page view: cascading style sheets (CSS), graphical images, JavaScript files and other supporting files. This linked content may be stored on another server in the same domain, but is often located somewhere else on the Internet.

When the web browser loads a web page, it consolidates all these items into one viewable page for the user. Since the web page’s source file contains only links to the files to be loaded, the MD5 checksum of the source file can remain unchanged even if the content of the linked files becomes completely different. Therefore, the content of the linked items must be considered when assessing the authenticity of the web page. X1 Social Discovery addresses these challenges by first generating an MD5 checksum log with an entry for each item that constitutes the web page, including the main page’s source. Then an MD5 representing the content of all the items contained within the web page is generated and preserved.
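The per-item checksum log plus a composite hash over all items can be illustrated with a short sketch. This is not X1's actual implementation, just a minimal demonstration of the technique, assuming the page's resources have already been fetched into a URL-to-bytes mapping:

```python
import hashlib

def item_hashes(resources: dict[str, bytes]) -> dict[str, str]:
    """MD5 of each item that makes up the page: the HTML source plus
    every linked resource (CSS, images, scripts) as actually fetched."""
    return {url: hashlib.md5(data).hexdigest() for url, data in resources.items()}

def composite_hash(resources: dict[str, bytes]) -> str:
    """A single MD5 over the content of all items, computed in a stable
    (sorted-by-URL) order so the result is reproducible."""
    digest = hashlib.md5()
    for url in sorted(resources):
        digest.update(resources[url])
    return digest.hexdigest()
```

Note the behavior this buys: if a linked stylesheet changes between two captures, its item hash and the composite hash both change, even though the hash of the unmodified HTML source file stays the same.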

To further complicate web collections, entire sections of a web page are often not visible to the viewer. These hidden areas serve various purposes, including metatagging for Internet search engine optimization. The servers that host websites can serve either static web pages or dynamically created pages that usually change each time a user visits the site, even though the visible content may appear unchanged.

To address this additional challenge, X1 Social Discovery utilizes two different MD5 fields for each item that makes up a web page. The first is the acquisition hash, computed from the information as actually collected. The second is the content hash, based on the actual “BODY” of the web page with the hidden metadata ignored. With this approach, the content hash shows whether the user-viewable content has actually changed, not just a hidden metadata tag provided by the server. To illustrate, below is a screenshot from the metadata view of X1 Social Discovery for website capture evidence, reflecting the generation of MD5 checksums for individual objects on a single web page:

The time stamp of the capture and the URL of the web page are also documented in the case. By generating hash values for all individual objects within the web page, the examiner is better able to pinpoint any changes that may have occurred in subsequent captures. Additionally, if a specific item appears on the web page, such as an incriminating image, it is important to have an individual MD5 checksum of that key piece of evidence. Finally, any document file found on a captured web page, such as a PDF, PowerPoint, or Word document, will also be individually collected by X1 Social Discovery, with corresponding acquisition and content hash values generated.
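The distinction between the acquisition hash and the content hash can be sketched as follows. Again this is a hypothetical illustration of the two-hash idea, not X1's code; the naive regex extraction of the BODY element is a simplification for the example:

```python
import hashlib
import re

def acquisition_hash(raw_html: bytes) -> str:
    """Hash of the page exactly as collected, hidden markup included."""
    return hashlib.md5(raw_html).hexdigest()

def content_hash(raw_html: bytes) -> str:
    """Hash of only the user-visible portion: everything inside the BODY
    element, so a server-side change to hidden head metadata does not
    alter the value. Falls back to the full page if no BODY is found."""
    match = re.search(rb"<body[^>]*>(.*?)</body>", raw_html,
                      re.IGNORECASE | re.DOTALL)
    body = match.group(1) if match else raw_html
    return hashlib.md5(body).hexdigest()
```

Two captures whose hidden metatags differ but whose visible body is identical would thus show different acquisition hashes but the same content hash, which is exactly the signal an examiner wants.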

We believe this approach to authenticating website evidence is unique in its detail and sets a new standard. It supports the equally innovative automated and integrated web collection capabilities of X1 Social Discovery, which is the only solution of its kind to collect website evidence through either one-off captures or full crawls, including on a scheduled basis, and to make that information instantly reviewable in native file format through a federated search spanning multiple pieces of social media and website evidence in a single case. In all, X1 Social Discovery is a powerful solution for collecting both relevant content and all available “circumstantial indicia” from social media and general websites across the web.


Filed under Authentication, Best Practices, Preservation & Collection

eDiscovery Search and Collection in the Cloud

After several dozen posts on social media eDiscovery, we are going to focus the next few weeks on the related issue of eDiscovery in the cloud. As we see it, despite the enormous cost benefits of the cloud, concerns about the feasibility of eDiscovery and general search across an organization’s critical cloud-resident data have to some degree prevented broader adoption.

The cloud means many things to many people, but I believe the real eDiscovery action (and pain point) is in Infrastructure as a Service (IaaS) cloud deployments, such as the Amazon cloud, Rackspace, or pure enterprise cloud providers such as Fujitsu. According to a recent PwC report, cloud IaaS will account for 30% of IT expenditures by 2014. IaaS provides the means for organizations to aggressively store and virtualize their enterprise data and software, potentially spawning the same large data volumes, and the same critical search and eDiscovery requirements, as traditional enterprise environments. In our discussions with Amazon Web Services, the leading IaaS cloud provider, they report extensive customer eDiscovery requirements that are currently addressed by inefficient and manual means. So for purposes of this discussion, we will focus on IaaS, which is essentially cloud for the enterprise and where the significant current eDiscovery challenge lies.

So if an organization maintains two terabytes of documents in the Amazon or Rackspace cloud, how does it quickly access, search, triage and collect that data in its existing cloud environment when a critical eDiscovery or compliance search requirement suddenly arises? This scenario is a significant current pain point for IaaS cloud. In such situations, the organization typically resorts to one of two agonizingly inefficient processes. The first option involves shipping hard drives to the provider, whose IT staff copies the data in bulk, and having the drives shipped back. Under Rackspace’s guidelines, a transfer of 2 terabytes of bulk files would cost over $10,000 in fees and require about four to six weeks. And then all the company gets is a full 2-terabyte duplicate of its data that still must be searched, processed and reviewed.

The other alternative is to slowly download the data over a secure file transfer protocol connection. However, even with a robust T2 line, it would take three to six weeks to transfer the two terabytes, depending on how much bandwidth IT is willing to dedicate to the exercise.
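The back-of-the-envelope arithmetic behind that estimate is easy to reproduce. The sketch below assumes a T2 line at its nominal 6.312 Mbps and decimal terabytes, and ignores protocol overhead; the numbers are illustrative, not a quote of any provider's figures:

```python
def transfer_days(size_tb: float, link_mbps: float,
                  utilization: float = 1.0) -> float:
    """Days needed to move size_tb terabytes over a link_mbps link,
    at the given fraction of dedicated bandwidth. Protocol overhead
    and retransmissions are ignored, so real transfers run longer."""
    bits = size_tb * 1e12 * 8                      # decimal TB -> bits
    seconds = bits / (link_mbps * 1e6 * utilization)
    return seconds / 86400

# 2 TB over a fully dedicated ~6.3 Mbps T2 line works out to roughly
# a month; give the transfer only half the line and the wait doubles,
# which brackets the three-to-six-week figure above.
```

The same function makes the trade-off concrete for other links: even a 45 Mbps T3, fully dedicated, still needs several days for 2 TB.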

So what is needed is robust eDiscovery software that can truly support the IaaS cloud where the data resides without first requiring mass data export. We will discuss what that entails and the requirements of truly cloud capable eDiscovery software in our next post, so please stay tuned!


Filed under Cloud Data, IaaS

The Future for eDiscovery: Social Media and the Cloud

Greetings and welcome to all. This is the inaugural post of Next Generation eDiscovery, a blog that will focus on legal, technical and compliance issues related to the collection, preservation and early case assessment of social media and other cloud-based data. To provide some context, the team here at X1 Discovery is experienced in developing and supporting technology for collecting electronic evidence in the enterprise to meet eDiscovery and investigation requirements. Many of us hail from Guidance Software, the developer of EnCase, which is the leading eDiscovery and investigative solution for collecting from hard drives, both standalone and within the enterprise. And now we turn our focus to current trends and the future.

And the future of eDiscovery is about social media and the cloud. In fact, just this year social media became a compelling issue in eDiscovery, and it is reaching critical mass given the rising level of discourse. With over 700 million Facebook users and 200 million Twitter accounts, evidence from social media sites can be relevant to just about every litigation dispute and investigation matter. Social media evidence is widely discoverable and generally not subject to privacy constraints once established to be relevant to a case, particularly when that data is held by a party to the litigation or even a key witness.

In recent months there has been much talk in the eDiscovery and digital investigation fields about social media, mostly outlining the scope of the problem and the need to put corporate policies and procedures in place. That discussion is an important first step, but it is time for actual solutions in terms of technical, legal and investigative techniques. This blog will seek to identify and foster discussion points, to educate, even to pontificate at times, but also to learn from our readers, customers and non-customers alike. We look forward to the dialogue.


Filed under Preservation & Collection