The Global Decentralized Enterprise: An Unmet eDiscovery Challenge

Enterprises with data situated within a multitude of segmented networks across North America and the rest of the world face unique challenges for eDiscovery and compliance-related investigation requirements. In particular, the wide area networks of large project engineering, oil & gas, and systems integration firms typically contain terabytes of geographically disparate information assets in often harsh operating environments with very limited network bandwidth. Information management and eDiscovery tools that require data centralization or run on expensive and inflexible hardware appliances cannot, by their very nature, address critical project information in places like Saudi Arabia, China, or the Alaskan North Slope.

Despite vendor marketing hype, network bandwidth constraints, coupled with the requirement to migrate data to a single repository, render traditional information management and eDiscovery tools ineffective for decentralized global enterprise data. As such, the globally decentralized enterprise represents a major gap for in-house eDiscovery processes, resulting in significant expense and inefficiency. The case of U.S. ex rel. McBride v. Halliburton Co. [1] illustrates this pain point well. In McBride, Magistrate Judge John Facciola’s instructive opinion outlines Halliburton’s eDiscovery struggles to collect and process data from remote locations:

Since the defendants employ persons overseas, this data collection may have to be shipped to the United States, or sent by network connections with finite capacity, which may require several days just to copy and transmit the data from a single custodian . . . (Halliburton) estimates that each custodian averages 15–20 gigabytes of data, and collection can take two to ten days per custodian. The data must then be processed to be rendered searchable by the review tool being used, a process that can overwhelm the computer’s capacity and require that the data be processed by batch, as opposed to all at once. [2]

Halliburton represented to the court that it spent hundreds of thousands of dollars on eDiscovery for only a few dozen remotely located custodians. Costs were driven up by the need to force-collect each remote custodian’s entire data set and then sort it out in the expensive eDiscovery processing phase, instead of culling, filtering and searching the data at the point of collection.

Despite the burdens associated with the electronic discovery of data distributed across the four corners of the earth, such data is considered accessible under the Federal Rules of Civil Procedure and thus must be preserved and collected if relevant to a legal matter. The good news, however, is that preservation and collection efforts can and should be targeted to potentially relevant information from only those custodians and sources with a demonstrated connection to the litigation matter in question.

This is important, as the biggest expense associated with eDiscovery is the cost of overly inclusive preservation and collection. Properly targeted preservation initiatives are permitted by the courts and can be enabled by adroit software that is able to quickly and effectively access and search these data sources throughout the enterprise. The value of targeted preservation is recognized in the Committee Notes to the FRCP amendments, which urge the parties to reach agreement on the preservation of data and on the keywords, date ranges and other metadata used to identify responsive materials. [3] And in In re Genetically Modified Rice Litigation, the court noted that “[p]reservation efforts can become unduly burdensome and unreasonably costly unless those efforts are targeted to those documents reasonably likely to be relevant or lead to the discovery of relevant evidence.” [4]

However, such targeted collection and early case assessment (ECA) in place is not feasible in the decentralized global enterprise with current eDiscovery and information management tools. What is needed to address these challenges is a field-deployable search and eDiscovery solution that operates on demand in distributed and virtualized environments, at the global locations where the data resides. To meet this challenge, the eDiscovery and search solution must install immediately, execute rapidly, and operate efficiently in a localized virtualized environment, including public or private cloud deployments, where the site data is located, without rigid hardware requirements or on-site physical access.

This is impossible if the solution is fused to hardware appliances or otherwise requires a complex on-site installation process. After installation, the solution must be able to index the documents and other data locally and serve up those documents for remote but secure access, search and review through a web browser. As the “heavy lifting” (indexing, search, and document filtering) is all performed locally, this solution can effectively operate in some of the harshest local environments with limited network bandwidth. The data is not only collected and culled within the local area network, but is also served up for full early case assessment and first pass review on site, so that only a much smaller data set of potentially relevant data is ultimately transmitted to a central location.

This groundbreaking capability is what X1 Rapid Discovery provides. Its unique ability to deploy and operate in the IaaS cloud also means that the solution can install anywhere within the wide area network, remotely and on demand. This enables globally decentralized enterprises to finally address their overseas data in an efficient, expedient, defensible and highly cost-effective manner.

If you have any thoughts or experiences with the unique eDiscovery challenges of the de-centralized global enterprise, feel free to email me. I welcome the collaboration.

___________________________________________

[1] 272 F.R.D. 235 (D.D.C. 2011)

[2] Id. at 240.

[3] Citing the Manual for Complex Litigation (4th) § 40.25(2).

[4] 2007 WL 1655757 (E.D. Mo. June 5, 2007)


Mid-Year Report: Legal Cases Involving Social Media Rapidly Increasing

As part of our ongoing effort to monitor legal developments concerning social media evidence, we again searched online legal databases of state and federal court decisions across the United States — this time to identify the number of cases in the first half of 2012 where evidence from social networking sites played a significant role. The results are available here in a detailed spreadsheet listing each case, allowing anyone to review the cases and conduct their own analysis. The cases are accessible for free on Google Scholar. The overall tally came in at 319 cases for this six-month period, about an 85 percent increase over the number of published social media cases in the same period of 2011, as reported by our prior survey earlier this year.

As with the last survey, we reviewed all the search results, added annotations for the more notable cases, and were careful to eliminate duplicates and to exclude de minimis entries, defined as cases with merely cursory or passing mentions of social media sites. As only a very small number of cases (approximately one percent of all filed cases) involve a published decision that we can access online, it is safe to assume that several thousand, if not tens of thousands, more cases involved social media evidence during this time period. Additionally, many of these published decisions involve fact patterns from as far back as 2008, as those cases are only now working their way through the appeals process. Finally, these cases do not reflect the presumably many thousands of additional instances where social media evidence was relevant to an internal investigation or compliance audit yet did not evolve into actual litigation. Even so, this limited survey is an important data point establishing the ubiquitous nature of social media evidence, its escalating importance, and the necessity of best-practices technology to search and collect this data for litigation and compliance requirements.

The search, limited to the top four social networking sites, tallied as follows: Facebook is now far in the lead with 197 cases; MySpace tallied 89, mostly with fact patterns circa 2009; Twitter, 25; and LinkedIn, 8. Criminal matters were the most common category of cases involving social media evidence, followed by employment-related litigation, insurance claims/personal injury, family law and general business litigation (trademark infringement/libel/unfair competition). One interesting and increasingly common theme involved social media usage being considered as a factor in establishing minimum contacts for jurisdictional purposes. (See Juniper Networks, Inc. v. Juniper Media, LLC and Lyons v. Rienzi & Sons, Inc. as examples.)

We plan on providing a complete summary of all the 2012 cases in early January, and it is safe to assume that the second half of 2012 will continue to see a sharp increase in the presence of social media evidence.



Authenticating Internet Web Pages as Evidence: a New Approach

By John Patzakis and Brent Botta

In recent posts, we have addressed the issue of evidentiary authentication of social media data. (See previous entries here and here). General Internet site data available through standard web browsing, as opposed to social media data provided through APIs or user credentials, presents slightly different but equally compelling challenges.

The Internet provides torrential amounts of evidence potentially relevant to litigation matters, with courts routinely facing proffers of data preserved from various websites. This evidence must be authenticated in all cases, and the authentication standard is no different for website data or chat room evidence than for any other. Under Federal Rule of Evidence 901(a), “The requirement of authentication … is satisfied by evidence sufficient to support a finding that the matter in question is what its proponent claims.” United States v. Simpson, 152 F.3d 1241, 1249 (10th Cir. 1998).

Ideally, a proponent of the evidence can rely on uncontroverted direct testimony from the creator of the web page in question. In many cases, however, that option is not available. In such situations, the testimony of the viewer/collector of the Internet evidence “in combination with circumstantial indicia of authenticity (such as the dates and web addresses), would support a finding” that the website documents are what the proponent asserts. Perfect 10, Inc. v. Cybernet Ventures, Inc. (C.D.Cal.2002) 213 F.Supp.2d 1146, 1154. (emphasis added) (See also, Lorraine v. Markel American Insurance Company, 241 F.R.D. 534, 546 (D.Md. May 4, 2007) (citing Perfect 10, and referencing MD5 hash values as an additional element of potential “circumstantial indicia” for authentication of electronic evidence).

One of the many benefits of X1 Social Discovery is its ability to preserve and display all the available “circumstantial indicia” (to borrow the Perfect 10 court’s term) to the user, in order to present the best case possible for the authenticity of Internet-based evidence collected with the software. This includes collecting all available metadata and generating an MD5 checksum, or “hash value,” of the preserved data.

But HTML web pages pose unique authentication challenges, and merely generating an MD5 checksum of the entire web page, or of just the web page source file, provides limited value because web pages are constantly changing due to their fluid and dynamic nature. In fact, the same web page collected twice in immediate succession would very likely yield two different MD5 checksums. This is because web pages typically feature links to many external items that are dynamically loaded upon each page view. These external links take the form of cascading style sheets (CSS), graphical images, JavaScript files and other supporting files. This linked content can be stored on another server in the same domain, but is often located elsewhere on the Internet.

When the Web browser loads a web page, it consolidates all these items into one viewable page for the user. Since the Web page source file contains only the links to the files to be loaded, the MD5 checksum of the source file can remain unchanged even if the content of the linked files becomes completely different. Therefore, the content of the linked items must be considered in assessing the authenticity of the Web page. X1 Social Discovery addresses these challenges by first generating an MD5 checksum log with an entry for each item that constitutes the Web page, including the main Web page’s source. Then an MD5 representing the content of all the items contained within the web page is generated and preserved.
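The per-item hash log and composite hash described above can be sketched in a few lines of Python. This is purely illustrative and not X1 Social Discovery’s actual implementation (which is proprietary); the page source, linked-item contents, and function names here are hypothetical, and a real collection would fetch each item over HTTP rather than from an in-memory dictionary:

```python
import hashlib
from html.parser import HTMLParser

# Hypothetical in-memory "web page": its source plus the linked items
# (in a real collection these would be fetched over the network).
PAGE_SOURCE = b'<html><head><link href="style.css"></head><body><img src="logo.png"></body></html>'
LINKED_ITEMS = {
    "style.css": b"body { color: black; }",
    "logo.png": b"\x89PNG...fake-bytes",
}

class LinkExtractor(HTMLParser):
    """Collects href/src references from a page's source."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.links.append(value)

def md5_hex(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

def hash_page(source: bytes, items: dict) -> dict:
    """Return an MD5 log: one hash per linked item, plus a composite
    hash covering the source and every linked item's content."""
    parser = LinkExtractor()
    parser.feed(source.decode("utf-8", errors="replace"))
    log = {"page_source": md5_hex(source)}
    composite = hashlib.md5(source)
    for link in parser.links:
        content = items.get(link, b"")   # a missing item hashes as empty
        log[link] = md5_hex(content)
        composite.update(content)        # fold each item into the composite
    log["composite"] = composite.hexdigest()
    return log
```

Note the key property this illustrates: if a linked stylesheet or image changes, the source-file hash stays the same while the composite hash changes, which is exactly why hashing the source alone provides limited value.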

To further complicate Web collections, entire sections of a Web page are often not visible to the viewer. These hidden areas serve various purposes, including metatagging for Internet search engine optimization. The servers that host Websites can either store static Web pages or dynamically created pages that usually change each time a user visits the Website, even though the actual content may appear unchanged.

In order to address this additional challenge, X1 Social Discovery utilizes two different MD5 fields for each item that makes up a Web page. The first is the acquisition hash, computed from the information exactly as collected. The second is the content hash, which is based on the actual “BODY” of a Web page and ignores the hidden metadata. By taking this approach, the content hash will show whether the user-viewable content has actually changed, not just a hidden metadata tag provided by the server. To illustrate, below is a screenshot from the metadata view of X1 Social Discovery for website capture evidence, reflecting the generation of MD5 checksums for individual objects on a single webpage:

The time stamp of the capture and the URL of the web page are also documented in the case. By generating hash values of all individual objects within the web page, the examiner is better able to pinpoint any changes that may have occurred in subsequent captures. Additionally, if a specific item appearing on the web page, such as an incriminating image, is at issue, it is important to have an individual MD5 checksum of that key piece of evidence. Finally, any document file found on a captured web page, such as a PDF, PowerPoint, or Word document, will also be individually collected by X1 Social Discovery, with corresponding acquisition and content hash values generated.
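The acquisition-hash versus content-hash distinction can also be sketched conceptually. The snippet below is a minimal, assumption-laden illustration (the naive regex body extraction is our own simplification; a production tool would parse the DOM properly, and these function names are hypothetical):

```python
import hashlib
import re

def acquisition_hash(captured: bytes) -> str:
    """MD5 over the full capture, exactly as collected from the server."""
    return hashlib.md5(captured).hexdigest()

def content_hash(captured: bytes) -> str:
    """MD5 over only the visible BODY, ignoring <head> metadata.
    Naive extraction for illustration; real tools parse the DOM."""
    text = captured.decode("utf-8", errors="replace")
    match = re.search(r"<body[^>]*>(.*?)</body>", text, re.DOTALL | re.IGNORECASE)
    body = match.group(1) if match else text
    return hashlib.md5(body.encode("utf-8")).hexdigest()
```

The effect is that two captures of a dynamically served page that differ only in a hidden server-generated tag produce different acquisition hashes but the same content hash, so the examiner can tell at a glance that the user-viewable content did not change.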

We believe this approach to the authentication of website evidence is unique in its detail and sets a new standard. It supports the equally innovative automated and integrated web collection capabilities of X1 Social Discovery, which is the only solution of its kind to collect website evidence through either a one-off capture or full crawling, including on a scheduled basis, and to make that information instantly reviewable in native file format through a federated search spanning multiple pieces of social media and website evidence in a single case. In all, X1 Social Discovery is a powerful solution for collecting both relevant content and all available “circumstantial indicia” from social media and general websites across the web.


Case Study: The Importance of Integrated Social Media and Website Crawling Collection

One of the benefits of the very strong market adoption of our X1 Social Discovery software is that we receive a significant amount of invaluable feedback from seasoned eDiscovery and law enforcement professionals. Many of these experts report that a good number of their social media investigation and collection cases also require general website collection. For instance, a person promoting infringing technology on Facebook may also be posting relevant information to industry web bulletin boards or maintaining their own website. It is thus important that a social media eDiscovery and investigation process feature integrated web collection alongside social media support.

For an effective process, website data should be collected, searched and reviewed alongside social media collections in the same interface. The collected website data should not be a mere image capture or PDF, but a full HTML (native file) collection, to ensure preservation of all metadata and other source information and to enable instant, full search and effective evidentiary authentication. All of the evidence should be searched in one pass, reviewed, tagged and, if needed, exported to an attorney review platform from a single workflow.

To illustrate what this looks like in the field, we recorded an eight-minute demonstration based in part on a real-life example reported to us by one of our customers. This case study, performed by our CTO Brent Botta, involves the collection of social media data as well as message board posts on the web. Importantly, this evidence is consolidated into a unified workflow and searched in a single pass.

The investigation features X1 Social Discovery as the platform, which now offers automated and integrated web crawling capabilities in addition to its renowned functionality for the collection and analysis of Facebook and Twitter content. We believe this is the only solution of its kind to collect website evidence through either a one-off capture or full crawling, including on a scheduled basis, and to make that information instantly reviewable in native file format through a federated search that includes multiple pieces of social media and website evidence in a single case. Up to millions of web captures and social media items can be searched instantly with the patented X1 search, then tagged and exported from a single interface.

Like social media content, web pages bring their own unique but important challenges for evidentiary authentication. In the next week, we will be posting on best practices for the collection and authentication of web pages as evidence, so stay tuned!


Judge Peck: Cloud For Enterprises Not Cost-Effective Without Efficient eDiscovery Process

Hon. Andrew J. Peck
United States Magistrate Judge

Federal Court Magistrate Judge Andrew Peck of the Southern District of New York is known for several important decisions affecting the eDiscovery field, including the ongoing Monique da Silva Moore v. Publicis Groupe SA, et al. case, where he issued a landmark order authorizing the use of predictive coding, otherwise known as technology-assisted review. His da Silva Moore ruling is clearly an important development, but also very noteworthy are Judge Peck’s recent public comments on eDiscovery in the cloud.

eDiscovery attorney Patrick Burke, a friend and former colleague at Guidance Software, reports on his blog some interesting comments made during the May 22 judges panel session at the 2012 CEIC conference. UK eDiscovery expert Chris Dale also blogged about the session, where Judge Peck noted that data stored in the cloud is considered accessible data under the Federal Rules of Civil Procedure (see FRCP Rule 26(b)(2)(B)) and is thus treated no differently by the courts, in terms of eDiscovery preservation and production requirements, than data stored within a traditional network. This prompted the following cautionary tale about the costs of not having a systematic eDiscovery process:

Judge Peck told the story of a Chief Information Security Officer with authority over e-discovery at his multi-billion-dollar company who, when told that the company could enjoy significant savings by moving to “the cloud,” questioned whether the cloud provider could accommodate the organization’s e-discovery preservation requirements. The cloud provider said it could, but at such an increased cost that the company would enjoy no savings at all if it migrated to the cloud.

In previous posts on this blog, we outlined how the significant cost benefits associated with cloud migration can be negated when eDiscovery search and retrieval of that data is required. If an organization maintains two terabytes of documents in Amazon or another IaaS cloud deployment, how does it quickly access, search, triage and collect that data in its existing cloud environment if a critical eDiscovery or compliance search requirement suddenly arises? This is precisely why we developed X1 Rapid Discovery, version 4. X1RD is a proven and now truly cloud-deployable eDiscovery and enterprise search solution enabling our customers to quickly identify, search, and collect distributed data wherever it resides in the Infrastructure as a Service (IaaS) cloud or within the enterprise. While it is now trendy for eDiscovery software providers to re-brand their software as cloud solutions, X1RD is uniquely deployable anywhere, anytime in the IaaS cloud within minutes. X1RD also features the ability to leverage the parallel processing power of the cloud to scale up and scale down as needed. In fact, X1RD is the first pure eDiscovery solution (not counting hosted email archive tools) to meet the technical requirements and be accepted into the Amazon AWS ISV program.

As for the major cloud providers, those who choose to solve this eDiscovery challenge (along with effective enterprise search) with best-practices technology will not only drive significant managed services revenue but will also enjoy a substantial competitive advantage over other cloud service providers.
