Tag Archives: eDiscovery

Important SaaS Architecture Considerations for Legal Tech Software

by Kunjan Zaveri

With nearly all eDiscovery software now being offered on a SaaS basis, the cloud architecture decisions supporting the vendor’s platform are pivotal, and can lead to either very successful or very poor outcomes. The right architecture depends on the company’s SaaS delivery strategy, its customer profile and size, and the volume and nature of its anticipated transactions. These considerations are especially important in the legal tech space, which has unique requirements and market dynamics, such as heightened security and customization for large clients and channel support (requiring platform portability), that are generally less relevant to general SaaS architecture decisions.

At a high level, it is important to understand the two main SaaS architectures: single-tenancy and multi-tenancy. In cloud computing, tenancy refers to how computing resources are allocated among customers in a cloud environment. In a single-tenant SaaS environment, each client has a dedicated infrastructure; the instance is not shared between clients, and the buyer can customize the software to its requirements. Multi-tenancy is an architecture in which a single instance of a software application serves multiple customers. In a multi-tenant SaaS environment, many organizations share the same software and usually the same database (or at least a portion of a common database) to save and store data.
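To make the distinction concrete, here is a minimal, hypothetical sketch of how the data layer typically differs (the table and field names are invented for illustration and do not describe any particular vendor’s product): a multi-tenant application scopes every query to a tenant identifier within a shared database, while a single-tenant deployment points each customer at its own dedicated database.

```python
import sqlite3

# Multi-tenant: one shared database; every row carries a tenant_id and every
# query filters on it, so customers never see each other's records.
def fetch_matters_multi_tenant(shared_db_path: str, tenant_id: int):
    conn = sqlite3.connect(shared_db_path)
    try:
        return conn.execute(
            "SELECT id, title FROM matters WHERE tenant_id = ?", (tenant_id,)
        ).fetchall()
    finally:
        conn.close()

# Single-tenant: each customer gets a dedicated database (and usually dedicated
# compute), so isolation comes from the deployment itself rather than from a
# tenant_id filter in every query.
def fetch_matters_single_tenant(customer_db_path: str):
    conn = sqlite3.connect(customer_db_path)
    try:
        return conn.execute("SELECT id, title FROM matters").fetchall()
    finally:
        conn.close()
```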

Single-tenancy and multi-tenancy SaaS each have their advantages and disadvantages, and the selection of either approach by a legal tech SaaS vendor should depend on their overall product and go-to-market strategy. Here are some of the advantages of a single-tenancy architecture:

1. Improved Security

With single-tenancy, each customer’s data is completely isolated from the data of other customers, with fewer and more trusted points of entry. The result is better overall security against outside threats and protection against one customer accessing another’s sensitive information, whether intentionally or inadvertently.

2. Reliable Operations and Individual Tenant Scalability

Single-tenant SaaS architectures are considered more reliable because there is no single point of failure that can affect all customers. For example, if one client uploads a massive amount of corrupt data that taxes resources and crashes the system, it won’t affect other clients’ instances. Single-tenancy actually scales better within an individual client instance, while multi-tenancy better scales the addition and management of many customers.

3. Customization

Many large customers need specific features or unique security measures that require custom development, which can be very difficult in a multi-tenant environment. Companies that use a single-tenant architecture can upgrade each customer’s service individually: rather than waiting for the software provider to launch a universal update, users can update their accounts as soon as the download is available, or decline patches that a specific customer does not need.

4. Portability

With single-tenancy, a vendor can host its platform in its own SaaS environment or a channel partner’s environment, or enable customers to install the solution behind their firewall or in their private cloud. Multi-tenant SaaS does not allow for this flexibility. (A minimal configuration sketch illustrating this portability follows below.)
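As a purely illustrative sketch of that portability (the environment variable names are hypothetical and not taken from any actual product), a single-tenant build can be parameterized per deployment target, so the same artifact runs in the vendor’s cloud, a channel partner’s environment, or a customer’s private cloud:

```python
import os
from dataclasses import dataclass

@dataclass
class DeploymentConfig:
    """Per-deployment settings for one single-tenant install (illustrative only)."""
    tenant_name: str       # the single customer this instance serves
    hosting_mode: str      # "vendor_cloud", "partner_cloud", or "customer_private"
    database_url: str      # dedicated database for this tenant
    storage_bucket: str    # dedicated object storage for this tenant

def load_config() -> DeploymentConfig:
    """Read deployment-specific settings from the environment so the same
    build artifact can be dropped into any of the three hosting models."""
    return DeploymentConfig(
        tenant_name=os.environ["TENANT_NAME"],
        hosting_mode=os.environ.get("HOSTING_MODE", "vendor_cloud"),
        database_url=os.environ["DATABASE_URL"],
        storage_bucket=os.environ["STORAGE_BUCKET"],
    )
```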

Multi-tenant SaaS Advantages

Multi-tenancy is the more common approach, since most SaaS products are consumer-oriented or otherwise high-volume, commoditized offerings that necessitate such an architecture. Here are some of the key advantages of a multi-tenant SaaS architecture over single-tenant:

1. Lower Costs

Since computing resources are all shared under a multi-tenant architecture, it can cost less than a single-tenant structure. Scaling across the customer base is also easier, because new users utilize the same uniform software and resources as all other customers.

2. Efficient Resource Use Across All Customers

Because all resources are shared and uniform, a multi-tenant architecture, once properly engineered, uses those resources with optimum efficiency. The trade-off is that the software must be engineered from the outset to handle many customers accessing the same resources simultaneously.

3. Lower Maintenance Costs

Maintenance is typically bundled into the SaaS subscription, rather than being passed through to the customer or incurred by the channel partner as it often is with a single-tenant structure.

4. Shared Data Centers

Unlike in a single-tenant environment, the vendor does not have to create a new instance in the data center for every new customer. All customers share a common infrastructure, which removes the need to continually add partitioned instances for each new tenant.

So which architecture is the right one for a legal tech SaaS vendor? It completely depends on the company’s strategy, pricing, and nature of the offering. To illustrate this point, consider the examples of two hypothetical legal tech SaaS vendors: Acme and Widget.

Acme provides do-it-yourself data processing on a high-volume, low-cost basis, handling about 700 matters a week at an average project value of $400. Acme’s customer base consists primarily of small to medium-sized law firms and service providers that have multiple projects on different cases over the course of a year. Acme’s clients do not want to fuss with hardware or any software maintenance requirements.

Widget offers an enterprise-grade compliance and security data analytics platform, sold at an average sale price (ASP) of $400,000, but as high as $2 million for a dedicated annual license. Widget has 32 active enterprise customers and hopes to grow to 70 customers in three years with an even higher ASP. About a third of Widget’s clients prefer that Widget host the solution in Widget’s cloud instance. Another group of clients are large financial institutions that, for security and governance purposes, insist on self-hosting the platform in their own private cloud. The rest are instances sold through channel partners who prefer to host the platform themselves and provide value added services. Many Widget customers have particularized compliance requirements and other unique circumstances that require customization to support their needs.

For Acme, the correct choice is multi-tenancy. Acme offers a commoditized SaaS service and needs a high volume of individual customers to drive transactional revenue growth. A single-tenant architecture would prevent the company from scaling and would be too expensive and unmanageable. However, some legal tech companies have made the mistake of pursuing such a down-market, commoditized strategy without making the considerable initial investment in the engineering expertise and resources needed to build a true multi-tenant architecture.

In contrast, single-tenancy is the optimal architecture for Widget. While a single-tenant cloud is somewhat more challenging to support, Widget’s premium enterprise offering requires customization as well as portability for channel partners and security-minded clients, and is thus a clear fit for single-tenancy. In the future, Widget may have closer to a thousand customers, or be acquired by a much larger company that will want to deploy the solution to an extensive client base. It would therefore be wise for Widget to architect its single-tenant platform, for example by employing microservices, so that it can be readily ported to a multi-tenant environment when warranted.
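One way to architect for that future port, shown here only as a hypothetical sketch rather than a description of any vendor’s actual implementation, is to hide tenant resolution behind a small abstraction: the single-tenant deployment returns one fixed tenant, while a later multi-tenant deployment resolves the tenant per request, leaving the rest of the service code unchanged.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Dict


@dataclass
class Tenant:
    tenant_id: str
    database_url: str


class TenantResolver(ABC):
    """Abstraction the rest of the platform depends on; swapping the
    implementation is what makes a single- to multi-tenant port feasible."""

    @abstractmethod
    def resolve(self, request_headers: Dict[str, str]) -> Tenant:
        ...


class SingleTenantResolver(TenantResolver):
    """Single-tenant deployment: the whole instance serves one customer."""

    def __init__(self, tenant: Tenant):
        self._tenant = tenant

    def resolve(self, request_headers: Dict[str, str]) -> Tenant:
        return self._tenant


class MultiTenantResolver(TenantResolver):
    """Multi-tenant deployment: the tenant is identified on each request,
    here via a hypothetical 'X-Tenant-Id' header, and looked up in a registry."""

    def __init__(self, registry: Dict[str, Tenant]):
        self._registry = registry

    def resolve(self, request_headers: Dict[str, str]) -> Tenant:
        return self._registry[request_headers["X-Tenant-Id"]]
```

Because the rest of the platform depends only on the resolver abstraction, moving between deployment models becomes largely a configuration change rather than a rewrite.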

So, for legal tech executives, the question to ask is whether your strategy and product offering is more in line with Widget or Acme. But the bottom line is to make sure your strategy drives your choice of architecture and not the other way around.

Kunjan Zaveri is the Chief Technology Officer of X1. (www.x1.com)


Filed under Best Practices, Cloud Data, eDiscovery, Enterprise eDiscovery, SaaS

Microsoft Office 365 is Disrupting the eDiscovery Industry in a Major and Permanent Fashion

The adoption of cloud-based Microsoft Office 365 (“O365”) within enterprises is growing exponentially. According to a 2016 Gartner survey, 78 percent of enterprises use or plan to use Office 365, up from 64 percent in mid-2014. O365 includes built-in eDiscovery tools in the Security and Compliance Center, at an additional cost. Many, but not all, O365 customers are utilizing this internal eDiscovery module, to which Microsoft is dedicating significant effort and resources in order to provide a go-to solution for the eDiscovery of all information located within O365. Based upon my assessment through product demos and discussions with industry colleagues, I believe Microsoft will achieve this goal relatively soon for data housed within its O365 platform. The Equivio eDiscovery team that transitioned over to Microsoft in a 2015 acquisition is very dedicated to this effort, and they know what they are doing.

But as I see it, the O365 revolution presents two major takeaways for the rest of the eDiscovery software and services industry. The first comes down to simple architecture. Most eDiscovery tools operate by making bulk copies of data associated with individual custodians and then permanently migrating that data to their processing and/or review platform. This workflow applies to all non-Microsoft email archiving platforms, appliance-based processing platforms, and hosted review platforms. As for email archiving, a third-party archive solution requires the complete and redundant duplication, migration and storage of copies of all emails already located in O365. This is counter-productive to the very purpose of a cloud-based O365 investment. We have already seen non-Microsoft email archiving solutions decline in market share, and with MS Exchange archiving becoming much more robust, that trend will only accelerate.

eDiscovery processing tools and review platforms are also fighting directly against the O365 tide.  This is especially true for processing appliances (whether physical or virtual), which address O365 collections through bulk copy and export of all of the target custodians’ data from O365 and into their appliance, where the data is then re-indexed. Such an effort is costly, time consuming, and inefficient. But the main problem is that clients who are investing in O365 do not want to see all their data routinely exported out of its native environment every time there is an eDiscovery or compliance investigation. Organizations are fine with a very narrow data set of relevant ESI leaving O365 after it has been reviewed and is ready to be produced in a litigation or regulatory matter. What they do not want is a mass export of terabytes of data because eDiscovery and processing tools need to broadly ingest that data in their platform in order to begin the indexing, culling and searching process. For these reasons, most eDiscovery software and compliance archiving tools do not play well with O365, and that will prove to be a significant problem for those developers and the service providers who utilize those tools for their processes.

The second major O365 consideration is that organizations, especially larger enterprises, rarely house all or even most of their data within O365; hybrid cloud and on-premise environments are the norm. The O365 eDiscovery tools can only address what is contained within O365. Any on-premise data, including on-premise Microsoft sources (SharePoint, Exchange and Office documents on file shares), cannot be readily consolidated by O365, and neither can data from other cloud sources such as Google Drive, Box, Dropbox and AWS. And of course desktops, whether physical or virtual, are critical to eDiscovery collections and are also not supported by the O365 eDiscovery tools, with Microsoft indicating that it has no near-term plans to address all these non-O365 data sources in a unified fashion.

So eDiscovery software providers need to have a good process to perform unified search and collection of non-O365 sources and to consolidate those results with responsive O365 data. This process should be efficient and not simply involve mass export of data out of O365 to achieve such data consolidation.

X1 Distributed Discovery (X1DD) is uniquely suited to complement and support O365 with an effective and defensible process and has distinct advantages over other eDiscovery tools that solely rely on permanently migrating ESI out of O365. X1DD enables organizations to perform targeted search and collection of the ESI of up to thousands of endpoints, as well as O365 and other sources, all in a unified fashion. The search results are returned in minutes, not weeks, and thus can be highly granular and iterative, based upon multiple keywords, date ranges, file types, or other parameters. Using X1DD, O365 data sources are searched in place in a very targeted and efficient manner, and all results can be consolidated into Microsoft’s Equivio review platform or another review platform such as Relativity. This approach typically reduces the eDiscovery collection and processing costs by at least one order of magnitude (90%). For a demonstration or briefing on X1 Distributed Discovery, please contact us.


Filed under Cloud Data, compliance, eDiscovery, Uncategorized

The Three Categories of eDiscovery Spoliation Sanctions

My last post discussed the important new Sedona Conference guidance, The Sedona Principles, Third Edition: Best Practices, Recommendations & Principles for Addressing Electronic Document Production. The revised principles are compelling, providing important direction to lawyers and eDiscovery practitioners alike. The Sedona Principles often make their way into court opinions and thus inform eDiscovery case law. In my view, the most interesting component of the updated Sedona Principles is its stance against full disk imaging for routine eDiscovery preservation, labeling the practice as unnecessary and unduly burdensome. Full disk imaging is still very widely used (some attorneys would say abused) for eDiscovery collection, which is an issue I highlighted at length last year on this blog.

The Sedona commentary brings into focus judges’ rationale when issuing sanctions for failure to properly preserve ESI; specifically, what types of conduct resulting in the destruction of ESI do the courts actually penalize? I have been monitoring the case law on sanctions for failure to preserve ESI for over 15 years, and such cases fall into three general categories.

The first and most obvious category involves intentional conduct to delete or otherwise destroy potentially relevant ESI. There are many examples of such cases, including Sekisui Am. Corp. v. Hart, 2013 WL 4116322 (S.D.N.Y. Aug. 15, 2013), and Rimkus Consulting Group, Inc. v. Cammarata, 688 F. Supp. 2d 598 (S.D. Tex. 2010).

The second category involves situations where there is no process in place and the organization makes little or no effort to preserve ESI. In a recent example, a magistrate judge imposed spoliation sanctions where the plaintiff made no effort to preserve its emails, even after it sent a letter to the defendant threatening litigation. Matthew Enter., Inc. v. Chrysler Grp. LLC, 2016 WL 2957133 (N.D. Cal. May 23, 2016). The court, finding that the defendant suffered substantial prejudice from the loss of potentially relevant ESI, imposed severe evidentiary sanctions under Rule 37(e)(1), including allowing the defense to use the fact of spoliation to rebut testimony from the plaintiff’s witnesses. The court also awarded the reasonable attorneys’ fees incurred by the defendant in bringing the motion. See also Internmatch v. Nxtbigthing, LLC, 2016 WL 491483 (N.D. Cal. Feb. 8, 2016), where a U.S. District Court imposed similar sanctions based upon the corporate defendant’s suspect preservation efforts.

The final category involves situations where an organization does have a palpable ESI preservation process, but one that perilously relies on custodian self-collection. In a recent illustrative case, GN Netcom, Inc. v. Plantronics, Inc., No. 12-1318-LPS, 2016 U.S. Dist. LEXIS 93299 (D. Del. July 12, 2016), a company found itself on the wrong end of a $3 million sanctions penalty for spoliation of evidence. The case illustrates that establishing a litigation hold and notifying the custodians is just the first step; effective monitoring and diligent compliance with the litigation hold are essential to avoid punitive sanctions. Even with effective monitoring, severe defensibility concerns plague custodian self-collection, with several courts disapproving of the practice due to poor compliance, metadata alteration, and inconsistency of results. See Green v. Blitz, 2011 WL 806011 (E.D. Tex. Mar. 1, 2011); Nat’l Day Laborer Org. v. U.S. Immigration and Customs Enforcement Agency, 2012 WL 2878130 (S.D.N.Y. July 13, 2012).

So those are the three general categories of ESI preservation sanctions. But here is the question that the new Sedona commentary indirectly raises: are there any cases out there where a court sanctioned a party that, one, had a sound and reasonable ESI preservation process in place and, two, reasonably followed and executed that process in good faith, but was sanctioned anyway because one document or email slipped through the cracks that theoretically could have been caught by employing full disk imaging as a routine practice? I believe this is an important question because some organizations and/or their outside counsel cite this concern as justification for full disk imaging across multitudes of custodians as a routine (but very expensive and burdensome) eDiscovery preservation practice. This still occurs even after the 2015 amendments to the Federal Rules of Civil Procedure, specifically FRCP 26(b)(1), which requires the application of proportionality to all aspects of eDiscovery, including collection and preservation.

I am unaware of any case fitting that description. But if anyone is, please let me know in the comments below!

 


Filed under eDiscovery

Key to Improving Predictive Coding Results: Effective ECA

Predictive coding, when correctly employed, can significantly reduce legal review costs while delivering generally more accurate results than traditional manual review processes. However, the benefits associated with predictive coding are often undercut by the over-collection and over-inclusion of Electronically Stored Information (ESI) into the predictive coding process. This is problematic for two reasons.

The first reason is obvious: the more data introduced into the process, the higher the cost and burden. Some practitioners believe it is necessary to over-collect, and subsequently over-include, ESI and let the predictive coding process sort everything out. Many service providers charge by volume, so there can be economic incentives that conflict with what is best for the end client. In some cases, the significant cost savings realized through predictive coding are erased by eDiscovery costs associated with overly aggressive ESI inclusion on the front end.

The second reason why ESI over-inclusion is detrimental is less obvious, and in fact counterintuitive to many. Some discovery practitioners believe that as much data as possible should be put through the predictive coding process in order to “better train” the machine learning algorithms. The opposite is actually true: the predictive coding process is much more effective when the initial data set has a higher richness (also referred to as “prevalence”) ratio. In other words, the higher the proportion of responsive documents in the initial data set, the better. It has always been understood that document culling is very important to successful, economical document review, and that includes predictive coding.
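As a simple, hypothetical numerical illustration of the richness point (the document counts below are invented purely for illustration), targeted culling before predictive coding raises the proportion of responsive documents in the set being ingested:

```python
def richness(responsive_docs: int, total_docs: int) -> float:
    """Richness (prevalence) = share of the data set that is responsive."""
    return responsive_docs / total_docs

# Hypothetical numbers, for illustration only.
# Over-collected set: 1,000,000 documents, of which 50,000 are responsive.
print(f"Before culling: {richness(50_000, 1_000_000):.1%}")   # 5.0%

# After targeted culling (custodian, date-range and keyword filters), the set
# shrinks to 200,000 documents while retaining 45,000 responsive ones.
print(f"After culling:  {richness(45_000, 200_000):.1%}")     # 22.5%
```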

Robert Keeling, a senior partner at Sidley Austin and the co-chair of the firm’s eDiscovery Task Force, is a widely recognized legal expert in the areas of predictive coding and technology assisted review.  At Legal Tech New York earlier this year, he presented at an Emerging Technology Session: “Predictive Coding: Deconstructing the Secret Sauce,” where he and his colleagues reported on a comprehensive study of various technical parameters that affect the outcome of a predictive coding effort.  According to Robert, the study revealed many important findings, one of them being that a data set with a relatively high richness ratio prior to being ingested into the predictive coding process was an important success factor.

To be sure, the volume of ESI is growing exponentially and will only continue to do so. The costs associated with collecting, processing, reviewing, and producing documents in litigation are the source of considerable pain for litigants. The only way to reduce that pain to its minimum is to use all tools available in all appropriate circumstances within the bounds of reasonableness and proportionality to control the volumes of data that enter the discovery pipeline, including predictive coding.

Ideally, an effective early case assessment (ECA) capability enables counsel to set reasonable discovery limits and ultimately to process, host, review and produce less ESI. Counsel can further use ECA to gather key information, develop a litigation budget, and better manage litigation deadlines. ECA can also foster cooperation and proportionality in discovery by informing the parties early in the process about where relevant ESI is located and which ESI is significant to the case. And with these benefits comes a much improved predictive coding process.

X1 Distributed Discovery (X1DD) uniquely fulfills this requirement with its ability to perform pre-collection early case assessment, instead of ECA after the costly, time consuming and disruptive collection phase, thereby providing a game-changing new approach to the traditional eDiscovery model.  X1DD enables enterprises to quickly and easily search across thousands of distributed endpoints from a central location.  This allows organizations to easily perform unified complex searches across content, metadata, or both and obtain full results in minutes, enabling true pre-collection ECA with live keyword analysis and distributed processing and collection in parallel at the custodian level. To be sure, this dramatically shortens the identification/collection process by weeks if not months, curtails processing and review costs from not over-collecting data, and provides confidence to the legal team with a highly transparent, consistent and systemized process. And now we know of another key benefit of an effective ECA process: much more accurate predictive coding.


Filed under ECA, eDiscovery

LTN: Social Media Evidence Even More Important than email and “Every Litigator” Needs to Address It

Brett Burney, a top eDiscovery tech writer for Legaltech News, recently penned a detailed product review of X1 Social Discovery after extensive testing of the software (Social Media: A Different Type of E-Discovery Collection, Legaltech News, September 2016). The verdict on X1 Social Discovery is glowing, but more on that in a bit. Burney also provides remarkable general commentary on how social media and other web-based evidence is essential in every litigation matter, noting that “email does not hold a flicker of a candle to what people post, state, admit and display in social media.” In emphasizing the critical importance of social media and other web-based evidence, Burney notes that addressing this evidentiary treasure trove is essential for all types and sizes of litigation matters.

Consistent with that point, there has been a dramatic increase in legal and compliance cases involving social media evidence. Top global law firm Gibson Dunn recently reported that as the use of social media continues to proliferate in business and social contexts, and as its importance in litigation increases, “the number of cases focusing on the discovery of social media continued to skyrocket.” Undoubtedly, this is why Burney declares that “every litigator should include (X1 Social Discovery) in their technical tool belt,” and that X1 Social Discovery is “necessary for the smallest domestic issue all the way up to the largest civil litigation matter.” Burney bases his opinion both on the critical importance of social media evidence and on his verdict on the effectiveness of X1 Social Discovery, which he lauds as featuring an interface that “is impressive and logical” and providing “the ideal method” to address social media evidence for court purposes.

From a legal commentary standpoint, two relevant implications of the LTN article stand out. First, the article represents important peer review, publication and validation of X1 Social Discovery under the Daubert Standard, which includes those factors, among others, as a framework for judges to determine whether scientific or other technical evidence is admissible in federal court.

Secondly, this article reinforces the view of numerous legal experts and key bar association ethics opinions that a lawyer’s duty of competence requires addressing social media evidence. The New Hampshire Bar Association’s oft-cited ethics opinion states that lawyers “have a general duty to be aware of social media as a source of potentially useful information in litigation, to be competent to obtain that information directly or through an agent, and to know how to make effective use of that information in litigation.” The New York State Bar similarly weighed in, noting that “A lawyer has a duty to understand the benefits and risks and ethical implications associated with social media, including its use as a … means to research and investigate matters.” And the American Bar Association recently published Comment [8] to Model Rule 1.1, which provides that a lawyer “should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.”

The broader point in Burney’s article is that X1 Social Discovery is enabling technology that provides the requisite feasibility for law firms, consultants, and other practitioners to transition from just talking about social media discovery to establishing it as a standard practice.  With the right software, social media collections for eDiscovery matters and law enforcement investigations can be performed in a very scalable, efficient and highly accurate process. Instead of requiring hours to manually review and collect a public Facebook account, X1 Social Discovery can collect all the data in minutes into an instantly searchable and reviewable format.

So as with any form of digital investigation, feasibility (as well as professional competence) often depends on utilizing the right technology for the job. As law firms, law enforcement, eDiscovery service providers and private investigators all work social media discovery into their standard operating procedures, it is critical that best-practices technology is incorporated to get the job done. This important LTN review is an emphatic punctuation of that necessity.

 


Filed under Social Media Investigations