By Kelly Twigger and John Patzakis
Implementing AI within a corporate environment is no longer a matter of “if” but “how.” We recently addressed these challenges in our webinar, “Navigating Legal and Compliance Risks in AI,” where our panel of experts discussed the strategic transition required to build a robust risk mitigation framework. While the efficiency gains of AI—such as automating workflows and surfacing deep insights—are compelling, introducing sensitive enterprise data into these models without a tactical plan can lead to unintended consequences. These risks range from the dilution of trade secrets to complex eDiscovery obligations and substantial regulatory exposure under the GDPR.
To leverage AI safely, counsel should focus on the following grounded strategies for risk management.
Protect Trade Secrets
Under federal law, trade secret status is contingent upon the owner taking “reasonable measures” to maintain secrecy. This is a rigorous standard; if proprietary information—such as source code or high-value technical data—is fed into an unsecured AI model without strict access controls, a company risks losing its legal protections entirely.
- Review the Judicial Standard: In Snyder v. Beam Technologies, Inc., the 10th Circuit affirmed that failing to use confidentiality protections or allowing information to reside on unsecured devices can defeat trade secret status.
- Maintain Active Safeguards: Courts emphasize that consistent and active safeguards are required to maintain secrecy. Lax internal controls during AI interactions can be cited as evidence that “reasonable measures” were not maintained.
- Implement No-Prompt Zones: Establish “No-Prompt Zones” for your organization’s most sensitive intellectual property. By isolating core IP from third-party cloud models, you maintain a defensible record of “reasonable measures” that can withstand scrutiny in litigation.
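The "No-Prompt Zone" control above can be enforced technically as well as by policy. The following is a minimal sketch of a pre-prompt screen that blocks inputs matching a denylist before they leave the perimeter; the specific patterns (the `PROJECT-ATLAS` codename, the confidentiality marking) are illustrative assumptions, not a recommended canonical list.

```python
import re

# Hypothetical denylist of markers for the organization's most sensitive IP.
# Real deployments would source these from a data-classification system.
NO_PROMPT_PATTERNS = [
    re.compile(r"\bPROJECT[-_ ]ATLAS\b", re.IGNORECASE),        # internal codename (assumed)
    re.compile(r"CONFIDENTIAL[- ]TRADE[- ]SECRET", re.IGNORECASE),
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),      # credentials
]

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_patterns) for a prompt bound for a third-party model."""
    hits = [p.pattern for p in NO_PROMPT_PATTERNS if p.search(prompt)]
    return (len(hits) == 0, hits)

allowed, hits = screen_prompt("Summarize the PROJECT-ATLAS design doc")
# allowed is False: the codename matches the denylist, so the prompt is
# stopped before reaching the external model, and the block event can be
# logged as evidence of "reasonable measures".
```

Logging each block event, rather than silently discarding the prompt, is what turns this control into the defensible record described above.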
Manage the eDiscovery Paper Trail
AI interactions—both the prompts submitted by employees and the responses generated by the tools—are considered discoverable Electronically Stored Information (ESI). These records are part of the corporate record and are subject to subpoena and legal holds.
- Understand the Technical Reality: Microsoft has confirmed that Microsoft 365 Copilot interactions are logged through the Purview unified audit log, making them searchable, preservable, and producible via eDiscovery tools.
- Assess Scope of Exposure: Because these chats are treated no differently than emails, they may inadvertently expose privileged or damaging material if not managed properly.
- Map Information Logs: Update your legal hold workflows to specifically include AI conversation logs and audit trails. Mapping where these logs live before litigation arises ensures a more controlled and cost-effective discovery process.
Navigate GDPR and Data Privacy
Processing customer or employee data through AI models requires strict adherence to the GDPR principles of data minimization, purpose limitation, and lawfulness. Feeding sensitive data into AI models without a clearly articulated lawful basis—such as consent or legitimate interest—can result in significant administrative fines.
- Meet Compliance Requirements: European authorities require organizations to demonstrate compliance by documenting purposes, limiting data inputs, and ensuring appropriate safeguards are in place.
- Identify Special Categories: The GDPR is particularly restrictive regarding special categories of data, such as health information or data revealing racial or ethnic origin, which may be processed only under the specific exceptions set out in Article 9.
- Conduct Data Protection Impact Assessments: Perform mandatory Data Protection Impact Assessments (DPIAs)—the GDPR's term for privacy impact assessments—for any AI tool that touches personal data. Documenting the purpose and necessity of the processing is critical for demonstrating compliance during an audit.
Leverage In-Place AI Functionality
A critical strategy for reducing risk is shifting where the AI processing occurs. Rather than routing data through external, third-party cloud-hosted AI services, organizations should consider prioritizing workflows where AI is applied in-place within the corporate network or controlled enterprise environment.
- Secure the Data Perimeter: By keeping data and AI processing behind the organization’s own security firewall, you materially reduce the risk of trade secret leakage and data exfiltration.
- Minimize Third-Party Footprint: Applying AI in-place narrows the scope of discoverable third-party records, as the interactions remain within your internal infrastructure rather than residing on a vendor’s servers.
- Establish Full Governance Control: This model provides counsel with direct control over privacy, retention, and audit obligations—essentially giving you the “kill switch” for data that you simply do not have with external cloud vendors.
Tactical Governance and Ethical Oversight
Counsel must navigate the professional and technical nuances of AI deployment to ensure long-term stability.
- Ensure Professional Competence: The ethical duty of technological competence requires attorneys to understand the limitations of the tools they use. AI should be treated as a “junior associate”—capable of great speed but requiring diligent human verification of all output.
- Apply Risk-Based Tiering: Not all AI use cases carry the same weight. We recommend a tiered approach:
  - Tier 1 (Administrative): Low-risk tasks involving non-sensitive data.
  - Tier 2 (Internal/Marketing): Standard communications requiring routine oversight.
  - Tier 3 (High-Value/Restricted): High-stakes processing involving PII, health data, or proprietary IP, requiring senior legal sign-off and strict data handling protocols.
- Execute Proactive Vendor Vetting: Move from consumer-grade tools to enterprise solutions that offer SOC 2 Type 2 attestations. Ensure contracts explicitly prohibit the vendor from using your data to train their global models.
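The tiering scheme above lends itself to a simple, fail-closed implementation. The sketch below assigns a use case the highest tier of any data category it touches; the category names are assumptions for illustration, not a complete taxonomy.

```python
from enum import Enum

class Tier(Enum):
    ADMINISTRATIVE = 1   # Tier 1: low-risk, non-sensitive data
    INTERNAL = 2         # Tier 2: routine oversight
    RESTRICTED = 3       # Tier 3: senior legal sign-off required

# Illustrative mapping from data categories to tiers (assumed names).
CATEGORY_TIERS = {
    "scheduling": Tier.ADMINISTRATIVE,
    "marketing_copy": Tier.INTERNAL,
    "pii": Tier.RESTRICTED,
    "health_data": Tier.RESTRICTED,
    "source_code": Tier.RESTRICTED,
}

def required_tier(categories: set[str]) -> Tier:
    """A use case inherits the highest tier of any data category it touches.
    Unrecognized categories default to RESTRICTED (fail closed)."""
    return max(
        (CATEGORY_TIERS.get(c, Tier.RESTRICTED) for c in categories),
        key=lambda t: t.value,
        default=Tier.ADMINISTRATIVE,
    )
```

Defaulting unknown categories to the restricted tier is a deliberate design choice: a misclassified or unclassified data type escalates to senior legal review rather than slipping through at a lower tier.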
In light of these risks, corporate counsel should take a proactive, structured approach to AI governance. This includes implementing data classification and usage controls to prevent sensitive trade secrets from being exposed to AI systems without safeguards; establishing clear policies governing AI prompts, outputs, retention, and eDiscovery treatment; and conducting privacy impact assessments to ensure personal data processing complies with GDPR and similar regulations. In addition, counsel should carefully evaluate AI deployment models and consider workflows in which AI models are deployed in-place within the corporate network or controlled enterprise environment, rather than routed through third-party cloud-hosted AI services. Keeping data and AI processing inside the organization’s security perimeter can materially reduce trade secret leakage risk, narrow the scope of discoverable third-party records, and provide greater control over privacy, retention, and audit obligations—while still allowing the enterprise to realize the benefits of advanced AI capabilities.
For a deeper dive into these strategies and more case studies, you can watch the full session here.





