Rethinking NDAs in the Age of AI
Why MedTech Companies Must Update Their Confidentiality Agreements for the Era of Artificial Intelligence
RESPONSIBLE AI & GOVERNANCE
Manfred Maiers
10/20/2025 · 3 min read


In the world of MedTech, Non-Disclosure Agreements (NDAs) have long served as the first line of defense for protecting intellectual property, trade secrets, and proprietary knowledge. They are the foundation of every collaboration, from early supplier discussions and co-development partnerships to contract research and manufacturing agreements.
At NoioMed, we’ve seen firsthand how Artificial Intelligence is reshaping collaboration, communication, and data protection in MedTech. AI tools now interact with sensitive product data, design documentation, and quality records in ways traditional legal frameworks never predicted.
An NDA’s purpose is still simple yet vital: to ensure that any confidential information shared between parties stays protected and is used only for the intended business purpose.
But as companies integrate AI tools, external consultants, and cloud-based systems into daily workflows, traditional NDAs are no longer sufficient.
Confidentiality obligations that once focused on emails, file transfers, and physical documents now extend into AI prompts, datasets, and machine-learning models, all capable of retaining, reproducing, or inadvertently exposing sensitive data.
🤖 The New Confidentiality Risk: AI Tools and Consultants
In earlier articles, we discussed:
The need for AI Policies that govern internal use, and
The importance of AI Supplier Controls under 21 CFR Part 820.50.
The third pillar is contractual: ensuring that NDAs explicitly address AI usage, storage, and disclosure risks.
Consider these emerging scenarios:
An AI consultant uses a generative model (like ChatGPT, Claude, or DeepSeek) to analyze proprietary MedTech design data.
A contract engineer feeds confidential device drawings into a third-party cloud-based AI system to “accelerate” documentation.
A supplier uses AI to draft a test report, unaware that the underlying model retains snippets of client data on remote servers.
In each case, a traditional NDA offers limited protection. Without explicit AI clauses, ownership, control, and traceability of confidential data become ambiguous.
📜 Essential NDA Updates for the AI Era
Here are key sections every MedTech organization should review and update:
1. Definition of “Confidential Information”
Expand the definition to include:
Prompts, inputs, outputs, training data, intermediate files, and AI-generated materials derived from confidential sources.
Clarify that both the inputs to and outputs from AI systems are covered.
2. Restrictions on Use of AI Tools
Include clauses that explicitly state:
No confidential information may be input, uploaded, or processed by public or unapproved AI systems.
Only company-approved and validated AI environments may be used when managing confidential data.
Vendors and consultants must disclose any AI or machine-learning tools used in performing services.
3. Data Location and Sovereignty
Add requirements for data residency and model location — critical when dealing with AI systems hosted outside regulated jurisdictions (e.g., China-based models like DeepSeek).
Require suppliers to use LLMs and cloud providers located within the U.S. or GDPR-compliant regions.
Prohibit transferring confidential data to servers in areas that lack adequate IP and privacy protection.
4. Ownership of AI-Generated Deliverables
Clarify that:
All outputs, models, and analyses derived from the company’s data are still the sole property of the disclosing party.
Vendors may not reuse, retrain, or commercialize any models holding the company’s confidential information.
5. Audit and Verification Rights
Grant the company the right to audit AI usage related to confidential information. This ensures the company can verify which tools external partners used, where data was stored, and how it was protected.
6. Employee and Subcontractor Obligations
Extend confidentiality and AI-use restrictions to all employees, subcontractors, and AI-driven service providers involved in the engagement.
⚖️ The Overlooked Intersection: AI + Legal + Quality
Updating NDAs isn’t just a legal exercise; it’s a cross-functional compliance requirement.
Legal teams must ensure language reflects AI realities and global data regulations.
Quality and Regulatory Affairs must confirm alignment with 21 CFR Part 820.50 and ISO 13485:2016 requirements on supplier and documentation controls.
IT and Security must confirm that AI environments meet internal data-protection standards (SOC2, HIPAA, ISO 27001).
This collaboration closes the gap between contract terms, supplier practices, and actual technical safeguards.
⚠️ The Risk of Doing Nothing
Without AI-specific NDA provisions, MedTech companies risk:
Loss of IP ownership if AI outputs are co-mingled with public datasets.
Regulatory exposure if confidential design data enters uncontrolled environments.
Reputational harm if patient-related or proprietary information leaks through third-party models.
In other words, your NDA might not protect what it once did.
💡 From Legal Shield to Strategic Enabler
When updated correctly, NDAs become more than a legal formality; they become a strategic enabler for responsible innovation.
They prove to partners, regulators, and employees that the company:
✅ Understands its obligations in the AI era.
✅ Controls data flow across all partnerships.
✅ Integrates legal, technical, and ethical safeguards into its daily operations.
🧭 Conclusion
In MedTech, trust and compliance are inseparable. Updating NDAs to reflect AI realities is not optional; it’s essential.
AI has changed how we create, analyze, and collaborate. It must now also change how we define and protect confidentiality.
Because the next time a supplier uses an AI tool, your company’s IP, data, and reputation will depend on the clauses you wrote — or the ones you forgot to.
Author: Manfred Maiers, Founder & Principal Consultant at NoioMed. I help MedTech companies integrate Artificial Intelligence responsibly across engineering, operations, and quality systems, ensuring alignment with FDA QMSR, ISO 13485, and global regulatory expectations.
Our focus is on bridging innovation with compliance, guiding organizations to modernize processes, safeguard intellectual property, and build trust in the age of AI.