Ethical AI use policy

clarifAI, qualifAI, dignifAI, and medifAI

A comprehensive framework for responsible AI development and deployment

1. Introduction and Vision

clarifAI was built on the belief that medical communications can — and should — work smarter. Artificial intelligence is part of that evolution, but only when deployed responsibly, ethically, and with unwavering commitment to human welfare.

Our approach to AI is both bold and responsible — bold in rapidly innovating to improve medical communications, training, and patient support, and responsible in ensuring that every deployment prioritises safety, accuracy, fairness, and human dignity.

AI supports our work. Humans remain accountable for it.

This policy establishes our commitment to developing and deploying AI systems that are transparent, reliable, and worthy of trust, while maximising benefits and minimising risks across all our operations.

2. Scope of Application

This policy applies comprehensively to:

  • clarifAI: Medical communications agency services

  • qualifAI: Training and professional development programmes

  • dignifAI: Charitable and patient-support initiatives

  • medifAI: AI-enabled review platform (currently in development)

  • All in-house staff, specialist partners, external collaborators, and technology providers

3. Core Ethical Principles

Our AI ethics framework is grounded in internationally recognised principles that guide every stage of AI development, deployment, and monitoring.

3.1 Human Oversight and Accountability

Principle: People must be accountable for AI systems. No AI output is final until validated by qualified professionals.

In Practice:

  • AI-generated or AI-assisted outputs must be reviewed and validated by qualified professionals before being delivered, published, or relied upon (a minimal review-gate sketch follows this list)

  • Clear lines of accountability must be established for all AI-assisted deliverables

  • Human decision-making authority must not be delegated entirely to AI systems

  • We maintain designated oversight roles at leadership level
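
As a minimal illustration of this review gate (the names here are hypothetical, not an existing clarifAI system), an AI-assisted deliverable can be modelled so that it simply cannot be released until a named, qualified professional has signed it off:

    from dataclasses import dataclass
    from datetime import datetime, timezone
    from typing import Optional

    @dataclass
    class Deliverable:
        """An AI-assisted output that must pass human review before release."""
        content: str
        ai_assisted: bool = True
        reviewer: Optional[str] = None          # named, qualified professional
        reviewed_at: Optional[datetime] = None

        def sign_off(self, reviewer: str) -> None:
            # Recording WHO validated the output keeps accountability with a person.
            self.reviewer = reviewer
            self.reviewed_at = datetime.now(timezone.utc)

        def release(self) -> str:
            # The gate: AI-assisted content without human sign-off cannot go out.
            if self.ai_assisted and self.reviewer is None:
                raise PermissionError("AI-assisted output requires human sign-off before release")
            return self.content

    draft = Deliverable(content="AI-drafted summary of trial results")
    draft.sign_off(reviewer="Dr. A. Example (medical writer)")
    print(draft.release())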

3.2 Fairness and Inclusiveness

Principle: AI systems should treat all people fairly and empower everyone, regardless of background or ability.

In Practice:

  • We actively assess and mitigate potential bias in AI training data, algorithms, and outputs (a simple spot-check is sketched after this list)

  • AI systems must be designed to be inclusive of people of all abilities

  • We ensure fair allocation of opportunities, resources, and information

  • Patient-facing content undergoes enhanced fairness review

  • We engage diverse perspectives in AI development and testing
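
One very simple form such a bias assessment can take (a toy spot-check, not our production audit tooling) is comparing an AI system's favourable-outcome rates across groups and flagging any gap above a tolerance:

    # Crude demographic-parity check: compare favourable-outcome rates across
    # groups and flag pairwise gaps above a tolerance for human investigation.
    def parity_gaps(outcomes_by_group: dict[str, list[int]], tolerance: float = 0.1):
        rates = {g: sum(xs) / len(xs) for g, xs in outcomes_by_group.items()}
        groups = list(rates)
        flags = [(a, b, round(abs(rates[a] - rates[b]), 3))
                 for i, a in enumerate(groups) for b in groups[i + 1:]
                 if abs(rates[a] - rates[b]) > tolerance]
        return rates, flags

    # Toy data: 1 = favourable output for a reader in that group.
    rates, flags = parity_gaps({
        "group_a": [1, 1, 1, 0, 1, 1, 0, 1],
        "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
    })
    print(rates)  # per-group favourable-outcome rates
    print(flags)  # gaps above tolerance -> investigate, never auto-correct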

3.3 Reliability, Safety, and Quality

Principle: AI systems must perform reliably, safely, and consistently across different contexts.

In Practice:

  • AI systems undergo rigorous testing across multiple scenarios and use conditions

  • We implement safety measures to prevent harmful or inaccurate outputs

  • Regular performance monitoring and quality assurance procedures are mandatory

  • AI is deployed to strengthen clarity and quality, never to compromise them

  • Systems are stress-tested before deployment, particularly for medifAI and patient-facing applications

3.4 Privacy, Security, and Confidentiality

Principle: AI systems must be secure and respect privacy at every stage.

In Practice:

  • Sensitive client, regulatory, or personal data must not be entered into AI systems without appropriate contractual, technical, and security safeguards

  • Data handling complies with all applicable data protection legislation (GDPR, HIPAA, etc.)

  • We implement industry-leading security measures to protect against unauthorised access

  • Privacy considerations are embedded in design, not added as an afterthought ("privacy by design")

  • Data minimisation principles guide all AI deployments (a simplified redaction sketch follows this list)
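
As one concrete, deliberately simplified illustration of minimisation, direct identifiers can be stripped from text before it is ever sent to an external AI service. The patterns below are illustrative only and no substitute for the contractual and technical safeguards above:

    import re

    # Strip obvious direct identifiers before text leaves our systems.
    REDACTIONS = [
        (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
        (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
        (re.compile(r"\+?\d[\d\s-]{8,}\d"), "[PHONE]"),
    ]

    def minimise(text: str) -> str:
        for pattern, placeholder in REDACTIONS:
            text = pattern.sub(placeholder, text)
        return text

    print(minimise("Enquiry from jane.doe@example.com on 03/02/2026, tel +44 20 7946 0958."))
    # -> "Enquiry from [EMAIL] on [DATE], tel [PHONE]."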

3.5 Transparency and Explainability

Principle: AI systems should be understandable, and their use should be disclosed.

In Practice:

  • We are transparent about when and how AI is used in our work

  • Users and stakeholders are informed about AI capabilities and limitations

  • We provide clear documentation of AI system functionality

  • Decisions influenced by AI must be explainable in terms stakeholders can understand

  • We maintain auditability of AI-generated outputs where appropriate (an example audit-trail entry is sketched after this list)
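
A minimal sketch of what one such audit-trail entry might contain (the tool names and fields are hypothetical): enough metadata to reconstruct, later, which AI system produced which output and under whose review.

    import json
    from datetime import datetime, timezone

    # Append one audit entry per AI-assisted output to a write-once JSON-lines log.
    def log_ai_output(path: str, *, tool: str, purpose: str, reviewer: str, output_id: str) -> None:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "tool": tool,            # which AI system was used
            "purpose": purpose,      # what it was used for
            "reviewer": reviewer,    # who validated the output
            "output_id": output_id,  # link to the stored deliverable
        }
        with open(path, "a", encoding="utf-8") as log:
            log.write(json.dumps(entry) + "\n")

    log_ai_output("ai_audit.jsonl", tool="summariser-v2", purpose="plain-language summary",
                  reviewer="Dr. A. Example", output_id="DOC-0042")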

3.6 Scientific Validity and Accuracy

Principle: Strategy precedes automation. AI must enhance, not replace, scientific rigour.

In Practice:

  • AI is deployed to improve structured workflows, not to shortcut thinking, strategy, or scientific interpretation

  • AI-assisted research, summarisation, citation checking, or compliance flagging must be verified by subject-matter experts

  • AI does not replace reference validation or regulatory review

  • We measure tangible outcomes and maintain high evidential standards

4. Governance Framework

4.1 Leadership and Oversight

Responsible AI use is overseen at the highest levels of the organisation through:

  • Designated Responsible AI Lead: Senior leadership role with the authority and resources to oversee responsible AI use

  • AI Ethics Committee: Cross-functional team reviewing high-risk applications

  • Regular Executive Reviews: Quarterly assessments of AI deployment and impact

  • Board-Level Reporting: Annual reporting on AI governance and risk management

4.2 Risk Assessment and Management

Before deploying AI in new workflows or products, we conduct comprehensive risk assessments examining the following (a minimal triage sketch follows the lists below):

  • Purpose and Intent: Clear articulation of goals and expected benefits

  • Data Sensitivity: Classification of data types and protection requirements

  • Compliance Impact: Regulatory and legal implications

  • Accuracy Risk: Potential for inaccuracy, hallucination, or misinterpretation

  • Bias and Fairness: Potential for unfair or discriminatory outcomes

  • Human Oversight Level: Appropriate degree of human review required

  • Stakeholder Impact: Effects on patients, clients, healthcare providers, and the public

Enhanced Safeguards Apply When:

  • Working with compliance review or promotional materials

  • Creating patient-facing or public health content

  • Processing protected health information

  • Making clinical or treatment-related recommendations

  • Deploying AI in medifAI or other high-stakes environments
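
A minimal sketch of how these factors and triggers might be combined into a risk tier (the scores, thresholds, and field names are hypothetical; in practice a human assessor scores each factor):

    # Each factor is scored 0-2 by a human assessor; the enhanced-safeguard
    # triggers above force the highest tier regardless of the numeric score.
    FACTORS = ["data_sensitivity", "compliance_impact", "accuracy_risk",
               "bias_risk", "stakeholder_impact"]

    def risk_tier(scores: dict[str, int], *, patient_facing: bool, uses_phi: bool) -> str:
        assert set(scores) == set(FACTORS), "every factor must be scored"
        if patient_facing or uses_phi:
            return "high"                    # enhanced safeguards always apply
        total = sum(scores.values())
        return "high" if total >= 7 else "medium" if total >= 4 else "low"

    tier = risk_tier(
        {"data_sensitivity": 1, "compliance_impact": 2, "accuracy_risk": 1,
         "bias_risk": 0, "stakeholder_impact": 1},
        patient_facing=False, uses_phi=False,
    )
    print(tier)  # "medium"; a "high" tier would go to the AI Ethics Committee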

4.3 Development and Deployment Standards

We implement the following throughout the AI lifecycle:

Design Phase:

  • Incorporate ethical principles from project inception

  • Engage diverse stakeholders in requirements gathering

  • Conduct preliminary impact assessments

  • Document intended use cases and limitations

Testing Phase:

  • Rigorous validation across diverse scenarios

  • Red team exercises to identify potential failures

  • Bias and fairness testing

  • Security and privacy assessments

  • Scenario-based evaluation for medifAI and critical applications

Deployment Phase:

  • Staged rollouts with monitoring

  • User training and guidance

  • Clear documentation of capabilities and limitations

  • Established feedback mechanisms

Monitoring Phase:

  • Continuous performance monitoring

  • Regular audits of outputs and decisions

  • User feedback collection and analysis

  • Iterative improvement based on real-world performance

5. Bold Innovation with Responsibility

We recognise that responsible AI requires balancing innovation with caution. Our approach emphasises:

5.1 Advancing Scientific Discovery

We leverage AI to:

  • Accelerate medical communications research and evidence synthesis

  • Improve accessibility and understanding of complex medical information

  • Support healthcare provider education and patient empowerment

  • Contribute to scientific advances that benefit public health

5.2 Real-World Problem Solving

We focus AI deployment on:

  • Addressing genuine needs in medical communications

  • Improving efficiency without sacrificing quality

  • Enhancing clarity and understanding for diverse audiences

  • Solving practical challenges faced by healthcare professionals and patients

5.3 Frontier Responsibility

As AI capabilities advance, we commit to:

  • Staying informed about emerging risks and best practices

  • Adapting our governance framework to new developments

  • Participating in industry-wide responsible AI initiatives

  • Contributing to the broader responsible AI ecosystem

6. Partner Engagement and Third-Party AI

6.1 Partner Standards

We work exclusively with specialist partners and technology providers who align with our ethical standards. Partners must:

  • Maintain robust data protection and security safeguards

  • Demonstrate transparency about AI usage and limitations

  • Provide auditability of outputs where appropriate

  • Commit to avoiding prohibited or unethical AI practices

  • Comply with relevant regulations and industry standards

  • Share our commitment to human oversight and accountability

6.2 Technology Provider Requirements

AI technology providers must:

  • Provide clear documentation of model capabilities and limitations

  • Disclose training data sources and potential biases

  • Implement appropriate content filtering and safety measures

  • Offer transparent pricing and service level agreements

  • Support our compliance and audit requirements

  • Maintain industry-standard security certifications

6.3 Prohibited AI Practices

We will not engage with partners or use AI systems that:

  • Compromise client confidentiality or regulatory integrity

  • Lack appropriate safety and security measures

  • Operate without adequate human oversight capabilities

  • Cannot provide transparency about their functioning

  • Violate professional or ethical standards in healthcare

  • Employ unfair or discriminatory practices

7. Training, Education, and Literacy

7.1 Mandatory Training

All team members and specialist partners using AI receive comprehensive guidance on:

  • Appropriate Use Cases: When and how to use AI effectively

  • System Limitations: Understanding what AI can and cannot do

  • Verification Requirements: Standards for reviewing and validating AI outputs

  • Data Handling: Proper management of sensitive information

  • Ethical Considerations: Recognising and addressing ethical concerns

  • Compliance Requirements: Regulatory and legal obligations

7.2 Ongoing Development

Responsible AI literacy forms an integral part of our professional development through qualifAI, including:

  • Regular updates on AI capabilities and risks

  • Case studies of responsible and irresponsible AI use

  • Hands-on training with AI tools used in our operations

  • Forums for sharing experiences and lessons learned

  • Access to external expertise and thought leadership

7.3 Role-Specific Training

Training is tailored to different roles:

  • Leadership: Strategic AI governance and risk management

  • Medical Writers: AI-assisted content creation with scientific rigour

  • Compliance Officers: AI implications for regulatory adherence

  • Patient Advocates: Ethical considerations in patient-facing AI

  • Technical Staff: AI system design, implementation, and monitoring

8. Collaborative Progress

8.1 Internal Collaboration

We foster a culture of shared responsibility through:

  • Cross-functional AI working groups

  • Regular knowledge sharing sessions

  • Transparent reporting of AI incidents and learnings

  • Collaborative problem-solving on ethical challenges

8.2 External Engagement

We engage actively with the broader AI ecosystem:

  • Participating in industry forums and standards development

  • Collaborating with academic and research institutions

  • Sharing learnings (while protecting proprietary information)

  • Engaging with regulators and policymakers

  • Contributing to responsible AI best practices

8.3 Stakeholder Involvement

We seek input from diverse stakeholders:

  • Patients and patient advocacy groups

  • Healthcare providers and medical professionals

  • Regulatory bodies and ethics committees

  • Technology experts and AI researchers

  • Civil society and public interest organisations

9. Continuous Improvement and Adaptation

9.1 Regular Policy Review

This policy is reviewed and updated:

  • Annually: Comprehensive policy review and update

  • As Needed: In response to significant AI developments, incidents, or regulatory changes

  • Following Incidents: After any AI-related issue or near-miss

9.2 Performance Monitoring

We track key performance indicators (a minimal tracking sketch follows this list):

  • Accuracy Metrics: Verification of AI output quality

  • Safety Incidents: Any AI-related errors or near-misses

  • User Satisfaction: Feedback from staff and stakeholders

  • Compliance Adherence: Regulatory and policy compliance rates

  • Fairness Audits: Regular bias and fairness assessments
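
A minimal sketch of how one such indicator might be tracked (hypothetical thresholds; here the KPI is the share of AI outputs passing expert verification unchanged). Falling below the floor prompts human investigation, never an automated response:

    from collections import deque

    class KpiMonitor:
        """Rolling pass rate over the last `window` verified outputs."""
        def __init__(self, window: int = 100, floor: float = 0.95):
            self.results = deque(maxlen=window)  # 1 = passed verification
            self.floor = floor

        def record(self, passed: bool) -> None:
            self.results.append(1 if passed else 0)

        def rate(self) -> float:
            return sum(self.results) / len(self.results) if self.results else 1.0

        def needs_review(self) -> bool:
            return self.rate() < self.floor

    monitor = KpiMonitor(window=50, floor=0.9)
    for passed in [True] * 40 + [False] * 8:
        monitor.record(passed)
    print(round(monitor.rate(), 2), monitor.needs_review())  # 0.83 True -> investigate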

9.3 Emerging Technologies

As AI technology evolves, we commit to:

  • Staying informed about new capabilities and risks

  • Assessing emerging technologies against our principles

  • Updating our governance framework proactively

  • Maintaining agility while preserving core ethical commitments

9.4 Regulatory Landscape

We actively monitor and respond to:

  • Emerging legal and regulatory requirements (EU AI Act, FDA guidance, etc.)

  • Industry standards and best practices

  • Professional guidelines for medical communications

  • International frameworks and conventions

10. Specific Application Areas

10.1 Medical Communications (clarifAI)

AI use in medical communications is subject to enhanced oversight:

  • All scientific content undergoes expert medical review

  • Citations and references are manually verified

  • Regulatory compliance checks are never fully automated

  • Client confidentiality is strictly protected

  • Promotional materials receive additional scrutiny

10.2 Training Programmes (qualifAI)

AI in training and education must:

  • Enhance learning without replacing human instruction

  • Be accessible to diverse learners

  • Provide accurate and evidence-based information

  • Respect intellectual property rights

  • Support, not substitute, critical thinking development

10.3 Patient Support (dignifAI)

Patient-facing AI applications require the highest standards:

  • Clear disclosure of AI involvement

  • Emphasis on empowerment, not medical advice

  • Enhanced safety and accuracy measures

  • Cultural and linguistic sensitivity

  • Privacy protection exceeding minimum requirements

  • Clear pathways to human support when needed

10.4 medifAI Platform

Our AI-enabled review platform is subject to:

  • Pre-release testing including structured scenario review

  • Quality assurance to ensure oversight features function as intended

  • Regular audits post-deployment

  • User feedback mechanisms

  • Continuous refinement based on performance data

  • Enhanced security measures given the sensitivity of data processed

11. Prohibited Uses and Red Lines

We will not use AI for:

  • Making final medical diagnoses or recommending treatment decisions

  • Fully automated regulatory decision-making

  • Processing patient data without appropriate consent and safeguards

  • Replacing human judgment in safety-critical decisions

  • Creating misleading or deceptive content

  • Any application that violates professional ethical standards

  • Uses that could directly harm patients or healthcare providers

  • Applications that unfairly discriminate or exclude

12. Incident Response and Accountability

12.1 Incident Reporting

We maintain a clear process for reporting and addressing AI-related incidents (a minimal incident record is sketched after this list):

  • Immediate Reporting: All staff can report concerns without fear of retaliation

  • Triage: Rapid assessment of incident severity and impact

  • Investigation: Thorough root cause analysis

  • Remediation: Swift action to address issues

  • Documentation: Comprehensive record-keeping

  • Learning: Sharing lessons learned across the organisation
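
A minimal sketch of an incident record mirroring these steps (all names hypothetical): anyone can open one, severity is triaged on intake, and the record cannot be closed until investigation, remediation, and lessons learned are documented.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    SEVERITIES = ("low", "medium", "high", "critical")

    @dataclass
    class AiIncident:
        reported_by: str                     # any staff member, without retaliation
        description: str
        severity: str = "medium"
        opened_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
        root_cause: str = ""
        remediation: str = ""
        lessons: str = ""
        closed: bool = False

        def __post_init__(self):
            if self.severity not in SEVERITIES:          # triage on intake
                raise ValueError(f"severity must be one of {SEVERITIES}")

        def close(self) -> None:
            # No quiet closures: investigation, fix, and learning must be on record.
            if not (self.root_cause and self.remediation and self.lessons):
                raise ValueError("document root cause, remediation, and lessons before closing")
            self.closed = True

    incident = AiIncident(reported_by="medical writer", severity="high",
                          description="AI-drafted summary cited a retracted study")
    incident.root_cause = "stale literature index"
    incident.remediation = "summary corrected and re-reviewed; index refreshed"
    incident.lessons = "add retraction check to the verification checklist"
    incident.close()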

12.2 Accountability Measures

When AI-related issues occur:

  • Affected parties are promptly notified

  • Corrective actions are implemented immediately

  • Systemic improvements are made to prevent recurrence

  • Serious incidents are escalated to leadership and, where appropriate, regulators

  • We take responsibility and do not hide behind "the AI made a mistake"

13. Our Commitment

Across clarifAI and its associated operations, we are committed to:

  • Using AI to strengthen clarity and quality, not to shortcut professional judgment

  • Maintaining human accountability for all outputs and decisions

  • Protecting data and confidentiality at the highest standards

  • Being transparent about how technology supports our work

  • Continuously improving how we deploy AI responsibly

  • Treating all people fairly and designing inclusive systems

  • Ensuring reliability and safety in every AI application

  • Collaborating with partners who share our values

  • Empowering our team with knowledge and tools for responsible AI use

  • Engaging boldly with innovation while managing risks carefully

14. Conclusion

Innovation matters. So does trust. We will not compromise one for the other.

This policy reflects our commitment to harnessing AI's transformative potential while upholding the highest ethical standards. It is a living document that will evolve as AI technology advances, regulations develop, and we learn from experience.

By following these principles, we aim to lead in responsible AI deployment within medical communications, setting standards that benefit our clients, patients, healthcare providers, and society at large.

Policy Ownership: Responsible AI Lead, clarifAI Leadership Team

Review Frequency: Annual review, with updates as needed

Next Scheduled Review: [Date]

Version: 1.0

Effective Date: [Date]

Questions or Concerns: Contact the Responsible AI Lead at [contact information]

This policy draws on the published principles of leading AI organisations, including Microsoft's Responsible AI principles and Google's AI Principles, adapted to the specific context and needs of medical communications.

Date of last revision: February 2026