Responsible AI Policy

Last updated:

Purpose and scope

FASTCLINIC LIMITED ("Fastclinic") develops and operates software that may incorporate artificial intelligence and machine learning ("AI Features") to support clinical workflows, operational efficiency, analytics, and user experience. This Responsible AI Policy describes the principles and practices we apply when designing, deploying, and maintaining those capabilities across our enterprise healthcare platform.

This Policy applies to AI Features we provide as part of the Services. Customers remain responsible for lawful use, clinical governance, and configuration within their organisations. This Policy supplements—but does not replace—your agreement, data processing terms, and applicable regulations.

Governance principles

We commit to the following principles:

  • Fairness: We seek to identify and mitigate unjust bias that could disadvantage individuals or groups based on protected or clinically irrelevant characteristics, consistent with technical feasibility and domain context.
  • Transparency: We provide documentation describing intended use, limitations, and known risks of AI Features appropriate to enterprise customers and, where relevant, end-user-facing disclosures coordinated with customer organisations.
  • Accountability: We maintain internal roles and controls for model lifecycle management, change management, incident review, and escalation paths.
  • Safety and reliability: We employ testing, monitoring, and guardrails proportionate to the risk profile of each feature, with heightened scrutiny for outputs that could influence clinical or public-health decisions.

Human oversight

AI Features are designed as decision-support tools, not autonomous substitutes for licensed healthcare professionals or legally accountable organisational roles. Where outputs may affect patient care, we expect customers to enforce policies requiring qualified human review before actions are taken, except where automation is explicitly approved under appropriate clinical governance and applicable law.

We provide mechanisms for users to override, correct, or disregard model suggestions, and we log certain interactions where required for auditability and product improvement, subject to NDPA 2023 and contractual data minimisation commitments.

Bias and fairness testing

Prior to general availability of high-risk AI Features, we conduct proportionate testing that may include statistical evaluation across relevant subgroups where data is available and statistically meaningful, stress testing with synthetic or red-team scenarios, and review by subject-matter advisers. We document known limitations and residual risks.

Post-deployment, we monitor for performance drift, data skew, and anomalous error patterns. Customers are encouraged to report suspected systematic bias through support channels so we can investigate jointly with appropriate technical and clinical stakeholders.

Patient safety

We align development processes with healthcare safety thinking, including severity classification of failure modes, rollback plans for model or configuration changes, and coordination with customers during incidents that may affect care delivery.

AI Features are not validated as medical devices unless explicitly certified or registered as such in relevant jurisdictions; customers must not represent them otherwise. Local regulatory classifications may vary; customers are responsible for compliance in their deployment context.

Data governance for AI

Training, fine-tuning, evaluation, and operation of AI Features rely on datasets subject to data minimisation, purpose limitation, and lawful basis requirements under NDPA 2023. We use customer production data for model improvement only where contractually permitted and, where appropriate, with de-identification or aggregation, or in segregated environments.

We maintain records of dataset provenance, permitted uses, and subprocessors involved in AI infrastructure, available to enterprise customers under confidentiality as part of due diligence.

Explainability and documentation

We provide documentation that describes, at an appropriate level of technical depth: the intended use case; inputs and outputs; performance metrics from validation; known failure modes; versioning; and update practices. For certain models, we offer feature importance, attention visualisations, natural-language rationales, or citation to source records where architecturally supported—understanding that explainability techniques have limitations and may not fully capture complex model behaviour.

Third-party and open models

Where we integrate third-party foundation models or APIs, we perform vendor due diligence, impose contractual data protection and security requirements consistent with NDPA 2023, and restrict data flows to what is necessary for the subscribed feature. We monitor vendor notices for material changes affecting privacy or safety.

Alignment with NDPA 2023

Processing of personal data in connection with AI Features complies with the Nigeria Data Protection Act 2023 ("NDPA 2023"), including its principles of lawfulness, fairness, transparency, storage limitation, integrity, and confidentiality. Data subjects may exercise their rights through the appropriate controller (often the healthcare organisation customer) or directly with us where we act as controller, via contact@fastclinic.xyz.

Continuous improvement

We periodically review this Policy, our risk assessments, and relevant industry standards. We will notify customers materially affected by changes to our AI governance practices through release notes, contractual notice channels, or revised documentation. For questions regarding this Policy, contact contact@fastclinic.xyz.