System Transparency Pack

(System Card / Model Card)


1. System Overview

System name: askKira
System type: AI-powered professional decision-support assistant for education
Deployment context: Schools, Multi-Academy Trusts, and other education organisations

Primary purpose:
askKira supports education professionals by:

  • Interpreting policy and statutory guidance

  • Supporting planning, reflection, and professional judgement

  • Reducing administrative workload

  • Improving consistency and confidence in decision-making

Non-purposes (explicit exclusions):
askKira does not:

  • Make autonomous decisions

  • Replace professional judgement

  • Perform automated safeguarding determinations

  • Diagnose pupils or staff

  • Grade, assess, or label individuals


2. Model & Architecture Summary

Model class:
Large Language Model (LLM)–based conversational system with domain-specific constraints.

Model provenance:

  • Built on commercially available foundation models.

  • Models are not trained on customer content.

System architecture (high level):

  • User prompt → safety checks → contextual orchestration → model inference → output filtering → user response (see the illustrative sketch after this list)

  • Optional retrieval of organisation-specific, permissioned content where enabled.
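
For illustration only, the sketch below shows how these stages could be composed in code. The function names (passes_safety_checks, retrieve_context, run_inference, filter_output) and their placeholder logic are assumptions made for explanation, not askKira's published implementation.

```python
# Illustrative sketch: names and placeholder logic are assumptions,
# not askKira's published implementation.

BLOCKED_TERMS = {"example-blocked-term"}  # placeholder safety lexicon

def passes_safety_checks(prompt: str) -> bool:
    # Stand-in for the automated safety checks applied to incoming prompts.
    return not any(term in prompt.lower() for term in BLOCKED_TERMS)

def retrieve_context(prompt: str, org_id: str) -> list[str]:
    # Stand-in for retrieval of organisation-specific, permissioned content.
    return []

def run_inference(prompt: str, context: list[str]) -> str:
    # Stand-in for a call to the underlying foundation model.
    return f"[model response to {prompt!r} using {len(context)} context snippets]"

def filter_output(draft: str) -> str:
    # Stand-in for output filtering applied before the user sees a response.
    return draft

def handle_prompt(prompt: str, org_id: str, retrieval_enabled: bool = False) -> str:
    """Compose the stages: safety checks -> orchestration -> inference -> filtering."""
    if not passes_safety_checks(prompt):
        return "This request cannot be processed."
    context = retrieve_context(prompt, org_id) if retrieval_enabled else []
    return filter_output(run_inference(prompt, context))

print(handle_prompt("Summarise our attendance policy.", org_id="trust-a"))
```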

Data isolation:

  • Customer data is logically isolated by organisation.

  • No cross-organisation data sharing.

Gaps / to confirm:

☐ Specific foundation model families and versions
☐ Retrieval-augmented generation (RAG) technical detail
☐ Frequency of model updates


3. Training Data & Knowledge Sources

General training:

  • Foundation models are trained on a mixture of licensed data, data created by human trainers, and publicly available text (as per provider standards).

Customer data:

  • Customer prompts, documents, and outputs are not used to train models.

  • Data is processed solely to provide the service.

Organisation-specific knowledge:

  • Where enabled, askKira can reference:

    • Internal policies

    • Trust documentation

    • Approved guidance

Access to this organisation-specific content is strictly permission-controlled.
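
As an illustration of permission-controlled, organisation-isolated access, the sketch below filters documents by owning organisation and role before anything is made available for retrieval. The KnowledgeDocument structure and its field names are assumptions, not a description of askKira's internal data model.

```python
# Illustrative sketch: the document structure and permission fields are
# assumptions, not askKira's internal data model.
from dataclasses import dataclass

@dataclass
class KnowledgeDocument:
    org_id: str               # owning organisation (logical isolation boundary)
    allowed_roles: set[str]   # roles permitted to reference this document
    title: str

def permitted_documents(
    docs: list[KnowledgeDocument], org_id: str, role: str
) -> list[KnowledgeDocument]:
    """Keep only documents from the requesting user's organisation
    whose permissions include the user's role."""
    return [d for d in docs if d.org_id == org_id and role in d.allowed_roles]

docs = [
    KnowledgeDocument("trust-a", {"safeguarding_lead", "headteacher"}, "Safeguarding policy"),
    KnowledgeDocument("trust-b", {"headteacher"}, "Attendance policy"),
]
# A safeguarding lead in trust-a sees only their own trust's permitted documents.
assert [d.title for d in permitted_documents(docs, "trust-a", "safeguarding_lead")] == [
    "Safeguarding policy"
]
```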

Gaps / to confirm:

☐ Detailed list of data categories referenced by default
☐ Retention periods per data type


4. Intended Users & Use Cases

Intended users:

  • Teachers

  • School leaders

  • Trust executives

  • Central teams (SEND, safeguarding, HR, governance)

Intended use cases:

  • Policy interpretation and clarification

  • Drafting and sense-checking professional documents

  • Scenario-based reflection and support

  • Staff workload reduction

Out-of-scope use cases:

  • High-stakes automated decisions

  • Live safeguarding judgements without human oversight

  • Medical, psychological, or legal determinations


5. Guardrails & Safety Controls

Human-in-the-loop:

  • All outputs are advisory only.

  • Users are explicitly instructed to verify outputs.

Content safety controls (illustrated in the sketch after this list):

  • Automated detection and filtering for:

    • Safeguarding-sensitive content

    • Harmful, abusive, or discriminatory language

    • Inappropriate sexual or violent content
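
A minimal sketch of what such output filtering could look like is shown below. The categories, indicator lists, and responses are simplified assumptions; production systems typically rely on trained classifiers rather than keyword matching.

```python
# Illustrative sketch: categories and keyword matching are simplified
# assumptions; real systems typically use trained classifiers.

SAFETY_CATEGORIES = {
    "safeguarding_sensitive": ["self-harm", "abuse disclosure"],
    "harmful_or_discriminatory": ["example-slur"],
}

def classify_output(text: str) -> list[str]:
    """Return the safety categories a draft output appears to touch."""
    lowered = text.lower()
    return [
        category
        for category, indicators in SAFETY_CATEGORIES.items()
        if any(indicator in lowered for indicator in indicators)
    ]

def apply_output_filter(text: str) -> str:
    """Redirect safeguarding-sensitive content and withhold other flagged output."""
    flagged = classify_output(text)
    if "safeguarding_sensitive" in flagged:
        return ("This topic may involve a safeguarding concern. "
                "Please follow your organisation's safeguarding procedures.")
    if flagged:
        return "This response was withheld by the content safety filter."
    return text
```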

Education-specific safeguards:

  • Conservative defaults apply when content concerns children, vulnerability, or risk.

  • Clear prompting to escalate safeguarding concerns through organisational procedures.

Operational controls (illustrated in the sketch after this list):

  • Rate limiting and abuse prevention

  • Role-based access controls

  • Organisational admin oversight
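
For illustration, the sketch below pairs a role-based permission check with a simple sliding-window rate limiter. The roles, permissions, and limits are hypothetical values, not askKira's configured defaults.

```python
# Illustrative sketch: roles, permissions, and limits are hypothetical,
# not askKira's configured defaults.
import time
from collections import defaultdict, deque

ROLE_PERMISSIONS = {
    "teacher": {"ask", "draft_document"},
    "org_admin": {"ask", "draft_document", "manage_users", "view_usage"},
}

def is_permitted(role: str, action: str) -> bool:
    """Role-based access control: allow only actions granted to the role."""
    return action in ROLE_PERMISSIONS.get(role, set())

class RateLimiter:
    """Sliding-window limiter: at most max_requests per window_seconds per user."""

    def __init__(self, max_requests: int = 60, window_seconds: int = 60):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self._history: dict[str, deque[float]] = defaultdict(deque)

    def allow(self, user_id: str) -> bool:
        now = time.monotonic()
        window = self._history[user_id]
        while window and now - window[0] > self.window_seconds:
            window.popleft()  # drop requests outside the window
        if len(window) >= self.max_requests:
            return False      # over the limit; reject this request
        window.append(now)
        return True
```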

Gaps / to confirm:

☐ Red-team testing scenarios used
☐ Safeguarding-specific prompt test cases
☐ Severity thresholds for escalation


6. Known Limitations

General AI limitations:

  • Outputs may be inaccurate, incomplete, or outdated.

  • The system may hallucinate plausible but incorrect information.

  • The model does not “understand” context in a human sense.

Education-specific limitations:

  • Cannot replace contextual knowledge of a school or pupil.

  • Cannot account for all local policies or professional nuance.

  • Not suitable as the sole basis for safeguarding or disciplinary decisions.

Bias & representation limitations:

  • Model outputs may reflect biases present in underlying training data.

  • Outputs require active professional scrutiny to mitigate bias.


7. Monitoring, Evaluation & Improvement

Ongoing monitoring:

  • Automated logging of system performance and safety signals at metadata level (see the sketch after this list).

  • Monitoring for misuse, abuse, or anomalous patterns.
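
The sketch below illustrates what metadata-level logging means in practice: the record carries identifiers, flags, and timings, but deliberately no prompt or response content. The field names are assumptions for illustration.

```python
# Illustrative sketch: field names are assumptions. The point is that
# monitoring records metadata, not the content of prompts or responses.
import json
import time
import uuid

def log_interaction(org_id: str, role: str, safety_flagged: bool, latency_ms: int) -> str:
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "org_id": org_id,                   # organisation, for per-tenant monitoring
        "user_role": role,                  # role, not personal identity
        "safety_flagged": safety_flagged,   # whether a safety control fired
        "latency_ms": latency_ms,
        # Deliberately absent: prompt text, response text, pupil or staff data.
    }
    return json.dumps(event)

print(log_interaction("trust-a", "teacher", safety_flagged=False, latency_ms=840))
```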

Incident handling:

  • Defined escalation pathways for:

    • Safety concerns

    • Safeguarding issues

    • Data protection incidents

Review & improvement:

  • Periodic review of guardrails and prompts.

  • Updates informed by user feedback and incident analysis.

Gaps / to confirm:

☐ Review cadence (e.g. quarterly, biannual)
☐ User-visible feedback mechanism
☐ External or independent assurance process


8. Privacy & Data Protection Summary

Legal alignment:

  • UK GDPR compliant

  • Privacy by design and by default

Key measures:

  • Data minimisation

  • Encryption in transit and at rest

  • No model training on customer data

  • Configurable retention and deletion
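
As a sketch of configurable retention, the example below expresses retention periods per data category and checks when a record falls due for deletion. The categories and periods shown are assumptions; actual values are agreed with each organisation.

```python
# Illustrative sketch: data categories and default periods are assumptions;
# actual retention is configured per organisation.
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = {
    "conversation_history": 90,
    "uploaded_documents": 365,
    "audit_metadata": 730,
}

def is_due_for_deletion(data_type: str, created_at: datetime) -> bool:
    """True once a record has exceeded its configured retention period."""
    age = datetime.now(timezone.utc) - created_at
    return age > timedelta(days=RETENTION_DAYS[data_type])

created = datetime.now(timezone.utc) - timedelta(days=120)
print(is_due_for_deletion("conversation_history", created))  # True: past the 90-day window
```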

Gaps / to confirm:

☐ DPIA publication approach
☐ Sub-processor transparency


9. Accountability & Governance

Accountability model:

  • Human accountability is retained by the customer organisation.

  • askKira provides technical and safety controls; professional judgement remains with users.

Governance:

  • Ethical use embedded in product design and onboarding.

  • Alignment with the UK Government Data & AI Ethics Framework.

Gaps / to confirm:

☐ Named senior accountability role
☐ Governance review board or advisory function


10. Summary for Buyers

askKira is designed as a low-risk, assistive AI system for education, with:

  • Strong privacy and safety foundations

  • Clear limits on autonomy and decision-making

  • Conservative defaults for safeguarding contexts and vulnerable users

The gaps identified above are not unresolved risks; they represent optional enhancements for organisations that require deeper, public-sector-grade assurance.