Ethics & Assurance Overview
Aligned to the UK Government Data & AI Ethics Framework
Purpose
This overview summarises how askKira aligns with the UK Government’s Data & AI Ethics Framework and supports responsible, safe, and lawful use of AI in education. It is designed to support due diligence by Trust Boards, Executive Leaders, Data Protection Officers (DPOs), and regulators.
1. Transparency
Government expectation:
AI systems should be transparent, explainable, and clear about purpose, limitations, and appropriate use.
askKira approach:
Clear articulation of purpose: askKira is a decision-support and professional-assistive tool, not an autonomous decision-maker.
Publicly available Privacy Policy and Safety Hub describing how data is handled, safeguarded, and constrained.
Explicit guidance to users that AI outputs must be verified by professionals before use.
Clear separation between customer data and model training (customer content is not used to train models).
Evidence / links:
Privacy Policy
Safety Hub
Organisational AI Ethics & Safety Statement (template)
Space for additional evidence or gaps:
☐ System / model transparency summary
☐ Known limitations or failure modes documentation
☐ Customer-facing “How askKira works” explainer
2. Accountability
Government expectation:
Clear ownership, responsibility, and routes for challenge or redress.
askKira approach:
Human accountability is explicit: responsibility for decisions always remains with the organisation and its staff.
No high-stakes automated decisions (e.g. safeguarding judgements, exclusions, grading, diagnosis).
Defined internal escalation and incident-response processes for safety, safeguarding, or data concerns.
Clear contractual delineation of roles (controller/processor responsibilities).
Evidence / links:
Terms of Service
Data Protection & Processing documentation
Safety escalation guidance
Space for additional evidence or gaps:
☐ Named senior accountability role (SRO equivalent)
☐ Customer feedback / contestability mechanism
☐ Response SLAs for reported issues
3. Fairness
Government expectation:
AI systems should be assessed and monitored for bias and unintended discriminatory impact.
askKira approach:
Explicit commitment to inclusive, equitable use in education, including SEND and disadvantaged contexts.
Guardrails to prevent harmful, inappropriate, or discriminatory outputs.
Emphasis on professional judgement to contextualise outputs, rather than reliance on generalised AI responses.
Evidence / links:
AI Ethics & Safety Statement
Safety Hub (content monitoring and restrictions)
Space for additional evidence or gaps:
☐ Bias testing or evaluation methodology
☐ Equality impact considerations (e.g. SEND, EAL, PSED)
☐ Ongoing monitoring and review process
4. Privacy
Government expectation:
Strong data protection, minimisation, and lawful processing.
askKira approach:
UK GDPR–aligned privacy-by-design architecture.
Data minimisation: only necessary data is processed.
Encryption in transit and at rest; role-based access controls.
Customer data is not used to train AI models.
Configurable retention and deletion controls (see the illustrative sketch after this list).
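Illustrative sketch (hypothetical). The controls above are policy commitments rather than published code, but a short example can make their shape concrete. The Python below is invented purely for illustration: ALLOWED_FIELDS, RetentionPolicy, and authorise are assumed names and do not describe askKira’s actual implementation. It shows an allowlist-based minimisation step, a customer-configurable retention window, and a deny-by-default role check.

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    # Hypothetical sketch only; all names are invented for illustration.

    ALLOWED_FIELDS = {"staff_role", "query_text", "created_at"}  # minimisation allowlist

    def minimise(record: dict) -> dict:
        # Data minimisation: drop any field not explicitly allowlisted.
        return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

    @dataclass
    class RetentionPolicy:
        retain_days: int  # customer-configurable retention window

        def is_expired(self, created_at: datetime) -> bool:
            # Timezone-aware timestamps assumed; expired records become
            # eligible for deletion.
            return datetime.now(timezone.utc) - created_at > timedelta(days=self.retain_days)

    ROLE_PERMISSIONS = {
        "dpo": {"read_audit_log", "delete_records"},
        "teacher": {"read_own_records"},
    }

    def authorise(role: str, action: str) -> bool:
        # Role-based access control: deny by default.
        return action in ROLE_PERMISSIONS.get(role, set())

For example, RetentionPolicy(retain_days=90).is_expired(created_at) would flag any record older than a 90-day window for deletion; the window itself is the customer-configurable element.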
Evidence / links:
Privacy Policy
Data Processing Agreement (on request)
Security & access controls documentation
Space for additional evidence or gaps:
☐ DPIA summary or template
☐ Sub-processor list
☐ Independent security assurance (if required)
5. Safety & Security
Government expectation:
AI systems should be robust, secure, and actively monitored for misuse or harm.
askKira approach:
Safety-first framing across product, policy, and onboarding.
Automated monitoring for high-risk or inappropriate content categories (see the illustrative sketch after this list).
Safeguarding-aware constraints aligned to education contexts.
Clear guidance on safe and unsafe uses of AI in schools and Trusts.
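Illustrative sketch (hypothetical). Category-based output monitoring of the kind described above can be pictured as a screening step before any output is released. The example below is invented for illustration: HIGH_RISK_CATEGORIES, classify, and escalate are assumed names, and a production system would use a trained moderation model and the incident-escalation process listed under Evidence, not substring matching.

    # Hypothetical sketch only; category names, keywords, and the
    # escalation hook are invented for illustration.

    HIGH_RISK_CATEGORIES = {"self_harm", "abuse_disclosure", "violence"}

    KEYWORDS = {
        "self_harm": ["hurt myself"],
        "violence": ["bring a weapon"],
    }

    def classify(text: str) -> set[str]:
        # Placeholder classifier: a real system would use a trained
        # moderation model rather than substring matching.
        lowered = text.lower()
        return {cat for cat, words in KEYWORDS.items()
                if any(w in lowered for w in words)}

    def escalate(text: str, categories: set[str]) -> None:
        # Stand-in for the incident-escalation process referenced in
        # this section; in practice this would notify the safeguarding
        # or incident-response lead.
        print(f"Escalated for review: {sorted(categories)}")

    def review_output(text: str) -> str:
        # Withhold any output matching a high-risk category and route
        # it to human review.
        flagged = classify(text) & HIGH_RISK_CATEGORIES
        if flagged:
            escalate(text, flagged)
            return "[Output withheld pending human review]"
        return text

The design point the sketch illustrates is fail-safe behaviour: when a category match occurs, the output is withheld and a human is notified, rather than the system attempting its own judgement.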
Evidence / links:
Safety Hub
Acceptable Use guidance
Incident escalation process
Space for additional evidence or gaps:
☐ Red-teaming or adversarial testing summary
☐ Safeguarding-specific risk scenarios
☐ Audit or monitoring cadence
6. Societal Impact
Government expectation:
AI should deliver public benefit and avoid unintended negative consequences.
askKira approach:
Explicit positioning as augmenting, not replacing, professional expertise.
Designed to reduce workload, improve consistency, and support ethical decision-making.
Strong emphasis on safeguarding, inclusion, and responsible professional use.
Evidence / links:
Social Value Statement
Product positioning and onboarding materials
Space for additional evidence or gaps:
☐ Formal societal impact assessment
☐ Stakeholder consultation evidence
☐ Ongoing impact review process
7. Environmental Sustainability
Government expectation:
Consider and mitigate environmental impacts of AI systems.
askKira approach:
[Currently limited public disclosure; evidence can be developed through the items below]
Space for additional evidence or gaps:
☐ Hosting and infrastructure sustainability posture
☐ Efficiency measures (e.g. compute optimisation)
☐ Commitment to ongoing measurement and improvement
Summary
askKira demonstrates strong alignment with the UK Government’s Data & AI Ethics Framework in privacy, safety, accountability, and human oversight. The checkbox items above identify optional areas of further evidence for organisations that require deeper assurance or public-sector-grade documentation.