DPIA Accelerator

For Multi-Academy Trusts and Local Authorities


PART A — DPIA CORE TEMPLATE

1. Project Overview

Project name:
Use of askKira AI assistant within education services

Description of processing:
Deployment of an AI-powered professional support tool that assists staff with policy interpretation, drafting, and reflective practice, with the aim of reducing administrative workload.

Purpose of processing:

  • Support professional judgement

  • Improve consistency and confidence

  • Reduce administrative burden

[Organisation to complete]:

Specific local objectives and use cases


2. Data Controller & Processor Roles

Data controller:
The MAT or Local Authority

Data processor:
askKira

Sub-processors:
Engaged for secure hosting and AI model provision

[askKira provided]:

  • Data Processing Agreement available

  • No training of AI models on customer data

  • Logical separation of organisational data
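
By way of illustration, the sketch below shows one common pattern for logical separation: scoping every stored record and every query to a single organisation. It is a minimal Python sketch under assumed names; the classes and fields are hypothetical examples, not askKira's actual implementation.

    # Illustrative sketch only: every record and every query is scoped
    # to a single organisation (tenant). Names are hypothetical, not
    # askKira's actual implementation.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Record:
        tenant_id: str   # the owning Trust or Local Authority
        content: str

    class TenantScopedStore:
        def __init__(self) -> None:
            self._records: list[Record] = []

        def add(self, tenant_id: str, content: str) -> None:
            self._records.append(Record(tenant_id, content))

        def query(self, tenant_id: str) -> list[Record]:
            # Every read path filters on tenant_id, so one organisation's
            # data never appears in another organisation's results.
            return [r for r in self._records if r.tenant_id == tenant_id]

    store = TenantScopedStore()
    store.add("trust-a", "Attendance policy draft")
    store.add("la-b", "SEND guidance note")
    assert store.query("la-b") == [Record("la-b", "SEND guidance note")]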

[Organisation to confirm]:

Controller responsibilities and internal governance


3. Data Categories

Data subjects:

  • Staff

  • Pupils (indirect reference only)

  • Parents / carers (indirect reference only)

Personal data types:

  • Professional role information

  • Policy or operational context

  • Free-text inputs provided by staff

Special category data:

  • Not required for normal operation

  • May arise incidentally in safeguarding or SEND contexts

[askKira provided]:

  • Data minimisation controls

  • Guidance to anonymise personal data where possible
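
As a concrete illustration of the anonymisation guidance, the minimal Python sketch below redacts obvious direct identifiers from free text before submission. The patterns and the redact function are hypothetical examples, not a complete PII filter and not askKira's own tooling.

    # Illustrative sketch only: strip obvious direct identifiers before
    # text is entered into the tool. Not a complete PII filter.
    import re

    PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "phone": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    }

    def redact(text: str) -> str:
        """Replace obvious direct identifiers with placeholders."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label} removed]", text)
        return text

    print(redact("Email j.smith@school.org or call 01234 567890."))
    # -> Email [email removed] or call [phone removed].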

[Organisation to confirm]:

Whether staff are permitted to enter special category data locally


4. Lawful Basis for Processing

Primary lawful basis:

  • Article 6(1)(e) – Public task
    or

  • Article 6(1)(f) – Legitimate interests (not available to public authorities for processing carried out in the performance of their tasks)

Special category (if applicable):

  • Article 9(2)(g) – Substantial public interest (with an associated DPA 2018 Schedule 1 condition)

  • Article 9(2)(h) – Health or social care (context-dependent)

[Organisation to complete]:

Selected lawful basis and justification


5. Necessity & Proportionality

Why AI is necessary:

  • Supports staff at scale

  • Improves consistency

  • Reduces workload without replacing professional judgement

Why askKira is proportionate:

  • Advisory only

  • Human-in-the-loop

  • No automated decision-making

  • Strong safeguards for children and vulnerable groups

[askKira provided]:

  • Conservative system design

  • Clear usage boundaries


6. Risk Assessment

Risk                             Likelihood   Impact   Mitigation
Inaccurate outputs               Medium       Medium   Human verification required
Bias or unfair guidance          Low–Medium   Medium   Bias testing, feedback routes
Over-reliance on AI              Medium       Medium   Training & usage guidance
Safeguarding misinterpretation   Low          High     Escalation guidance & safeguards
Data breach                      Low          High     Encryption, access controls

[askKira provided]:

  • Safety Hub

  • Bias & Fairness Framework

  • Incident escalation process

[Organisation to complete]:

Residual risk ratings and sign-off


7. Measures to Mitigate Risk

Technical measures:

  • Encryption in transit and at rest

  • Role-based access control

  • Rate limiting and monitoring
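
For illustration, the minimal Python sketch below shows two of these controls in simplified form: a role-based access check and a sliding-window rate limiter. Role names, limits, and function names are hypothetical examples, not askKira's actual configuration.

    # Illustrative sketch only: simplified role-based access control and
    # per-user rate limiting. Roles and limits are hypothetical examples.
    import time
    from collections import defaultdict, deque

    ROLE_PERMISSIONS = {
        "staff": {"ask", "draft"},
        "dsl":   {"ask", "draft", "safeguarding_escalate"},
        "admin": {"ask", "draft", "safeguarding_escalate", "audit"},
    }

    def is_allowed(role: str, action: str) -> bool:
        # Deny by default: unknown roles get no permissions.
        return action in ROLE_PERMISSIONS.get(role, set())

    class RateLimiter:
        """Allow at most `limit` requests per user in a sliding window."""
        def __init__(self, limit: int = 30, window_s: int = 60) -> None:
            self.limit, self.window_s = limit, window_s
            self._hits: dict[str, deque] = defaultdict(deque)

        def allow(self, user_id: str) -> bool:
            now = time.monotonic()
            hits = self._hits[user_id]
            while hits and now - hits[0] > self.window_s:
                hits.popleft()   # drop requests outside the window
            if len(hits) >= self.limit:
                return False
            hits.append(now)
            return True

    assert is_allowed("dsl", "safeguarding_escalate")
    assert not is_allowed("staff", "audit")
    rl = RateLimiter(limit=2, window_s=60)
    assert rl.allow("u1") and rl.allow("u1") and not rl.allow("u1")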

Organisational measures:

  • Staff training on safe AI use

  • Clear safeguarding escalation routes

  • Policy on acceptable AI use

[askKira provided]:

  • Templates for AI ethics and safety statements


8. Automated Decision-Making

Assessment:

  • askKira does not make decisions based solely on automated processing that have legal or similarly significant effects, so Article 22 UK GDPR is not engaged. All outputs are advisory and reviewed by staff.

Conclusion:

  • Human oversight retained at all times.


9. Consultation & Advice

Internal consultation:

  • DPO

  • Safeguarding lead

  • IT / Digital lead

External consultation:

  • Prior consultation with the ICO (Article 36 UK GDPR) is not considered necessary, as mitigations reduce residual risk below the high-risk threshold
    (subject to local assessment)


10. DPIA Outcome

Overall risk level:

  • Low to Medium (with mitigations)

Decision (select one):

  • Proceed

  • Proceed with conditions

  • Do not proceed

Sign-off:

  • DPO

  • Senior Responsible Owner


PART B — OPTIONAL PUBLISHABLE DPIA SUMMARY

(For websites, transparency pages, or FOI responses)

DPIA Summary: Use of askKira AI Assistant

What we are doing
We are using askKira, an AI-based assistant, to support staff with professional tasks such as policy interpretation and drafting.

Why we are doing it
To reduce workload, improve consistency, and support high-quality professional judgement.

What data is involved
Limited professional information entered by staff. We do not rely on pupil-level personal data for routine use.

How risks are managed

  • No automated decisions

  • Human verification required

  • Strong safeguarding and privacy controls

  • Clear routes to escalate concerns

Conclusion
The DPIA concluded that the processing is proportionate and that risks are manageable with appropriate safeguards in place.


Summary for MATs & LAs

This DPIA Accelerator:

  • Aligns with ICO guidance and UK Government AI ethics expectations

  • Reduces duplication of effort across schools and Trusts

  • Provides clear separation of responsibilities

  • Supports transparent, defensible decision-making