In-Product Feedback & Contestability Framework

(Issue Reporting, Escalation & Response SLAs)


1. Purpose

This framework describes how users and organisations can report concerns, challenge outputs, and seek review or correction of askKira responses.

It ensures that:

  • AI outputs can be questioned, reviewed, and improved

  • Harmful or inappropriate responses can be escalated quickly

  • Responsibility remains clearly human-led and accountable


2. Principles

askKira’s approach to feedback and contestability is guided by the following principles:

  • Accessibility – reporting concerns must be simple and visible

  • Proportionality – response speed reflects severity and risk

  • Human oversight – all contested issues are reviewed by people

  • Transparency – users understand what will happen next

  • Continuous improvement – feedback informs system refinement


3. In-Product Issue Reporting

Reporting mechanism:
Users are provided with an in-product option to flag or report an issue directly from an askKira interaction.

Issue categories (minimum set):

  • ☐ Factual inaccuracy or misleading information

  • ☐ Safeguarding concern

  • ☐ Bias or discriminatory content

  • ☐ Harmful, inappropriate, or offensive output

  • ☐ Data protection or privacy concern

  • ☐ Security or misuse concern

  • ☐ Other / general feedback

Information captured (proportionate):

  • Prompt and system output (context-limited)

  • Issue category

  • Optional user description

  • Organisational identifier (automated)
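
For illustration only, the captured information could be represented as a structure along the following lines. All field names and types here are assumptions, not a confirmed askKira schema.

    // Illustrative sketch only: field names and types are assumptions,
    // not a confirmed askKira schema.
    type IssueCategory =
      | "factual_inaccuracy"
      | "safeguarding"
      | "bias_or_discrimination"
      | "harmful_output"
      | "data_protection"
      | "security_or_misuse"
      | "other";

    interface IssueReport {
      category: IssueCategory;   // one of the minimum category set
      prompt: string;            // user prompt (context-limited)
      output: string;            // system output (context-limited)
      description?: string;      // optional free-text description from the user
      organisationId: string;    // organisational identifier, captured automatically
      submittedAt: string;       // ISO 8601 timestamp
    }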

Gaps / to confirm:

☐ Exact UI placement and wording
☐ Ability to submit anonymously within an organisation
☐ User acknowledgement message


4. Contestability & Human Review

What can be contested:

  • The accuracy, appropriateness, or safety of an AI output

  • The suitability of askKira’s response for a specific professional context

  • Potential bias or unfair treatment

  • Handling of sensitive or safeguarding-related scenarios

Review process:

  • All contested issues are reviewed by a human reviewer.

  • Automated responses are never treated as final in contested cases.

  • Safeguarding and high-risk issues bypass standard queues.

Outcomes may include:

  • Clarification or correction

  • Acknowledgement of limitation

  • Prompt or guardrail adjustment

  • Escalation to organisational safeguarding procedures

  • System improvement action
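
As a hypothetical sketch of how a review decision might be documented (the actual format and reviewer roles are still to be confirmed, as the gaps below note), a record could look like this:

    // Hypothetical review record; the real documentation format and
    // reviewer role definitions are still to be confirmed.
    type ReviewOutcome =
      | "clarification_or_correction"
      | "limitation_acknowledged"
      | "prompt_or_guardrail_adjustment"
      | "safeguarding_escalation"
      | "system_improvement";

    interface ReviewRecord {
      issueId: string;       // links back to the reported issue
      reviewerRole: string;  // human reviewer; role definitions TBC
      outcome: ReviewOutcome;
      rationale: string;     // why this outcome was reached
      closedAt: string;      // ISO 8601 timestamp
    }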

Gaps / to confirm:

☐ Reviewer role definitions
☐ Use of specialist reviewers (e.g. safeguarding)
☐ Documentation of review decisions


5. Escalation Pathways

Tiered escalation model:

Level 1 – General issues

Examples:

  • Minor factual errors

  • Clarity or usefulness concerns

Handled by:

  • Standard support review

Level 2 – Sensitive issues

Examples:

  • Bias concerns

  • Repeated inaccuracies

  • Inappropriate tone

Handled by:

  • Senior reviewer

  • Pattern analysis for recurrence

Level 3 – High-risk / safeguarding issues

Examples:

  • Safeguarding-related outputs

  • Serious harm or risk indicators

  • Data protection incidents

Handled by:

  • Immediate human review

  • Organisational safeguarding lead notified (where appropriate)

  • Formal incident process initiated
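
The criteria for tier classification are still to be confirmed (see the gaps below), but as a purely illustrative sketch, routing based on the examples above might look like the following. The category names reuse the hypothetical IssueCategory type from the section 3 sketch.

    // Purely illustrative routing based on the example issues above.
    // Real tier criteria are to be confirmed; this mapping is an assumption.
    type EscalationTier = 1 | 2 | 3;

    function classifyTier(category: IssueCategory): EscalationTier {
      switch (category) {
        case "safeguarding":
        case "data_protection":
          return 3; // immediate human review, formal incident process
        case "bias_or_discrimination":
        case "harmful_output":
          return 2; // senior reviewer plus pattern analysis
        default:
          return 1; // standard support review
      }
    }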

Gaps / to confirm:

☐ Criteria for tier classification
☐ External authority notification thresholds
☐ Incident logging format


6. Response SLAs (Indicative)

Issue category          | Initial acknowledgement | Human review         | Resolution / update
General issue           | ≤ 2 working days        | ≤ 5 working days     | ≤ 10 working days
Sensitive issue         | ≤ 1 working day         | ≤ 3 working days     | ≤ 7 working days
Safeguarding / critical | Same working day        | Immediate / ≤ 24 hrs | As soon as practicable

Notes:

  • SLAs apply during normal UK business hours.

  • Safeguarding issues take precedence over all other work.
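
Purely as an illustration, the indicative SLAs above could be encoded as configuration. The values come from the table; the structure and names are assumptions.

    // Indicative SLA values from the table above. Structure and names are
    // hypothetical; durations are working days within UK business hours.
    const SLA_CONFIG = {
      general:   { acknowledgeDays: 2, reviewDays: 5, resolveDays: 10 },
      sensitive: { acknowledgeDays: 1, reviewDays: 3, resolveDays: 7 },
      safeguardingCritical: {
        acknowledge: "same working day",
        review: "immediate / within 24 hours",
        resolve: "as soon as practicable",
      },
    } as const;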

Gaps / to confirm:

☐ Public vs contractual SLA commitments
☐ Weekend / out-of-hours handling
☐ Customer notification format


7. Organisational Visibility & Oversight

For organisations:

  • Designated admin users may receive:

    • Aggregated issue summaries

    • High-risk alerts relevant to their organisation

  • This visibility supports Trust-level governance and audit.

Gaps / to confirm:

☐ Admin dashboard availability
☐ Exportable audit logs
☐ Reporting cadence


8. Learning & Continuous Improvement

Feedback and contested issues are used to:

  • Refine prompts and guardrails

  • Improve safety filters and guidance

  • Update onboarding and training materials

  • Inform periodic risk reviews

No individual user is penalised for raising concerns.

Gaps / to confirm:

☐ Formal feedback-to-change loop
☐ Change log transparency
☐ Communication of improvements to customers


9. Relationship to Safeguarding & Data Protection

  • askKira does not replace organisational safeguarding or data protection processes.

  • Where a safeguarding concern arises, users are reminded to follow local safeguarding procedures immediately.

  • Data protection incidents follow UK GDPR breach management processes.


10. Summary for Buyers

askKira provides:

  • Clear, accessible routes to report, challenge, and escalate AI outputs

  • Human-led review of all contested issues

  • Proportionate response SLAs aligned to risk

  • Organisational oversight suitable for public-sector assurance

This framework supports the UK Government’s expectations on accountability, transparency, and safety, while remaining practical for real school and Trust environments.