Introduction
Apprenticeship units are short, flexible training courses designed to help employers upskill their workforce in critical skill shortage areas. Units are for employed learners aged 19 and over, whose employer has identified that they need to upskill quickly to respond to emerging skills gaps and to support business growth and productivity.
Apprenticeship units are based on relevant knowledge and skills in existing employer-led occupational standards to ensure relevant, high-quality, targeted training. Each unit is short, with the length of training ranging from 30 to 140 hours, delivered over a period of 1 to 16 weeks. This gives employers maximum flexibility to select a unit that meets their specific skill need and to deliver the training in a way that fits around their business.
Who is it for?
This apprenticeship unit is for individuals in leadership roles responsible for shaping, influencing, or supporting decisions about the adoption of AI systems within their organisation. These individuals, with the support of their employer, need upskilling in adopting AI systems and governing them responsibly. It is suited to those involved in evaluating options, developing business cases, and establishing governance and assurance approaches for AI and digital technologies.
Learning outcomes
A learning outcome is a concise statement that describes what an individual should be able to do by the end of their course. It summarises a cluster of knowledge and skills in the course and provides a foundation for assessment.
Learning outcomes:
- Evaluate AI solutions and vendors using structured criteria following organisational process (cost, performance, risk, data readiness).
- Make procurement decisions based on testing, benchmarking and user validation following organisational policy and process.
- Assess risks associated with AI acquisition, including vendor lock-in, data, IP, and sustainability.
- Design and implement AI governance frameworks, including roles, responsibilities and escalation pathways.
- Embed ethical, legal and regulatory considerations into AI decision-making processes.
- Define assurance and compliance processes, including documentation, auditability and transparency.
- Design and implement human oversight mechanisms for AI systems.
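One common way to apply structured evaluation criteria such as cost, performance, risk and data readiness is a weighted scoring matrix. The sketch below is purely illustrative: the criteria weights and vendor scores are hypothetical examples, not values prescribed by this unit or by any organisational process.

```python
# Illustrative weighted scoring matrix for comparing AI vendors.
# Weights and scores are hypothetical; a real evaluation would take
# them from the organisation's own procurement criteria.
CRITERIA_WEIGHTS = {
    "cost": 0.25,
    "performance": 0.35,
    "risk": 0.20,
    "data_readiness": 0.20,
}

def weighted_score(scores: dict[str, float]) -> float:
    """Combine per-criterion scores (0-10 scale) into one weighted total."""
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

vendors = {
    "Vendor A": {"cost": 6, "performance": 8, "risk": 7, "data_readiness": 5},
    "Vendor B": {"cost": 8, "performance": 7, "risk": 6, "data_readiness": 7},
}

# Rank vendors by total weighted score, highest first.
ranked = sorted(vendors, key=lambda v: weighted_score(vendors[v]), reverse=True)
for v in ranked:
    print(f"{v}: {weighted_score(vendors[v]):.2f}")
```

A matrix like this supports, but does not replace, the testing, benchmarking and user validation the procurement outcome requires.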
| Occupational standard | Related apprenticeships | Occupational map link |
|---|---|---|
| Artificial intelligence (AI) and automation practitioner | ST1512 V2.0 | OCC1512 |
| Chartered manager (degree) | ST0272 V1.1 | OCC0272 |
| Machine learning engineer | ST1398 V1.0 | OCC1398 |
| Senior leader | ST0480 V1.2 | OCC0480 |
Entry requirements
Learners must be employed and aged 19 years or over.
They must be working in a leadership position within an organisation, with the autonomy to deliver technological change and inform investment decisions.
Technical knowledge
K1: AI and automation concepts and models that support leadership decision-making, and their limitations. The impact adoption may have on workplace culture and wellbeing.
K2: The capabilities, benefits and risks of automation, AI and digital tools, including responsible use, ethical considerations and the potential impact on the workforce.
K3: The role of organisational leadership in responsible AI adoption, including setting values, policy, and strategy. The business case for ethical AI adoption, including reputational risk, staff engagement and morale, and long-term sustainability.
K4: How to develop and implement organisational AI strategy and plans, including approaches to workforce development, taking and managing risk, monitoring and evaluation, and quality assurance.
K5: How to assess the viability of solutions when making acquisition decisions, for example, testing and evaluating solutions, using test data and results, feasibility (time, cost, data quality and process maturity), and user testing.
K6: The capabilities, risks and implications of adopting on-premise, cloud-based and third-party solutions.
K7: Principles of testing methodologies and their application in practice.
K8: Legislation, regulation, governance and assurance frameworks that support the safe adoption of artificial intelligence.
K9: Governance principles to ensure accountability and compliance, including defining roles and responsibilities to identify, escalate and mitigate threats or risks to assets, data and cyber security.
K10: Assurance and compliance arrangements, including documentation expectations, structured risk assessments, aligning with recognised AI assurance and governance frameworks. The importance of auditability, transparency, and accountability in organisational contexts.
K11: Principles of human oversight and human-AI collaboration to achieve shared outcomes.
K12: Engagement and training approaches used with non-technical staff to understand their roles, responsibilities, and concerns when AI automation solutions are proposed, in support of strategic AI governance decisions.
K13: Feedback and evaluation loops to improve systems, processes, productivity and performance, including human-in-the-loop safeguards.
K14: Governance principles to ensure accountability and compliance, including methods to identify system vulnerabilities and mitigate threats or risks to assets, data and cyber security.
K15: Methods for assuring compliance in AI and automation projects, including documentation of model decision-making, conducting structured risk assessments, and aligning implementation with recognised AI assurance and governance frameworks. The importance of auditability, transparency, and accountability in organisational contexts.
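K10 and K15 both refer to structured risk assessments with clear documentation. A minimal sketch of a risk register entry is shown below; the field names, the 5x5 likelihood/impact scale and the escalation threshold are hypothetical conventions for illustration, not requirements of this unit.

```python
# Illustrative structured risk register entry for an AI acquisition.
# The 5x5 likelihood/impact scale and threshold are assumed conventions.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    risk: str
    likelihood: int   # 1 (rare) to 5 (almost certain)
    impact: int       # 1 (negligible) to 5 (severe)
    mitigation: str
    owner: str        # named role with accountability for the risk

    @property
    def rating(self) -> int:
        """Simple likelihood x impact rating on a 25-point scale."""
        return self.likelihood * self.impact

register = [
    RiskEntry("Vendor lock-in via proprietary model formats", 4, 4,
              "Require exportable model artefacts in the contract",
              "Procurement lead"),
    RiskEntry("Training data contains personal data", 2, 5,
              "Complete impact assessment before pilot; anonymise inputs",
              "Data protection officer"),
]

# Escalate anything rated above a threshold (here, 12 of 25).
escalate = [r for r in register if r.rating > 12]
```

Keeping entries structured like this supports the auditability and transparency expectations described in K10 and K15.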
Technical skills
S1: Identify organisational improvements and opportunities for innovation and growth, using qualitative and quantitative analysis of information and data.
S2: Set strategic direction for AI and gain support for it from key stakeholders.
S3: Commission analysis to identify if AI adoption is viable. Evaluate assessments of risks and unintended consequences of AI automation projects, such as the impact on job roles.
S4: Use evidence to inform governance of AI adoption, outcomes and facilitate improvement.
S5: Align business needs with technical capabilities, so that solutions are scalable, efficient, and consistent with the organisation’s strategic objectives.
S6: Keep up to date with existing, evolving, and emerging technologies and sector trends in AI, automation and technology to support the evaluation of vendor and supplier solutions.
S7: Apply principles relating to ethics and values-based leadership and governance, and regulatory compliance.
S8: Horizon scan to identify new developments that have implications for AI use.
S9: Apply regulatory, legal, ethical and governance considerations when evaluating AI recommendations at each stage of the AI adoption process.
S10: Define expectations for testing and feedback to ensure reliability, security, accessibility of AI systems, and alignment with organisational needs.
S11: Make evidence-based suggestions to support governance, outcomes and facilitate improvement, for example, cost-benefit analysis.
S12: Apply ethical and human-centred design principles when scoping, developing, and deploying automation and AI solutions, underpinned by robust governance.
S13: Undertake assurance activities to evidence responsible AI and automation, including maintaining clear documentation of design and decision-making, contributing to risk assessments, and applying assurance frameworks to support compliance with organisational, regulatory, and ethical standards.
Knowledge and skills outcomes
| Function | Learning Outcome | K & S mapping |
|---|---|---|
| AI Solution Evaluation | Evaluate AI solutions and vendors using structured criteria following organisational process (cost, performance, risk, data readiness) | K1, K2, K5, K6 |
| Procurement | Make procurement decisions based on testing, benchmarking and user validation following organisational policy and process | K1, K2, K4, K5, S4, S11 |
| Risk | Assess risks associated with AI acquisition, including vendor lock-in, data, IP, and sustainability | K7, K8, K9, K10, S4, S5 |
| Governance | Design and implement AI governance frameworks, including roles, responsibilities and escalation pathways | K9, K10, K14, K15, S9, S11, S12 |
| Ethics, legal requirements and regulations | Embed ethical, legal and regulatory considerations into AI decision-making processes | K1, K2, K8, S2, S10, S13 |
| Compliance | Define assurance and compliance processes, including documentation, auditability and transparency | K10, K14, K15, S4, S13 |
| Human oversight | Design and implement human oversight mechanisms for AI systems | K11, K12, K13, S10, S11 |
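The human oversight outcome above can take many forms; one simple mechanism is a confidence-gated approval step, where an AI recommendation is only actioned automatically when it clears a threshold and is otherwise routed to a human reviewer. The sketch below is illustrative only: the threshold value and field names are assumptions, not part of this unit.

```python
# Minimal human-in-the-loop gate: auto-approve high-confidence AI
# recommendations, escalate the rest to a human reviewer.
# The threshold is a hypothetical organisational policy value.
from typing import Callable

CONFIDENCE_THRESHOLD = 0.90

def decide(recommendation: dict,
           human_review: Callable[[dict], bool]) -> bool:
    """Return True if the recommendation is approved for action."""
    if recommendation["confidence"] >= CONFIDENCE_THRESHOLD:
        return True                        # auto-approve; log for audit
    return human_review(recommendation)    # escalate to a person

# Example: high confidence is auto-approved, low confidence goes to review.
auto_approved = decide({"action": "refund", "confidence": 0.95},
                       lambda r: False)
reviewed = decide({"action": "refund", "confidence": 0.60},
                  lambda r: True)
```

In practice the escalation path, reviewer roles and audit logging would follow the governance framework the unit asks learners to design.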
Funding
This apprenticeship unit is currently eligible for public funding.
Skills England will provide the Department for Work and Pensions with ongoing advice on critical skills needs, and the affordability and prioritisation of funding for apprenticeship units will remain under review.
The Department will give notice if funding for this apprenticeship unit is to be withdrawn; funding for new starts will then cease four weeks after that notice is given.
Validation and assessment
Mandatory: As a minimum, learners will need to pass a skills test delivered by the training provider, to demonstrate that they have acquired the skills and knowledge set out in the apprenticeship unit. Employers will need to validate the result to confirm the learner has been successful.
Extended: In addition, employers (or learners) have the option to choose independent external assessment where they feel it is appropriate, for example through use of a non-mandatory qualification.
If the apprenticeship unit is in a regulated occupation and the role requires adherence to industry recognised standards and procedures, we would expect employers to choose an extended assessment.
Version log
| Version | Change detail | Earliest start date | Latest start date |
|---|---|---|---|
| 1.0 |  | 28/04/2026 | Not set |
Crown copyright © 2026. You may re-use this information (not including logos) free of charge in any format or medium, under the terms of the Open Government Licence. Visit www.nationalarchives.gov.uk/doc/open-government-licence