Staff Guidance on Generative AI in Teaching & Assessment
This guidance applies to all staff who design, deliver or assess Open University teaching and assessment, including on modules and qualifications that use, or may be affected by, Generative AI (GenAI).
| Document | Purpose |
|---|---|
| OU Guidance on GenAI for Students | What students may and may not do |
| GenAI Enabling Principles (2023) | University-wide strategic approach to GenAI |
| GenAI Acceptable Use Policy | Primary source for permitted/prohibited use — staff and students |
| OU Responsible AI Policy | Expectations for safe, transparent and accountable AI deployment |
| Academic Conduct Policy | Handling academic integrity concerns including GenAI misuse |
| AI in LTA Exploratorium | One-stop shop: events, resources, guidance, Community of Practice |
1. Go to https://m365.cloud.microsoft/chat or use the Copilot button in Microsoft Edge
2. Sign in with your OU account
3. Check for the shield icon — hover over it to confirm 'enterprise data protection' applies

⚠ If the shield icon is NOT visible, do not use that session for OU work.
The rapid development of GenAI means higher education must reconsider both what it assesses and how. This is an ongoing process — not a one-off adjustment. GenAI-related assessment design should be embedded in regular programme reviews, production and presentation meetings.
Every assessment component must be explicitly labelled as one of three categories. By default (if not labelled), assessments are treated as Category 2.
Category 1 — GenAI use not permitted
When to use: Only in clearly justified, exceptional cases where GenAI would prevent demonstration of essential foundational skills.
⚠ Category 1 is difficult to enforce. Use sparingly and supplement with additional authorship evidence measures.
Category 2 — GenAI permitted for specified uses
Typical permitted use patterns:
a) Generating ideas and structuring only
b) Editing the student's own work — not producing new content
c) Completing part of the task, but student must critically evaluate, adapt and integrate the output
Category 3 — GenAI use is integral to the task
Important: Students who object to or cannot access GenAI must be provided with an alternative assessment route.
Unless you specify otherwise, students must follow these default acknowledgement requirements:
| | Default requirement |
|---|---|
| 📚 | Reference all AI-generated outputs using the Generative AI (Harvard) style in Cite Them Right. Uncited GenAI outputs constitute plagiarism. |
| 📄 | Summarise GenAI use in an appendix — describe the conversation, include key prompts in quotation marks, and cite as personal communication. |
| 💾 | Keep a record of the GenAI conversation — students may be asked to provide it if requested by a tutor or Academic Conduct Officer. |
GenAI capabilities evolve rapidly. Use Microsoft Copilot (protected mode) to test drafts — e.g. ask it to 'answer this question as a Level 1 student'. Treat the results as indicative, not definitive.
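For module teams with access to the protected Azure OpenAI environment (via the Cloud Platform Team), this kind of probing can also be scripted so a draft question is tested consistently across revisions. The sketch below is a minimal, hypothetical illustration only: the `client` and `deployment` values are placeholders for whatever the Cloud Platform Team provisions, and the prompt wording is an assumption, not an OU-prescribed template.

```python
# Hypothetical sketch: probing a draft assessment question with a chat model.
# Assumes a provisioned Azure OpenAI (or compatible) chat client is passed in;
# endpoint, credentials and deployment name are placeholders, not real values.

def build_probe_messages(question: str, level: str = "Level 1") -> list[dict]:
    """Build a chat prompt asking the model to answer as a typical student."""
    return [
        {
            "role": "system",
            "content": (
                f"Answer the following assessment question as a typical "
                f"{level} undergraduate student would. Show your reasoning."
            ),
        },
        {"role": "user", "content": question},
    ]


def probe_question(client, deployment: str, question: str) -> str:
    """Send the probe and return the model's draft answer (indicative only)."""
    response = client.chat.completions.create(
        model=deployment,
        messages=build_probe_messages(question),
    )
    return response.choices[0].message.content
```

As the guidance above notes, any answer produced this way is indicative, not definitive: a weak model answer does not guarantee the question is GenAI-resistant, and a strong one does not prove it is compromised.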
Questions that are more challenging for GenAI typically require one or more of the following:
| # | Feature | Example approach |
|---|---|---|
| 1 | Personal, organisational or geographic context not available on the open web | Ask students to draw on their own workplace, community or placement experience |
| 2 | Higher-level cognitive skills: analysis, evaluation, synthesis, creativity | Require students to critique or compare, not just describe |
| 3 | Specific engagement with module materials (beyond simple reproduction) | Ask students to apply module frameworks to a case or data set provided in the question |
| 4 | Justified opinion or specific conclusion, referenced to evidence or theory | 'Argue for a position and anticipate two counterarguments' |
| 5 | Description of process or method, including intermediate reasoning steps | 'Show your working and explain each modelling decision' |
| 6 | Specific reflection on module activities, own work, or project decisions | 'Critique your own earlier TMA draft in light of the tutor feedback received' |
| 7 | Authentic tasks mirroring professional environments, including original data extraction | Interview a client/colleague; analyse data you have collected yourself |
Marking schemes must explicitly reward academic integrity — accurate sourcing, careful reasoning, appropriate GenAI acknowledgement, and evidence of the student's own thinking.
Review marking schemes to check that answers which are technically correct but show no engagement with taught material, or which are superficial and generic, cannot gain high marks. Where GenAI use is permitted, give markers clear instructions on:
- Which module concepts, sources and skills should be evidenced in the main submission
- How to evaluate GenAI use documented in the appendix (clarity of prompts, quality of AI output, quality of student's critical editing and integration)
Consider diverse, authentic assessment forms that emphasise process, application and judgement:
| Form | Why it helps in a GenAI context |
|---|---|
| Oral discussions / viva | Tests understanding in real time; hard to outsource to GenAI |
| Portfolios / reflective commentaries | Emphasise personal learning journey; require student-specific evidence |
| Group work / presentations | Process observable; contribution more traceable; employability skills |
| Incremental / staged submissions | Mirrors real-world practice; tutors see work evolving; students must act on feedback at each stage |
| Project-based / authentic tasks | Situated in real contexts; original data; hard to replicate generically |
Changes to assessment must be accompanied by corresponding changes to teaching. Students need structured opportunities to learn and practise relevant skills before they are assessed.
If you introduce oral discussions or viva-style elements, provide preparatory support:
- Videos of mock discussions or worked examples of good and weak responses
- Checklists or prompt sheets for participation
- Low-stakes rehearsal activities in tutorials
For GenAI-aware modules, also include guided practice in:
- Formulating prompts effectively
- Critiquing GenAI outputs
- Using feedback to improve subsequent work
| What to explain | How to model it |
|---|---|
| What GenAI tools can and cannot do — e.g. tendency to hallucinate references, oversimplify, or reflect dominant cultural perspectives | Use subject-specific examples in tutorials |
| Safe, ethical, policy-aligned use — not entering personal data; not uploading OU materials to non-approved tools; using Copilot in protected mode | Demonstrate in tutorials or short screencasts |
| How to check accuracy, bias and relevance in GenAI outputs | Analyse outputs with students live in sessions; use the OU Critical AI Literacy framework |
| Why assessments develop transferable skills — communication, research, critical thinking, ethical judgement | Emphasise what GenAI cannot replace: weighing evidence, value-laden decisions, empathy, lived experience |
The Academic Conduct Guide provides detailed advice on GenAI for tutors, module teams and other staff, including how to mark and when to refer assignments for investigation.
| Indicator | What to do | Weight of evidence |
|---|---|---|
| Hallucinated / fictitious references — citations that don't exist or consistently fail to match sources | Verify a sample via Library, DOI or catalogue searches. Treat as evidence work does not meet expectations for accurate source use. | Strong indicator — but consider alongside other factors |
| No engagement with module materials; generic or inconsistent reasoning | Consider whether to discuss with student. Refer under Academic Conduct Guide if appropriate. | Moderate — consider combination of indicators |
| Undeclared AI appendix; inconsistent writing level across submission | Follow Academic Conduct Guide and Policy. Consult Faculty-specific guidance for referral criteria. | Moderate — consider combination of indicators |
| Stylistic features only (tone, vocabulary, sentence structure) | Do NOT refer based on style alone. Discuss with student if concerned. | Insufficient alone — do not refer on this basis |
As GenAI tools improve, attempting to 'spot' AI use will become increasingly impractical. Staff time is better invested in assessment design that:
- Makes appropriate, transparent use of GenAI where this supports learning
- Reduces incentives and opportunities for misconduct
- Aligns with the Academic Integrity Principles for Assessment Design
- Follows the Principles of Teaching Academic Integrity in Level 1 modules
The following teams can provide support. Keeping them informed of changes to your module helps share good practice and ensures GenAI-related activity remains aligned with institutional policies.
| Team / Contact | When to contact | How to contact |
|---|---|---|
| CLIP (Content, Licensing & Intellectual Property) | Copyright, licensing and legal issues; GenAI-generated content that will become OU copyright material | Via SharePoint intranet page |
| Data Protection | What data can be shared with which GenAI tools; privacy queries | data-protection@open.ac.uk |
| Information Security | System configuration, third-party tool security, prompt-injection or phishing risks | information-security@open.ac.uk |
| Cloud Platform Team | Access to protected Azure OpenAI / ChatGPT environment for teaching, research or scholarship projects | cloud-platform@open.ac.uk |
| Learning Design Service | Redesigning teaching/assessment activities; embedding Critical AI Literacy; applying design frameworks | LDS-LearningDesign@open.ac.uk (FAO GenAI team) |
| Academic Liaison Librarians | Developing students' critical GenAI skills; information literacy aspects of GenAI activities | Contact via module production mailbox or Lib-presentation-{Faculty}@open.ac.uk |
| AI Steering Group | Institution-wide AI strategy or risk; questions about the OU AI Framework, Responsible AI Policy or GenAI Acceptable Use Policy | AI@open.ac.uk |
| GenAI in LTA Academic Leads | Questions about this guidance; sharing examples of practice that could inform future updates | GAI-LTA@open.ac.uk |
In 2025, this guidance was written by Michel Wermelinger (Academic co-Lead for GenAI in Learning, Teaching and Assessment) with input from Mirjam Hauck (Academic co-Lead), Jessica Evans and Chelle Oldham (Academic Leads for Academic Integrity), Mychelle Pride (Academic Director, PVC-S), Ian Pickup and Rachel Penny (Pro-Vice-Chancellors, Students), and colleagues from the AI Steering Group, Learner and Discovery Services, Library, Information Rights, and across all Faculties.
Updated by Haider Ali in 2026, with further input from colleagues involved in AI in Learning, Teaching and Assessment, Academic Integrity, Learning Design, Library, and the development of the OU Critical AI Literacy framework and AI glossary resources.
Category quick reference:
- Category 1: use sparingly — hard to enforce.
- Category 2: most OU assessments fall here.
- Category 3: integrate GenAI use into the marking criteria.
- Only use Copilot Chat (web) in protected mode for OU teaching and assessment work — check the shield icon is visible
- Never paste student work (TMAs, EMAs, forum posts) into any GenAI tool
- Never enter personal or confidential data into any GenAI tool without explicit approval
- Always critically check GenAI outputs for hallucinations, bias and inaccuracy
- Be transparent with students and colleagues about where GenAI was used in materials
- Label every assessment component as Category 1, 2 or 3
- Ensure marking criteria explicitly reward academic integrity and the student's own judgement
- Provide an alternative route for Category 3 tasks for students who cannot use GenAI