AI-generated content needs evaluating just like any other source, but it comes with issues of its own that make it tricky to use in academic work. Use AI for exploration and explanations, but always double-check important details against reliable sources.

This page focuses mainly on AI chatbots such as Microsoft Copilot (the University’s supported tool), ChatGPT, Claude, and Gemini. Chatbots can be useful for brainstorming and clarifying complex topics, but they also create specific risks that make careful evaluation essential.

This page shows you how to use AI responsibly in your studies, avoid common pitfalls, and know when to switch from AI to proper academic sources.

Note: This page covers text-based AI responses only. For AI-generated images, code, or data analysis, ask your tutor for subject-specific guidance.

Important note:

Always follow your assessment brief. It will specify whether you can use Generative AI (GenAI) in your work, and whether this must be acknowledged. Requirements may vary by module or programme. If there is any conflict between this guidance and your assessment brief, the brief takes priority. For details of the University’s official position, see Using AI: Rules and Responsibilities.

On this page:

- Why AI outputs need special attention
- What makes AI particularly tricky for academic work
- Quick checks for everyday use
- When AI evaluation isn't enough
- Be aware of AI inside academic tools
- Compare academic and AI-generated sources
