Hallucination
In AI, a hallucination occurs when a model generates information that sounds plausible but is factually incorrect or entirely fabricated. The model is not deliberately lying; it is producing statistically likely text that happens to be wrong.
Hallucinations are a major barrier to trusting AI for high-stakes tasks like medical advice or legal research, and understanding them helps you use AI tools more critically.
A lawyer used ChatGPT to prepare a court filing and the model invented several fake case citations that did not exist, leading to sanctions from the judge.
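To make the mechanism concrete, here is a minimal sketch: a toy bigram model that learns only which word tends to follow which. The two-sentence corpus and random seed are invented for illustration, but the behaviour is the point: sampling "statistically likely" continuations can produce a fluent sentence that is simply false.

```python
import random
from collections import defaultdict

# A tiny bigram model: it learns only which word tends to follow which,
# so it can emit fluent text that is statistically likely but false.
corpus = ("paris is the capital of france . "
          "berlin is the capital of germany . ").split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

random.seed(1)
word, sentence = "paris", ["paris"]
while word != "." and len(sentence) < 10:
    word = random.choice(follows[word])
    sentence.append(word)

# May print "paris is the capital of germany ." -- every word
# transition is plausible, yet the claim as a whole is fabricated.
print(" ".join(sentence))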
Large Language Model (LLM)
A large language model is an AI system trained on vast quantities of text data that can understand, generate, and reason about human language. Most modern LLMs are built on the transformer architecture and contain billions of parameters, enabling them to perform a wide range of language tasks.
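As a rough illustration of the transformer's core operation, the sketch below computes scaled dot-product self-attention with NumPy. The shapes and random inputs are arbitrary placeholders; a real LLM stacks many such layers with learned weight matrices.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax (subtracting the max for numerical stability).
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))  # 4 token embeddings, dimension 8
out = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V = x
print(out.shape)  # (4, 8): each token is now a weighted mix of all tokens
```

Each output row blends information from every input token, weighted by relevance; this is what lets transformers model long-range relationships in language.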
Alignment
Alignment refers to the challenge of ensuring that an AI system's goals and behaviours match human values and intentions. A misaligned system might technically achieve its objective while producing harmful or undesirable side effects.
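A hypothetical toy example of this gap between a measured objective and the intended one: a "cleaning" agent scored only on visible mess. All names and numbers below are invented for illustration.

```python
# Hypothetical toy: a proxy objective ("no visible mess") is optimised
# by an action that violates the true intent ("actually clean").
actions = {
    "clean":      {"visible_mess": 0, "actual_mess": 0, "effort": 5},
    "hide_mess":  {"visible_mess": 0, "actual_mess": 1, "effort": 1},
    "do_nothing": {"visible_mess": 1, "actual_mess": 1, "effort": 0},
}

def proxy_reward(outcome):
    # What we *measured*: penalise visible mess and effort.
    return -10 * outcome["visible_mess"] - outcome["effort"]

def true_reward(outcome):
    # What we *meant*: penalise any mess, visible or not.
    return -10 * outcome["actual_mess"] - outcome["effort"]

best_by_proxy = max(actions, key=lambda a: proxy_reward(actions[a]))
best_by_intent = max(actions, key=lambda a: true_reward(actions[a]))
print(best_by_proxy)   # hide_mess: scores highest on the measured objective
print(best_by_intent)  # clean: what we actually wanted
```

The agent "technically achieves its objective" under the proxy reward while defeating the purpose, which is the alignment problem in miniature.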
Explainability (XAI)
Explainability refers to the ability to understand and communicate how an AI system arrives at its decisions or predictions. An explainable model allows humans to inspect its reasoning, rather than treating it as an opaque 'black box'.
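One common model-agnostic technique is permutation importance: shuffle one input feature at a time and measure how much the model's performance drops. Below is a minimal sketch with scikit-learn, using one of its standard demo datasets; the model and parameters are arbitrary choices for illustration.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque model, then ask which inputs its predictions rely on.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure
# how much held-out accuracy drops -- a model-agnostic explanation.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```

Features whose shuffling hurts accuracy most are the ones the model leans on, giving a coarse but human-readable account of its behaviour without opening the 'black box' itself.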