Alignment
Alignment refers to the challenge of ensuring that an AI system's goals and behaviours match human values and intentions. A misaligned system might technically achieve its objective while producing harmful or undesirable side effects.
As AI systems grow more powerful, alignment work aims to ensure they remain helpful and safe rather than pursuing goals that conflict with human wellbeing.
OpenAI uses reinforcement learning from human feedback (RLHF) to align ChatGPT's responses with what users find helpful, honest, and harmless.
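The reward-modelling step at the heart of RLHF can be sketched in miniature. The snippet below is a toy illustration, not OpenAI's implementation: it assumes a hypothetical linear reward model over numeric feature vectors and uses the standard pairwise (Bradley–Terry) preference loss, where the model is nudged so that the response a human preferred scores higher than the one they rejected.

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    # Pairwise preference loss: -log sigmoid(r_chosen - r_rejected).
    # It is small when the chosen response outscores the rejected one.
    return -math.log(1 / (1 + math.exp(-(reward_chosen - reward_rejected))))

def update(weights, features_chosen, features_rejected, lr=0.1):
    # One gradient step on a toy linear reward model r(x) = w . x,
    # pushing the chosen response's reward above the rejected one's.
    r_c = sum(w * f for w, f in zip(weights, features_chosen))
    r_r = sum(w * f for w, f in zip(weights, features_rejected))
    # Derivative of the loss w.r.t. (r_c - r_r) is -(1 - sigmoid(r_c - r_r)).
    grad_scale = -(1 - 1 / (1 + math.exp(-(r_c - r_r))))
    return [w - lr * grad_scale * (fc - fr)
            for w, fc, fr in zip(weights, features_chosen, features_rejected)]
```

After an update on one preference pair, the loss on that pair decreases; repeated over many human comparisons, this is how a reward model comes to reflect what raters found helpful, honest, and harmless.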
AI Safety
AI safety is a research field focused on ensuring that AI systems behave as intended and do not cause unintended harm. It encompasses technical challenges like robustness and reliability, as well as broader concerns about long-term risks from increasingly capable systems.
Responsible AI
Responsible AI is an approach to developing and deploying AI systems that prioritises fairness, transparency, accountability, and societal benefit. It encompasses practices and principles designed to minimise harm and ensure AI serves people equitably.
Hallucination
In AI, a hallucination occurs when a model generates information that sounds plausible but is factually incorrect or entirely fabricated. The model is not deliberately lying — it is producing statistically likely text that happens to be wrong.