Retrieval-Augmented Generation (RAG)
Retrieval-augmented generation is a technique that enhances a language model's responses by first retrieving relevant information from an external knowledge base, then using that information to generate a more accurate and grounded answer. It combines the strengths of search with generative AI.
RAG substantially reduces hallucinations and keeps AI responses up to date without retraining the entire model, which makes it a core technique for enterprise AI deployments.
A company's internal chatbot uses RAG to search its knowledge base of policies and procedures before answering employee questions, ensuring responses are accurate and current.
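The retrieve-then-generate flow can be sketched in a few lines of Python. The knowledge base, the word-overlap scoring, and the prompt format below are illustrative assumptions, not any particular product's API; a real system would use embedding similarity for retrieval and pass the assembled prompt to an LLM.

```python
# A toy policy knowledge base standing in for a company's document store.
KNOWLEDGE_BASE = [
    "Annual leave requests must be submitted at least two weeks in advance.",
    "Remote work is permitted up to three days per week with manager approval.",
    "Expense claims require receipts and must be filed within 30 days.",
]

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (a simple stand-in
    for embedding similarity in a production retriever)."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str) -> str:
    """Assemble a grounded prompt from the retrieved context; a real RAG
    system would send this prompt to an LLM rather than returning it."""
    context = "\n".join(retrieve(query, KNOWLEDGE_BASE))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context above."

print(build_prompt("How far in advance should I request annual leave?"))
```

Because the answer is generated from retrieved documents rather than the model's parametric memory, updating the knowledge base updates the chatbot's answers with no retraining.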
Large Language Model (LLM)
A large language model is an AI system trained on vast quantities of text data that can understand, generate, and reason about human language. Most LLMs are built on the transformer architecture and contain billions of parameters, enabling them to perform a wide range of language tasks.
Vector Database
A vector database is a specialised database designed to store, index, and search high-dimensional vectors (embeddings) efficiently. It enables fast similarity searches — finding items whose vector representations are closest to a given query.
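A minimal in-memory index illustrates what a vector database does: store embeddings and return the nearest ones to a query by cosine similarity. The three-dimensional vectors and document IDs below are made-up stand-ins; real embeddings typically have hundreds or thousands of dimensions, and real vector databases use approximate-nearest-neighbour indexes rather than the exhaustive scan shown here.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

class ToyVectorIndex:
    def __init__(self) -> None:
        self.items: list[tuple[str, list[float]]] = []

    def add(self, item_id: str, vector: list[float]) -> None:
        self.items.append((item_id, vector))

    def search(self, query: list[float], top_k: int = 2) -> list[str]:
        """Return the IDs of the top_k stored vectors most similar to the query."""
        ranked = sorted(
            self.items,
            key=lambda item: cosine_similarity(query, item[1]),
            reverse=True,
        )
        return [item_id for item_id, _ in ranked[:top_k]]

index = ToyVectorIndex()
index.add("doc_cats", [0.9, 0.1, 0.0])
index.add("doc_dogs", [0.8, 0.3, 0.1])
index.add("doc_tax",  [0.0, 0.1, 0.9])

# A query vector close to the animal documents retrieves them, not the tax one.
print(index.search([0.85, 0.2, 0.05], top_k=2))
```

In a RAG pipeline, this similarity search is the retrieval step: the user's question is embedded into a vector and the closest document vectors identify which passages to hand to the model.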
Hallucination
In AI, a hallucination occurs when a model generates information that sounds plausible but is factually incorrect or entirely fabricated. The model is not deliberately lying — it is producing statistically likely text that happens to be wrong.