Context Window
A context window is the amount of text or tokens an AI model can consider at once when generating a response. Anything outside that limit is not directly visible to the model in the current request.
Context size determines whether a model can handle long documents, large codebases, or multi-step conversations without losing important details.
Example: a team pastes a long contract into an AI assistant, and key clauses are missed because the prompt exceeds the model's context window.
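A quick way to avoid that failure mode is to estimate prompt length before sending it. The sketch below uses the common rule of thumb of roughly four characters per token for English text; the limit of 8,000 tokens is a hypothetical model limit, not a reference to any specific product.

```python
# Rough sketch: check whether a prompt is likely to exceed a model's
# context window before sending it. The 4-characters-per-token ratio is
# a rule of thumb for English text, not an exact tokeniser.

CONTEXT_LIMIT_TOKENS = 8_000  # hypothetical model limit

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

def fits_in_context(prompt: str, limit: int = CONTEXT_LIMIT_TOKENS) -> bool:
    """True if the prompt probably fits within the context window."""
    return estimate_tokens(prompt) <= limit

contract = "Clause 1: the supplier shall ... " * 5_000  # a long document
if not fits_in_context(contract):
    print("Prompt likely exceeds the context window; split or summarise it.")
```

In practice you would use the model provider's own token counter rather than a character heuristic, but the pre-flight check itself is the important habit.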
Tokenisation
Tokenisation is the process of breaking text into smaller units called tokens (words, subwords, or characters) so that an AI model can process them numerically. Each token is mapped to a number that the model uses for computation.
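The mapping from tokens to numbers can be sketched with a toy word-level tokeniser. Real models use subword schemes such as byte-pair encoding; the vocabulary here is illustrative only.

```python
# Minimal sketch of word-level tokenisation: split text into tokens and
# map each token to an integer id, as a model would compute with.

def build_vocab(corpus: str) -> dict[str, int]:
    """Assign each unique lower-cased word an integer id."""
    vocab: dict[str, int] = {}
    for word in corpus.lower().split():
        if word not in vocab:
            vocab[word] = len(vocab)
    return vocab

def tokenise(text: str, vocab: dict[str, int]) -> list[int]:
    """Convert text to the numeric ids the model processes."""
    return [vocab[w] for w in text.lower().split() if w in vocab]

vocab = build_vocab("the model reads the tokens")
print(tokenise("the tokens", vocab))  # → [0, 3]
```

Production tokenisers also handle punctuation, casing, and out-of-vocabulary words, which is why subword units are preferred over whole words.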
Large Language Model (LLM)
A large language model is an AI system trained on vast quantities of text data that can understand, generate, and reason about human language. Most modern LLMs are built on the transformer architecture and contain billions of parameters, enabling them to perform a wide range of language tasks.
Prompt Engineering
Prompt engineering is the practice of crafting and refining the instructions (prompts) given to an AI model to get the best possible output. It involves techniques like providing context, examples, and constraints to guide the model's response.
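Those techniques can be made concrete by assembling a prompt programmatically from its parts. The section labels and helper below are illustrative conventions, not a requirement of any particular model.

```python
# Sketch of a structured prompt: task, context, few-shot examples,
# and constraints combined into one instruction string.

def build_prompt(task: str, context: str,
                 examples: list[tuple[str, str]],
                 constraints: list[str]) -> str:
    """Assemble labelled prompt sections into a single string."""
    parts = [f"Task: {task}", f"Context: {context}"]
    parts += [f"Example: {inp} -> {out}" for inp, out in examples]
    parts += [f"Constraint: {c}" for c in constraints]
    return "\n".join(parts)

prompt = build_prompt(
    task="Classify the sentiment of the review.",
    context="Reviews come from a UK retail site.",
    examples=[("Great value!", "positive"), ("Broke in a week.", "negative")],
    constraints=["Answer with a single word."],
)
print(prompt)
```

Keeping the pieces separate like this makes it easy to iterate: swap in different examples or tighten the constraints and re-run, which is the core loop of prompt refinement.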