Large Language Model (LLM)
A large language model is an AI system trained on vast quantities of text data that can understand, generate, and reason about human language. Most modern LLMs are built on the transformer architecture and contain billions of parameters, which enables them to perform a wide range of language tasks.
LLMs are the technology behind ChatGPT, Claude, and other AI assistants — understanding them helps you leverage these tools effectively and evaluate their limitations.
GPT-4, reported to have over a trillion parameters, can write essays, translate languages, summarise documents, and generate code, all from simple text instructions.
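As a rough illustration of what "simple text instructions" means in practice, the short Python sketch below sends one instruction to a hosted model through the OpenAI Python SDK. The model name, the prompt, and the assumption that an API key is set in the environment are illustrative choices, not the only way to use an LLM.

from openai import OpenAI

# Assumes the openai package is installed and OPENAI_API_KEY is set in the
# environment; the model name below is an illustrative assumption.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Summarise in one sentence: large language models are "
                   "trained on huge text corpora to predict the next word.",
    }],
)

print(response.choices[0].message.content)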
Transformer
A transformer is a neural network architecture that processes input data in parallel using attention mechanisms, rather than sequentially like older recurrent models. Introduced in 2017, it has become the dominant architecture for language, vision, and multimodal AI systems.
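To make the attention idea concrete, the minimal NumPy sketch below computes scaled dot-product attention, the core operation inside a transformer. The tiny matrix sizes are arbitrary assumptions chosen only for illustration.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Every query is compared against every key in one matrix multiplication,
    # so the whole sequence is processed in parallel rather than token by token.
    scores = Q @ K.T / np.sqrt(K.shape[-1])                   # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                                        # weighted mix of values

# Toy example: 4 tokens, 8-dimensional representations (sizes chosen arbitrarily).
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)            # (4, 8)

Because all token comparisons happen in a single matrix operation, the sequence can be processed in parallel, which is a large part of why transformers train efficiently on modern hardware.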
GPT (Generative Pre-trained Transformer)
GPT is a family of large language models developed by OpenAI that use the transformer architecture to generate human-like text. The models are pre-trained on vast amounts of internet text and can then be adapted for tasks such as writing, coding, and analysis.
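The "generative" part refers to producing text one token at a time, with each prediction conditioned on everything written so far. The toy sketch below mimics that loop with a hand-written probability table standing in for the neural network; this is purely an illustrative assumption, not how GPT models are actually implemented or trained.

import random

# Hypothetical next-token probabilities, standing in for a trained model.
bigram_probs = {
    "the":    {"cat": 0.6, "model": 0.4},
    "cat":    {"sat": 1.0},
    "model":  {"writes": 1.0},
    "sat":    {"<end>": 1.0},
    "writes": {"<end>": 1.0},
}

def generate(start: str, max_tokens: int = 5) -> list[str]:
    tokens = [start]
    for _ in range(max_tokens):
        options = bigram_probs.get(tokens[-1])
        if not options:
            break
        words, probs = zip(*options.items())
        nxt = random.choices(words, weights=probs)[0]  # sample the next token
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return tokens

print(" ".join(generate("the")))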
Foundation Model
A foundation model is a large AI model trained on broad, diverse data that can be adapted for a wide range of downstream tasks. These models serve as a starting point — or foundation — that can be fine-tuned or prompted for specific applications.
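As a sketch of adaptation by prompting, the example below reuses one general-purpose model for two unrelated tasks simply by changing the instruction, with no retraining. It assumes the same OpenAI SDK setup as the earlier example; the model name, helper function, and prompts are illustrative.

from openai import OpenAI

client = OpenAI()

def run(instruction: str, text: str) -> str:
    # One foundation model handles whichever task the instruction describes.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"{instruction}\n\n{text}"}],
    )
    return response.choices[0].message.content

report = "Q3 revenue rose 12% while support tickets fell by a fifth."
print(run("Summarise in one sentence:", report))   # summarisation task
print(run("Translate into French:", report))       # translation task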