Foundation Model
A foundation model is a large AI model trained on broad, diverse data that can be adapted for a wide range of downstream tasks. These models serve as a starting point — or foundation — that can be fine-tuned or prompted for specific applications.
Foundation models have shifted the AI landscape from building task-specific models to adapting general-purpose ones, dramatically lowering the barrier to deploying AI.
For example, GPT-4 is a foundation model that powers ChatGPT, Microsoft Copilot, and hundreds of third-party applications — all built on the same base model.
Large Language Model (LLM)
A large language model is an AI system trained on vast quantities of text data that can understand, generate, and reason about human language. Modern LLMs are typically based on the transformer architecture and contain billions of parameters, enabling them to perform a wide range of language tasks.
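At their core, language models are trained to predict the next token given the tokens so far. As a hypothetical illustration, the toy bigram model below learns next-word prediction from raw counts — real LLMs learn the same objective with transformer networks and billions of parameters rather than a count table.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which word follows it and how often."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.split()
        for current, nxt in zip(tokens, tokens[1:]):
            counts[current][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most frequent next word, or None if the word is unseen."""
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

corpus = [
    "the cat sat on the mat",
    "the cat chased the mouse",
    "the dog sat on the rug",
]
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" — it follows "the" most often
```

The gap between this sketch and an LLM is scale and generalisation: a count table can only repeat patterns it has seen verbatim, whereas a trained transformer can assign sensible probabilities to sequences it has never encountered.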
Fine-Tuning
Fine-tuning is the process of taking a pre-trained model and further training it on a smaller, specialised dataset to adapt it for a specific task. It allows you to leverage the general knowledge of a large model while tailoring its behaviour to your particular needs.
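A minimal sketch of that process, using a hypothetical one-parameter model y = w * x in place of a neural network: "pre-training" fits w on a large generic dataset, then fine-tuning runs a few further gradient steps on a small task-specific dataset, starting from the pre-trained weight rather than from scratch.

```python
def sgd_step(w, data, lr=0.01):
    """One gradient-descent step on mean squared error for y = w * x."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

# "Pre-training": a larger generic dataset that follows y = 2x.
pretrain_data = [(x, 2.0 * x) for x in range(1, 6)]
w = 0.0
for _ in range(200):
    w = sgd_step(w, pretrain_data)
# w is now very close to 2.0.

# "Fine-tuning": a small specialised dataset that follows y = 2.5x.
finetune_data = [(1.0, 2.5), (2.0, 5.0)]
for _ in range(50):
    w = sgd_step(w, finetune_data)
# w has shifted from 2.0 toward the new target of 2.5.
print(round(w, 2))
```

The key point the sketch illustrates is the starting position: because fine-tuning begins from the pre-trained weight, far fewer steps and far less data are needed than training from zero. Real fine-tuning applies the same idea across millions or billions of network weights.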
Transfer Learning
Transfer learning is a technique where a model trained on one task is reused as the starting point for a different but related task. Instead of training from scratch, you leverage the knowledge the model has already gained, which saves time, data, and computational resources.
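One common form of transfer learning is feature extraction: the pre-trained model's layers are frozen and reused as-is, and only a small new "head" is trained for the target task. In the hypothetical sketch below, a fixed hand-written function stands in for the frozen layers of a large network, and a tiny logistic-regression head is trained on top of its outputs.

```python
import math

def extract_features(text):
    """Frozen 'pre-trained' extractor: crude numeric features of a string.
    In real transfer learning this would be the frozen layers of a network."""
    return [len(text), text.count(" ") + 1, sum(c.isupper() for c in text)]

def train_head(examples, lr=0.01, steps=500):
    """Train a small logistic-regression head on the frozen features."""
    weights = [0.0, 0.0, 0.0]
    bias = 0.0
    for _ in range(steps):
        for text, label in examples:
            feats = extract_features(text)
            z = sum(w * f for w, f in zip(weights, feats)) + bias
            pred = 1 / (1 + math.exp(-z))
            err = pred - label
            weights = [w - lr * err * f for w, f in zip(weights, feats)]
            bias -= lr * err
    return weights, bias

def predict(params, text):
    weights, bias = params
    z = sum(w * f for w, f in zip(weights, extract_features(text))) + bias
    return 1 if z > 0 else 0

# Target task: classify SHOUTING (1) vs normal (0) text. Only the head is
# trained; the feature extractor is never modified.
data = [("HELLO WORLD", 1), ("hello world", 0),
        ("GO NOW", 1), ("see you soon", 0)]
head = train_head(data)
```

Because the extractor is reused unchanged, the target task only needs enough data to fit the small head — this is the saving in time, data, and compute that the definition describes.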