Artificial Intelligence & Machine Learning

Fine-Tuning

Definition

Fine-tuning is a process in transfer learning where the weights of a pre-trained model are further trained on a new, specific dataset. This adapts the general knowledge of the pre-trained model to the nuances of the new task.
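
To make this concrete, here is a minimal sketch (assuming PyTorch and torchvision; the 5-class target task and the training batch are hypothetical) in which the task-specific head of a pre-trained network is replaced and the pre-trained weights continue training on the new data:

```python
# Minimal fine-tuning sketch (assumes PyTorch + torchvision; the 5-class task is hypothetical).
import torch
import torch.nn as nn
from torchvision import models

# Start from a model pre-trained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Swap the final layer so the output matches the new task.
model.fc = nn.Linear(model.fc.in_features, 5)

# Continue training the pre-trained weights with a small learning rate.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def fine_tune_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient update on a batch from the new, task-specific dataset."""
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a real pipeline, this step would be called once per batch drawn from a DataLoader over the new dataset.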

Why It Matters

Fine-tuning is the practical step that makes transfer learning work. It tailors a powerful, general-purpose model to your specific problem, often resulting in state-of-the-art performance with a fraction of the resources needed to train from scratch.

Contextual Example

A company might start with a GPT-3 model pre-trained on a vast corpus of internet text and then fine-tune it on its own customer support chat logs. This makes the model much better at understanding and answering questions specific to the company's products.
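
A hedged sketch of what that workflow can look like with open tooling follows (the Hugging Face transformers and datasets libraries are assumptions here; GPT-2 stands in for GPT-3, whose weights are not publicly available, and support_logs.txt is a hypothetical file of chat transcripts, one per line):

```python
# Sketch: fine-tune a causal language model on a company's chat logs.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Load and tokenize the (hypothetical) customer support transcripts.
dataset = load_dataset("text", data_files={"train": "support_logs.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-support", num_train_epochs=1,
                           per_device_train_batch_size=2, learning_rate=5e-5),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```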

Common Misunderstandings

  • Fine-tuning can range from training only the last layer of the network to training all the layers with a very small learning rate (see the sketch after this list).
  • It is a core technique for customizing Large Language Models (LLMs).
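
The sketch below illustrates the two ends of that spectrum (again assuming PyTorch and torchvision, with a hypothetical 5-class task):

```python
# Two ends of the fine-tuning spectrum (assumes PyTorch + torchvision;
# the 5-class task is hypothetical). In practice you would pick one option.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 5)  # new head for the new task

FREEZE_BACKBONE = True  # toggle between the two strategies

if FREEZE_BACKBONE:
    # Option A: train only the new last layer; all pre-trained weights stay fixed.
    for param in model.parameters():
        param.requires_grad = False
    for param in model.fc.parameters():
        param.requires_grad = True
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
else:
    # Option B: update every layer, but with a much smaller learning rate
    # so the pre-trained knowledge is only gently adjusted.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
```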

Related Terms

Transfer Learning, Pre-training, Large Language Model (LLM)

Last Updated: December 17, 2025