The correct option is C.
In transfer learning, Fine-Tuning is the process of adapting a pre-trained model to a new task by making small adjustments to its parameters. Fine-tuning helps the model perform well on the new, related task by leveraging the knowledge it gained from the original task.
Important Key Points:
1. Fine-Tuning: Fine-tuning involves updating a pre-trained model on a new dataset, typically by retraining some layers of the model while freezing others. This allows the model to learn relevant features for the new task.
2. Transfer Learning: Transfer learning takes a model pre-trained on one task and applies it to a related but different task. Fine-tuning helps the model adjust to the new task far more efficiently than training from scratch.
3. Improving Performance: Fine-tuning improves a model's performance on a new task by reusing the knowledge in the pre-trained weights, often requiring less data and less compute than training from scratch.
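The key points above can be sketched in plain Python. This is a deliberately tiny, hypothetical illustration (not a real framework API): a "pre-trained" two-layer linear model where the first layer is frozen and only the second layer (the task head) is updated on the new task's data.

```python
# Minimal fine-tuning sketch: layer w1 is frozen (pre-trained feature
# extractor); only the task head w2 is updated on the new task.
# All weights and data here are hypothetical, for illustration only.

def fine_tune(data, w1, w2, lr=0.01, epochs=200):
    """Gradient descent on w2 only; w1 stays frozen."""
    for _ in range(epochs):
        for x, y in data:
            h = w1 * x                    # frozen layer: no update applied
            pred = w2 * h                 # trainable task head
            grad_w2 = 2 * (pred - y) * h  # d(MSE)/d(w2)
            w2 -= lr * grad_w2            # only the head is trained
    return w1, w2

# "Pre-trained" weights carried over from the original task
w1, w2 = 2.0, 1.0
# New, related task: target function y = 6 * x
data = [(x, 6 * x) for x in [0.5, 1.0, 1.5]]
w1, w2 = fine_tune(data, w1, w2)
print(round(w1, 2), round(w2, 2))  # w1 unchanged (frozen); w2 converges to 3.0
```

In a real framework the same idea appears as marking some layers non-trainable (e.g. disabling their gradient updates) while retraining the rest, exactly as described in point 1.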
Knowledge Booster:
· Data Normalization: Data normalization refers to scaling data to a specific range or distribution. While important in machine learning, it is not the process of adapting a pre-trained model to a new task.
· Feature Extraction: Feature extraction involves selecting important features from the data to improve model performance, but it does not involve adapting a pre-trained model to a new task.
· Model Ensembling: Model ensembling combines predictions from multiple models to improve accuracy but is not related to adapting a pre-trained model.
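To make the contrast with the distractor options concrete, here is a minimal, self-contained sketch of two of them: min-max normalization (rescaling data to [0, 1]) and a simple averaging ensemble. Note that neither touches a pre-trained model's weights, which is why neither is fine-tuning. The function names are illustrative, not from any particular library.

```python
# Data normalization: rescale values into the [0, 1] range.
def min_max_normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Model ensembling: average per-sample predictions from several models.
def ensemble_average(predictions):
    return [sum(p) / len(p) for p in zip(*predictions)]

print(min_max_normalize([2, 4, 6]))        # [0.0, 0.5, 1.0]
print(ensemble_average([[1, 2], [3, 4]]))  # [2.0, 3.0]
```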