Artificial Intelligence 🤖
Learning Approaches
Transfer Learning

Transfer learning is when you take the knowledge learned on one task A and apply it to another task B. For example, if you have trained a cat classifier on a lot of data, you can reuse part of that trained neural network to solve an x-ray classification problem: swap in the new data set (X, Y), re-initialize the last layer's weights, and retrain.

To do transfer learning, delete the last layer of the NN and its weights, and then:

  1. Option 1: if you have a small data set, keep all the other weights fixed. Add a new last layer (or layers), initialize its weights, feed the new data to the NN, and learn only the new weights.
  2. Option 2: if you have enough data, you can retrain all the weights.

Options 1 and 2 are called fine-tuning, and training on task A is called pretraining.
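The pretrain-then-fine-tune recipe above can be sketched in pure Python on a deliberately tiny "network" with one weight per layer. Everything here is invented for illustration (the toy tasks y = 2x and y = 6x, the weight names, the learning rate); a real setup would use a deep network and a framework such as PyTorch, where freezing a layer means setting `requires_grad = False` on its parameters.

```python
# Toy sketch of transfer learning: a two-layer "network" y = w2 * (w1 * x).
# w1 plays the role of the pretrained feature extractor; w2 is the
# task-specific last layer that gets replaced and re-initialized.

def train(xs, ys, w1, w2, freeze_w1, lr=0.05, epochs=500):
    """SGD on the loss 0.5 * (w2*w1*x - y)^2; optionally freeze layer 1."""
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            h = w1 * x            # earlier layer: the "feature extractor"
            err = w2 * h - y      # prediction error
            g2 = err * h          # d(loss)/d(w2)
            g1 = err * w2 * x     # d(loss)/d(w1)
            w2 -= lr * g2         # the new last layer always learns
            if not freeze_w1:     # Option 2: retrain all the weights
                w1 -= lr * g1
    return w1, w2

xs = [0.5, 1.0, 1.5, 2.0]

# Pretraining on task A (y = 2x): every weight is trained.
w1, w2 = train(xs, [2 * x for x in xs], w1=1.0, w2=0.1, freeze_w1=False)

# Transfer: discard the old last layer, re-initialize it, and fine-tune
# only the new layer on task B (y = 6x) with the earlier layer frozen
# (Option 1, for a small task-B data set).
w2_new = 0.1
w1, w2_new = train(xs, [6 * x for x in xs], w1, w2_new, freeze_w1=True)

print(w2_new * w1 * 1.0)  # prediction for x = 1.0 on task B
```

The key point of the sketch is the `freeze_w1` flag: the pretrained layer's weight is reused as-is while only the freshly initialized last layer adapts to the new task.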

Transfer learning works best when:

  • The new task (like recognizing x-rays) uses the same type of input as the old one (like recognizing cats) – for example, they both use images or audio.
  • You have a lot of data for the old task but not as much for the new one. Even so, the new task's data is more valuable because it's exactly what you need.
  • The basic knowledge for the old task can help with the new one. For example, if knowing about shapes and edges helps recognize both cats and x-rays, then that knowledge is useful to transfer.