
Transfer Learning reuses a previously trained model for a new problem. It is currently very popular in the field of Deep Learning because it makes it possible to train neural networks with relatively small amounts of data. This matters because most real-world problems do not come with the millions of labelled data points needed to train such complex models from scratch.
Transfer learning – what is it?
In Transfer Learning, the knowledge of an already trained machine learning model is applied to a different but related problem. For example, if you’ve trained a simple classifier to predict whether a picture contains a backpack, you can use the knowledge the model gained during training to identify other objects, such as sunglasses. In essence, we try to use what has been learned in one task to improve generalization in another: we transfer the knowledge the network has learned in task A to a new task B.
The general idea is to take the knowledge a model has learned from a task with plenty of labelled training data and apply it to a new task for which little data is available. Instead of starting the learning process from scratch, you start from the patterns the model learned while solving the related task.
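To make this concrete, here is a minimal sketch of the idea in Keras: reuse a network pre-trained on ImageNet (task A) and attach a small new head for task B. The choice of MobileNetV2 and the binary “sunglasses” head are assumptions made for the example, not part of the article.

```python
import tensorflow as tf

# Reuse a network pre-trained on ImageNet (task A) without its
# original classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,
    weights="imagenet",
)
base.trainable = False  # keep the transferred knowledge frozen

# Attach a small new head for task B, e.g. "does the picture
# contain sunglasses?" as in the example above.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
```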
Strictly speaking, Transfer Learning is not a machine learning technique in itself. It is better viewed as a “design methodology” within machine learning, much like active learning, and it is not an exclusive part or sub-field of machine learning. Nevertheless, it has become quite popular in connection with neural networks, because they require huge amounts of data and computing power.
Transfer learning – design methodology
In image recognition, neural networks usually detect edges in the early layers, shapes in the middle layers, and task-specific features in the last layers. With transfer learning, you keep the early and middle layers and re-train only the last ones. This lets you benefit from the labelled data of the task on which the network was initially trained.
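A hedged sketch of this layer-freezing scheme in Keras might look as follows; VGG16 and the exact split point (keeping everything up to the last convolutional block frozen) are illustrative assumptions.

```python
import tensorflow as tf

# Load a network pre-trained on ImageNet; VGG16 is an illustrative choice.
base = tf.keras.applications.VGG16(weights="imagenet",
                                   include_top=False,
                                   input_shape=(224, 224, 3))

# Freeze the early and middle layers (edge and shape detectors);
# only the last convolutional block remains trainable.
for layer in base.layers:
    layer.trainable = layer.name.startswith("block5")

# Add fresh task-specific layers on top and re-train only these parts.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss="binary_crossentropy",
              metrics=["accuracy"])
```

A low learning rate is used here so that fine-tuning the unfrozen block does not destroy the transferred weights.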
In Transfer Learning, we try to transfer as much knowledge as possible from the task the model was originally trained on to the new task. This knowledge can take different forms depending on the problem and the data; for example, it may lie in how the model’s layers are composed, which allows us to identify new objects more easily.
Pros and cons
Using Transfer Learning has many advantages: you save training time, your neural network performs better in most cases, and you do not need a lot of data. Because the model is already pre-trained, you can build a solid machine learning model with relatively little training data. This is especially valuable in natural language processing (NLP), where expert knowledge is required to create large, labelled data sets.
It is difficult to formulate rules that apply to machine learning in general. That said, you usually reach for Transfer Learning when you do not have enough labelled training data to train your network from scratch, and/or when there already exists a network pre-trained on a similar task, usually on huge amounts of data. Another case where it is appropriate is when task A and task B have the same kind of input data.
If the original model was trained using TensorFlow, you can simply restore it and re-train some of its layers for your task. Bear in mind that Transfer Learning works only if the features learned in the first task are general, meaning they can also be useful in other, related tasks.
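For illustration, a possible restore-and-retrain flow with the Keras API could look like this; the saved file name and the five-class output for task B are hypothetical.

```python
import tensorflow as tf

# Restore a model previously saved for task A (file name is hypothetical).
old_model = tf.keras.models.load_model("task_a_model.keras")

# Everything up to the penultimate layer becomes a frozen feature extractor.
backbone = tf.keras.Model(inputs=old_model.input,
                          outputs=old_model.layers[-2].output)
backbone.trainable = False

# Re-create the last layer for task B; 5 classes is an arbitrary assumption.
outputs = tf.keras.layers.Dense(5, activation="softmax")(backbone.output)
new_model = tf.keras.Model(inputs=backbone.input, outputs=outputs)
new_model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
```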