Transfer Learning and Fine-Tuning a Model Using VGG-16

Transfer learning and fine-tuning are important methods for building large-scale models from a small amount of data.

Usually, a deep learning model needs a massive amount of data for training, but it is not always easy to collect enough. In addition, training such a model from scratch often takes a long time; I know you don't want a single epoch of training to run from sunrise to sunset. In some areas, such as image classification, you can use fine-tuning to get around this.

For example, when you build an image classification model, a very deep CNN can work well (sometimes, though not always). Making that kind of model from scratch requires a huge amount of data. However, by starting from a model already trained on other data, it is enough to add one or a few layers on top and train only those. This saves a great deal of time and data.

Here, I will demonstrate this type of method, fine-tuning, with Keras.

Transfer learning makes use of the knowledge gained while solving one problem and applies it to a different but related problem.

For example, knowledge gained while learning to recognize cars can be used to some extent to recognize trucks.

Fine-tuning is a simple method: it takes an already-trained network and re-trains part of it on the new data set.

In the image below, each line represents a weight. In this case, the network architecture formed by the red lines and blue nodes is fixed, and the red lines (the weights) have already been trained on a huge amount of data. You can add one or a few layers on top of this architecture and train just those (sometimes also including some of the earlier layers).

Here, I have used a dataset of random images of 10 of my friends for training and evaluation. Hence, this dataset has 10 classes, one per friend, which you can see in the dictionary below.

From Keras, we can easily use some image classification models.

Here, we are taking VGG-16.

VGG-16 is a convolutional neural network (CNN) architecture that is considered a very good model for image classification. This architecture was an entry in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) in 2014.

Advantages

- It needs far less training data than training a deep model from scratch.
- It saves training time, since most of the weights are already learned.
- It reuses general visual features learned on a large dataset such as ImageNet.

Importing Libraries
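A typical set of imports for this walkthrough might look as follows, assuming TensorFlow 2.x, where Keras ships as tf.keras:

```python
# Core libraries for data handling, the pretrained model, and plotting.
import numpy as np
import matplotlib.pyplot as plt

from tensorflow.keras.applications import VGG16
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras import layers, models
```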

Making Data

Image data augmentation is used to expand the training dataset in order to improve the model's performance and its ability to generalize. In the Keras deep learning library, image data augmentation is supported via the ImageDataGenerator class.
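As a minimal runnable sketch of this augmentation step (the transform settings and the random stand-in images are illustrative assumptions, not the article's exact configuration):

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation pipeline: rescale pixels to [0, 1] and apply random transforms.
datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=20,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
)

# Random stand-in images (8 RGB images of 224x224) with one-hot labels for 10 classes.
x = np.random.randint(0, 256, size=(8, 224, 224, 3)).astype("float32")
y = np.eye(10)[np.random.randint(0, 10, size=8)]

# Each batch drawn from the generator is freshly augmented.
batch_x, batch_y = next(datagen.flow(x, y, batch_size=4, shuffle=False))
print(batch_x.shape, batch_y.shape)
```

With a real dataset you would instead call datagen.flow_from_directory on a folder that has one sub-directory per class.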

Building Model
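A sketch of the usual pattern for this step: load the pretrained VGG-16 base without its classifier, freeze it, and stack a small new head on top. The head's layer sizes here are my assumptions, and weights="imagenet" downloads the pretrained weights on first use:

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# Load VGG-16 trained on ImageNet, dropping its final classifier layers.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# Freeze the pretrained convolutional base so only the new layers are trained.
for layer in base.layers:
    layer.trainable = False

# Stack a small classification head for the 10 classes on top of the base.
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```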

Next, we train our model on the training images while monitoring performance on the validation images.
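The fit call itself follows the pattern below; this sketch swaps in a tiny untrained model and random arrays for the fine-tuned VGG-16 and the real image data, purely so the example runs quickly end to end:

```python
import numpy as np
from tensorflow.keras import layers, models

# Tiny stand-in for the VGG-16-based model (assumption: 10 classes, 32x32 inputs).
model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),
    layers.Conv2D(8, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Random stand-in data; with real data you would pass the image generators instead.
x = np.random.rand(40, 32, 32, 3).astype("float32")
y = np.eye(10)[np.random.randint(0, 10, size=40)]

# Hold out 20% of the data as a validation set, mirroring the train/validation split.
history = model.fit(x, y, validation_split=0.2, epochs=2, batch_size=8, verbose=0)
print(sorted(history.history.keys()))
```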

Evaluating Our Model on the Test Data Set
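Evaluation reduces to a single model.evaluate call; this runnable sketch uses a stand-in model and random "test" images, since the original code block is not reproduced here:

```python
import numpy as np
from tensorflow.keras import layers, models

# Stand-in for the trained model; in practice you would reuse the fine-tuned model.
model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Random stand-in test set; in practice pass your held-out test images or generator.
x_test = np.random.rand(16, 32, 32, 3).astype("float32")
y_test = np.eye(10)[np.random.randint(0, 10, size=16)]

loss, acc = model.evaluate(x_test, y_test, verbose=0)
print(f"test loss: {loss:.3f}, test accuracy: {acc:.3f}")
```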

Once you are done training the model, you can visualize the training/validation accuracy and loss.

Now we are done with training our model, so let's go ahead with predictions.
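Prediction then comes down to model.predict plus an argmax over the class probabilities. This sketch uses a stand-in model and hypothetical friend_0 … friend_9 class names; the real names would come from the training generator's class_indices mapping:

```python
import numpy as np
from tensorflow.keras import layers, models

# Stand-in model; in practice reuse the fine-tuned VGG-16 model here.
model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])

# Hypothetical class names standing in for the 10 friends.
class_names = [f"friend_{i}" for i in range(10)]

# Predict class probabilities for a batch of random stand-in images.
images = np.random.rand(4, 32, 32, 3).astype("float32")
probs = model.predict(images, verbose=0)  # shape (4, 10); each row sums to 1
predicted = [class_names[i] for i in np.argmax(probs, axis=1)]
print(predicted)
```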

Let me conclude with the steps we have covered so far:

1. Import the required libraries.
2. Prepare the data, using image augmentation to expand the training set.
3. Load the VGG-16 model pre-trained on ImageNet, freeze it, and add new layers for our 10 classes.
4. Train the new layers on the training and validation images.
5. Evaluate the model on the test data set and visualize the accuracy and loss curves.
6. Make predictions with the trained model.

For any questions, you can reach me by email (roshankg96 [at] gmail [dot] com).

Happy Learning!!
