Build a deep learning classifier in two ways to decide whether the animal in an image is a dog or a cat, and compare the two models. The first model is a simple Convolutional Neural Network; the second is obtained by transfer learning from VGGNet.
VGGNet was the runner-up in the ImageNet Challenge 2014. Although GoogLeNet won the challenge that year, VGGNet was praised for the simplicity and uniformity of its architecture.
For this project, I will be using 2,000 images of dogs and cats for training, and another 2,000 images for testing.
After importing the images, I resized them all to the same dimensions before analysis.
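A minimal sketch of that preprocessing step. The target size of 150×150 is an assumption; the post does not state the exact dimensions used.

```python
from PIL import Image
import numpy as np

def load_and_resize(path, size=(150, 150)):
    """Load an image file, resize it to a fixed size, and scale pixels to [0, 1].

    The 150x150 target size is an assumption; the post does not give
    the exact dimensions used.
    """
    img = Image.open(path).convert("RGB")  # force 3 channels
    img = img.resize(size)
    return np.asarray(img, dtype="float32") / 255.0
```

Every image, whatever its original aspect ratio, comes out as the same fixed-size array, which is what the network's input layer requires.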
The first model is a simple Convolutional Neural Network.
In this model, the data pass through two Conv2D layers with 16 filters each, followed by max pooling. The data then pass through another two Conv2D layers, this time with 32 filters each, followed by another max pooling. Finally, a fully connected network generates the model's output.
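The architecture just described can be sketched in Keras as follows. The input size, dense width, and activation choices are assumptions, since the post only specifies the convolutional filter counts.

```python
from tensorflow.keras import layers, models

def build_simple_cnn(input_shape=(150, 150, 3)):
    """Simple CNN as described: two Conv2D(16) layers, max pooling,
    two Conv2D(32) layers, max pooling, then a dense head.

    Input size and dense width are assumptions; the post does not
    give the exact values.
    """
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(512, activation="relu"),   # most parameters live here
        layers.Dense(1, activation="sigmoid"),  # dog vs cat
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

Calling `model.summary()` on such a model shows why the dense layer dominates the parameter count: flattening the last feature map produces tens of thousands of features, each fully connected to every dense unit.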
As you can see from the model summary, there are 12,861,969 parameters in total. Notably, most of them are concentrated in the dense_1 layer.
The next model is obtained by transfer learning from VGGNet.
This is the table of configurations the VGG team presented when they introduced the model. They chose configuration D (highlighted with a red box) as their final model for the ImageNet Challenge.
This is only the first part of the model. As indicated in the table, I created two Conv2D layers with 64 filters each, followed by max pooling. This time, however, I set trainable to False, since we will not be training these layers. Instead, I loaded the pre-trained weights that VGGNet had already learned from the ImageNet data; these weights are available online. After writing out the layers with trainable=False as indicated in the table, I loaded the weights:
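As a sketch of the same idea, `keras.applications` can load the full VGG16 convolutional base with its ImageNet weights in one call, instead of building each block by hand and loading a downloaded weight file; this is a shortcut for, not a reproduction of, the post's layer-by-layer approach.

```python
from tensorflow.keras.applications import VGG16

def build_frozen_vgg_base(input_shape=(150, 150, 3), weights="imagenet"):
    """Load the VGG16 convolutional base and freeze it.

    weights="imagenet" pulls the pre-trained ImageNet weights;
    include_top=False drops VGG's own classifier so we can attach ours.
    The input shape is an assumption.
    """
    base = VGG16(weights=weights, include_top=False, input_shape=input_shape)
    base.trainable = False  # keep the pre-trained features fixed
    return base
```

Freezing the base means backpropagation updates only the layers we add on top, which is the essence of this transfer-learning setup.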
After that, I built the fully connected network, this time without setting trainable to False. This part molds the extracted features into the final result.
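A sketch of that trainable head stacked on the frozen base; the hidden-layer width is an assumption, since the post does not state it.

```python
from tensorflow.keras import layers, models

def add_classifier_head(base):
    """Stack a trainable fully connected classifier on a frozen conv base.

    The hidden width of 256 is an assumption; only these dense layers
    are updated during training.
    """
    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # dog vs cat
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

Because the base is frozen, `model.trainable_weights` contains only the dense layers' weights, so each epoch is far cheaper than training VGG end to end.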
This is the result of the model's training. It reached 90% accuracy, which I believe would increase further if I let it run for more epochs.
Then I fed in the test data (another 2,000 photos of dogs and cats) and got similar accuracy. Below are some of the images this model got wrong: dogs classified as cats and vice versa.
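One way to collect those misclassified images, assuming the model outputs a probability per image (as `model.predict` does with a sigmoid output) and labels follow the common 0 = cat, 1 = dog convention:

```python
import numpy as np

def find_misclassified(probs, y_true, threshold=0.5):
    """Return indices of images the model labeled wrong.

    probs:  predicted probabilities, e.g. model.predict(x_test).ravel()
    y_true: true labels; 0 = cat, 1 = dog (this convention is an assumption)
    """
    preds = (np.asarray(probs).ravel() >= threshold).astype(int)
    return np.where(preds != np.asarray(y_true))[0]
```

The returned indices can then be used to pull the offending images out of the test set and plot them for inspection.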