
Tracking issue for the results of different network architectures

Open marco-c opened this issue 6 years ago • 36 comments

We will use this issue to track the results of the different network architectures and training methodologies.

Add a comment in this issue when you are working on one of them (I'll write in progress to mark it), when you're finished with one of them (so I can mark it as done), or when you think of a new one to add.

- Pretrained with ImageNet:
  - [ ] vgg16 @sagarvijaygupta
  - [ ] vgg19 @sagarvijaygupta
  - [ ] inception
  - [ ] resnet @sagarvijaygupta
- Pretrained with pretrain.py:
  - [ ] vgg16
  - [ ] vgg19
  - [ ] vgg-like
  - [ ] simnet
  - [ ] simnet-like
  - [ ] resnet
- From scratch:
  - [ ] vgg16 @sdv4
  - [ ] vgg19 @sdv4
  - [ ] vgg-like @sdv4
  - [ ] simnet
  - [ ] simnet-like
  - [ ] resnet

marco-c avatar Jun 09 '18 07:06 marco-c

@marco-c I think it would be a good idea to keep a list of the current architectures in a todo-list format here, and to update the list whenever a new architecture is added. This would help us keep track.

Shashi456 avatar Jun 09 '18 07:06 Shashi456

@marco-c I think it would be a good idea to keep a list of the current architectures in a todo-list format here, and to update the list whenever a new architecture is added. This would help us keep track.

Yes, this was exactly my idea :)

When you finish one, tell me here and I'll update the list. If you want to add one, do the same.

marco-c avatar Jun 09 '18 07:06 marco-c

:D Oh, this is a nice idea! One place to show all the benchmarks!

Trion129 avatar Jun 09 '18 07:06 Trion129

It looks like network.py contains implementations for 'inception', 'vgglike', 'vgg16', 'vgg19', 'simnet', and 'simnetlike' architectures. Are there other architectures that still need to be implemented?

sdv4 avatar Jun 09 '18 22:06 sdv4

@marco-c I think we need to start thinking about a benchmark. When we start training these networks, we will need to benchmark them against something (like human accuracy in the CIFAR challenges). What do you think?

Also, we haven't added ResNet to our networks yet.

Shashi456 avatar Jun 11 '18 06:06 Shashi456

I was working on the pretrained VGG16 model and got a validation accuracy of 80%. I was not able to save the model to a file, though (because of a bug which will be fixed by #201).

```
Epoch 50/50
83/82 [==============================] - 152s 2s/step - loss: 0.0243 - accuracy: 0.9672 - val_loss: 0.1793 - val_accuracy: 0.7841
Epoch 00050: val_accuracy did not improve from 0.80966
[0.1576942801475525, 0.8004807692307693]
```

sagarvijaygupta avatar Jun 11 '18 14:06 sagarvijaygupta

Should we create a directory where models will be saved? And should we change https://github.com/marco-c/autowebcompat/blob/80fd975f4df84444c751f861297895918cad245b/train.py#L84 to a name like user_best_VGG16_model or something like that, so that we can get a link between the train_info file and the model?

sagarvijaygupta avatar Jun 11 '18 16:06 sagarvijaygupta

It looks like network.py contains implementations for 'inception', 'vgglike', 'vgg16', 'vgg19', 'simnet', and 'simnetlike' architectures. Are there other architectures that still need to be implemented?

As @Shashi456 said, ResNet. There are also other architectures that we might add, but I would focus on getting at least something basic working and then we can try to improve on it.

@marco-c I think we need to start thinking about a benchmark. When we start training these networks, we will need to benchmark them against something (like human accuracy in the CIFAR challenges). What do you think?

The benchmark could be https://github.com/marco-c/autowebcompat/issues/195.

I was working on the pretrained VGG16 model and got a validation accuracy of 80%.

80% is impressive for a first try! But it might be due to class imbalance; we should take that into account.

Should we create a directory where models will be saved? And should we change

I'm thinking of creating another repository where we store the models and setting it as a submodule of this repository (like data and tools).
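The setup would look roughly like this (the models repository URL is hypothetical, just to illustrate):

```
git submodule add https://github.com/marco-c/autowebcompat-models models
git submodule update --init
```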

to a name like user_best_VGG16_model or something like that, so that we can get a link between the train_info file and the model?

Yes, linking the train_info file and the model should be done; I'm not sure about the name, though.

marco-c avatar Jun 12 '18 05:06 marco-c

80% is impressive for a first try! But it might be due to class imbalance; we should take that into account.

So should we use a confusion matrix to account for class imbalance? Or should we make the training dataset itself balanced (something similar to pretrain.py)?

Yes, linking the train_info file and the model should be done; I'm not sure about the name, though.

We could simply give the model the same name as the train_info file, if that sounds good?

Also, I wanted to know: is there any particular reason we implemented VGG16 and the others as functions instead of using the predefined ones available in Keras?

sagarvijaygupta avatar Jun 12 '18 15:06 sagarvijaygupta

Update: VGG16 pretrained with ImageNet. The high accuracy was indeed because of class imbalance: on checking the predictions of the latest model (which gave 90% accuracy after 15 epochs), I found that we have a class ratio of 36:380.

I have attached the text file generated for the training. f8ece846acde_16_26_2018_06_12.txt

sagarvijaygupta avatar Jun 12 '18 18:06 sagarvijaygupta

@sagarvijaygupta @marco-c I think we definitely need to handle class imbalance before trusting these accuracy values, because with a class imbalance this high we would reach a high accuracy even if all the predictions were 'y'.

Shashi456 avatar Jun 12 '18 18:06 Shashi456

So should we use a confusion matrix to account for class imbalance? Or should we make the training dataset itself balanced (something similar to pretrain.py)?

I think we should use a confusion matrix. Making the training dataset balanced is feasible for pretrain.py, because there we have effectively infinite training examples, but for train.py we only have a limited dataset.
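To be concrete, something like this could work (a rough sketch; `model`, `x_val`, and `y_val` are placeholders for whatever train.py produces):

```python
# Rough sketch (placeholder names): report a confusion matrix and per-class
# recall instead of plain accuracy, so class imbalance becomes visible.
from sklearn.metrics import confusion_matrix

y_pred = (model.predict(x_val) > 0.5).astype(int).ravel()
cm = confusion_matrix(y_val, y_pred)  # rows: true class, cols: predicted class
print(cm)

# A classifier that only ever predicts the majority class gets high accuracy
# but ~0 recall on the minority class, which the matrix exposes immediately.
print(cm.diagonal() / cm.sum(axis=1))  # per-class recall
```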

marco-c avatar Jun 13 '18 04:06 marco-c

We could simply give the model the same name as the train_info file, if that sounds good?

Sounds good to me!
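For example, something along these lines (a sketch; the variable name and the models/ directory are assumptions):

```python
# Sketch (assumed names): save the best model under the same base name as the
# train_info file, so the two artifacts can be matched up later.
import os
from keras.callbacks import ModelCheckpoint

train_info_name = "f8ece846acde_16_26_2018_06_12"  # example name from above
checkpoint = ModelCheckpoint(
    os.path.join("models", train_info_name + ".hdf5"),
    monitor="val_accuracy",
    save_best_only=True,
)
# model.fit(..., callbacks=[checkpoint])
```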

marco-c avatar Jun 13 '18 04:06 marco-c

Also, I wanted to know: is there any particular reason we implemented VGG16 and the others as functions instead of using the predefined ones available in Keras?

If we can reuse them, we definitely should. The first network I wrote was the "vgg-like" one, so clearly that one wasn't available in Keras. Then, when we added more, I forgot there were already some available in Keras.
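For reference, reusing one of them could look roughly like this (the function name and shapes are assumptions, not the actual network.py API):

```python
# Sketch (assumed names/shapes): build the Keras-provided VGG16 and put a
# small custom head on top, instead of hand-writing all the conv blocks.
from keras.applications import VGG16
from keras.layers import Dense, Flatten
from keras.models import Model

def create_vgg16(input_shape=(224, 224, 3), weights=None):
    # weights=None -> train from scratch; weights="imagenet" -> pretrained
    base = VGG16(weights=weights, include_top=False, input_shape=input_shape)
    x = Flatten()(base.output)
    x = Dense(256, activation="relu")(x)
    return Model(base.input, x)
```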

marco-c avatar Jun 13 '18 04:06 marco-c

@sagarvijaygupta @marco-c I think we definitely need to handle class imbalance before trusting these accuracy values, because with a class imbalance this high we would reach a high accuracy even if all the predictions were 'y'.

Indeed, this is probably what's happening with the 90% accuracy.
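(With the 36:380 ratio reported above, always predicting the majority class already gives 380 / 416 ≈ 91.3%, so a 90% accuracy tells us almost nothing by itself.)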

marco-c avatar Jun 13 '18 04:06 marco-c

@marco-c Should I create a separate PR for each model which is available in https://keras.io/applications/? And for using pretrained models, should we pass an argparse option like --weights=imagenet?

sagarvijaygupta avatar Jun 13 '18 06:06 sagarvijaygupta

@marco-c Should I create a separate PR for each model which is available in https://keras.io/applications/?

Yes, but this is not a high priority. It doesn't matter for now if we keep our own implementation or if we reuse the already existing ones.

And for using pretrained models, should we pass an argparse option like --weights=imagenet?

Sounds good to me!
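Something like this should be enough (a sketch; the exact option names are up to you):

```python
# Sketch of the proposed command-line flag (names are assumptions):
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("network", choices=["vgg16", "vgg19", "resnet50"],
                    help="network architecture to train")
parser.add_argument("--weights", default=None, choices=["imagenet"],
                    help="optional pretrained weights to load")
args = parser.parse_args()
```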

marco-c avatar Jun 15 '18 05:06 marco-c

@marco-c using pre-trained weights might be simpler if we directly use the Keras models.

sagarvijaygupta avatar Jun 15 '18 06:06 sagarvijaygupta

@marco-c I totally forgot to remove the prediction layer (the softmax one) while using the pre-trained VGG16. We should not trust those values for now!
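For anyone else hitting this, the difference is just the include_top flag (a sketch; output shapes are for the default 224x224 input):

```python
# include_top=True keeps the 1000-way ImageNet softmax head (the mistake);
# include_top=False keeps only the convolutional base (the fix).
from keras.applications import VGG16

with_head = VGG16(weights="imagenet", include_top=True)
print(with_head.output_shape)  # (None, 1000): ImageNet class predictions

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
print(base.output_shape)       # (None, 7, 7, 512): reusable features
```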

sagarvijaygupta avatar Jun 15 '18 06:06 sagarvijaygupta

Just a heads up, I am going to start testing the VGG19 (from scratch) architecture. I will open a PR for this too.

sdv4 avatar Jun 15 '18 17:06 sdv4

Network - ResNet50
Pretrained - ImageNet
Optimiser - SGD
Epochs - 20
Accuracy - 85.81%

65d451c26877_18_38_2018_06_16.txt

Confusion Matrix:

```
132  41
 16 195
```
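(Assuming these follow scikit-learn's layout: rows are true classes, columns are predicted classes, so the diagonal holds the correctly classified counts.)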

sagarvijaygupta avatar Jun 16 '18 18:06 sagarvijaygupta

@sagarvijaygupta is this for Y vs D + N or Y + D vs N?

marco-c avatar Jun 17 '18 02:06 marco-c

@marco-c This is for Y vs D + N. It's in the file. :smile:

sagarvijaygupta avatar Jun 17 '18 03:06 sagarvijaygupta

Network - vgg16
Pretrained - ImageNet
Optimiser - SGD
Epochs - 20
Accuracy - 87.25%

65d451c26877_19_39_2018_06_16.txt

Confusion Matrix:

```
141  32
 23 188
```

sagarvijaygupta avatar Jun 17 '18 04:06 sagarvijaygupta

Network - vgg19
Pretrained - ImageNet
Optimiser - SGD
Epochs - 20
Accuracy - 86.77%

4aa405f41ed8_14_47_2018_06_18.txt

Confusion Matrix:

```
148  26
 30 180
```

sagarvijaygupta avatar Jun 18 '18 15:06 sagarvijaygupta

I have been having a difficult time using Colab over the past few days. Most times that I run my notebook, the process is killed. I have been trying to figure out why, and stumbled upon this post:

https://stackoverflow.com/questions/48750199/google-colaboratory-misleading-information-about-its-gpu-only-5-ram-available

I am on the west coast of Canada, where the author of that post is also located. I managed to get one good run late last night, where I ran the training for over 80 epochs; however, the output wasn't saved anywhere that I could find. Note that I am running the notebook that exists on my forked repo on GitHub.

@sagarvijaygupta where is your output being saved?

I am trying to run the training again right now with my Google Drive mounted in Colab, but I haven't been able to have a successful run over the past few hours, due to the issue linked above and the fact that there are no GPU backends available.

@marco-c is there another cloud based GPU service that you would recommend?

That being said, when running train.py via the notebook on Colab, the best val_accuracy achieved was around 85.7%, after over 50 epochs. However, when I run train.py locally on my machine (with no GPU), I get a val_accuracy of 95.2% after 4 epochs. I am trying to figure out why this is, but wanted to post the info in case the reason is obvious to someone.

sdv4 avatar Jun 19 '18 17:06 sdv4

@sdv4 First of all, for the problem of the GPU memory showing as nearly 500 MB, I did find a simple solution which works for me. Whenever you execute train.py, you will see the amount of memory available, like totalMemory: 11.17GiB freeMemory: 11.10GiB. If freeMemory is nearly 500 MB, just restart the runtime (Runtime -> Restart runtime...). When you restart your runtime you don't need to re-clone; only your local variables will be lost. It is usually solved in one try, and you won't get OOM or resource-exhausted type errors after that.

That said, my machine also dies sometimes while training is going on, and I too feel it won't be feasible to run training for larger numbers of epochs on Colab. Also, GPU backends are not always available, so sometimes it is luck! My output is saved in the Colab runtime only; if I want to save a model, I upload it to my Drive.

Regarding your accuracy issue: change the number of epochs to 4 only, run the training, and upload the generated text file. It might be the case that the number of samples you are taking is not the total number of samples available (some might be missing).
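A quick way to check from inside the notebook before starting a run (a sketch):

```python
# Sketch: print free/total GPU memory; if the free memory is ~500 MiB,
# restart the runtime as described above before training.
import subprocess

result = subprocess.run(
    ["nvidia-smi", "--query-gpu=memory.free,memory.total", "--format=csv"],
    stdout=subprocess.PIPE,
)
print(result.stdout.decode())
```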

sagarvijaygupta avatar Jun 19 '18 18:06 sagarvijaygupta

@sagarvijaygupta Regarding the accuracy issue, here is the text file after only one epoch, where val_accuracy is at 90.6%:

Shanes-MacBook-Pro.local_13_01_2018_06_19.txt

The number of training, test, and validation samples are the same as in the last txt you shared.

Also, thanks for the Colab tips. Good to know the problem isn't just on my end.

sdv4 avatar Jun 19 '18 23:06 sdv4

@sdv4 your classification type is different. This is the breakdown of labels.csv:

y - 2120
n - 593
d - 1136

so I am pretty sure that's the reason.
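(For Y + D vs N, that breakdown means a 3256:593 split, so always predicting the majority class would already give about 84.6% accuracy, whereas Y vs D + N is a more balanced 2120:1729.)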

sagarvijaygupta avatar Jun 20 '18 00:06 sagarvijaygupta

@sagarvijaygupta yes, you were right. The numbers are more in line with what was expected once I corrected the classification type:

Network - vgg19
Pretrained - none
Optimiser - SGD
Epochs - 50
Accuracy - 84.61%

258a95a88d5c_01_09_2018_06_21.txt

Confusion matrix:

```
[[136  39]
 [ 26 215]]
```

sdv4 avatar Jun 20 '18 05:06 sdv4

@sdv4 there is no such thing as "correcting" the classification type. Your results were for a different classification, and they were correct for that one. I guess we want results for both!

sagarvijaygupta avatar Jun 20 '18 05:06 sagarvijaygupta

@marco-c do you think there's a neater way to record these observations? The issue will get pretty verbose after a while, and it will get harder to track the benchmarks.

Shashi456 avatar Jun 20 '18 11:06 Shashi456

I think I'll just remove the comments at some point and put the summary of the results in the first comment.

marco-c avatar Jun 20 '18 22:06 marco-c

Heads up, I am going to start testing the VGG16 and VGG-like architectures (from-scratch variants).

sdv4 avatar Jun 21 '18 04:06 sdv4

Network - VGG16
Pretrained - None
Optimiser - SGD
Epochs - 50
Accuracy - 80.29%

6c685b649c2b_07_55_2018_06_22.txt

Confusion matrix:

```
[[142  61]
 [ 22 191]]
```

sdv4 avatar Jun 22 '18 07:06 sdv4

I've added usernames next to the networks people are testing, so we know who's testing what.

marco-c avatar Jul 25 '18 23:07 marco-c