arbitrary_style_transfer
VGG Model File Source (And a question)
Two questions:
- Where can I find the code used to train the vgg19 model that is available for download in the repository's README?
- During training, are the weights of the VGG model updated, or are they kept fixed? The project I'm working on is style transfer for webpage screenshots. I suspect that I need to train a model for classifying webpages before I can do style transfer. That's why it would be useful to know how you trained this VGG model, so that I can do the same (with my own dataset).
Many thanks, great repo!
Hi apockill,
You don't need to re-train the vgg19 model. You can think of vgg19 as an off-the-shelf product of the 'ILSVRC' competition (ref paper: https://arxiv.org/pdf/1409.1556.pdf). People commonly use vgg19 as a fixed model to extract features. Of course, if you do need to train your own vgg19 on your dataset, you could use keras to build a blank vgg19 model. (code ref: https://gist.github.com/baraldilorenzo/8d096f48a1be4a2d660d#file-vgg-19_keras-py)
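For example, here is a minimal sketch of that idea using the modern `keras.applications` API (my assumption; the gist above builds the layers by hand instead). It loads vgg19 and freezes it so it acts purely as a fixed feature extractor:

```python
# Minimal sketch (assumes TensorFlow/Keras is installed).
from tensorflow.keras.applications import VGG19

# weights=None builds a blank (untrained) vgg19 you could train yourself;
# pass weights="imagenet" instead to get the pretrained ILSVRC weights.
vgg = VGG19(weights=None, include_top=False)
vgg.trainable = False  # treat vgg19 as a fixed feature extractor

# Keras names the conv layer behind relu4_1 as "block4_conv1"
features_layer = vgg.get_layer("block4_conv1")
print(features_layer.name)
```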
During my model training, the weights of the vgg model are NOT updated; they stay fixed. In my repo, vgg19 is not used for classification. I leverage the first few layers of vgg19 (up to relu4_1) to extract features from the input images. One tip about using the trained model to stylize: with the same content image and style image but a different input size, you will get a different stylized result.
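If you're curious how such relu4_1 features get combined, one common formulation in arbitrary style transfer (e.g. AdaIN) matches channel-wise statistics of the feature maps. A toy NumPy sketch of that idea, purely illustrative and not taken from this repo (shapes and values are made up):

```python
import numpy as np

def adain(content_feats, style_feats, eps=1e-5):
    """Sketch of AdaIN-style statistic matching: re-normalize a content
    feature map of shape (H, W, C) so its channel-wise mean/std match
    those of the style feature map."""
    c_mean = content_feats.mean(axis=(0, 1), keepdims=True)
    c_std = content_feats.std(axis=(0, 1), keepdims=True)
    s_mean = style_feats.mean(axis=(0, 1), keepdims=True)
    s_std = style_feats.std(axis=(0, 1), keepdims=True)
    normalized = (content_feats - c_mean) / (c_std + eps)
    return normalized * s_std + s_mean

# Toy "relu4_1" feature maps: 8x8 spatial grid with 4 channels
content = np.random.rand(8, 8, 4).astype(np.float32)
style = np.random.rand(8, 8, 4).astype(np.float32)
out = adain(content, style)
print(out.shape)  # (8, 8, 4)
```

This also hints at why input size matters: the spatial dimensions determine which activations are pooled into the statistics, so different input sizes give different style statistics and thus different stylized results.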
Thanks.
Thanks for the response!
For reference, my goal here is to:
- input 'style' page: facebook.com
- input 'content' page: google.com

Ideally, the output would be google.com with facebook-esque blue, some font changes, and interesting aesthetics.
The reason I suspect I may need to retrain VGG19 is that your VGG19 was trained on ImageNet, whose features would ordinarily be useful, but webpages are so different from the content of ImageNet pictures that the types of features extracted would be different.
What do you think? My thought was to retrain VGG19 (or use transfer learning) to classify webpages, so that it learns important features regarding fonts, colors, button roundness, etc.
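For concreteness, here is a rough Keras transfer-learning sketch of what I have in mind. Everything here is hypothetical: the class count, head layers, and input size are placeholders, and `weights=None` just lets the sketch build offline (in practice I'd start from `weights="imagenet"` and fine-tune):

```python
import tensorflow as tf
from tensorflow.keras.applications import VGG19

NUM_PAGE_CLASSES = 10  # hypothetical number of webpage categories

# weights=None so the sketch builds without downloading anything;
# use weights="imagenet" to start from the ILSVRC features.
base = VGG19(weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the conv base; only the new head trains

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(NUM_PAGE_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
print(model.output_shape)  # (None, 10)
```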
Is this idea even feasible? Thanks!
Cool idea!
Since the current VGG19 is trained to classify common objects (e.g. human beings, animals, fruits), it probably does not have enough capacity to extract the important features of webpages.
I agree with your proposal, even though it may require a lot of effort.
Thanks!