Nicola Bernini
# Implementation Analysis

Code first: let's take the implementation from [here](https://github.com/NVIDIA/pix2pixHD/blob/5a2c87201c5957e2bf51d79b8acddb9cc1920b26/models/networks.py#L112) (the `Vgg19` module it relies on is defined in the same `networks.py` file)

```python
class VGGLoss(nn.Module):
    def __init__(self, gpu_ids):
        super(VGGLoss, self).__init__()
        self.vgg = Vgg19().cuda()      # frozen VGG19 feature extractor
        self.criterion = nn.L1Loss()   # per-layer distance
        self.weights = [1.0/32, 1.0/16, 1.0/8, 1.0/4, 1.0]

    def forward(self, x, y):
        x_vgg, y_vgg = self.vgg(x), self.vgg(y)
        loss = 0
        for i in range(len(x_vgg)):
            # weighted L1 between matching feature maps; y acts as a
            # fixed target, hence the detach()
            loss += self.weights[i] * self.criterion(x_vgg[i], y_vgg[i].detach())
        return loss
```
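Hypothetical usage (the tensor names and shapes below are assumptions, and a CUDA device is required):

```python
vgg_loss = VGGLoss(gpu_ids=[0])

# generated_images, real_images: hypothetical (N, 3, H, W) float tensors on the GPU
loss = vgg_loss(generated_images, real_images)  # scalar tensor
loss.backward()  # gradients flow only through x, since y is detached
```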
The underlying math:

$$f_{VGG}(I) \rightarrow \{H_{i}\}_{i=1,\dots,n} \qquad I \in \mathcal{I}, \quad H \in \mathcal{H}$$

The VGG network maps an image $I$ to a set of $n$ hidden representations. Applying it to a pair of images:

$$f_{VGG}(I^{(1,2)}) \rightarrow \{H_{i}\}_{i=1,\dots,n}^{(1,2)}$$

A loss then maps the two representation sets to a scalar:

$$L(H^{(1)}, H^{(2)}) \rightarrow \mathbb{R}$$

Here the per-layer loss is $L_{1}(H_{i}^{(1)}, H_{i}^{(2)})$, and the overall distance is the weighted sum

$$D = \sum_{i=1}^{n} w_{i} \, L_{1}(H_{i}^{(1)}, H_{i}^{(2)})$$
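For reference, a self-contained sketch of the feature extractor $f_{VGG}$, assuming torchvision's pretrained VGG19; the class name `Vgg19Features` and the exact slice indices are illustrative (the linked `networks.py` defines its own `Vgg19` along these lines):

```python
import torch.nn as nn
from torchvision import models

class Vgg19Features(nn.Module):
    """Hypothetical stand-in for f_VGG: maps an image I to {H_i}, i = 1..5."""
    def __init__(self):
        super().__init__()
        vgg = models.vgg19(pretrained=True).features
        # Cut VGG19 right after relu1_1, relu2_1, relu3_1, relu4_1, relu5_1
        cuts = [0, 2, 7, 12, 21, 30]
        self.slices = nn.ModuleList(
            [nn.Sequential(*[vgg[j] for j in range(cuts[k], cuts[k + 1])])
             for k in range(5)]
        )
        for p in self.parameters():
            p.requires_grad = False  # f_VGG is a fixed feature extractor

    def forward(self, x):
        feats = []
        for s in self.slices:
            x = s(x)
            feats.append(x)  # H_i
        return feats
```

With five outputs like these, the weights $1/32, 1/16, 1/8, 1/4, 1$ in the code above weight the deeper layers more heavily.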
# Architecture  
# History of Deep Learning Training

- It is well known that the loss function for DNN training is highly non-convex
- In 1986, [Murty, Katta G, & Kabadi,...
# Overview 
# Update Equation 
# Representation Learning 
# Universal Domain Adaptation through Self Supervision

[Universal Domain Adaptation through Self Supervision](https://arxiv.org/abs/2002.07953)

# Analysis

## Overview

- NN as $Y = f(X)$ with
  - $X$ : Source Domain
  - ...
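A trivial sketch of the notation (hypothetical layer sizes and batch shape):

```python
import torch
import torch.nn as nn

# Y = f(X): a network mapping source-domain inputs X to predictions Y
f = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

X = torch.randn(32, 128)  # hypothetical batch from the source domain
Y = f(X)
print(Y.shape)  # torch.Size([32, 10])
```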
# DRL

- RL is about learning a mapping between a State / Situation Space and an Action Space (a minimal tabular sketch follows below)
- For small and discrete spaces, it can be represented as...
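A minimal tabular sketch (the state and action spaces below are hypothetical, and representing the mapping as an explicit lookup table is an assumption):

```python
import random

# Hypothetical small, discrete state and action spaces
states = [0, 1, 2, 3]
actions = ["left", "right"]

# The learned mapping as an explicit state -> action table
policy = {s: random.choice(actions) for s in states}

def act(state):
    """Look up the action for a given state."""
    return policy[state]

print(act(2))
```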
# Slow DRL

Sources of slow learning:

- Gradient based methods
- Inductive Bias: Generality vs Learning Speed Trade-off

## Gradient based methods

- Can be framed as Exploration vs...
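As a toy illustration of the first point (a hypothetical quadratic objective, not from the original post): gradient-based methods improve the parameters only through many small local steps, which is one intuition for why learning is slow.

```python
import torch

# Hypothetical objective: move parameters theta towards a fixed target
theta = torch.zeros(3, requires_grad=True)
target = torch.tensor([1.0, -2.0, 0.5])
opt = torch.optim.SGD([theta], lr=0.05)

for step in range(200):
    loss = ((theta - target) ** 2).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(theta)  # reaches the target only after many small updates
```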