EfficientNet-PyTorch
How can the EfficientNet model be used in multi-label image classification?
Same as the other architectures.
Here is a sample of how I used it for 5-class classification.
```python
import torch
import torch.nn as nn
import torch.optim as optim
from efficientnet_pytorch import EfficientNet

# Load the model and define the number of classes
model = EfficientNet.from_pretrained('efficientnet-b7', num_classes=5)

# Define optimizer and loss criterion
criterion = nn.CrossEntropyLoss()
learning_rate = 1e-3
optimizer = optim.Adam(model.parameters(), lr=learning_rate)

# Training loop
# (epochs, device, training_generator and eye are defined elsewhere
# in the notebook linked further down in this thread)
for epoch in range(epochs):
    running_loss = 0.0  # accumulated loss
    correct = 0         # correct predictions so far this epoch
    total = 0           # samples seen so far this epoch
    for i, data in enumerate(training_generator, 0):
        inputs, labels = data
        inputs, labels = inputs.to(device), labels.to(device)
        labels = eye[labels]  # one-hot encode the integer labels

        # Forward pass and loss
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, torch.max(labels, 1)[1])  # recover class indices

        # Accuracy bookkeeping
        _, predicted = torch.max(outputs, 1)
        _, labels = torch.max(labels, 1)
        correct += (predicted == labels).sum().item()
        total += labels.size(0)
        accuracy = float(correct) / float(total)

        # Backward pass and parameter update
        loss.backward()
        optimizer.step()
```
what is "training_generator"? And why use "labels = eye[labels]" ? Thank you!
> Same as the other architectures.
Do you have example code for multi-label classification? Thank you!
what is "training_generator"? And why use "labels = eye[labels]" ? Thank you!
`training_generator` is the data generator for the dataset I was working on, and `eye` generates an N×N tensor (used here to one-hot encode the integer labels). Here is the full code in my repo: https://github.com/kartikdutt18/Kaggle-ATPOS-Efficient_Net/blob/master/Efficient_Net.ipynb. If you have trouble viewing it, use this link https://www.kaggle.com/sanninjiraya/efficient-net or clone the repo.
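To illustrate that answer, here is a minimal sketch of how such an `eye` lookup behaves, assuming it is built with `torch.eye` (that construction is an assumption; the thread only says it produces an N×N tensor):

```python
import torch

num_classes = 5
# Assumed construction: an identity matrix whose i-th row is the
# one-hot vector for class i (the thread only says eye is N x N).
eye = torch.eye(num_classes)

labels = torch.tensor([0, 3, 1])  # integer class indices for a batch
one_hot = eye[labels]             # row lookup -> one-hot rows, shape (3, 5)
print(one_hot)
# tensor([[1., 0., 0., 0., 0.],
#         [0., 0., 0., 1., 0.],
#         [0., 1., 0., 0., 0.]])
```

Indexing the identity matrix with a batch of class indices is a common one-liner for one-hot encoding, which is why `torch.max(labels, 1)[1]` in the training loop above recovers the original indices.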
> Here is a sample of how I used it for 5-class classification. […]
This doesn't answer the original question. You're addressing a multi-class problem, whereas in a multi-label problem a single input can have multiple labels.
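For reference, a minimal sketch of what a multi-label setup could look like with this library. The `BCEWithLogitsLoss` criterion, the dummy batch shapes, and the 0.5 threshold are illustrative assumptions, not something confirmed in this thread:

```python
import torch
import torch.nn as nn
import torch.optim as optim
from efficientnet_pytorch import EfficientNet

# Multi-label setup: one output logit per label, trained with
# BCEWithLogitsLoss against multi-hot targets (several entries may be 1).
num_labels = 5
model = EfficientNet.from_pretrained('efficientnet-b0', num_classes=num_labels)
criterion = nn.BCEWithLogitsLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-3)

images = torch.randn(2, 3, 224, 224)            # dummy batch (illustrative)
targets = torch.tensor([[1., 0., 1., 0., 0.],   # image 0 has labels 0 and 2
                        [0., 1., 0., 0., 1.]])  # image 1 has labels 1 and 4

optimizer.zero_grad()
logits = model(images)                          # shape (2, num_labels)
loss = criterion(logits, targets)
loss.backward()
optimizer.step()

# At inference, threshold each label's probability independently
# (0.5 is an assumed cutoff, tune per task).
predicted = (torch.sigmoid(logits) > 0.5).int()
```

Because each label's logit is thresholded independently, an image can be assigned zero, one, or several labels, which is exactly what distinguishes multi-label from multi-class classification.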