How to add a "model as a layer" in PyTorch
Question:
Is there any way to use a "pre-trained model as a layer" in a custom net?
Pseudocode:
import torch
import torch.nn as nn
import torch.nn.functional as F

pretrained_model = torch.load('model')

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.model_layer = pretrained_model  # ?
        self.fc1 = nn.Linear(num_classes_of_model_layer, 320)
        self.fc2 = nn.Linear(320, 160)
        self.fc3 = nn.Linear(160, num_classes)

    def forward(self, x):
        x = pretrained_model.  # ?
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
Answers:
Yes, you can absolutely use another model as part of your Module, since the other model is itself an nn.Module. Assign it in __init__:

    self.model_layer = pretrained_model

and run inference as usual with x = self.model_layer(x) inside forward.
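Putting the answer together, here is a minimal runnable sketch. A small nn.Sequential stands in for the pre-trained model (in practice you would load your own, e.g. with torch.load), and the feature size 8 and class counts are illustrative assumptions, not from the original post:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in for a real pre-trained model (normally loaded from disk).
pretrained_model = nn.Sequential(nn.Linear(8, 10))
num_classes_of_model_layer = 10  # output size of the wrapped model
num_classes = 4                  # final output size of the custom net

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # Assigning the model as an attribute registers it as a submodule,
        # so its parameters show up in Net().parameters() automatically.
        self.model_layer = pretrained_model
        self.fc1 = nn.Linear(num_classes_of_model_layer, 320)
        self.fc2 = nn.Linear(320, 160)
        self.fc3 = nn.Linear(160, num_classes)

    def forward(self, x):
        x = self.model_layer(x)  # call the wrapped model like any other layer
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)

net = Net()
out = net(torch.randn(2, 8))  # batch of 2 inputs with 8 features each
```

Because self.model_layer is a registered submodule, training Net will also update the pre-trained model's weights unless you freeze them (e.g. by setting requires_grad = False on its parameters).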