
-
Input Pipeline
# torchvision.datasets => torch.utils.data.DataLoader
-
Train vs. Val Difference
# train=True loads the training split; train=False loads the test/validation split
train_dataset = torchvision.datasets.MNIST(root='../data/',
                                           train=True,
                                           transform=transforms.ToTensor(),
                                           download=True)
val_dataset = torchvision.datasets.MNIST(root='../data/',
                                         train=False,
                                         transform=transforms.ToTensor()) -
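The comment above pairs the datasets with torch.utils.data.DataLoader. A minimal sketch of that step (using a small TensorDataset stand-in so it runs without downloading MNIST; the shapes and batch size are arbitrary illustration values):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for an MNIST-style dataset: 10 fake 1x28x28 images with labels.
images = torch.randn(10, 1, 28, 28)
labels = torch.randint(0, 10, (10,))
dataset = TensorDataset(images, labels)

# The DataLoader batches and (optionally) shuffles the dataset.
loader = DataLoader(dataset, batch_size=4, shuffle=True)

for batch_images, batch_labels in loader:
    print(batch_images.shape)  # batches of up to 4 images each
```

With the real MNIST datasets you would pass `train_dataset` / `val_dataset` in place of the stand-in, typically with `shuffle=True` only for training.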
torch.max's return value is a tuple: the first element is the maximum value, the second is its index (the argmax).
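For example (the tensor values are arbitrary, chosen just to make the two return values visible):

```python
import torch

logits = torch.tensor([[0.1, 0.7, 0.2],
                       [0.9, 0.05, 0.05]])

# torch.max over dim=1 returns (values, indices).
values, predicted = torch.max(logits, dim=1)
print(values)     # maximum value in each row
print(predicted)  # index of the maximum in each row: tensor([1, 0])
```

In a classifier, `predicted` is what you compare against the labels; the values themselves are usually discarded (hence the common `_, predicted = torch.max(outputs, 1)` idiom).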
-
Computing the accuracy

correct += (predicted == labels).sum().item()
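The line above accumulates the number of correct predictions over a validation loop; dividing by the total sample count gives the accuracy. A self-contained sketch (the tensors are hard-coded for illustration, standing in for one batch from the loader):

```python
import torch

# Pretend these came from torch.max(outputs, 1) and the DataLoader.
predicted = torch.tensor([1, 0, 3, 3])
labels    = torch.tensor([1, 2, 3, 0])

correct = 0
total = 0
total += labels.size(0)
correct += (predicted == labels).sum().item()  # element-wise compare, count matches

accuracy = correct / total
print(accuracy)  # 0.5 here: 2 of the 4 predictions match
```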
-
Output size of Conv
-
Summary. To summarize, the Conv Layer:
- Accepts a volume of size W1×H1×D1
- Requires four hyperparameters:
  - Number of filters K,
  - their spatial extent F,
  - the stride S,
  - the amount of zero padding P.
- Produces a volume of size W2×H2×D2 where:
  - W2 = (W1 − F + 2P)/S + 1
  - H2 = (H1 − F + 2P)/S + 1 (i.e. width and height are computed symmetrically)
  - D2 = K
- With parameter sharing, it introduces F·F·D1 weights per filter, for a total of (F·F·D1)·K weights and K biases.
- In the output volume, the d-th depth slice (of size W2×H2) is the result of performing a valid convolution of the d-th filter over the input volume with a stride of S, and then offset by the d-th bias.
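A quick numeric check of the output-size formula (the 32×32×3 input and the filter settings below are an arbitrary worked example, not from the original notes):

```python
def conv_output_size(w, f, s, p):
    """W2 = (W1 - F + 2P)/S + 1 from the summary above."""
    out = (w - f + 2 * p) / s + 1
    assert out.is_integer(), "hyperparameters do not tile the input evenly"
    return int(out)

# Example: 32x32x3 input, K=10 filters, spatial extent F=5, stride S=1, padding P=2.
w2 = conv_output_size(32, f=5, s=1, p=2)  # (32 - 5 + 4)/1 + 1 = 32, "same" padding
d2 = 10                                   # D2 = K
print(w2, w2, d2)                         # output volume: 32 x 32 x 10

weights = 5 * 5 * 3 * 10  # F*F*D1 weights per filter, times K filters = 750
biases = 10               # one bias per filter (K)
```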
-



