
P24 VGG Network


The key idea is the VGG block.

import torch
from torch import nn
from d2l import torch as d2l

def vgg_block(num_convs, in_channels, out_channels):  # build one VGG block
    layers = []
    for _ in range(num_convs):
        layers.append(nn.Conv2d(in_channels, out_channels,
                                kernel_size=3, padding=1))
        layers.append(nn.ReLU())  # a ReLU after every conv layer
        in_channels = out_channels  # next conv's input channels match this output
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))  # max pooling at the end
    return nn.Sequential(*layers)

conv_arch = ((1, 64), (1, 128), (2, 256), (2, 512), (2, 512))
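A quick sanity check of a single block (my own addition, not from the original note): the 3x3 convolutions with padding=1 preserve the spatial size, and the final max pooling halves it.

blk = vgg_block(2, 1, 64)          # 2 conv layers, 1 input channel, 64 output channels
X = torch.randn(1, 1, 224, 224)
print(blk(X).shape)                # torch.Size([1, 64, 112, 112])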

There are 5 blocks, and each halves the height and width: 224 / 2^5 = 7, which is why exactly 5 VGG blocks are used. In each pair of conv_arch, the first number is the number of convolutional layers and the second is the number of output channels: the first block has 1 conv layer with 64 channels, and so on. The number of blocks must stay 5 (to reduce 224 down to 7), but the channel and conv-layer counts inside each block are free design choices.
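A tiny arithmetic check (my own addition) of the 224 -> 7 claim: five 2x2 max poolings halve the spatial size five times.

size = 224
for _ in range(5):
    size //= 2   # each VGG block's max pooling halves height and width
print(size)      # 7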

def vgg(conv_arch):
    conv_blks = []
    in_channels = 1  # Fashion-MNIST images are grayscale
    for (num_convs, out_channels) in conv_arch:
        conv_blks.append(vgg_block(num_convs, in_channels, out_channels))
        in_channels = out_channels
    return nn.Sequential(
        *conv_blks, nn.Flatten(),
        # the flattened width is out_channels * 7 * 7 after the five blocks
        nn.Linear(out_channels * 7 * 7, 4096), nn.ReLU(), nn.Dropout(0.5),
        nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(0.5),
        nn.Linear(4096, 10))

net = vgg(conv_arch)
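Since only the block count is fixed, the same vgg function accepts other architectures. A minimal sketch (custom_arch is my own example, not from the note) of a deeper variant:

custom_arch = ((2, 64), (2, 128), (3, 256), (3, 512), (3, 512))  # VGG-16-style conv layout
custom_net = vgg(custom_arch)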

Observe the output shape of each layer:

X = torch.randn(size=(1, 1, 224, 224))
for blk in net:
    X = blk(X)
    print(blk.__class__.__name__, 'output shape:\t', X.shape)
# focus mainly on the Sequential output lines

Sequential output shape:  torch.Size([1, 64, 112, 112])
Sequential output shape:  torch.Size([1, 128, 56, 56])
Sequential output shape:  torch.Size([1, 256, 28, 28])
Sequential output shape:  torch.Size([1, 512, 14, 14])
Sequential output shape:  torch.Size([1, 512, 7, 7])
Flatten output shape:     torch.Size([1, 25088])
Linear output shape:      torch.Size([1, 4096])
ReLU output shape:        torch.Size([1, 4096])
Dropout output shape:     torch.Size([1, 4096])
Linear output shape:      torch.Size([1, 4096])
ReLU output shape:        torch.Size([1, 4096])
Dropout output shape:     torch.Size([1, 4096])
Linear output shape:      torch.Size([1, 10])

Each block halves the height and width and doubles the number of channels; the last block may or may not double the channels. This is a classic pattern for designing networks.
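To see where the computational cost comes from, a quick parameter count (my own addition, not in the original note); the first fully connected layer alone holds 25088 * 4096, roughly 102.8M weights:

num_params = sum(p.numel() for p in net.parameters())
print(f'total parameters: {num_params:,}')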

Since VGG-11 is too computationally expensive, we build a smaller network to start training.

ratio = 4
small_conv_arch = [(pair[0], pair[1] // ratio) for pair in conv_arch]  # divide every channel count by 4
net = vgg(small_conv_arch)
lr, num_epochs, batch_size = 0.05, 10, 128
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size, resize=224)
d2l.train_ch6(net, train_iter, test_iter, num_epochs, lr, d2l.try_gpu())

loss 0.175, train acc 0.936, test acc 0.919
33.1 examples/sec on cpu

The accuracy is good, but training is very slow. With the same code, it was still running on the CPU rather than the GPU.
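d2l.try_gpu() falls back to the CPU when no CUDA device is visible to PyTorch, which explains the behavior above. A quick check (my own addition) to confirm which device training will use:

import torch
print(torch.cuda.is_available())  # False means try_gpu() returns torch.device('cpu')
print(d2l.try_gpu())              # the device training will actually run on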

After running on the GPU:

loss 0.179, train acc 0.934, test acc 0.916

1441.0 examples/sec on cuda:0
