Deep Learning in Practice: Lab Manual, Experiments 1-7 (PyTorch Installation to Generative Adversarial Networks)

Contents: Experiment 1: Installing the PyTorch Framework. Experiment 2: PyTorch Basics. Experiment 3: Linear Regression and Logistic Regression. Experiment 4: Multi-Layer Fully Connected Neural Networks. Experiment 5: Convolutional Neural Networks. Experiment 6: Recurrent Neural Networks. Experiment 7: Generative Adversarial Networks.
Figure 13: Installation in progress

When the installation finishes, the completion screen appears, as shown in Figure 14.

Figure 14: Installation complete

Check "Run PyCharm Community Edition" and click Finish to close Setup. PyCharm opens with the welcome screen shown in Figure 15.

Figure 15: Starting PyCharm

Click "Create New Project", as shown in Figure 16.

Figure 16: Setting the project location

"Location" is the path where the project is stored (by default something like C:\Users\Administrator\PycharmProjects\untitled). Click the triangle next to "Project Interpreter" and you will see that PyCharm has already detected Python 3.5, as shown in Figure 17. Be sure to check both "Inherit global site-packages" and "Make available to all projects"; otherwise the PyTorch package installed earlier cannot be imported.

Figure 17: Setting the project interpreter

The folder chosen as the project Location must be empty, or the project cannot be created. The second Location (the virtual environment location) keeps its automatic default, and everything else stays at the defaults as well. Then click "Create". PyCharm now configures the environment; wait for it to finish, then click "Close" to dismiss the "Tip of the Day" dialog, as shown in Figure 18.

Figure 18: Waiting for configuration

Name the new project "SyuPyTorch" (located at D:\SyuPyTorch), as shown in Figure 19, and create a file inside the project.

Figure 19: Setting up the development environment

Right-click the "SyuPyTorch" project, click "New", choose "Python File", and give the new Python file a name, as shown in Figure 20.

Figure 20: Creating a new Python file

Name the file pytorch_test.py, as shown in Figure 21. Now you can start writing code.

Figure 21: The new file pytorch_test.py

Enter the test code shown in Figure 22. Note that this step requires PyTorch to be installed already.

import torch
a = torch.FloatTensor(2, 3)
print(a)

Figure 22: Entering the test code

Press Ctrl+Shift+F10, or click the green triangle next to "pytorch_test", to run the script. The result looks like Figure 23:

D:\SyuPyTorch\venv\Scripts\python.exe D:/SyuPyTorch/pytorch_test.py
tensor([[6.9556e+33, 5.6893e-43, 6.9385e+33],
        [5.6893e-43, 6.9385e+33, 5.6893e-43]])

Process finished with exit code 0

Figure 23: Run result

Experiment 2: PyTorch Basics

I. Objectives
1. Understand tensors.
2. Master creating Tensors.
3. Master reshaping Tensors.
4. Master Tensor addition, subtraction, multiplication, division, and absolute value.
5. Master Tensor comparison operations.
6. Master Tensor statistical operations.
7. Master converting between Tensor and NumPy.
8. Master removing and adding Tensor dimensions.
9. Master Tensor clamping.
10. Master Tensor indexing.
11. Master the cuda() function.

II. Experiment Content
Tensor creation, reshaping, the four arithmetic operations, absolute value, comparison, statistical operations, conversion to and from NumPy, removing and adding dimensions, clamping, indexing, and moving a Tensor to the GPU.
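Before starting the exercises, it is worth confirming that the environment from Experiment 1 works. The following is a minimal sanity-check sketch (not part of the original manual); torch.__version__ and torch.cuda.is_available() are standard PyTorch attributes:

import torch

print(torch.__version__)           # the installed PyTorch version
print(torch.cuda.is_available())   # True only if a CUDA-enabled GPU build is present

x = torch.Tensor([[1, 2], [3, 4]])
print(x)                           # a 2x2 tensor
print(x.size())                    # torch.Size([2, 2])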
III. Main Steps and Results

1. Define a 64-bit floating-point Tensor whose value is the matrix [[1, 2, 3], [4, 5, 6]], and print the result.

2. Create the following tensors and print each one:
   a. A tensor a of all ones, of size 2x3.
   b. A tensor b of all zeros, of size 2x3.
   c. A tensor c with ones on the diagonal, of size 3x3.
   d. A tensor d of random floats drawn from a normal distribution with mean 0 and variance 1, of size 2x3.
   e. A tensor e containing a random permutation of length 5.
   f. A tensor f running from 1 up to 7 with a step of 2.

3. Define an integer Tensor whose value is the matrix [[1, 2, 3], [4, 5, 6]], and print the result.

4. Construct an uninitialized 3x2 matrix and print the result.

5. Construct a randomly initialized 3x2 matrix and print the result.

6. Construct an all-zero matrix with data type long and print the result.

7. Construct a 3x2 all-zero matrix with data type long and print the result.

8. Construct a tensor with the values [5.5, 2] and print the result.

9. Given the following input, record the output.
Input:
import torch as t
c = t.Tensor(3, 2)
print(c)

10. Given the following input, record the output.
Input:
import torch as t
d = t.Tensor(3, 2)
e = t.Tensor(d.size())
print(e)

11. For each of the following functions, taking size = (2, 3) as an example, write down the input and output (see the sketch after this list):
torch.empty(size): returns an empty tensor of shape size
torch.zeros(size): a tensor of all zeros
torch.zeros_like(input): returns an all-zero tensor with the same size as input
torch.ones(size): a tensor of all ones
torch.ones_like(input): returns an all-one tensor with the same size as input
torch.rand(size): uniform random numbers in [0, 1)
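A minimal sketch of the creation functions listed above, using size = (2, 3); the values printed by empty() and rand() will differ from run to run:

import torch

size = (2, 3)
x = torch.empty(size)        # uninitialized memory, shape (2, 3)
print(x)
print(torch.zeros(size))     # all zeros
print(torch.zeros_like(x))   # all zeros, same shape as x
print(torch.ones(size))      # all ones
print(torch.ones_like(x))    # all ones, same shape as x
print(torch.rand(size))      # uniform samples from [0, 1)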
12. Create a one-dimensional tensor of length 8 with the elements 0, 1, 2, 3, 4, 5, 6, 7, and reshape it into a 2x4 tensor.

13. Given a = torch.Tensor([[2, 2], [1, 4]]) and b = torch.Tensor([[3, 5], [9, 7]]), compute the product of a and b and print the result.

14. Given two tensors a = [1, 2] and b = [3, 4], compare the two tensors element by element.

15. Compute the mean of a tensor a of shape (8,).

16. Compute the tan() of the Tensor [-1.2027, -1.7687, 0.4412, -1.3856].

17. Write the output of the following program and explain the commented line.
Input:
import torch
a = torch.arange(4.)
print(torch.reshape(a, (2, 2)))
b = torch.tensor([0, 1, 2, 3])
print(torch.reshape(b, (-1,)))  #

18. Write the output of the following program and explain the commented line.
Input:
import torch
x = torch.randn(3, 4)
print(x)
mask = x.ge(0.5)
print(mask)
print(torch.masked_select(x, mask))  #

19. Write the output of the following program and explain the commented line.
Input:
import torch
x = torch.randn(2, 3)
print(x)
print(torch.cat((x, x, x), 0))  #

20. Write the output of the following program and explain the commented line.
Input:
import torch
print(torch.eye(3))  #

21. Write the output of the following program and explain the commented line.
Input:
import torch
print(torch.range(1, 4))
print(torch.range(1, 4, 0.5))  #

22. Write the output of the following program and explain the commented line.
Input:
import torch
a = torch.randn(4, 4)
print(a)
b = torch.randn(4)
print(b)
print(torch.div(a, b))  #

23. Write the output of the following program and explain the commented line.
Input:
import torch
exp = torch.arange(1., 5.)
base = 2
print(torch.pow(base, exp))  #
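A side note on the torch.range() exercise above: torch.range() includes its end point and is deprecated in newer PyTorch releases in favor of torch.arange(), which excludes the end point. A small sketch of the difference:

import torch

print(torch.range(1, 4))    # tensor([1., 2., 3., 4.]), end point included (deprecated)
print(torch.arange(1, 4))   # tensor([1, 2, 3]), end point excluded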
24. Write the output of the following program and explain the commented line.
Input:
import torch
a = torch.randn(4)
print(a)
print(torch.round(a))  #

25. Write the output of the following program and explain the commented line.
Input:
import torch
a = torch.randn(4)
print(a)
print(torch.sigmoid(a))  #

26. Write the output of the following program and explain the commented line.
Input:
import torch
a = torch.tensor([0.7, -1.2, 0., 2.3])
print(a)
print(torch.sign(a))  #

27. Write the output of the following program and explain the commented line.
Input:
import torch
a = torch.randn(4)
print(a)
print(torch.sqrt(a))  #

28. Write the output of the following program and explain the commented line.
Input:
import torch
a = torch.randn(1, 3)
print(a)
print(torch.sum(a))  #

29. Write the output of the following program and explain the commented line.
Input:
import torch
a = torch.randn(4)
print(a)
b = torch.randn(4)
print(b)
print(torch.max(a, b))  #

30. Write the output of the following input.
Input:
import torch
a = torch.zeros(2, 1, 2, 1, 2)
print("a =", a)
print("a.size() =", a.size())
b = torch.squeeze(a)
print("b =", b)
print("b.size() =", b.size())
c = torch.squeeze(a, 0)
print("c =", c)
print("c.size() =", c.size())
d = torch.unsqueeze(c, 1)
print("d =", d)
print("d.size() =", d.size())

31. Consult the documentation to explain the difference between torch.mul() and torch.mm(), then write the output of the following program.
Input:
import torch
a = torch.rand(1, 2)
b = torch.rand(1, 2)
c = torch.rand(2, 3)
print(torch.mul(a, b))
print(torch.mm(a, c))
print(torch.mul(a, c))

Experiment 3: Linear Regression and Logistic Regression

I. Objectives
1. Understand regression.
2. Understand the linear regression model.
3. Master the implementation of univariate linear regression.
4. Understand gradients and gradient descent.
5. Understand the implementation of multivariate linear regression.
6. Master the concept and implementation of logistic regression.
II. Experiment Content

Experiment 1. A pizza chain has stores throughout the city of Shenyang. The best locations for these stores are near university campuses, and management is confident that quarterly sales revenue at these stores is positively correlated with the student population. Below are quarterly sales data for 10 pizza stores, with n = 10 observations. The independent variable X(i) is the student population of the school near store i, and the dependent variable Y(i) is that store's quarterly sales. A new pizza store has just opened, and the student population near it is known; use linear regression from deep learning to estimate the new store's quarterly sales. The data are shown in Figure 1:

Store i    X(i) (students)    Y(i) (quarterly sales)
1          2                  58
2          6                  105
3          8                  88
4          8                  118
5          12                 117
6          16                 137
7          20                 157
8          20                 169
9          22                 149
10         26                 202

Figure 1: Data

All values in the table are in thousands. Use PyTorch to draw a scatter plot of the source data, then plot the regression line.

Experiment 2. First generate 100 points in the range [-1, 1] whose y values are given by y = 5x + 8 plus noise from the torch.rand() function. Then draw the regression line through these points. Write the program and produce the figure.

Experiment 3. Randomly generate a polynomial of degree at most 3 and fit it with multivariate linear regression.

III. Main Steps and Results

Experiment 1. Reference code:

import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt
from torch.autograd import Variable

# Hyperparameters
input_size = 1
output_size = 1
num_epochs = 1000
learning_rate = 0.001

# Training data as column matrices
x_train = np.array([[2], [6], [8], [8], [12], [16], [20], [20], [22], [26]], dtype=np.float32)
y_train = np.array([[58], [105], [88], [118], [117], [137], [157], [169], [149], [202]], dtype=np.float32)

# Scatter plot of the raw data
plt.figure()
plt.scatter(x_train, y_train)
plt.xlabel('x_train')   # x-axis label
plt.ylabel('y_train')   # y-axis label
plt.show()              # display the figure

# Linear regression model
class LinearRegression(nn.Module):
    def __init__(self, input_size, output_size):
        super(LinearRegression, self).__init__()
        self.linear = nn.Linear(input_size, output_size)

    def forward(self, x):
        out = self.linear(x)
        return out

model = LinearRegression(input_size, output_size)

# Loss function and optimizer
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)

# Train the model
for epoch in range(num_epochs):
    # Convert the numpy arrays to torch variables
    inputs = Variable(torch.from_numpy(x_train))
    targets = Variable(torch.from_numpy(y_train))

    # Forward pass, backward pass, and optimization
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = criterion(outputs, targets)
    loss.backward()
    optimizer.step()

    if (epoch + 1) % 50 == 0:
        print('Epoch [%d/%d], Loss: %.4f' % (epoch + 1, num_epochs, loss.item()))

# Plot the fit
model.eval()
predicted = model(Variable(torch.from_numpy(x_train))).data.numpy()
plt.plot(x_train, y_train, 'ro')
plt.plot(x_train, predicted, label='predict')
plt.legend()
plt.show()

Figure 2: Scatter plot

Figure 3: Result of Experiment 1
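The task statement asks for the quarterly sales of the newly opened store, which the reference code does not compute. A minimal sketch of that final step, where the student count of 10 (thousand) is a hypothetical value for the new store:

# Predict sales for a hypothetical new store near a campus with 10 thousand students.
new_x = torch.from_numpy(np.array([[10]], dtype=np.float32))
model.eval()
with torch.no_grad():
    prediction = model(new_x)
print('Predicted quarterly sales (thousands): %.1f' % prediction.item())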
Experiment 2. Reference code:

import torch
from torch.autograd import Variable
import numpy as np
import random
import matplotlib.pyplot as plt
from torch import nn

x = torch.unsqueeze(torch.linspace(-1, 1, 100), dim=1)
y = 5 * x + 8 + torch.rand(x.size())
# The line above builds a data set close to y = 5x + 8; torch.rand() adds noise.

# The next two lines display the scatter plot; uncomment them to look at it.
# The figure window must be closed before the program continues running.
# plt.scatter(x.data.numpy(), y.data.numpy())
# plt.show()

class LinearRegression(nn.Module):
    def __init__(self):
        super(LinearRegression, self).__init__()
        self.linear = nn.Linear(1, 1)  # both the input and the output are 1-dimensional

    def forward(self, x):
        out = self.linear(x)
        return out

if torch.cuda.is_available():
    model = LinearRegression().cuda()
else:
    model = LinearRegression()

criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

num_epochs = 1000
for epoch in range(num_epochs):
    if torch.cuda.is_available():
        inputs = Variable(x).cuda()
        target = Variable(y).cuda()
    else:
        inputs = Variable(x)
        target = Variable(y)

    # Forward pass
    out = model(inputs)
    loss = criterion(out, target)

    # Backward pass
    optimizer.zero_grad()  # the gradients must be cleared in every iteration
    loss.backward()
    optimizer.step()

    if (epoch + 1) % 200 == 0:
        print('Epoch [{}/{}], loss: {:.6f}'.format(epoch + 1, num_epochs, loss.item()))

model.eval()
if torch.cuda.is_available():
    predict = model(Variable(x).cuda())
    predict = predict.data.cpu().numpy()
else:
    predict = model(Variable(x))
    predict = predict.data.numpy()

plt.plot(x.numpy(), y.numpy(), 'ro', label='Original Data')
plt.plot(x.numpy(), predict, label='Fitting Line')
plt.show()

Figure 4: Result of Experiment 2
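To check the fit numerically as well as visually, the learned parameters can be read off the model (a sketch, assuming the CPU branch was taken). Because torch.rand() adds uniform noise on [0, 1) with mean 0.5, the learned intercept should settle near 8.5 rather than exactly 8:

w = model.linear.weight.item()
b = model.linear.bias.item()
print('learned fit: y = {:.2f}x + {:.2f}'.format(w, b))   # roughly y = 5.00x + 8.50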
Experiment 3. Reference program:

from itertools import count

import torch
import torch.autograd
import torch.nn.functional as F

POLY_DEGREE = 3

def make_features(x):
    """Builds a feature matrix with columns [x, x^2, x^3]."""
    x = x.unsqueeze(1)
    return torch.cat([x ** i for i in range(1, POLY_DEGREE + 1)], 1)

W_target = torch.randn(POLY_DEGREE, 1)
b_target = torch.randn(1)

def f(x):
    """The function to fit."""
    return x.mm(W_target) + b_target.item()

def get_batch(batch_size=32):
    """Builds a batch pair (x, f(x))."""
    random = torch.randn(batch_size)
    x = make_features(random)
    y = f(x)
    return x, y

# Define the model
fc = torch.nn.Linear(W_target.size(0), 1)

for batch_idx in count(1):
    # Get data
    batch_x, batch_y = get_batch()

    # Reset gradients
    fc.zero_grad()

    # Forward pass
    output = F.smooth_l1_loss(fc(batch_x), batch_y)
    loss = output.item()

    # Backward pass
    output.backward()

    # Apply gradients
    for param in fc.parameters():
        param.data.add_(-0.1 * param.grad.data)

    # Stopping condition
    if loss < 1e-3:
        break

def poly_desc(W, b):
    """Creates a readable string description of a polynomial."""
    result = 'y = '
    for i, w in enumerate(W):
        result += '{:+.2f} x^{} '.format(w, len(W) - i)
    result += '{:+.2f}'.format(b[0])
    return result

print('Loss: {:.6f} after {} batches'.format(loss, batch_idx))
print('==> Learned function:\t' + poly_desc(fc.weight.view(-1), fc.bias))
print('==> Actual function:\t' + poly_desc(W_target.view(-1), b_target))
Run results:

Loss: 0.000976 after 92 batches
==> Learned function: y = +0.54 x^3 -0.73 x^2 -0.90 x^1 +0.13
==> Actual function:  y = +0.57 x^3 -0.70 x^2 -0.91 x^1 +0.12

Experiment 1: Installing the PyTorch Framework

I. Objectives
1. Master configuring a PyTorch deep learning environment on Windows.
2. Master a PyTorch development tool.

II. Experiment Content
1. Configure a PyTorch deep learning environment on Windows.
2. Master the PyCharm development tool.

III. Main Steps and Results

Project 1. Configuring the PyTorch deep learning environment on Windows

PyTorch is installed on top of Python, so Python must be installed first.

1.1 Setting up the Python environment

The following are the basic steps for installing Python on the Windows platform. Open a web browser and visit the Python download page for Windows (python.org/downloads/windows/); this book installs Python 3.5.3, as shown in Figure 1. Note that Python 3.5.3 cannot be used on Windows XP or earlier.

Figure 1: Download page, listing the Windows x86 and x86-64 embeddable zip files, executable installers, and web-based installers

After downloading, double-click the installation package to enter the Python setup wizard, as shown in Figure 2.

Experiment 4: Multi-Layer Fully Connected Neural Networks
I. Objectives
1. Understand fully connected neural networks, softmax, and cross-entropy.
2. Understand the backpropagation algorithm, including the chain rule.
3. Master the computer vision toolkit torchvision.
4. Master multi-class classification with a fully connected neural network.

II. Experiment Content
Implement multi-class classification with a fully connected neural network.

III. Main Steps and Results

Design a five-layer network to classify the MNIST data set, with batch_size = 32, learning_rate = 0.01, epochs = 100, input_size = 28*28, hidden_size1 = 400, hidden_size2 = 300, hidden_size3 = 200, and hidden_size4 = 100. The hidden layers must include the ReLU() activation function and batch normalization.

import torch
from torch import nn, optim
from torch.autograd import Variable
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Hyperparameters
batch_size = 32
learning_rate = 0.01

class Batch_Net(nn.Module):
    def __init__(self, in_dim, n_hidden_1, n_hidden_2, n_hidden_3, n_hidden_4, out_dim):
        super(Batch_Net, self).__init__()
        self.layer1 = nn.Sequential(nn.Linear(in_dim, n_hidden_1), nn.BatchNorm1d(n_hidden_1), nn.ReLU(True))
        self.layer2 = nn.Sequential(nn.Linear(n_hidden_1, n_hidden_2), nn.BatchNorm1d(n_hidden_2), nn.ReLU(True))
        self.layer3 = nn.Sequential(nn.Linear(n_hidden_2, n_hidden_3), nn.BatchNorm1d(n_hidden_3), nn.ReLU(True))
        self.layer4 = nn.Sequential(nn.Linear(n_hidden_3, n_hidden_4), nn.BatchNorm1d(n_hidden_4), nn.ReLU(True))
        self.layer5 = nn.Sequential(nn.Linear(n_hidden_4, out_dim))

    def forward(self, x):
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = self.layer5(x)
        return x

# Data preprocessing.
# transforms.ToTensor() converts an image into a Tensor that PyTorch can
# process, scaling the values into [0, 1].
# transforms.Normalize() standardizes the data: it subtracts the mean and
# divides by the standard deviation; the two arguments are the mean and the
# standard deviation.
# transforms.Compose() chains these preprocessing operations together.
data_tf = transforms.Compose([transforms.ToTensor(), transforms.Normalize([0.5], [0.5])])
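A quick numeric check of what this preprocessing does: with mean 0.5 and standard deviation 0.5, a pixel value p in [0, 1] is mapped to (p - 0.5) / 0.5, so the inputs end up in [-1, 1]:

print((0.0 - 0.5) / 0.5)   # -1.0: a black pixel maps to -1
print((1.0 - 0.5) / 0.5)   #  1.0: a white pixel maps to +1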
# Data set downloaders
train_dataset = datasets.MNIST(root='./data', train=True, transform=data_tf, download=True)
test_dataset = datasets.MNIST(root='./data', train=False, transform=data_tf)
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False)

# Choose the model. simpleNet and Activation_Net are simpler variants from the
# accompanying net module; here the batch-normalized network defined above is used.
# model = net.simpleNet(28 * 28, 300, 100, 10)
# model = net.Activation_Net(28 * 28, 300, 100, 10)
model = Batch_Net(28 * 28, 400, 300, 200, 100, 10)
if torch.cuda.is_available():
    model = model.cuda()

# Loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=learning_rate)

# Train the model
epoch = 0
for data in train_loader:
    img, label = data
    img = img.view(img.size(0), -1)
    if torch.cuda.is_available():
        img = img.cuda()
        label = label.cuda()
    else:
        img = Variable(img)
        label = Variable(label)
    out = model(img)
    loss = criterion(out, label)
    print_loss = loss.data.item()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    epoch += 1
    if epoch % 100 == 0:
        print('epoch: {}, loss: {:.4f}'.format(epoch, loss.data.item()))

# Evaluate the model
model.eval()
eval_loss = 0
eval_acc = 0
for data in test_loader:
    img, label = data
    img = img.view(img.size(0), -1)
    if torch.cuda.is_available():
        img = img.cuda()
        label = label.cuda()
    out = model(img)
    loss = criterion(out, label)
    eval_loss += loss.data.item() * label.size(0)
    _, pred = torch.max(out, 1)
    num_correct = (pred == label).sum()
    eval_acc += num_correct.item()
print('Test Loss: {:.6f}, Acc: {:.6f}'.format(eval_loss / len(test_dataset), eval_acc / len(test_dataset)))

Run results:

epoch: 100, loss: 0.897
epoch: 200, loss: 0.4481
epoch: 300, loss: 0.3858
...
epoch: 1600, loss: 0.08569
epoch: 1700, loss: 0.1175
epoch: 1800, loss: 0.1352
Test Loss: 0.105356, Acc: 0.970200
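One refinement worth noting (an addition, not part of the printed listing): wrapping the evaluation loop in torch.no_grad() stops autograd from recording the forward pass, which saves memory and time during testing. A minimal sketch of the pattern:

model.eval()
with torch.no_grad():              # no gradient tracking during evaluation
    for data in test_loader:
        img, label = data
        img = img.view(img.size(0), -1)
        out = model(img)           # forward pass only, no graph is built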
Experiment 5: Convolutional Neural Networks

I. Objectives
1. Understand deep feedforward networks.
2. Master the principles of convolutional neural networks.
3. Master convolutional layers.
4. Master pooling layers.
5. Master CNN architectures.

II. Experiment Content
Implement multi-class classification with a convolutional neural network.

III. Main Steps and Results

Design a convolutional neural network for MNIST handwritten digit recognition, with the following structure:

Net(
  (conv1): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1))
  (conv2): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1))
  (dropout1): Dropout2d(p=0.25, inplace=False)
  (dropout2): Dropout2d(p=0.5, inplace=False)
  (fc1): Linear(in_features=9216, out_features=128, bias=True)
  (fc2): Linear(in_features=128, out_features=10, bias=True)
)

from __future__ import print_function
import argparse
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.optim.lr_scheduler import StepLR

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, 3, 1)
        self.conv2 = nn.Conv2d(32, 64, 3, 1)
        self.dropout1 = nn.Dropout2d(0.25)
        self.dropout2 = nn.Dropout2d(0.5)
        self.fc1 = nn.Linear(9216, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = F.relu(x)
        x = self.conv2(x)
        x = F.max_pool2d(x, 2)
        x = self.dropout1(x)
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = F.relu(x)
        x = self.dropout2(x)
        x = self.fc2(x)
        output = F.log_softmax(x, dim=1)
        return output
53、 F.nll_loss(output, target)36loss.backward()optimizer. step()if batch_idx % args.log_interval = 0:print(fTrain Epoch: / (:.Of%)tLoss: :.6fformat( epoch, batch_idx * len(data), len(train_loader.dataset), 100. * batch_idx / len(trainjoader), loss.item()def test(args, model, device, test_loader):model.
54、eval()test_loss = 0correct = 0with torch.no_grad():for data, target in test_loader:data, target = data.to(device), target.to(device)output = model (data)test_loss += F.nll_loss(output, target, reduction=sum)item( ) # sum up batch losspred = output.argmax(dim= 1, keepdim=True) # get the index of the
55、max log-probabilitycorrect += pred.eq(target.view_as(pred).sum( ).item()test_loss I- len(test_loader.dataset)printCnTest set: Average loss: :.4f, Accuracy: / (:.0f%)n.format( test_loss, correct, len(test_loader.dataset),* correct / len(test_loader.dataset)def main():# Training settings37 parser = ar
56、gparse.ArgumentParser(description=,PyTorch MNIST Example1) parser.add_argument(,batch-size1, type=int, default=64, metavar=,N help=,input batch size for training (default: 64),)parser.add_argument(-test-batch-size, type=int, default=1000, metavar=N, help=,input batch size for testing (default: 1000)
57、parser.add_argument(,epochs1, type二int, default=14, metavar=!N help=number of epochs to train (default: 14)parser.add_argument(!lr type=float, default=1.0, metavar=,LR help=learning rate (default: 1.0)1)parser.add_argument(,gamma*, type=float, default=0.7, metavar=,M, help=Learning rate step gamma (
58、default: 0.7)parser.add_argument(,no-cuda!, action=,store_true default二False, help=*disables CUDA training*)parser.add-argumentC1seed1, type=int, default= 1, metavar=S, help=,random seed (default: I)1)parser.add-argumen1log-interval type=int, default=10, metavar=N; help=,how many batches to wait bef
59、ore logging training status1)parser.add_argument(,save-model action=*store_true default二False, help=For Saving the current Model1)args = parser.parse_args()use_cuda = not args.no_cuda and torch.cuda.is_available()torch.manual_seed(args.seed)device = torch.device(ncudaH if use_cuda else ncpun)kwargs
60、= ,num_workers,: 1, pin_memory: True if use_cuda else train_loader 二 torch.utils.data.DataLoader(38datasets.MNIST(1 https://www.renrendoc.com/paper/data*, train=True, download二True, transform=transforms.Compose(transforms.ToTensor(),transforms.Normalize(0.1307,), (0.3081,),batch_size=args.batch_size, shuffle二True, *kwargs)test_l
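The listing breaks off here in the source. Assuming the remainder mirrors the training loader (a test_loader over the MNIST test split, a loop calling train() and test() for each epoch, and a __main__ guard that calls main()), and assuming the script is saved under a hypothetical name such as mnist_cnn.py, it would be run with the flags defined above, for example:

python mnist_cnn.py --batch-size 64 --epochs 14 --lr 1.0 --gamma 0.7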