Notes excerpted from the PyTorch Chinese documentation.

`class torch.optim.SGD(params, lr=<required parameter>, momentum=0, dampening=0, weight_decay=0, nesterov=False)` [source]

Implements stochastic gradient descent (optionally with momentum). Nesterov momentum is based on the formula from *On the importance of initialization and momentum in deep learning*.

Parameters:
- params (iterable) – iterable of parameters to optimize, or dicts defining parameter groups
- lr (float) – learning rate
- momentum (float, optional) – momentum factor (default: 0)
- dampening (float, optional) – dampening for momentum (default: 0)
- weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
- nesterov (bool, optional) – enables Nesterov momentum (default: False)
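In practice the optimizer wraps the model's parameters and applies one update per step. A minimal sketch of how this is typically used (the linear model and dummy data below are illustrative assumptions, not from the notes):

```python
import torch

# Toy model: 2 inputs -> 1 output (illustrative only)
model = torch.nn.Linear(2, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, nesterov=True)

x = torch.randn(8, 2)    # dummy input batch
y = torch.randn(8, 1)    # dummy targets

optimizer.zero_grad()    # clear gradients from the previous step
loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()          # compute gradients
optimizer.step()         # update parameters
```

Note that `nesterov=True` requires `momentum > 0` and `dampening == 0`, which the defaults above satisfy.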
Contents
- PyTorch Lecture 05: Linear Regression in the PyTorch Way
- Logistic Regression (binary classification)
- Lecture 07: How to make a neural network wide and deep?
- Lecture 08: PyTorch DataLoader
- Lecture 09: Softmax Classifier
  - part one
  - part two: real problem - MNIST
```python
import torch
from torch.autograd import Variable
import torch.nn.functional as F
import matplotlib.pyplot as plt

# torch.manual_seed(1)    # reproducible

# make fake data
n_data = torch.ones(100, 2)
x0 = torch.normal(2*n_data, 1)      # class0 x data (tensor), shape=(100, 2)
```
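The excerpt cuts off here. A sketch of how the fake-data setup typically continues in this two-class example (the class1 data, labels, and scatter plot below follow the pattern above and are assumptions, not the original code):

```python
y0 = torch.zeros(100)               # class0 labels, shape=(100,)
x1 = torch.normal(-2*n_data, 1)     # class1 x data, centered at (-2, -2)
y1 = torch.ones(100)                # class1 labels, shape=(100,)

# merge the two classes into one training set
x = torch.cat((x0, x1), 0).type(torch.FloatTensor)  # shape=(200, 2)
y = torch.cat((y0, y1), 0).type(torch.LongTensor)   # shape=(200,)

# visualize the two clusters, colored by label
plt.scatter(x[:, 0].numpy(), x[:, 1].numpy(), c=y.numpy(), cmap='coolwarm')
plt.show()
```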