At each step the weight vector is moved in the direction of the greatest rate of decrease of the error function, and so this approach is known as gradient descent or steepest descent.
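
Concretely, if E(w) denotes the error function and η the learning-rate parameter, one step of steepest descent applies the standard update

    w(τ+1) = w(τ) − η ∇E(w(τ)),

where τ indexes the iteration; the program below implements exactly this update for a two-parameter linear model.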

Techniques that use the whole data set at once are called batch methods.

With the method of gradient descent used to perform the training, the advantages of batch learning include the following:

1) accurate estimation of the gradient vector (i.e., the derivative of the cost function with respect to the weight vector w), thereby guaranteeing, under simple conditions, convergence of the method of steepest descent to a local minimum;

2) parallelization of the learning process.

However, from a practical perspective, batch learning is rather demanding in terms of storage requirements.
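
The following C++ program sketches this procedure for a two-parameter linear model y = a + b·x fitted to five data points. With the squared-error cost J(a, b) = (1/2n) Σ(a + b·x_i − y_i)², the gradient components are ∂J/∂a = (1/n) Σ(a + b·x_i − y_i) and ∂J/∂b = (1/n) Σ(a + b·x_i − y_i)·x_i; each iteration accumulates these sums over the entire data set (hence "batch") and stops once the cost changes by less than a small threshold between iterations.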

#include <iostream>
#include <vector>
#include <cmath>

/* Batch gradient descent */
int main() {
    double datax[] = {1, 2, 3, 4, 5};
    double datay[] = {1, 1, 2, 2, 4};
    std::vector<double> v_datax, v_datay;

    for (size_t i = 0; i < sizeof(datax) / sizeof(datax[0]); ++i) {
        v_datax.push_back(datax[i]);
        v_datay.push_back(datay[i]);
    }

    // Model: y = a + b*x, cost J = (1/2n) * sum((a + b*x - y)^2)
    double a = 0, b = 0;
    double J = 0.0;

    // Initial cost for the starting parameters a = b = 0
    for (std::vector<double>::iterator iterx = v_datax.begin(), itery = v_datay.begin();
         iterx != v_datax.end() && itery != v_datay.end(); ++iterx, ++itery) {
        J += (a + b * (*iterx) - *itery) * (a + b * (*iterx) - *itery);
    }
    J = J * 0.5 / v_datax.size();

    while (true) {
        // Accumulate the gradient over the whole data set (the "batch")
        double grad0 = 0, grad1 = 0;
        for (std::vector<double>::iterator iterx = v_datax.begin(), itery = v_datay.begin();
             iterx != v_datax.end() && itery != v_datay.end(); ++iterx, ++itery) {
            grad0 += (a + b * (*iterx) - *itery);            // dJ/da
            grad1 += (a + b * (*iterx) - *itery) * (*iterx); // dJ/db
        }

        grad0 = grad0 / v_datax.size();
        grad1 = grad1 / v_datax.size();

        // 0.03 is the learning rate alpha
        a = a - 0.03 * grad0;
        b = b - 0.03 * grad1;

        // Recompute the cost with the updated parameters
        double MSE = 0;
        for (std::vector<double>::iterator iterx = v_datax.begin(), itery = v_datay.begin();
             iterx != v_datax.end() && itery != v_datay.end(); ++iterx, ++itery) {
            MSE += (a + b * (*iterx) - *itery) * (a + b * (*iterx) - *itery);
        }
        MSE = MSE * 0.5 / v_datax.size();

        // Stop once the cost no longer changes appreciably
        if (std::abs(J - MSE) < 1e-7)
            break;
        J = MSE;
    }

    std::cout << "Results from batch gradient descent:" << std::endl;
    std::cout << "a = " << a << std::endl;
    std::cout << "b = " << b << std::endl;

    return 0;
}
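
For these five points the closed-form least-squares solution is a = −0.1 and b = 0.7, so with the learning rate and stopping threshold above, the printed a and b should come out close to those values (gradient descent stopped at a finite threshold lands slightly short of the exact minimum).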

In a statistical context, batch learning may be viewed as a form of statistical inference. It is therefore well suited for solving nonlinear regression problems.
