Optimization of the steepest descent method



File size: 1.11 kB

Code category: Other

Development platform: matlab


Code description

Gradient descent is an iterative method that can be used to solve least squares problems, both linear and nonlinear. When fitting the model parameters of a machine learning algorithm, i.e. solving an unconstrained optimization problem, gradient descent is one of the most commonly used methods; another common approach is the least squares method. To minimize a loss function, gradient descent iterates step by step toward the minimum, yielding the minimized loss and the corresponding parameter values. Conversely, to maximize an objective, gradient ascent is used instead. In machine learning, two widely used variants of the basic method are stochastic gradient descent and batch gradient descent.
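The download itself is a MATLAB script and is not reproduced here. As a minimal, language-neutral sketch of the fixed-step steepest descent update described above (the function name, step size, and quadratic test objective are illustrative assumptions, not the author's code):

```python
import numpy as np

def steepest_descent(grad, x0, lr=0.1, tol=1e-8, max_iter=1000):
    """Minimize a function via fixed-step steepest descent.

    grad     : callable returning the gradient at a point
    x0       : starting point
    lr       : step size (learning rate)
    tol      : stop when the gradient norm falls below this
    max_iter : iteration cap
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # move opposite the gradient, the direction of steepest descent
        x = x - lr * g
    return x

# Example: minimize f(x, y) = (x - 3)^2 + (y + 1)^2, minimum at (3, -1)
grad = lambda x: 2 * (x - np.array([3.0, -1.0]))
x_min = steepest_descent(grad, [0.0, 0.0])
```

For this quadratic, each step shrinks the distance to the minimizer by a constant factor, so the iterates converge to (3, -1); a line search or diminishing step size would be needed for harder objectives.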


