Gradient

In vector calculus, the gradient of a scalar-valued differentiable function f of several variables is the vector field (or vector-valued function) ∇f whose value at a point p gives the direction and the rate of fastest increase. The gradient transforms like a vector under a change of basis of the space of variables of f. If the gradient of a function is non-zero at a point p, the direction of the gradient is the direction in which the function increases most quickly from p, and the magnitude of the gradient is the rate of increase in that direction, the greatest absolute directional derivative. Further, a point where the gradient is the zero vector is known as a stationary point. The gradient thus plays a fundamental role in optimization theory, where it is used to minimize a function by gradient descent.

In coordinate-free terms, the gradient of a function f(r) may be defined by

\[ df = \nabla f \cdot d\mathbf{r}, \]

where df is the total infinitesimal change in f for an infinitesimal displacement dr, and is seen to be maximal when dr is in the direction of the gradient ∇f. The nabla symbol ∇, written as an upside-down triangle and pronounced "del", denotes the vector differential operator.

When a coordinate system is used in which the basis vectors are not functions of position, the gradient is given by the vector whose components are the partial derivatives of f at p. That is, for f : ℝⁿ → ℝ, its gradient ∇f : ℝⁿ → ℝⁿ is defined at the point p = (x₁, …, xₙ) in n-dimensional space as the vector

\[ \nabla f(p) = \begin{bmatrix} \frac{\partial f}{\partial x_{1}}(p) \\ \vdots \\ \frac{\partial f}{\partial x_{n}}(p) \end{bmatrix}. \]

This definition of the gradient applies only if f is differentiable at p; there are functions whose partial derivatives exist in every direction yet which fail to be differentiable. Furthermore, the definition as the vector of partial derivatives is valid only when the basis of the coordinate system is orthonormal; for any other basis, the metric tensor at that point must be taken into account. For example, the function

\[ f(x, y) = \frac{x^{2}y}{x^{2}+y^{2}}, \]

with f(0, 0) = 0 at the origin, is not differentiable at the origin, as it has no well-defined tangent plane there despite having well-defined partial derivatives in every direction at the origin. In this particular example, under a rotation of the x-y coordinate system, the above formula for the gradient fails to transform like a vector (the gradient becomes dependent on the choice of basis) and also fails to point toward the 'steepest ascent' in some orientations.
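To make the coordinate definition concrete, here is a minimal numerical sketch in Python (assuming NumPy; the helper name grad, the sample function, and the step size 0.1 are illustrative choices, not taken from this article). It approximates each component ∂f/∂xᵢ(p) with a central finite difference and then uses the resulting vector for gradient descent:

```python
import numpy as np

def grad(f, p, h=1e-6):
    """Central-difference approximation of the gradient of f at p:
    the i-th component estimates the partial derivative of f with
    respect to x_i at p."""
    p = np.asarray(p, dtype=float)
    g = np.zeros_like(p)
    for i in range(p.size):
        e = np.zeros_like(p)
        e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2.0 * h)
    return g

# Sample function f(x, y) = (x - 1)^2 + 2 (y + 3)^2 (an illustrative
# choice); its exact gradient is (2 (x - 1), 4 (y + 3)), which
# vanishes at the stationary point (1, -3).
f = lambda p: (p[0] - 1.0) ** 2 + 2.0 * (p[1] + 3.0) ** 2

print(grad(f, [0.0, 0.0]))   # approximately [-2.0, 12.0]

# Gradient descent: repeatedly step against the gradient to decrease f.
p = np.array([0.0, 0.0])
for _ in range(200):
    p = p - 0.1 * grad(f, p)
print(p)                      # approaches the stationary point (1, -3)
```

Because the sample function is smooth with a single stationary point, stepping against the gradient drives the iterate toward the point where ∇f vanishes, matching the role the gradient plays in gradient descent described above.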
For differentiable functions, where the formula for the gradient holds, the gradient can be shown to always transform as a vector under a change of basis, so that it always points toward the fastest increase. The gradient is dual to the total derivative df: the value of the gradient at a point is a tangent vector (a vector at each point), while the value of the derivative at a point is a cotangent vector (a linear functional on vectors). They are related in that the dot product of the gradient of f at a point p with another tangent vector v equals the directional derivative of f at p along v; that is,

\[ \nabla f(p) \cdot \mathbf{v} = \frac{\partial f}{\partial \mathbf{v}}(p) = df_{p}(\mathbf{v}). \]

The gradient admits multiple generalizations to more general functions on manifolds; see § Generalizations.
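The duality relation can be checked numerically. The following sketch (again assuming NumPy; the helper directional_derivative and both sample functions are illustrative) compares ∇f(p) ⋅ v with a direct difference quotient of the one-variable function t ↦ f(p + tv), and then shows how the identity fails at the origin of the non-differentiable example from the previous section:

```python
import numpy as np

def directional_derivative(f, p, v, h=1e-6):
    """Directional derivative of f at p along v, computed as a central
    difference of the one-variable function t -> f(p + t v)."""
    p = np.asarray(p, dtype=float)
    v = np.asarray(v, dtype=float)
    return (f(p + h * v) - f(p - h * v)) / (2.0 * h)

# Smooth example: f(x, y) = x^2 y + sin(y), with exact gradient
# (2 x y, x^2 + cos(y)).
f = lambda p: p[0] ** 2 * p[1] + np.sin(p[1])
p = np.array([1.0, 2.0])
v = np.array([0.5, -1.0])
grad_p = np.array([2.0 * p[0] * p[1], p[0] ** 2 + np.cos(p[1])])

print(grad_p @ v)                       # dot-product form: ~1.416
print(directional_derivative(f, p, v))  # limit-definition form: same value

# Contrast: the non-differentiable example from the text,
# f(x, y) = x^2 y / (x^2 + y^2) with f(0, 0) = 0.  Its partial
# derivatives at the origin are both 0, so the vector-of-partials
# formula predicts a directional derivative of 0 in every direction.
def g(q):
    x, y = q
    return 0.0 if x == 0.0 and y == 0.0 else x * x * y / (x * x + y * y)

u = np.array([0.6, 0.8])  # unit direction (a, b)
print(directional_derivative(g, np.array([0.0, 0.0]), u))
```

Along a unit direction (a, b), this function satisfies f(ta, tb) = t·a²b exactly, so the last printed value is approximately a²b = 0.36 × 0.8 = 0.288 rather than the 0 predicted by the vector-of-partials formula, illustrating why differentiability at p is required for the identity to hold.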
