Gradient

In vector calculus, the gradient of a scalar-valued differentiable function f of several variables is the vector field (or vector-valued function) ∇f whose value at a point p gives the direction and the rate of fastest increase. The gradient transforms like a vector under a change of basis of the space of variables of f. If the gradient of a function is non-zero at a point p, the direction of the gradient is the direction in which the function increases most quickly from p, and the magnitude of the gradient is the rate of increase in that direction, the greatest absolute directional derivative. Further, a point where the gradient is the zero vector is known as a stationary point. The gradient thus plays a fundamental role in optimization theory, machine learning, and artificial intelligence, where it is used to minimize a function by gradient descent.

In coordinate-free terms, the gradient of a function f(r) may be defined by

    df = \nabla f \cdot d\mathbf{r},

where df is the total infinitesimal change in f for an infinitesimal displacement dr, and is seen to be maximal when dr points in the direction of the gradient ∇f. The nabla symbol ∇, written as an upside-down triangle and pronounced "del", denotes the vector differential operator.

When a coordinate system is used in which the basis vectors are not functions of position, the gradient is given by the vector whose components are the partial derivatives of f at p. That is, for f : ℝⁿ → ℝ, its gradient ∇f : ℝⁿ → ℝⁿ is defined at the point p = (x₁, …, xₙ) in n-dimensional space as the vector

    \nabla f(p) = \begin{bmatrix} \frac{\partial f}{\partial x_{1}}(p) \\ \vdots \\ \frac{\partial f}{\partial x_{n}}(p) \end{bmatrix}.

This definition is valid only if f is differentiable at p; there are functions whose directional derivatives exist in every direction at a point yet which fail to be differentiable there. Furthermore, the definition as the vector of partial derivatives holds only when the basis of the coordinate system is orthonormal; for any other basis, the metric tensor at the point must be taken into account. For example, the function

    f(x, y) = \frac{x^{2} y}{x^{2} + y^{2}}, \qquad f(0, 0) = 0,

is not differentiable at the origin: it has no well-defined tangent plane there, even though it has well-defined directional derivatives in every direction at the origin. For this function, under a rotation of the x-y coordinate system, the formula above fails to transform like a vector (the resulting "gradient" depends on the choice of basis) and, in some orientations, fails to point toward the steepest ascent.
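As a concrete check of this counterexample, the following is a minimal numerical sketch (not part of the original article; the helper directional_derivative and the step size t are illustrative choices). It estimates directional derivatives of f at the origin: both partial derivatives vanish, so the coordinate formula would give ∇f(0, 0) = (0, 0) and predict a zero rate of change in every direction, yet the derivative along the diagonal is plainly non-zero.

    import math

    def f(x, y):
        # The article's example: f(x, y) = x^2 y / (x^2 + y^2), with f(0, 0) = 0.
        if x == 0.0 and y == 0.0:
            return 0.0
        return x * x * y / (x * x + y * y)

    def directional_derivative(fn, x, y, vx, vy, t=1e-6):
        # One-sided finite-difference estimate of the directional derivative
        # of fn at (x, y) along the unit vector (vx, vy).
        return (fn(x + t * vx, y + t * vy) - fn(x, y)) / t

    print(directional_derivative(f, 0.0, 0.0, 1.0, 0.0))  # ∂f/∂x(0,0) ≈ 0
    print(directional_derivative(f, 0.0, 0.0, 0.0, 1.0))  # ∂f/∂y(0,0) ≈ 0

    s = 1.0 / math.sqrt(2.0)                              # diagonal unit vector
    print(directional_derivative(f, 0.0, 0.0, s, s))      # ≈ 0.3536, not 0

Along the diagonal, f(ts, ts) = ts/2, so the exact directional derivative there is 1/(2√2) ≈ 0.3536, while the vector of partials predicts 0; the formula demonstrably misses the steepest ascent at this point.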
For differentiable functions, where the formula for the gradient holds, the gradient can be shown to transform as a vector under a change of basis, so that it always points toward the fastest increase. The gradient is dual to the total derivative df: the value of the gradient at a point is a tangent vector, a vector at each point, while the value of the derivative at a point is a cotangent vector, a linear functional on vectors. They are related in that the dot product of the gradient of f at a point p with another tangent vector v equals the directional derivative of f at p along v; that is,

    \nabla f(p) \cdot \mathbf{v} = \frac{\partial f}{\partial \mathbf{v}}(p) = df_{p}(\mathbf{v}).

The gradient admits multiple generalizations to more general functions on manifolds; see § Generalizations.
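This duality can be checked numerically. Below is a minimal sketch (not from the original article; the test function f(x, y) = x² sin y, the point, and the direction are arbitrary illustrative choices) comparing ∇f(p) · v against a finite-difference estimate of the directional derivative:

    import math

    def f(x, y):
        # A smooth, everywhere-differentiable test function.
        return x * x * math.sin(y)

    def grad_f(x, y):
        # Analytic gradient: (∂f/∂x, ∂f/∂y) = (2x sin y, x^2 cos y).
        return (2.0 * x * math.sin(y), x * x * math.cos(y))

    def directional_derivative(fn, x, y, vx, vy, t=1e-6):
        # Central finite-difference estimate of the directional derivative.
        return (fn(x + t * vx, y + t * vy) - fn(x - t * vx, y - t * vy)) / (2.0 * t)

    px, py = 1.5, 0.8
    vx, vy = 3.0 / 5.0, 4.0 / 5.0                     # a unit vector
    gx, gy = grad_f(px, py)

    print(gx * vx + gy * vy)                          # ∇f(p) · v
    print(directional_derivative(f, px, py, vx, vy))  # ∂f/∂v(p)

The two printed values agree to within finite-difference error, consistent with the identity above.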

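Finally, the lead notes that the gradient is used to minimize a function by gradient descent. The following is a minimal sketch of that idea (the objective g, the step size, and the iteration count are illustrative choices, not from the article): repeatedly stepping against the gradient of a convex function drives the iterate toward its minimizer.

    def grad_g(x, y):
        # Gradient of g(x, y) = (x - 3)^2 + (y + 1)^2, whose minimum is at (3, -1).
        return (2.0 * (x - 3.0), 2.0 * (y + 1.0))

    x, y = 0.0, 0.0          # arbitrary starting point
    lr = 0.1                 # step size (learning rate)
    for _ in range(100):
        gx, gy = grad_g(x, y)
        x, y = x - lr * gx, y - lr * gy   # move opposite the gradient

    print(x, y)              # converges to approximately (3.0, -1.0)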