This section deals with the programming of the Newton-Raphson method in any dimension. Given a function f that is differentiable along both dimensions, it is possible to write f and its derivatives in the following way:

    f(x, y) = (f1(x, y), f2(x, y))

    J(x, y) = [ ∂f1/∂x   ∂f1/∂y ]
              [ ∂f2/∂x   ∂f2/∂y ]

The function J is called the Jacobian matrix of f. The idea of the Newton-Raphson algorithm consists in considering that, given a point U, the best direction leading towards a root of f is the direction given by the slope of f at U. By following this direction, the algorithm assumes that it moves closer to a root of f.
Given a vector U_n representing the current position, the algorithm computes f(U_n), a vector representing the value of f at U_n, as well as the Jacobian matrix J(U_n) of the derivatives of f at U_n. The next position U_(n+1) is chosen such that:

    J(U_n) (U_(n+1) - U_n) = -f(U_n)
Then U_n is replaced by U_(n+1), and this operation is repeated until either the sequence (U_n) converges or a maximal number of iterations is reached.
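As an illustration, one Newton step can be computed explicitly. The system chosen below, f(x, y) = (x² + y² − 1, x − y), is an assumption made only for this example:

```python
import numpy as np

# Example system (assumed for illustration): f(x, y) = (x^2 + y^2 - 1, x - y)
def f(U):
    x, y = U
    return np.array([x**2 + y**2 - 1.0, x - y])

# Its Jacobian matrix of partial derivatives
def J(U):
    x, y = U
    return np.array([[2.0 * x, 2.0 * y],
                     [1.0, -1.0]])

U0 = np.array([1.0, 1.0])
# Solve J(U0) (U1 - U0) = -f(U0) for the Newton step
step = np.linalg.solve(J(U0), -f(U0))
U1 = U0 + step
print(U1)  # -> [0.75 0.75]
```

Starting from (1, 1), a single step already moves towards the root of the system located on the line x = y.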
function U = Newton_Raphson(f, J, U0, N, epsilon)
where f is the function under study, J is its Jacobian matrix,
U0 is the starting position of the algorithm, N is the maximal
number of steps allowed during the algorithm, and epsilon is the
tolerance used to decide that the algorithm has converged.
One requirement is to be able to compute the values taken by the
function f and all its derivatives at, a priori, any point of the
plane. Therefore, it is necessary to specify f and its
derivatives in the form of functions. Python allows programming
with functional parameters. These parameters may be provided by
functions defined either with the keyword lambda, or with simple
def definitions.
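For instance, both styles produce objects that can be passed as functional parameters. The system below is hypothetical, chosen only to illustrate the syntax:

```python
import numpy as np

# A function defined with def, returning the vector f(x, y)
def f(U):
    x, y = U
    return np.array([np.cos(x) - y, x - y])

# The same kind of parameter defined with lambda: here the Jacobian of f
J = lambda U: np.array([[-np.sin(U[0]), -1.0],
                        [1.0, -1.0]])

# Either object can then be passed as a functional parameter, e.g.
# U = Newton_Raphson(f, J, np.array([1.0, 1.0]), 100, 1e-10)
print(f(np.array([0.0, 0.0])))  # -> [1. 0.]
```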
The algorithm requires the resolution of several linear systems
whose matrices are possibly singular (i.e. non-invertible).
Prefer the function numpy.linalg.lstsq in order to avoid such
numerical problems.
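The difference matters precisely when the Jacobian is singular. In the sketch below (the matrix is assumed for illustration), numpy.linalg.solve raises an error while numpy.linalg.lstsq still returns a least-squares solution:

```python
import numpy as np

J = np.array([[1.0, 2.0],
              [2.0, 4.0]])  # singular: the second row is twice the first
b = np.array([1.0, 2.0])

try:
    np.linalg.solve(J, b)          # fails on a singular matrix
except np.linalg.LinAlgError:
    print("solve failed")

# lstsq returns the minimum-norm least-squares solution instead
x, residuals, rank, sv = np.linalg.lstsq(J, b, rcond=None)
print(rank)   # -> 1
print(J @ x)  # -> [1. 2.]
```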
One of the drawbacks of this algorithm is its tendency to diverge in numerous cases. It is therefore imperative to limit the number of steps taken, and to detect whether the algorithm has converged.
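A minimal sketch of such a function, following the signature above, could read as follows. The convergence test on the norm of the step is one possible choice among others, and the example system at the end is assumed purely for illustration:

```python
import numpy as np

def Newton_Raphson(f, J, U0, N, epsilon):
    """Approximate a root of f starting from U0.

    f and J are functional parameters: f(U) returns the value of the
    function at U, and J(U) its Jacobian matrix.  At most N steps are
    taken, and the iteration stops as soon as the step is smaller
    than epsilon.
    """
    U = np.asarray(U0, dtype=float)
    for _ in range(N):
        # Solve J(U) step = -f(U); lstsq tolerates singular Jacobians
        step = np.linalg.lstsq(J(U), -f(U), rcond=None)[0]
        U = U + step
        if np.linalg.norm(step) < epsilon:   # convergence detected
            break
    return U

# Example use (the system below is assumed for illustration):
f = lambda U: np.array([U[0]**2 + U[1]**2 - 1.0, U[0] - U[1]])
J = lambda U: np.array([[2.0 * U[0], 2.0 * U[1]],
                        [1.0, -1.0]])
print(Newton_Raphson(f, J, np.array([1.0, 1.0]), 100, 1e-12))
```

From the starting point (1, 1), the iterates converge to the intersection of the unit circle with the line x = y, that is (√2/2, √2/2).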