Newton-Raphson and Secant Methods

1. Newton's Method

The Background: The goal is to find a value of x such that our function of interest, f(x), is equal to zero. That value of x is a root of the function. There are as many (real) roots as places where the function's graph crosses (or touches) the x-axis. We assume the function is differentiable ("smooth") and that we can compute both it and its derivative at any point.

If we only have the function defined at discrete points (say it's a vector of experimental readings, or the output of Matlab's differential equation solver, for instance), that brings in the issue of interpolation, an important numerical technique we'll ignore for now.

You may have seen this root-finding method, also called the Newton-Raphson method, in calculus classes. It is a simple and obvious approach, and is an example of the common engineering trick of approximating an arbitrary function with a "first-order" function -- in two dimensions, a straight line. Later in life, you'll expand functions into an infinite series (e.g. the Taylor series), and pitch out all but a few larger, "leading" terms to approximate the function close to a given point.

As usual, Wikipedia has a very nice article (with a movie), on Newton's method. For the mathematically inclined there's a proof of the method's quadratic convergence time: roughly, the number of correct digits doubles at every step. (BTW, the proof uses a Taylor series.)

The Idea:

  1. Guess an x0 close to the root of interest.
  2. Start Iteration: Approximate the function at that point by a straight line. The obvious choice is the line tangent to (in the direction of) the function's graph at that point.
  3. Notice that the slope of the required tangent is the derivative of the function, so the line we want has that slope and goes through the point (x0, f(x0)) .
  4. This tangent line goes through the x-axis at a point x1, which is easy to calculate and which we bet is nearer to the root than x0 is.
  5. Compute x1 and f(x1), and we're ready to go to Start Iteration and repeat the process until, for some xi, we find an f(xi) close enough to zero for our purposes.

Considered as an algorithm, this method is clearly a while-loop; it runs until a small-error condition is met.

The Math:

[Figure: the Newton-Raphson tangent construction]

Say the tangent to the function at x = x0 intersects the x-axis at x1. The slope of that tangent is Δy/Δx = f'(x0) (where f' is the derivative of f), with Δy = f(x0) - 0 = f(x0) and Δx = x0 - x1. Thus f(x0)/(x0 - x1) = f'(x0), and so x1 = x0 - f(x0)/f'(x0). We know everything on the RHS, and the LHS is what we need to continue.

Repeat until done: Generally,
(Eq. 1) xi+1 = xi - f(xi)/f'(xi).
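
To make the control structure concrete, here is a minimal sketch of the iteration in Eq. 1. It is an illustration only, not CB's code: the handles f and fp and the sample function x^2 - 2 are assumptions for the example.

    % Minimal sketch of the Newton iteration (Eq. 1); illustrative, not CB's code.
    f  = @(x) x.^2 - 2;          % sample function, roots at +/- sqrt(2)
    fp = @(x) 2*x;               % its derivative
    x  = 1.5;                    % initial guess x0
    while abs(f(x)) > 0.001      % small-error stopping condition
        x = x - f(x)/fp(x);      % Eq. 1
    end
    % x now approximates sqrt(2)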

Example: The function y = f(x) = 0.6x^4 - 7.533333x^3 + 29.9x^2 - 37.966667x + 5 looks like this:
[Figure: plot of the polynomial f(x)]

We can see there is a root near 4.4, for instance.
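
One way to reproduce a plot like that (a sketch; the plotting range 0 to 6 is a guess at the region of interest, not taken from CB's figure):

    coeffs = [0.6 -7.533333 29.9 -37.966667 5];   % coefficients of f, highest power first
    xs = linspace(0, 6, 500);                     % x values spanning the region of interest
    plot(xs, polyval(coeffs, xs)); grid on        % the graph shows a root near x = 4.4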

f's derivative function is easy to write given f's coefficients.

Three iterations of the method, starting at x0 = 3.8 and stopping when |f(x)| ≤ 0.001, yield a root at x = 4.39709833506137 with an error of 3.55426928955183e-06. The actual root is x = 4.39709878259766.

CB's function [my_root, err] = newton(x0, maxerr) is 11 statements long, including 2 to count and print iterations and 2 to assign a global array of polynomial coefficients. The functions function y = f(x) and function der = f_der(x) are four statements each, including the function line, a global, an end, and one line that actually does some work.
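
For a polynomial like the example, here is a sketch of what such helpers might look like (an assumption in the spirit of the global-coefficient scheme described above, not CB's actual code), leaning on Matlab's polyval and polyder:

    % Sketch of the helper functions; the global name coeffs is an assumption.
    % Somewhere in the calling script:
    %   global coeffs
    %   coeffs = [0.6 -7.533333 29.9 -37.966667 5];   % the example polynomial

    function y = f(x)
    global coeffs
    y = polyval(coeffs, x);               % evaluate the polynomial at x
    end

    function der = f_der(x)
    global coeffs
    der = polyval(polyder(coeffs), x);    % polyder differentiates the coefficient vector
    end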

Extensions and Issues: There are lots of extensions, and various tweaks to the method (use higher-order approximation functions, say). The method extends to functions of several variables (i.e. higher-dimensional problems). In that case it uses the Jacobian matrix you may see in vector calculus, and also the generalized matrix inverse you'll definitely see in the Data-Fitting segment later in 160.

Clearly there are potential problems. The process may actually diverge rather than converge, and starting too far from the desired root may lead to divergence or to some other root. See a more in-depth treatment (Wikipedia, say) for more consumer-protection warnings. To detect such problems and abort gracefully, one could watch that the error does not keep increasing for too long, or count iterations and bail out after too many, etc.
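
A sketch of the count-and-bail-out idea (illustrative only; the cap of 50 iterations and the sample function are assumptions, not CB's choices):

    % Newton loop with an iteration cap, so divergence does not hang the program.
    f      = @(x) x.^2 - 2;      % illustrative function and derivative, as before
    fp     = @(x) 2*x;
    x      = 1.5;                % initial guess
    maxits = 50;                 % arbitrary bail-out limit (an assumption)
    its    = 0;
    while abs(f(x)) > 0.001
        x   = x - f(x)/fp(x);
        its = its + 1;
        if its > maxits
            error('newton: no convergence after %d iterations', maxits);
        end
    end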

2. Secant Method

The Background: Same assumptions as for Newton, but we use two initial points (ideally close to the root), and we don't use the derivative; instead we approximate it with the secant line to the curve (a cutting line: in the limit where the cut grazes the function, we have the tangent line). The secant-line concept is not obviously related to the trigonometric secant function. The secant method has been around for thousands of years.

Here's Wikipedia: Secant method.

The Idea:

[Figure: the secant-line construction]

  1. Pick two initial values of x, close to the desired root. Call them x0, x1. Evaluate y0 = f(x0) and y1 = f(x1).
  2. As with the tangent line in Newton's method, produce the (secant) line through (x0, y0) and (x1, y1), compute where it crosses the x-axis, and call that point x2. Get y2 = f(x2).
  3. Bootstrap along: replace (x0, y0), (x1, y1) with (x1, y1), (x2, y2) and repeat.
  4. Keep this process up: derive (xi, yi) from (xi-1, yi-1) and (xi-2, yi-2) until yi meets the error criterion.
This is the same while-loop control structure as Newton, but it needs a statement or two's worth of bookkeeping, since we need to remember two previous x's, not one. (Do not save them all in some vector, please! That is always wasteful, sometimes dangerous, and no easier to write.)

The Math: Easy to formulate given we've done Newton's method. Starting with Newton (Eq. 1), use the "finite-difference" approximation:
f'(xi) ≈ Δy/Δx = (f(xi) - f(xi-1)) / (xi - xi-1).
Thus for the secant method we need two initial x points, which should be close to the desired root.

Generally,
(Eq. 2) xi+1 = xi - f(xi) [(xi - xi-1) / (f(xi) - f(xi-1))].
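
Here is a minimal sketch of that loop, with the two-point bookkeeping mentioned above (illustrative only; the sample function and variable names are assumptions):

    % Minimal sketch of the secant iteration (Eq. 2).
    f     = @(x) x.^2 - 2;       % illustrative function, roots at +/- sqrt(2)
    lastx = 1.0;                 % x0
    x     = 1.5;                 % x1
    while abs(f(x)) > 0.001
        newx  = x - f(x)*(x - lastx)/(f(x) - f(lastx));   % Eq. 2
        lastx = x;               % shift the two saved points along
        x     = newx;
    end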

Example: For the same problem as above, a reasonable function prototype is
function [my_root, err] = secant(lastx, x, maxerr). Initializing at x0 = 3.8, x1 = 3.9, CB gets a root of 4.39716813662186, with error -0.000550801803422374, also in 3 iterations. The programs are all the same size as those for newton.
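
A call along those lines (assuming the same 0.001 error tolerance used for newton, which is an assumption here) might look like:

    [my_root, err] = secant(3.8, 3.9, 0.001)   % two starting points near the root, then the tolerance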

Extensions and Issues: The most popular extension in one dimension is the method of false position (q.v.). There is also an extension to higher-dimensional functions.

The same non-convergence issues and remedies apply as for Newton; the only difference is that the risk is greater, because of the derivative approximation.

The order of convergence is, stunningly enough, the Golden Ratio, which turns up in all sorts of delightfully unexpected places: Greek sculpture, Renaissance art, the Fibonacci series, etc. Thus it is about 1.6, slower than Newton but still better than linear. Indeed, the secant method may run faster in practice, since it doesn't need to evaluate the derivative at every step.


Last Change: 9/23/2011: CB