Nonlinear Least-Squares Fitting

This chapter describes functions for multidimensional nonlinear least-squares fitting. The library provides low-level components for a variety of iterative solvers and convergence tests. These can be combined by the user to achieve the desired solution, with full access to the intermediate steps of the iteration. Each class of methods uses the same framework, so that you can switch between solvers at runtime without needing to recompile your program. Each instance of a solver keeps track of its own state, allowing the solvers to be used in multi-threaded programs.

Contents:

  1. Overview

  2. Initializing the Solver

    1. GSL::MultiFit::FdfSolver class

  3. Providing the function to be minimized

    1. GSL::MultiFit::Function_fdf class

  4. Iteration

  5. Search Stopping Parameters

  6. Computing the covariance matrix of best fit parameters

  7. Higher level interfaces

  8. Examples

    1. Fitting to user-defined functions

    2. Fitting to built-in functions

Overview

The problem of multidimensional nonlinear least-squares fitting requires the minimization of the squared residuals of n functions, f_i, in p parameters, x_i,

    Phi(x) = (1/2) ||F(x)||^2 = (1/2) sum_{i=1}^{n} f_i(x_1, ..., x_p)^2

All algorithms proceed from an initial guess using the linearization,

    psi(p) = ||F(x + p)|| ~= ||F(x) + J p||

where x is the initial point, p is the proposed step and J is the Jacobian matrix J_{ij} = d f_i / d x_j. Additional strategies are used to enlarge the region of convergence. These include requiring a decrease in the norm ||F|| on each step or using a trust region to avoid steps which fall outside the linear regime.

To perform a weighted least-squares fit of a nonlinear model Y(x,t) to data (t_i, y_i) with independent Gaussian errors sigma_i, use function components of the following form,

    f_i = (Y(x, t_i) - y_i) / sigma_i

Note that the model parameters are denoted by x in this chapter, since the nonlinear least-squares algorithms are described geometrically (i.e. finding the minimum of a surface). The independent variable of any data to be fitted is denoted by t.

With the definition above, the Jacobian is J_{ij} = (1 / sigma_i) d Y_i / d x_j, where Y_i = Y(x, t_i).
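These definitions can be sanity-checked without GSL. The sketch below compares the analytic Jacobian of the exponential model used in the example programs later in this chapter against a finite-difference approximation; the data point, parameter values and step size are arbitrary illustrative choices.

```ruby
# Weighted residual f_i = (Y(x, t_i) - y_i)/sigma_i for Y = A exp(-lambda t) + b
def residual(x, t, y, sigma)
  a, lam, b = x
  (a * Math.exp(-lam * t) + b - y) / sigma
end

# Analytic Jacobian row [d f_i/dA, d f_i/d lambda, d f_i/db]
def jac_row(x, t, sigma)
  a, lam, _b = x
  e = Math.exp(-lam * t)
  [e / sigma, -t * a * e / sigma, 1.0 / sigma]
end

x = [5.0, 0.1, 1.0]              # (A, lambda, b)
t, y, sigma = 2.0, 4.8, 0.1      # one illustrative data point
h = 1e-7                         # finite-difference step

fd = (0..2).map do |j|
  xp = x.dup
  xp[j] += h
  (residual(xp, t, y, sigma) - residual(x, t, y, sigma)) / h
end

analytic = jac_row(x, t, sigma)
p analytic.zip(fd).all? { |a, b| (a - b).abs < 1e-4 }   # => true
```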

Initializing the Solver

GSL::MultiFit::FdfSolver class





Providing the function to be minimized

GSL::MultiFit::Function_fdf class




Iteration
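Conceptually, each call to iterate takes one step of the underlying algorithm. The GSL-independent sketch below performs a single plain Gauss-Newton step (not the full LMSDER algorithm) for a one-parameter model, showing how the linearized step from the Overview reduces the residual norm ||F||; the model, data and starting value are illustrative.

```ruby
ts = [0.0, 1.0, 2.0, 3.0]
ys = ts.map { |t| Math.exp(-0.5 * t) }    # synthetic data, true lambda = 0.5

residuals = lambda { |lam| ts.zip(ys).map { |t, y| Math.exp(-lam * t) - y } }
jacobian  = lambda { |lam| ts.map { |t| -t * Math.exp(-lam * t) } }  # d f_i/d lambda

lam = 0.3                          # initial guess
f = residuals.call(lam)
j = jacobian.call(lam)

# Normal equations for one parameter: p = -(J^T f) / (J^T J)
p_step = -j.zip(f).map { |a, b| a * b }.sum / j.map { |a| a * a }.sum
lam_new = lam + p_step

norm = lambda { |v| Math.sqrt(v.map { |e| e * e }.sum) }
puts norm.call(residuals.call(lam_new)) < norm.call(f)   # => true
```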



Search Stopping Parameters

A minimization procedure should stop when one of the following conditions is true:

  - A minimum has been found to within the user-specified precision.
  - A user-specified maximum number of iterations has been reached.
  - An error has occurred.

The handling of these conditions is under user control. The method below allows the user to test the current estimate of the best-fit parameters.
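For reference, test_delta(epsabs, epsrel) declares convergence when the last step dx is small relative to the current position x. The pure-Ruby sketch below reproduces that criterion; the numerical values are illustrative.

```ruby
# The criterion behind test_delta(epsabs, epsrel): the step dx is
# "small" when |dx_i| < epsabs + epsrel * |x_i| for every component i.
def small_step?(dx, x, epsabs, epsrel)
  dx.zip(x).all? { |d, xi| d.abs < epsabs + epsrel * xi.abs }
end

x  = [5.04536, 0.10405, 1.01925]    # current parameter estimate
dx = [3.2e-6, 4.5e-7, 1.1e-6]       # last step taken by the solver
p small_step?(dx, x, 1e-4, 1e-4)                # => true
p small_step?([0.2, 0.0, 0.0], x, 1e-4, 1e-4)   # => false
```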





Computing the covariance matrix of best fit parameters
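The covariance matrix returned by covar is computed from the Jacobian at the minimum. The conventional error estimate for parameter i then scales the diagonal element C_ii by chi^2 per degree of freedom, as done in the example program below. A GSL-independent sketch of that scaling, with hypothetical numbers:

```ruby
chi2  = 23.5                             # ||f||^2 at the minimum (hypothetical)
n, np = 40, 3                            # data points, parameters
dof   = n - np                           # degrees of freedom
covar_diag = [0.0028, 6.3e-6, 0.0021]    # hypothetical diagonal of covar

# error_i = sqrt(chi^2/dof * C_ii)
errors = covar_diag.map { |c| Math.sqrt(chi2 / dof * c) }
errors.each_with_index { |e, i| printf("x[%d] +/- %.5f\n", i, e) }
```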


Higher level interfaces


Examples

Fitting to user-defined functions

The following example program fits a weighted exponential model with background to experimental data, Y = A exp(-lambda t) + b. The first part of the program sets up the functions procf and procdf to calculate the model and its Jacobian. The appropriate fitting function is given by,

f_i = ((A exp(-lambda t_i) + b) - y_i)/sigma_i

where we have chosen t_i = i. The Jacobian matrix jac is the derivative of these functions with respect to the three parameters (A, lambda, b). It is given by,

J_{ij} = d f_i / d x_j

where x_0 = A, x_1 = lambda and x_2 = b.

require("gsl")
include GSL::MultiFit

# x: Vector, list of the parameters to determine
# t, y, sigma: Vectors, observational data
# f: Vector, function to minimize
procf = Proc.new { |x, t, y, sigma, f|
  a = x[0]
  lambda = x[1]
  b = x[2]
  n = t.size
  for i in 0...n do
    yi = a*Math::exp(-lambda*t[i]) + b
    f[i] = (yi - y[i])/sigma[i]
  end
}

# jac: Matrix, Jacobian
procdf = Proc.new { |x, t, y, sigma, jac|
  a = x[0]
  lambda = x[1]
  n = t.size
  for i in 0...n do
    ti = t[i]
    si = sigma[i]
    ei = Math::exp(-lambda*ti)
    jac.set(i, 0, ei/si)
    jac.set(i, 1, -ti*a*ei/si)
    jac.set(i, 2, 1.0/si)
  end
}

n  = 40    # number of data points
np = 3     # number of parameters (A, lambda, b)

f = GSL::MultiFit::Function_fdf.alloc(procf, procdf, np)

# Create data
r = GSL::Rng.alloc()
t = GSL::Vector.alloc(n)
y = GSL::Vector.alloc(n)
sigma = GSL::Vector.alloc(n)
for i in 0...n do
  t[i] = i
  y[i] = 1.0 + 5*Math::exp(-0.1*t[i]) + r.gaussian(0.1)
  sigma[i] = 0.1
end

f.set_data(t, y, sigma)
x = GSL::Vector.alloc(1.0, 0.0, 0.0)    # initial guess

solver = FdfSolver.alloc(FdfSolver::LMSDER, n, np)

solver.set(f, x)

iter = 0
solver.print_state(iter)
begin
  iter += 1
  status = solver.iterate
  solver.print_state(iter)
  status = solver.test_delta(1e-4, 1e-4)
end while status == GSL::CONTINUE and iter < 500

covar = solver.covar(0.0)
position = solver.position
chi2 = GSL::pow_2(solver.f.dnrm2)
dof = n - np
printf("A      = %.5f +/- %.5f\n", position[0], Math::sqrt(chi2/dof*covar[0][0]))
printf("lambda = %.5f +/- %.5f\n", position[1], Math::sqrt(chi2/dof*covar[1][1]))
printf("b      = %.5f +/- %.5f\n", position[2], Math::sqrt(chi2/dof*covar[2][2]))

Fitting to built-in functions

#!/usr/bin/env ruby
require("gsl")
include GSL

N = 100

y0 = 1.0
A = 2.0
x0 = 3.0
w = 0.5

r = Rng.alloc
x = Vector.linspace(0.01, 10, N)
sig = 1
# Lognormal function with noise
y =  y0 + A*Sf::exp(-pow_2(Sf::log(x/x0)/w)) + 0.1*Ran::gaussian(r, sig, N)

guess = [0, 3, 2, 1]
coef, err, chi2, dof = MultiFit::FdfSolver.fit(x, y, "lognormal", guess)
y0 = coef[0]
amp = coef[1]
x0 = coef[2]
w = coef[3]

graph(x, y, y0+amp*Sf::exp(-pow_2(Sf::log(x/x0)/w)))
