m_n_kappa.fitting.GaussNewton#

class m_n_kappa.fitting.GaussNewton(f_i, x_0, maximum_iterations=20, tolerance=0.1, max_line_search_iterations=100, minimum_residuum_change=1e-09, exceptions_to_catch=<class 'IndexError'>)#

Bases: object

Gauss-Newton Solver for finding zero in a system of equations

New in version 0.2.0.

Parameters:
  • f_i (list[Callable]) –

    List of methods representing the system of equations. Each method must …

    • … accept a list[float] that is equal in size to x_0.

    • … be of the form f(x) = 0

  • x_0 (list[float]) – starting-values for the iteration

  • maximum_iterations (int) – maximum number of iterations (must be greater than zero, Default: 20)

  • tolerance (float) – tolerance-limit, must be greater than zero (Default: 0.1).

  • max_line_search_iterations (int) – maximum number of line-search iterations (Default: 100)

  • minimum_residuum_change (float) – Iteration stops if the residuum changes by less than the given value (Default: 1e-09)

  • exceptions_to_catch (tuple[Exception] | Exception) – Exceptions that may arise while solving the problem and that are to be caught (Default: IndexError)

Notes

The result of a Gauss-Newton iteration step is given as follows.

\[\vec{x}_{n+1} = \vec{x}_{n} + \alpha_{n} \Delta \vec{x}_{n}\]

where \(\vec{x}_{n}\) is the result of the previous iteration step and the index \(n\) stands for the current iteration number. \(\alpha_{n}\) is a line-search parameter ensuring that the residuum at the new point \(\vec{x}_{n+1}\) is smaller than the residuum at \(\vec{x}_{n}\).
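The role of \(\alpha_{n}\) can be illustrated with a simple backtracking line search. The following is an illustrative sketch, not necessarily the scheme m_n_kappa implements; the names residual_norm, shrink, and backtracking_alpha are assumptions made for this example.

```python
def backtracking_alpha(residual_norm, x, dx, shrink=0.5, max_iterations=100):
    """Shrink the step-width alpha until the residuum decreases.

    ``residual_norm`` maps a point to a scalar residuum; ``dx`` is
    assumed to already point in a descent direction.
    """
    alpha = 1.0
    for _ in range(max_iterations):
        # candidate point for the current step-width
        trial = [x_j + alpha * dx_j for x_j, dx_j in zip(x, dx)]
        if residual_norm(trial) < residual_norm(x):
            break
        alpha *= shrink
    return alpha
```

If the full step already reduces the residuum, \(\alpha_{n} = 1\) is kept; otherwise the step is halved until an improvement is found or max_iterations is exhausted.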

\(\Delta \vec{x}_{n}\) denotes the difference between the current iteration-step \(\vec{x}_{n}\) and the next one, \(\vec{x}_{n+1}\). It is given as follows.

\[\Delta \vec{x}_{n} = -(\mathbf{J}^T \mathbf{J})^{-1} \mathbf{J}^T f(x_{n})\]

where \(\mathbf{J}\) is the Jacobian Matrix at the current point \(\vec{x}_{n}\).

\(\Delta \vec{x}_{n}\) is found by bringing the formula into the form \(Ax = b\), which is solved by applying QR decomposition.

\[ \begin{align}\begin{aligned}A \Delta \vec{x}_{n} & = b\\(\mathbf{J}^T \mathbf{J}) \Delta \vec{x}_{n} & = - \mathbf{J}^T f(x_{n})\end{aligned}\end{align} \]

As the Gauss-Newton algorithm finds only local minima, the initial values \(\vec{x}_{0}\) must be chosen appropriately.
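A single Gauss-Newton step along these lines can be sketched in plain Python. This is an illustration, not the m_n_kappa implementation: the forward-difference Jacobian and the classical Gram-Schmidt QR are simplifications chosen for brevity, and the QR decomposition is applied to \(\mathbf{J}\) directly, which solves the same least-squares problem as the normal equations above.

```python
def gauss_newton_step(f_i, x, h=1e-7):
    """One undamped Gauss-Newton step (illustrative sketch)."""
    m, n = len(f_i), len(x)
    # residuals f(x) and forward-difference Jacobian J at x
    f_x = [f(x) for f in f_i]
    jac = []
    for i in range(m):
        row = []
        for j in range(n):
            x_h = list(x)
            x_h[j] += h
            row.append((f_i[i](x_h) - f_x[i]) / h)
        jac.append(row)
    # QR decomposition of J by classical Gram-Schmidt
    q = [[0.0] * n for _ in range(m)]
    r = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = [jac[i][j] for i in range(m)]
        for k in range(j):
            r[k][j] = sum(q[i][k] * jac[i][j] for i in range(m))
            v = [v_i - r[k][j] * q[i][k] for i, v_i in enumerate(v)]
        r[j][j] = sum(v_i * v_i for v_i in v) ** 0.5
        for i in range(m):
            q[i][j] = v[i] / r[j][j]
    # back-substitution of R dx = -Q^T f(x)
    rhs = [-sum(q[i][j] * f_x[i] for i in range(m)) for j in range(n)]
    dx = [0.0] * n
    for j in reversed(range(n)):
        dx[j] = (rhs[j] - sum(r[j][k] * dx[k] for k in range(j + 1, n))) / r[j][j]
    return [x_j + dx_j for x_j, dx_j in zip(x, dx)]


# the Rosenbrock residuals from the example below (a=1, b=100)
def f_1(variables):
    x_1, x_2 = variables
    return 2.0 ** 0.5 * (1.0 - x_1)


def f_2(variables):
    x_1, x_2 = variables
    return (2.0 * 100.0) ** 0.5 * (x_2 - x_1 ** 2.0)


x = [0.0, -0.1]
for _ in range(2):
    x = gauss_newton_step([f_1, f_2], x)
# x is now close to the optimum [1.0, 1.0]
```

Because the first residual is linear and the second is linear in \(x_{2}\), this sketch already converges after two steps; in general more iterations and a line search are required.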

Examples

The Rosenbrock function is a performance test problem for optimization algorithms. It is defined as follows:

\[f(x, y) = (a - x)^{2} + b(y - x^{2})^{2}\]

Usually the constants are defined as \(a = 1\), \(b = 100\). Given these constants, the optimum is located at \(\vec{x} = [1, 1]^{T}\).

To use the Gauss-Newton algorithm, the Rosenbrock function \(f(x, y)\) must be reshaped as follows:

\[\begin{split}\begin{pmatrix} f_{1}(x) \\ f_{2}(x) \end{pmatrix} = \begin{bmatrix} \sqrt{2}(a - x_{1}) \\ \sqrt{2b}(x_{2} - x_{1}^{2}) \end{bmatrix}\end{split}\]
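This reshaping reproduces the Rosenbrock function through \(f(x, y) = \frac{1}{2}\left(f_{1}^{2} + f_{2}^{2}\right)\), which can be checked numerically. The snippet below is an illustration, not part of m_n_kappa; the evaluation point is chosen arbitrarily.

```python
a, b = 1.0, 100.0


def rosenbrock(x, y):
    """Rosenbrock function in its original form."""
    return (a - x) ** 2 + b * (y - x ** 2) ** 2


def f_1(x_1, x_2):
    return 2.0 ** 0.5 * (a - x_1)


def f_2(x_1, x_2):
    return (2.0 * b) ** 0.5 * (x_2 - x_1 ** 2)


x, y = 0.3, -0.7
half_sum_of_squares = 0.5 * (f_1(x, y) ** 2 + f_2(x, y) ** 2)
# half_sum_of_squares matches rosenbrock(x, y) up to floating-point round-off
```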

To pass this system of equations to GaussNewton, these functions need to be defined taking a list containing \(x_{1}\) and \(x_{2}\) as their single argument.

>>> def f_1(variables: list[float]):
...    x_1, x_2 = variables
...    return 2.0**0.5*(1.0 - x_1)
>>> def f_2(variables: list[float]):
...    x_1, x_2 = variables
...    return (2.0*100.0)**0.5*(x_2 - x_1**2.0)
>>> rosenbrock = [f_1, f_2]

Furthermore, an initial guess of the result must be given. A common choice is \(\vec{x}_{0} = [0, -0.1]^{T}\).

>>> x_0 = [0.0, -0.1]
>>> from m_n_kappa.fitting import GaussNewton
>>> gauss_newton = GaussNewton(f_i=rosenbrock, x_0=x_0)

The result is computed upon initialization of GaussNewton. The attribute successful indicates whether a result is available.

>>> gauss_newton.successful
True

In that case the result may be obtained as follows.

>>> gauss_newton.x_i
Vector([1.0, 1.0])

This is the optimal result indicated above.

Attributes

f_i

List of methods representing the system of equations

f_x

Most recent result of the iterations

iteration

current iteration

line_search_step_size

current step-size of the line-search

max_line_search_iterations

maximum number of iterations during the line-search

maximum_iterations

maximum number of iterations

minimum_residuum_change

boundary condition regarding the minimum residuum

not_successful_reason

gives a reason if the computation was not successful

successful

indicates the success of the computation

tolerance

tolerance-limit

x_0

starting-values for the iteration

x_i

computed variables at iteration-step \(i\)

property f_i: list[Callable]#

List of methods representing the system of equations

property f_x: Vector#

Most recent result of the iterations

property iteration: int#

current iteration

property line_search_step_size: float#

current step-size of the line-search

property max_line_search_iterations: int#

maximum number of iterations during the line-search

property maximum_iterations: int#

maximum number of iterations

property minimum_residuum_change: float#

boundary condition regarding the minimum residuum

property not_successful_reason: None | m_n_kappa.general.NotSuccessfulReason#

gives a reason if the computation was not successful

Returns:

Reason why computation was not successful

Return type:

None | NotSuccessfulReason

property successful: bool#

indicates the success of the computation

property tolerance: float#

tolerance-limit

property x_0: list[float]#

starting-values for the iteration

property x_i: Vector#

computed variables at iteration-step \(i\)