archimedes.optimize.lm_solve

archimedes.optimize.lm_solve(func: Callable, x0: T, args: Tuple[Any, ...] = (), bounds: Tuple[T, T] | None = None, ftol: float = 1e-06, xtol: float = 1e-06, gtol: float = 1e-06, max_nfev: int = 100, diag: T | None = None, lambda0: float = 0.001, log_level: int | None = None) → OptimizeResult

Solve a nonlinear least-squares problem using a modified Levenberg-Marquardt algorithm.

Solves optimization problems of the form:

minimize    0.5 * ||r(x)||²
subject to  lb <= x <= ub

where r(x) are residuals computed by the objective function.

This implementation uses a direct Hessian approximation and Cholesky factorization instead of the standard QR approach. It also supports box constraints by switching to a quadratic programming solver ([OSQP](https://osqp.org/)) when bounds are active.
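For instance, a curve-fitting problem can be posed in this residual form by returning the pointwise misfit between model and data. A minimal sketch (the exponential model and data here are illustrative, not part of the library):

```python
import numpy as np

# Illustrative residual function in the form expected by lm_solve:
# r(x) = model(x, t) - y. The data and model below are made up.
t_data = np.linspace(0.0, 1.0, 20)
y_data = 2.0 * np.exp(-3.0 * t_data)  # synthetic measurements

def residuals(x):
    amplitude, rate = x
    return amplitude * np.exp(-rate * t_data) - y_data
```

Minimizing 0.5 * ||residuals(x)||² then recovers amplitude ≈ 2 and rate ≈ 3.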

Parameters:
  • func (callable) – Objective function with signature func(x, *args) -> r, where r is a vector of residuals.

  • x0 (PyTree) – Initial parameter guess. Can be a flat array or a PyTree structure which will be preserved in the solution.

  • args (tuple, optional) – Extra arguments passed to the objective function.

  • bounds (tuple of (PyTree, PyTree), optional) – Box constraints specified as (lower_bounds, upper_bounds). Each bound array must have the same PyTree structure as x0. Use -np.inf and np.inf for unbounded variables. Enables physical constraints like mass > 0, damping > 0, etc.

  • ftol (float, default=1e-6) – Tolerance for relative reduction in objective function. Convergence occurs when both actual and predicted reductions are smaller than this value.

  • xtol (float, default=1e-6) – Tolerance for relative change in parameters. Convergence occurs when the relative step size is smaller than this value.

  • gtol (float, default=1e-6) – Tolerance for gradient norm. Convergence occurs when the infinity norm of the gradient (or projected gradient for constrained problems) falls below this threshold.

  • max_nfev (int, default=100) – Maximum number of function evaluations.

  • diag (PyTree, optional) – Diagonal scaling factors for variables, in the form of a PyTree matching the structure of x0. If None, automatic scaling is used based on the Hessian diagonal. Custom scaling can improve convergence for ill-conditioned problems.

  • lambda0 (float, default=1e-3) – Initial Levenberg-Marquardt damping parameter. Larger values bias toward gradient descent, smaller values toward Gauss-Newton.

  • log_level (int, optional) – Logging level for progress output. If None (default), no progress is printed.

Returns:

result – Optimization result containing:

  • x : Solution parameters with same structure as x0

  • success : Whether optimization succeeded

  • status : Termination status (LMStatus)

  • message : Descriptive termination message

  • fun : Final objective value

  • jac : Final gradient vector

  • hess : Final Hessian matrix

  • nfev : Number of function evaluations

  • nit : Number of iterations

  • history : Detailed iteration history

Return type:

OptimizeResult

Notes

The algorithm implements the damped normal equations approach:

(H + λI)p = -g

where H is the Hessian approximation, λ is the damping parameter, and p is the step direction. The damping parameter is adapted based on the ratio of actual to predicted reduction.
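One damped step can be sketched in plain NumPy. This mirrors the equation above but is not the library's internal implementation; the Rosenbrock residuals and analytic Jacobian are taken from the Examples section below:

```python
import numpy as np

def residuals(x):
    return np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])

def jacobian(x):
    # Analytic Jacobian of the residuals above
    return np.array([[-20.0 * x[0], 10.0], [-1.0, 0.0]])

def lm_step(x, lam):
    r = residuals(x)
    J = jacobian(x)
    H = J.T @ J   # Gauss-Newton Hessian approximation
    g = J.T @ r   # gradient of 0.5 * ||r||^2
    # Solve (H + lam*I) p = -g via Cholesky factorization
    L = np.linalg.cholesky(H + lam * np.eye(len(x)))
    p = np.linalg.solve(L.T, np.linalg.solve(L, -g))
    return x + p

x_new = lm_step(np.array([-1.2, 1.0]), lam=1e-3)
```

In the full algorithm, a step like this is accepted or rejected based on the actual-versus-predicted reduction ratio, with lam adjusted accordingly.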

For box-constrained problems, the algorithm uses projected gradients for convergence testing and OSQP for constrained step computation when bounds are violated. When the damped normal equations yield a step that would violate the bounds, the algorithm switches to solving the quadratic program:

minimize    0.5 * p^T * (H + λI) * p + g^T * p
subject to  lb <= x + p <= ub
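A minimal stand-in for this QP solve, using projected gradient descent purely for illustration (the library delegates this step to OSQP):

```python
import numpy as np

# Toy box-constrained QP solver: minimize 0.5 p^T A p + g^T p
# subject to p_lo <= p <= p_hi. Not the library's solver.
def box_qp(A, g, p_lo, p_hi, iters=500):
    p = np.zeros_like(g)
    step = 1.0 / np.linalg.norm(A, 2)  # step bounded by largest eigenvalue
    for _ in range(iters):
        grad = A @ p + g               # gradient of the quadratic objective
        p = np.clip(p - step * grad, p_lo, p_hi)
    return p
```

In the notation above, A = H + λI and the step bounds are p_lo = lb - x, p_hi = ub - x.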

Examples

>>> import numpy as np
>>> from archimedes.optimize import lm_solve
>>>
>>> # Rosenbrock function as least-squares problem
>>> def rosenbrock_func(x):
...     # Residuals: r1 = 10*(x[1] - x[0]²), r2 = (1 - x[0])
...     return np.hstack([10*(x[1] - x[0]**2), 1 - x[0]])
>>>
>>> # Solve unconstrained problem
>>> result = lm_solve(rosenbrock_func, x0=np.array([-1.2, 1.0]))
>>> print(f"Solution: {result.x}")
Solution: [0.99999941 0.99999881]
>>> print(f"Converged: {result.success} ({result.message})")
Converged: True (The cosine of the angle between fvec and any column...)
>>>
>>> # Solve with box constraints (physical parameter limits)
>>> bounds = (np.array([0.0, 0.0]), np.array([0.8, 2.0]))
>>> result = lm_solve(rosenbrock_func, x0=np.array([0.5, 0.5]), bounds=bounds)
>>> print(f"Constrained solution: {result.x}")
Constrained solution: [0.79999998 0.6391025 ]

See also

archimedes.sysid.pem

Parameter estimation using prediction error minimization

LMStatus

Convergence status codes and meanings

scipy.optimize.least_squares

SciPy’s least-squares solver