Documentation ¶
Overview ¶
Package opt implements routines for solving optimization problems.
Index ¶
- Variables
- func ReadLPfortran(fn string) (A *la.CCMatrix, b, c, l, u []float64)
- type ConjGrad
- type Convergence
- func (o *Convergence) AccessHistory() *History
- func (o *Convergence) Fconvergence(fprev, fmin float64) bool
- func (o *Convergence) Gconvergence(fprev float64, x, u la.Vector) bool
- func (o *Convergence) InitConvergence(Ffcn fun.Sv, Gfcn fun.Vv)
- func (o *Convergence) InitHist(x0 la.Vector)
- func (o *Convergence) SetConvParams(maxIt int, ftol, gtol float64)
- func (o *Convergence) SetParams(params utl.Params)
- func (o *Convergence) SetUseHistory(useHist bool)
- func (o *Convergence) SetVerbose(verbose bool)
- type FactoryType
- type GradDesc
- type History
- type LinIpm
- type LineSearch
- type NonLinSolver
- type Powell
- type Problem
Constants ¶
This section is empty.
Variables ¶
var Factory = FactoryType{}
Factory holds Objective functions to be minimized
Functions ¶
func ReadLPfortran ¶
ReadLPfortran reads a linear program from a FORTRAN-formatted file.
Download LP files from here: http://users.clas.ufl.edu/hager/coap/format.html

Output:
  A -- compressed-column sparse matrix where:
       Ap -- pointers to the beginning of storage of column (size n+1)
       Ai -- row indices for each non-zero entry (input, nnz A)
       Ax -- non-zero entries (input, nnz A)
  b -- right-hand side (input, size m)
  c -- objective vector (minimize, size n)
  l -- lower bounds on variables (size n)
  u -- upper bounds on variables (size n)
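A minimal usage sketch follows; the import path and the file name are assumptions (any LP file in the format from the URL above should work):

	package main

	import (
		"fmt"

		"github.com/cpmech/gosl/opt"
	)

	func main() {
		// "./afiro.dat" is a hypothetical local copy of an LP file
		// downloaded from the URL above
		A, b, c, l, u := opt.ReadLPfortran("./afiro.dat")
		fmt.Println(A, b, c, l, u)
	}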
Types ¶
type ConjGrad ¶ added in v1.1.0
type ConjGrad struct {

	// merge properties
	Convergence // auxiliary object to check convergence

	// configuration
	UseBrent    bool // use Brent method instead of LineSearch (Wolfe conditions)
	UseFRmethod bool // use Fletcher-Reeves method instead of Polak-Ribiere
	CheckJfcn   bool // check Jacobian function at all points during minimization
	// contains filtered or unexported fields
}
ConjGrad implements the multidimensional minimization by the Fletcher-Reeves-Polak-Ribiere method.
NOTE: Check Convergence to see how to set convergence parameters, max iteration number, or to enable and access the history of iterations.

REFERENCES:
[1] Press WH, Teukolsky SA, Vetterling WT, Flannery BP (2007) Numerical Recipes: The Art of Scientific Computing. Third Edition. Cambridge University Press. 1235p.
func NewConjGrad ¶ added in v1.1.0
NewConjGrad returns a new multidimensional optimizer using the conjugate-gradient (Fletcher-Reeves-Polak-Ribiere) method; a gradient function is required
func (*ConjGrad) Min ¶ added in v1.1.0
Min solves the minimization problem
Input:
  x -- [ndim] initial starting point (will be modified)
  params -- [may be nil] optional parameters, e.g. "brent", "maxit". Example:

	params := utl.NewParams(
		&utl.P{N: "brent", V: 1},
		&utl.P{N: "maxit", V: 1000},
		&utl.P{N: "maxitls", V: 20},
		&utl.P{N: "maxitzoom", V: 20},
		&utl.P{N: "ftol", V: 1e-2},
		&utl.P{N: "gtol", V: 1e-2},
		&utl.P{N: "hist", V: 1},
		&utl.P{N: "verb", V: 1},
	)

Output:
  fmin -- minimum f({x}) found
  x -- [modified input] position of the minimum f({x})
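An end-to-end sketch, assuming the gosl import paths and that la.Vector is a []float64; the Rosenbrock problem comes from the Factory documented below:

	package main

	import (
		"fmt"

		"github.com/cpmech/gosl/la"
		"github.com/cpmech/gosl/opt"
		"github.com/cpmech/gosl/utl"
	)

	func main() {
		// classical Rosenbrock problem from the Factory (a=1, b=100)
		prob := opt.Factory.Rosenbrock2d(1, 100)

		// conjugate-gradient solver from the solver database
		solver := opt.GetNonLinSolver("conjgrad", prob)

		// starting point (modified in place by Min)
		x := la.Vector{-1.2, 1.0}

		// minimize using a couple of the parameters listed above
		fmin := solver.Min(x, utl.NewParams(
			&utl.P{N: "maxit", V: 1000},
			&utl.P{N: "ftol", V: 1e-10},
		))
		fmt.Println(fmin, x) // expect fmin ≈ 0 at x ≈ (1, 1)
	}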
type Convergence ¶ added in v1.1.0
type Convergence struct {

	// input
	Ffcn fun.Sv // objective function; scalar function of vector: y = f({x})
	Gfcn fun.Vv // gradient function; vector function of vector: g = dy/d{x} = deriv(f({x}), {x}) [may be nil]

	// configuration
	MaxIt   int     // max iterations
	Ftol    float64 // tolerance on f({x})
	Gtol    float64 // convergence criterion for the zero-gradient test
	EpsF    float64 // small number to rectify the special case of converging to exactly zero function value
	UseHist bool    // save history
	Verbose bool    // show messages

	// statistics and History (e.g. for debugging)
	NumFeval int      // number of calls to Ffcn (function evaluations)
	NumGeval int      // number of calls to Gfcn (Jacobian evaluations)
	NumIter  int      // number of iterations from last call to Solve
	Hist     *History // history of optimization data (for debugging)
	// contains filtered or unexported fields
}
Convergence assists in checking the convergence of numerical optimizers. It can be accessed to set convergence parameters, the max iteration number, or to enable and access the history of iterations.
func (*Convergence) AccessHistory ¶ added in v1.1.0
func (o *Convergence) AccessHistory() *History
AccessHistory gets access to History
func (*Convergence) Fconvergence ¶ added in v1.1.0
func (o *Convergence) Fconvergence(fprev, fmin float64) bool
Fconvergence performs the check for f({x}) values
Input:
  fprev -- a previous f({x}) value
  fmin -- current f({x}) value
Output:
  returns true if f values are not changing any longer
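For orientation, a relative test of the following shape would match the Ftol and EpsF descriptions above; this is an illustrative assumption modeled on the Numerical Recipes criterion, not a quote of the implementation:

	import "math"

	// fconverged reports whether fprev and fmin are close in a relative sense;
	// epsF rectifies the case of converging to exactly zero (hypothetical sketch)
	func fconverged(fprev, fmin, ftol, epsF float64) bool {
		return 2.0*math.Abs(fmin-fprev) <= ftol*(math.Abs(fmin)+math.Abs(fprev)+epsF)
	}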
func (*Convergence) Gconvergence ¶ added in v1.1.0
func (o *Convergence) Gconvergence(fprev float64, x, u la.Vector) bool
Gconvergence performs the check for df/dx|({x}) values
Input:
  fprev -- a previous f({x}) value (for normalization purposes)
  x -- current {x} value
  u -- current direction; e.g. dfdx
Output:
  returns true if dfdx values are not changing any longer
func (*Convergence) InitConvergence ¶ added in v1.1.0
func (o *Convergence) InitConvergence(Ffcn fun.Sv, Gfcn fun.Vv)
InitConvergence initializes convergence parameters
func (*Convergence) InitHist ¶ added in v1.1.0
func (o *Convergence) InitHist(x0 la.Vector)
InitHist initializes history
func (*Convergence) SetConvParams ¶ added in v1.1.0
func (o *Convergence) SetConvParams(maxIt int, ftol, gtol float64)
SetConvParams sets convergence parameters
func (*Convergence) SetParams ¶ added in v1.1.0
func (o *Convergence) SetParams(params utl.Params)
SetParams sets parameters
Example:

	o.SetParams(utl.NewParams(
		&utl.P{N: "maxit", V: 1000},
		&utl.P{N: "ftol", V: 1e-2},
		&utl.P{N: "gtol", V: 1e-2},
		&utl.P{N: "hist", V: 1},
		&utl.P{N: "verb", V: 1},
	))
func (*Convergence) SetUseHistory ¶ added in v1.1.0
func (o *Convergence) SetUseHistory(useHist bool)
SetUseHistory sets the use-history parameter
func (*Convergence) SetVerbose ¶ added in v1.1.0
func (o *Convergence) SetVerbose(verbose bool)
SetVerbose sets verbose mode
type FactoryType ¶ added in v1.1.0
type FactoryType struct{}
FactoryType defines a structure to implement a factory of Objective functions to be minimized
func (FactoryType) Rosenbrock2d ¶ added in v1.1.0
func (o FactoryType) Rosenbrock2d(a, b float64) (p *Problem)
Rosenbrock2d returns the classical Rosenbrock function (2D)
See https://en.wikipedia.org/wiki/Rosenbrock_function

Input:
  a -- parameter a; a=0 ⇒ function is symmetric and minimum is at origin
  b -- parameter b

NOTE: the commonly used values are a=1 and b=100
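For reference, the 2D Rosenbrock function is f(x,y) = (a-x)² + b(y-x²)², whose minimum f=0 sits at (a, a²) for b>0. Usage is a one-liner (import path assumed):

	p := opt.Factory.Rosenbrock2d(1, 100) // a=1, b=100 ⇒ minimum f=0 at (1,1)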
func (FactoryType) RosenbrockMulti ¶ added in v1.1.0
func (o FactoryType) RosenbrockMulti(N int) (p *Problem)
RosenbrockMulti returns the multi-variate version of the Rosenbrock function
See https://en.wikipedia.org/wiki/Rosenbrock_function
See https://docs.scipy.org/doc/scipy-0.14.0/reference/tutorial/optimize.html#unconstrained-minimization-of-multivariate-scalar-functions-minimize

Input:
  N -- dimension == ndim
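The SciPy variant linked above sums coupled quadratic terms; the stand-alone function below is an illustrative re-implementation of that formula, not the package's code:

	import "math"

	// rosenbrockMulti evaluates f(x) = Σ_{i=1..N-1} 100(x[i]-x[i-1]²)² + (1-x[i-1])²
	func rosenbrockMulti(x []float64) (sum float64) {
		for i := 1; i < len(x); i++ {
			sum += 100*math.Pow(x[i]-x[i-1]*x[i-1], 2) + math.Pow(1-x[i-1], 2)
		}
		return
	}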
func (FactoryType) SimpleParaboloid ¶ added in v1.1.0
func (o FactoryType) SimpleParaboloid() (p *Problem)
SimpleParaboloid returns a simple optimization problem using a paraboloid as the objective function
func (FactoryType) SimpleQuadratic2d ¶ added in v1.1.0
func (o FactoryType) SimpleQuadratic2d() (p *Problem)
SimpleQuadratic2d returns a simple problem with a quadratic function such that f(x) = xᵀ A x (2D)
func (FactoryType) SimpleQuadratic3d ¶ added in v1.1.0
func (o FactoryType) SimpleQuadratic3d() (p *Problem)
SimpleQuadratic3d returns a simple problem with a quadratic function such that f(x) = xᵀ A x (3D)
type GradDesc ¶ added in v1.1.0
type GradDesc struct {

	// merge properties
	Convergence // auxiliary object to check convergence

	// configuration
	Alpha float64 // rate to take descents
	// contains filtered or unexported fields
}
GradDesc implements a simple gradient-descent optimizer
NOTE: Check Convergence to see how to set convergence parameters, max iteration number, or to enable and access history of iterations
func NewGradDesc ¶ added in v1.1.0
NewGradDesc returns a new multidimensional optimizer using the gradient-descent method; a gradient function is required
func (*GradDesc) Min ¶ added in v1.1.0
Min solves the minimization problem
Input:
  x -- [ndim] initial starting point (will be modified)
  params -- [may be nil] optional parameters, e.g. "alpha", "maxit". Example:

	params := utl.NewParams(
		&utl.P{N: "alpha", V: 0.5},
		&utl.P{N: "maxit", V: 1000},
		&utl.P{N: "ftol", V: 1e-2},
		&utl.P{N: "gtol", V: 1e-2},
		&utl.P{N: "hist", V: 1},
		&utl.P{N: "verb", V: 1},
	)

Output:
  fmin -- minimum f({x}) found
  x -- [modified input] position of the minimum f({x})
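A short sketch analogous to the ConjGrad example above (import paths assumed; SimpleParaboloid comes from the Factory above):

	prob := opt.Factory.SimpleParaboloid()
	solver := opt.GetNonLinSolver("graddesc", prob)
	x := la.Vector{1, 1}
	fmin := solver.Min(x, utl.NewParams(&utl.P{N: "alpha", V: 0.5}))
	fmt.Println(fmin, x)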
type History ¶ added in v1.1.0
type History struct {

	// data
	Ndim  int         // dimension of x-vector
	HistX []la.Vector // [it] history of x-values (position)
	HistU []la.Vector // [it] history of u-values (direction)
	HistF []float64   // [it] history of f-values
	HistI []float64   // [it] index of iteration

	// configuration
	NptsI   int       // number of points for contour
	NptsJ   int       // number of points for contour
	RangeXi []float64 // {ximin, ximax} [may be nil for default]
	RangeXj []float64 // {xjmin, xjmax} [may be nil for default]
	GapXi   float64   // expand {ximin, ximax}
	GapXj   float64   // expand {xjmin, xjmax}
	// contains filtered or unexported fields
}
History holds the history of an optimization run using directions; e.g. for debugging
func NewHistory ¶ added in v1.1.0
NewHistory returns a new History object
type LinIpm ¶
type LinIpm struct {

	// problem
	A *la.CCMatrix // [Nl][nx]
	B la.Vector    // [Nl]
	C la.Vector    // [nx]

	// constants
	NmaxIt int     // max number of iterations
	Tol    float64 // tolerance ϵ for stopping iterations

	// dimensions
	Nx int // number of x
	Nl int // number of λ
	Ny int // number of y = nx + ns + nl = 2 * nx + nl

	// solution vector
	Y   la.Vector // y := [x, λ, s]
	X   la.Vector // subset of y
	L   la.Vector // subset of y
	S   la.Vector // subset of y
	Mdy la.Vector // -Δy
	Mdx la.Vector // subset of Mdy == -Δx
	Mdl la.Vector // subset of Mdy == -Δλ
	Mds la.Vector // subset of Mdy == -Δs

	// affine solution
	R  la.Vector   // residual
	Rx la.Vector   // subset of R
	Rl la.Vector   // subset of R
	Rs la.Vector   // subset of R
	J  *la.Triplet // [ny][ny] Jacobian matrix

	// linear solver
	Lis la.SparseSolver // linear solver
}
LinIpm implements the interior-point methods for linear programming problems
Solve:

	min  cᵀx   s.t.  Aᵀx = b,  x ≥ 0
	 x

or the dual problem:

	max  bᵀλ   s.t.  Aᵀλ + s = c,  s ≥ 0
	 λ
type LineSearch ¶ added in v1.1.0
type LineSearch struct {

	// configuration
	MaxIt     int     // max iterations
	MaxItZoom int     // max iterations in zoom routine
	MaxAlpha  float64 // max 'a'
	MulAlpha  float64 // multiplier to increase 'a'; e.g. 1.5
	Coef1     float64 // "sufficient decrease" coefficient (c1); typically = 1e-4 (Fig 3.3, page 33 of [1])
	Coef2     float64 // "curvature condition" coefficient (c2); typically = 0.1 for CG methods and 0.9 for Newton or quasi-Newton methods (Fig 3.4, page 34 of [1])
	CoefQuad  float64 // coefficient for limiting 'a' in quadratic interpolation in zoom
	CoefCubic float64 // coefficient for limiting 'a' in cubic interpolation in zoom

	// statistics and History (for debugging)
	NumFeval    int    // number of calls to Ffcn (function evaluations)
	NumJeval    int    // number of calls to Jfcn (Jacobian evaluations)
	NumIter     int    // number of iterations from last call to Find
	NumIterZoom int    // number of iterations from last call to zoom
	Jfcn        fun.Vv // vector function of vector: {J} = df/d{x} @ {x}
	// contains filtered or unexported fields
}
LineSearch finds the scalar 'a' that gives a substantial reduction of f({x}+a⋅{u})
REFERENCES:
[1] Nocedal, J. and Wright, S. (2006) Numerical Optimization. Springer Series in Operations Research. 2nd Edition. Springer. 664p
func NewLineSearch ¶ added in v1.1.0
NewLineSearch returns a new LineSearch object
Input:
  ndim -- length(x)
  ffcn -- function y = f({x})
  Jfcn -- Jacobian {J} = df/d{x} @ {x}
func (*LineSearch) F ¶ added in v1.1.0
func (o *LineSearch) F(a float64) float64
F implements f(a) := f({xnew}(a,u)) where {xnew}(a,u) := {x} + a⋅{u}
func (*LineSearch) G ¶ added in v1.1.0
func (o *LineSearch) G(a float64) float64
G implements g(a) = df/da|({xnew}(a,u)) = df/d{xnew}⋅d{xnew}/da where {xnew} == {x} + a⋅{u}
func (*LineSearch) Set ¶ added in v1.1.0
func (o *LineSearch) Set(x, u la.Vector)
Set sets the x and u vectors as required by the F(a) and G(a) functions
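Putting F, G, and Set together; the NewLineSearch argument order follows the Input list above, and the gradient-output-first convention for fun.Vv is an assumption based on the comments in this documentation:

	// hypothetical quadratic objective f(x) = x₀² + x₁² and its gradient
	ffcn := func(x la.Vector) float64 { return x[0]*x[0] + x[1]*x[1] }
	Jfcn := func(g, x la.Vector) { g[0], g[1] = 2*x[0], 2*x[1] }

	ls := opt.NewLineSearch(2, ffcn, Jfcn)
	ls.Set(la.Vector{1, 1}, la.Vector{-1, -1}) // position x and direction u
	fmt.Println(ls.F(0.5), ls.G(0.5))          // f and df/da at {x} + 0.5⋅{u}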
func (*LineSearch) SetParams ¶ added in v1.1.0
func (o *LineSearch) SetParams(params utl.Params)
SetParams sets parameters
Example:

	o.SetParams(utl.NewParams(
		&utl.P{N: "maxitls", V: 10},
		&utl.P{N: "maxitzoom", V: 10},
		&utl.P{N: "maxalpha", V: 100},
		&utl.P{N: "mulalpha", V: 2},
		&utl.P{N: "coef1", V: 1e-4},
		&utl.P{N: "coef2", V: 0.4},
		&utl.P{N: "coefquad", V: 0.1},
		&utl.P{N: "coefcubic", V: 0.2},
	))
type NonLinSolver ¶ added in v1.1.0
type NonLinSolver interface {
	Min(x la.Vector, params utl.Params) (fmin float64) // computes the minimum and updates x @ min
	SetConvParams(maxIt int, ftol, gtol float64)       // sets convergence parameters
	SetUseHistory(useHist bool)                        // sets the use-history parameter
	SetVerbose(verbose bool)                           // sets verbose mode
	AccessHistory() *History                           // gets access to history
}
NonLinSolver solves (unconstrained) nonlinear optimization problems
func GetNonLinSolver ¶ added in v1.1.0
func GetNonLinSolver(kind string, prob *Problem) NonLinSolver
GetNonLinSolver finds a nonlinear solver in the database or panics

	kind -- e.g. "conjgrad", "powell", "graddesc"
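A quick sketch of solving a factory problem through the interface (import paths assumed; the factory problem carries its known solution for checking):

	prob := opt.Factory.SimpleQuadratic2d()
	solver := opt.GetNonLinSolver("powell", prob)
	x := la.Vector{1.5, -0.75}
	fmin := solver.Min(x, nil)                // params may be nil
	fmt.Println(fmin-prob.Fref, x, prob.Xref) // compare with the known solution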
type Powell ¶ added in v1.1.0
type Powell struct {

	// merge properties
	Convergence // auxiliary object to check convergence

	// access
	Umat *la.Matrix // matrix whose columns contain the directions u
	// contains filtered or unexported fields
}
Powell implements the multidimensional minimization by Powell's method (no derivatives required)
NOTE: Check Convergence to see how to set convergence parameters, max iteration number, or to enable and access the history of iterations.

REFERENCES:
[1] Press WH, Teukolsky SA, Vetterling WT, Flannery BP (2007) Numerical Recipes: The Art of Scientific Computing. Third Edition. Cambridge University Press. 1235p.
func NewPowell ¶ added in v1.1.0
NewPowell returns a new multidimensional optimizer using Powell's method (no derivatives required)
func (*Powell) Min ¶ added in v1.1.0
Min solves the minimization problem
Input:
  x -- [ndim] initial starting point (will be modified)
  params -- [may be nil] optional parameters:
    "reuse" > 0 -- use the pre-computed matrix containing the directions as columns;
                   otherwise, set Umat as a diagonal matrix

Output:
  fmin -- minimum f({x}) found
  x -- [given as input] position of the minimum f({x})
type Problem ¶ added in v1.1.0
type Problem struct {
	Ndim int       // dimension of x == len(x)
	Ffcn fun.Sv    // objective function f({x})
	Gfcn fun.Vv    // gradient function df/d{x}|(x)
	Hfcn fun.Mv    // Hessian function d²f/d{x}d{x}|(x)
	Fref float64   // known solution fmin = f({x})
	Xref la.Vector // known solution {x} @ min
}
Problem holds the functions defining an optimization problem
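Beyond the Factory, a Problem can be assembled by hand. A minimal sketch follows; the fun.Sv and fun.Vv signatures are assumed from the field comments above, and Hfcn is left unset here:

	// hypothetical paraboloid f(x) = (x₀-1)² + (x₁+2)², minimum f=0 at (1,-2)
	prob := &opt.Problem{
		Ndim: 2,
		Ffcn: func(x la.Vector) float64 {
			return (x[0]-1)*(x[0]-1) + (x[1]+2)*(x[1]+2)
		},
		Gfcn: func(g, x la.Vector) { // gradient written into g
			g[0] = 2 * (x[0] - 1)
			g[1] = 2 * (x[1] + 2)
		},
		Fref: 0,
		Xref: la.Vector{1, -2},
	}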
func NewQuadraticProblem ¶ added in v1.1.0
NewQuadraticProblem returns a quadratic optimization problem such that f(x) = xᵀ A x