nlminb {stats}    R Documentation

Description:

Unconstrained and box-constrained optimization using PORT routines.

Usage:
nlminb(start, objective, gradient = NULL, hessian = NULL, ...,
scale = 1, control = list(), lower = -Inf, upper = Inf)
Arguments:

start: numeric vector, initial values for the parameters to be optimized.

objective: Function to be minimized. Must return a scalar value (possibly NA/Inf). The first argument to objective is the vector of parameters to be optimized, whose initial values are supplied through start. Further arguments (fixed during the course of the optimization) may be specified as well (see ...).

gradient: Optional function that takes the same arguments as objective and evaluates the gradient of objective at its first argument. Must return a vector as long as start.

hessian: Optional function that takes the same arguments as objective and evaluates the Hessian of objective at its first argument. Must return a square matrix of order length(start). Only the lower triangle is used.

...: Further arguments to be supplied to objective.

scale: See PORT documentation (or leave alone).

control: A list of control parameters. See below for details.

lower, upper: vectors of lower and upper bounds, replicated to be as long as start. If unspecified, all parameters are assumed to be unconstrained.
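A minimal sketch of how these arguments fit together (the quadratic objective, its gradient, and the extra target argument are invented purely for illustration):

obj  <- function(p, target) sum((p - target)^2)  # 'target' is fixed during optimization, passed via ...
grad <- function(p, target) 2 * (p - target)     # takes the same arguments as 'obj'
nlminb(start = c(a = 0, b = 0), objective = obj, gradient = grad,
       lower = -5, upper = 5, target = c(1, 2))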
Details:

Any names of start are (as from R 2.8.1) passed on to objective and, where applicable, to gradient and hessian. The parameter vector will be coerced to double.

The PORT documentation is at http://netlib.bell-labs.com/cm/cs/cstr/153.pdf.
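For instance (an illustrative sketch only; the objective below is invented for this example), names given to start are visible on the parameter vector inside the objective:

obj <- function(p) {
  stopifnot(identical(names(p), c("mu", "sigma")))  # names of 'start' are carried along
  (p[["mu"]] - 3)^2 + (p[["sigma"]] - 1)^2
}
nlminb(c(mu = 0, sigma = 0), obj)$par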
Value:

A list with components:

par: The best set of parameters found.

objective: The value of objective corresponding to par.

convergence: An integer code. 0 indicates successful convergence.

message: A character string giving any additional information returned by the optimizer, or NULL. For details, see the PORT documentation.

iterations: Number of iterations performed.

evaluations: Number of objective function and gradient function evaluations.
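For example (a minimal sketch; the quadratic objective is illustrative only), these components can be read off the returned list:

fit <- nlminb(c(1, 1), function(p) sum((p - 2)^2))
fit$par          # best parameters found
fit$objective    # objective value at fit$par
fit$convergence  # 0 indicates successful convergence
fit$message      # any extra text returned by the PORT optimizer
fit$evaluations  # counts of objective and gradient evaluations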
Control parameters:

Possible names in the control list and their default values are:

eval.max: Maximum number of evaluations of the objective function allowed. Defaults to 200.

iter.max: Maximum number of iterations allowed. Defaults to 150.

trace: The value of the objective function and the parameters is printed every trace'th iteration. Defaults to 0, which indicates that no trace information is to be printed.

abs.tol: Absolute tolerance. Defaults to 1e-20.

rel.tol: Relative tolerance. Defaults to 1e-10.

x.tol: X tolerance. Defaults to 1.5e-8.

step.min: Minimum step size. Defaults to 2.2e-14.
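As an illustration (a sketch only; the objective is invented for this purpose), control parameters are passed as a named list:

nlminb(c(1, 1), function(p) sum((p - 2)^2),
       control = list(trace = 1, iter.max = 50, rel.tol = 1e-12))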
Author(s):

(of R port) Douglas Bates and Deepayan Sarkar.

References:

http://netlib.bell-labs.com/netlib/port/

See Also:

optim and nlm.

optimize for one-dimensional minimization and constrOptim for constrained optimization.
Examples:

x <- rnbinom(100, mu = 10, size = 10)
hdev <- function(par) {
-sum(dnbinom(x, mu = par[1], size = par[2], log = TRUE))
}
nlminb(c(9, 12), hdev)
nlminb(c(20, 20), hdev, lower = 0, upper = Inf)
nlminb(c(20, 20), hdev, lower = 0.001, upper = Inf)
## slightly modified from the S-PLUS help page for nlminb
# this example minimizes a sum of squares with known solution y
sumsq <- function( x, y) {sum((x-y)^2)}
y <- rep(1,5)
x0 <- rnorm(length(y))
nlminb(start = x0, sumsq, y = y)
# now use bounds with a y that has some components outside the bounds
y <- c( 0, 2, 0, -2, 0)
nlminb(start = x0, sumsq, lower = -1, upper = 1, y = y)
# try using the gradient
sumsq.g <- function(x,y) 2*(x-y)
nlminb(start = x0, sumsq, sumsq.g,
lower = -1, upper = 1, y = y)
# now use the hessian, too
sumsq.h <- function(x,y) diag(2, nrow = length(x))
nlminb(start = x0, sumsq, sumsq.g, sumsq.h,
lower = -1, upper = 1, y = y)
## Rest lifted from optim help page
fr <- function(x) { ## Rosenbrock Banana function
x1 <- x[1]
x2 <- x[2]
100 * (x2 - x1 * x1)^2 + (1 - x1)^2
}
grr <- function(x) { ## Gradient of 'fr'
x1 <- x[1]
x2 <- x[2]
c(-400 * x1 * (x2 - x1 * x1) - 2 * (1 - x1),
200 * (x2 - x1 * x1))
}
nlminb(c(-1.2,1), fr)
nlminb(c(-1.2,1), fr, grr)
flb <- function(x)
{ p <- length(x); sum(c(1, rep(4, p-1)) * (x - c(1, x[-p])^2)^2) }
## 25-dimensional box constrained
## par[24] is *not* at boundary
nlminb(rep(3, 25), flb,
lower=rep(2, 25),
upper=rep(4, 25))