Cross-validation wrapper around mlxs_glmnet() that mirrors the core
glmnet::cv.glmnet() workflow for the families currently supported by
mlxs_glmnet().
Usage
mlxs_cv_glmnet(
x,
y,
weights = NULL,
offset = NULL,
lambda = NULL,
type.measure = c("default", "mse", "deviance", "class", "mae", "auc", "C"),
nfolds = 10,
foldid = NULL,
alignment = c("lambda", "fraction"),
grouped = TRUE,
keep = FALSE,
parallel = FALSE,
gamma = c(0, 0.25, 0.5, 0.75, 1),
relax = FALSE,
trace.it = 0,
family = mlxs_gaussian(),
...
)
Arguments
- x
Numeric matrix of predictors (observations in rows).
- y
Numeric response vector.
- weights
Optional observation weights. Currently unsupported.
- offset
Optional offset. Currently unsupported.
- lambda
Optional decreasing lambda sequence. If NULL, the full-data fit chooses the path and the same path is reused inside each fold.
- type.measure
Loss used to score the holdout predictions.
- nfolds
Number of folds.
- foldid
Optional integer vector giving the fold assignment for each observation.
- alignment
Alignment mode. Only "lambda" is currently supported.
- grouped
Should cross-validation be aggregated fold-by-fold? Only TRUE is currently supported.
- keep
Should out-of-fold predictions be stored?
- parallel
Logical. Parallel refits are currently unsupported.
- gamma, relax
Relaxed fits are currently unsupported.
- trace.it
Progress tracing. Currently unsupported.
- family
MLX-aware family object, e.g.
mlxs_gaussian() or mlxs_binomial().
- ...
Additional arguments passed to
mlxs_glmnet(), such as alpha, nlambda, lambda_min_ratio, standardize, intercept, maxit, and tol.
Details
The full-data fit defines a master lambda path. Each fold is then refit on the same lambda values and scored on its holdout set.
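The refit-and-score procedure can be sketched as follows. This is an illustrative simplification, not the actual implementation: it assumes the mlxs_glmnet() and predict() interfaces described on this page, hard-codes the Gaussian "mse" loss, and omits weights, grouping, and error handling.

```r
# Sketch of the cross-validation loop (illustration only; "mse" loss, Gaussian case).
master <- mlxs_glmnet(x, y, family = family)        # full-data fit defines the master lambda path
lambda <- master$lambda
foldid <- sample(rep(seq_len(nfolds), length.out = nrow(x)))

cvm <- matrix(NA_real_, nfolds, length(lambda))
for (k in seq_len(nfolds)) {
  holdout <- foldid == k
  # Refit on the training folds, reusing the same lambda values as the master path.
  fit_k <- mlxs_glmnet(x[!holdout, , drop = FALSE], y[!holdout],
                       lambda = lambda, family = family)
  pred_k <- predict(fit_k, newx = x[holdout, , drop = FALSE])
  # Score the holdout set at every lambda (one column per lambda).
  cvm[k, ] <- colMeans((y[holdout] - pred_k)^2)
}

cv_mean <- colMeans(cvm)                 # mean CV loss per lambda
lambda_min <- lambda[which.min(cv_mean)] # lambda minimizing the CV loss
```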
Current limitations relative to glmnet::cv.glmnet():
- only Gaussian and binomial families are supported
- weights, offset, alignment != "lambda", grouped = FALSE, parallel = TRUE, relax = TRUE, and non-zero trace.it are not implemented
- type.measure = "auc" and type.measure = "C" are not implemented
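A minimal call mirroring the glmnet::cv.glmnet() workflow might look like the following (a sketch on simulated data, assuming the package exporting mlxs_cv_glmnet() is attached):

```r
set.seed(1)
x <- matrix(rnorm(100 * 20), 100, 20)
y <- x[, 1] - 2 * x[, 2] + rnorm(100)

# Gaussian CV fit with the default 10 folds, scored by mean squared error.
cvfit <- mlxs_cv_glmnet(x, y, family = mlxs_gaussian(), type.measure = "mse")

# Binomial response scored by misclassification error.
yb <- rbinom(100, 1, plogis(x[, 1]))
cvfit_b <- mlxs_cv_glmnet(x, yb, family = mlxs_binomial(), type.measure = "class")
```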