With a formula, smooth a variable in an sspm dataset. See Details for more explanation.
spm_smooth(
sspm_object,
formula,
boundaries,
keep_fit = TRUE,
predict = TRUE,
...
)
# S4 method for class 'sspm_dataset,formula,sspm_discrete_boundary'
spm_smooth(
sspm_object,
formula,
boundaries,
keep_fit = TRUE,
predict = TRUE,
...
)

sspm_object: [sspm_dataset] An object of class sspm_dataset.
formula: [formula] A formula definition of the form response ~ smoothing_terms + ...
boundaries: [sspm_boundary] An object of class sspm_discrete_boundary.
keep_fit: [logical] Whether or not to keep the fitted values and model (defaults to TRUE; set to FALSE to reduce memory footprint).
predict: [logical] Whether or not to generate the smoothed predictions (necessary to fit the final SPM model; defaults to TRUE).
...: Arguments passed on to mgcv::bam (a standalone illustration follows this list):
family: This is a family object specifying the distribution and link to use in fitting etc. See glm and family for more details. The extended families listed in family.mgcv can also be used.
data: A data frame or list containing the model response variable and covariates required by the formula. By default the variables are taken from environment(formula): typically the environment from which gam is called.
weights: Prior weights on the contribution of the data to the log likelihood. Note that a weight of 2, for example, is equivalent to having made exactly the same observation twice. If you want to reweight the contributions of each datum without changing the overall magnitude of the log likelihood, then you should normalize the weights (e.g. weights <- weights/mean(weights)).
subset: An optional vector specifying a subset of observations to be used in the fitting process.
na.action: A function which indicates what should happen when the data contain NAs. The default is set by the na.action setting of options, and is na.fail if that is unset. The "factory-fresh" default is na.omit.
offset: Can be used to supply a model offset for use in fitting. Note that this offset will always be completely ignored when predicting, unlike an offset included in formula (this used to conform to the behaviour of lm and glm).
method: The smoothing parameter estimation method. "GCV.Cp" to use GCV for unknown scale parameter and Mallows' Cp/UBRE/AIC for known scale. "GACV.Cp" is equivalent, but using GACV in place of GCV. "REML" for REML estimation, including of unknown scale, "P-REML" for REML estimation, but using a Pearson estimate of the scale. "ML" and "P-ML" are similar, but using maximum likelihood in place of REML. The default "fREML" uses fast REML computation.
control: A list of fit control parameters to replace defaults returned by gam.control. Any control parameters not supplied stay at their default values.
select: Should selection penalties be added to the smooth effects, so that they can in principle be penalized out of the model? See gamma to increase penalization. Has the side effect that smooths no longer have a fixed effect component (improper prior from a Bayesian perspective), allowing REML comparison of models with the same fixed effect structure.
scale: If this is positive then it is taken as the known scale parameter. Negative signals that the scale parameter is unknown. 0 signals that the scale parameter is 1 for Poisson and binomial and unknown otherwise. Note that (RE)ML methods can only work with scale parameter 1 for the Poisson and binomial cases.
gamma: Increase above 1 to force smoother fits. gamma is used to multiply the effective degrees of freedom in the GCV/UBRE/AIC score (so log(n)/2 is BIC-like). n/gamma can be viewed as an effective sample size, which allows it to play a similar role for RE/ML smoothing parameter estimation.
knots: An optional list containing user-specified knot values to be used for basis construction. For most bases the user simply supplies the knots to be used, which must match up with the k value supplied (note that the number of knots is not always just k). See tprs for what happens in the "tp"/"ts" case. Different terms can use different numbers of knots, unless they share a covariate.
sp: A vector of smoothing parameters can be provided here. Smoothing parameters must be supplied in the order that the smooth terms appear in the model formula. Negative elements indicate that the parameter should be estimated, and hence a mixture of fixed and estimated parameters is possible. If smooths share smoothing parameters then length(sp) must correspond to the number of underlying smoothing parameters. Note that discrete=TRUE may result in re-ordering of variables in tensor product smooths for improved efficiency, and sp must then be supplied in the re-ordered order.
min.sp: Lower bounds can be supplied for the smoothing parameters. Note that if this option is used then the smoothing parameters full.sp, in the returned object, will need to be added to what is supplied here to get the smoothing parameters actually multiplying the penalties. length(min.sp) should always be the same as the total number of penalties (so it may be longer than sp, if smooths share smoothing parameters).
paraPen: An optional list specifying any penalties to be applied to parametric model terms. gam.models explains more.
chunk.size: The model matrix is created in chunks of this size, rather than ever being formed whole. Reset to 4*p if chunk.size < 4*p, where p is the number of coefficients.
rho: An AR1 error model can be used for the residuals (based on data frame order) of Gaussian-identity link models. This is the AR1 correlation parameter. Standardized residuals (approximately uncorrelated under the correct model) are returned in std.rsd if non-zero. Also usable with other models when discrete=TRUE, in which case the AR model is applied to the working residuals and corresponds to a GEE approximation.
AR.start: Logical variable of the same length as the data, TRUE at the first observation of an independent section of AR1 correlation. The very first observation in the data frame does not need this. If NULL then there are no breaks in AR1 correlation.
discrete: With method="fREML" it is possible to discretize covariates for storage and efficiency reasons. If discrete is TRUE, or a number, or a vector of numbers (one for each smoother term), then discretization happens. If numbers are supplied they give the number of discretization bins. Parametric terms use the maximum number specified.
cluster: bam can compute the computationally dominant QR decomposition in parallel using parLapply from the parallel package, if it is supplied with a cluster on which to do this (a cluster here can be some cores of a single machine). See details and example code.
nthreads: Number of threads to use for non-cluster computation (e.g. combining results from cluster nodes). If NA, set to max(1, length(cluster)). See details.
gc.level: To keep the memory footprint down, it can help to call the garbage collector often, but this takes a substantial amount of time. Setting this to zero means that garbage collection only happens when R decides it should. Setting it to 2 gives frequent garbage collection; 1 is in between. Not as much of a problem as it used to be, but can really matter for very large datasets.
use.chol: By default bam uses a very stable QR update approach to obtaining the QR decomposition of the model matrix. For well-conditioned models an alternative accumulates the crossproduct of the model matrix and then finds its Choleski decomposition at the end. This is somewhat more efficient, computationally.
samfrac: For generalized additive models with very large sample sizes, the number of iterations needed for the model fit can be reduced by first fitting a model to a random sample of the data and using the results to supply starting values. This initial fit is run with sloppy convergence tolerances, so is typically very low cost. samfrac is the sampling fraction to use; 0.1 is often reasonable.
coef: Initial values for model coefficients.
drop.unused.levels: By default unused levels are dropped from factors before fitting. For some smooths involving factor variables you might want to turn this off. Only do so if you know what you are doing.
G: If not NULL then this should be the object returned by a previous call to bam with fit=FALSE. Causes all other arguments to be ignored except sp, chunk.size, gamma, nthreads, cluster, rho, gc.level, samfrac, use.chol, method and scale (if >0).
fit: If FALSE then the model is set up for fitting but not estimated, and an object is returned, suitable for passing as the G argument to bam.
drop.intercept: Set to TRUE to force the model to really not have a constant in the parametric model part, even with factor variables present.
in.out: If supplied then this is a two-item list of initial values: sp gives initial smoothing parameter estimates and scale the initial scale parameter estimate (set to 1 if the family does not have one).
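For orientation, here is a minimal standalone sketch of how several of these forwarded arguments behave in a direct mgcv::bam() call, using mgcv's own simulated data rather than sspm objects; the same argument values can be passed to spm_smooth() through ...:

library(mgcv)
set.seed(1)
dat <- gamSim(1, n = 2000, dist = "normal")  # simulated example data from mgcv
fit <- bam(y ~ s(x0) + s(x1) + s(x2) + s(x3),
           data = dat,
           family = gaussian(),
           method = "fREML",   # fast REML, the bam default
           discrete = TRUE,    # discretize covariates for speed
           select = TRUE,      # add selection penalties to the smooths
           gamma = 1.2)        # mildly force smoother fits
summary(fit)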
An updated sspm_dataset.
This function allows the user to specify a model formula for a given discrete sspm
object. The formula makes use of the specific smoothing terms smooth_time(),
smooth_space() and smooth_space_time(), and can also contain fixed
effects and custom smooths.
if (FALSE) { # \dontrun{
biomass_smooth <- biomass_dataset %>%
  spm_smooth(weight_per_km2 ~ sfa + smooth_time(by = sfa) +
               smooth_space() +
               smooth_space_time(),
             boundaries = bounds_voronoi,
             family = tw)
} # }
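A hedged variant of the example above, assuming the same biomass_dataset and bounds_voronoi objects: keep_fit = FALSE drops the stored fit to reduce memory, and the method and discrete arguments are forwarded to mgcv::bam() through ....

if (FALSE) { # \dontrun{
biomass_smooth_lean <- biomass_dataset %>%
  spm_smooth(weight_per_km2 ~ smooth_time() + smooth_space(),
             boundaries = bounds_voronoi,
             keep_fit = FALSE,   # discard fitted values/model to save memory
             family = tw,
             method = "fREML",   # forwarded to mgcv::bam
             discrete = TRUE)    # forwarded to mgcv::bam
} # }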