predict.grridge {GRridge} | R Documentation |
Returns predictions for new samples from a grridge object
## S3 method for class 'grridge'
predict(object, datanew, printpred = FALSE, dataunpennew = NULL,
        responsetest = NULL, recalibrate = FALSE, ...)
object
A model object resulting from the grridge function.
datanew
Vector or data frame. Contains the new data. For a data frame: columns are samples, rows are variables (features).
printpred
Boolean. Should the predictions be printed on the screen?
dataunpennew
Vector or data frame. Optional new data for unpenalized variables. NOTE: columns are covariates, rows are samples.
responsetest
Factor, numeric, binary or survival. Response values of the test samples. The number of response values should equal the number of test samples (columns of datanew).
recalibrate
Boolean. Should the prediction model be recalibrated on the test samples? Only implemented for logistic and linear regression with only penalized covariates.
...
No further arguments are used.
This function returns predictions of the response using the grridge output. It should be applied to samples NOT used for fitting the models. About recalibrate: we noticed that recalibration of the linear predictor by a simple regression (with intercept and slope) can improve predictive performance, e.g. in terms of Brier score or mean squared error (but not in terms of AUC, which is rank-based). We recommend using it for sufficiently large test sets (say >= 25 samples), in particular when one suspects that the test set may have somewhat different properties than the training set.
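The recalibration idea described above can be sketched in plain R. This is a hypothetical illustration with simulated data, not GRridge internals: the test response is regressed on the model's linear predictor with an intercept and slope, and the refitted model supplies the recalibrated probabilities.

```r
# Hypothetical sketch of intercept-and-slope recalibration (simulated data).
set.seed(1)
lp <- rnorm(30)                                # linear predictors for 30 test samples
y  <- rbinom(30, 1, plogis(0.5 + 1.5 * lp))    # binary test response

recal <- glm(y ~ lp, family = binomial)        # simple intercept + slope recalibration
phat  <- predict(recal, type = "response")     # recalibrated probabilities

brier <- mean((phat - y)^2)                    # Brier score of recalibrated predictions
```

With a linear (Gaussian) response the same sketch applies with `lm(y ~ lp)` and the mean squared error in place of the Brier score.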
A matrix containing the predictions from all models available in grridge.
Mark A. van de Wiel
Mark van de Wiel, Tonje Lien, Wina Verlaat, Wessel van Wieringen, Saskia Wilting. (2016). Better prediction by use of co-data: adaptive group-regularized ridge regression. Statistics in Medicine, 35(3), 368-81.
Cross-validated predictions: grridgeCV.
Examples: grridge.
#data(dataFarkas)
#firstPartition <- CreatePartition(CpGannFarkas)
#sdsF <- apply(datcenFarkas,1,sd)
#secondPartition <- CreatePartition(sdsF,decreasing=FALSE, uniform=TRUE, grsize=5000)

## Concatenate two partitions
#partitionsFarkas <- list(cpg=firstPartition, sds=secondPartition)

## A list of monotone functions from the corresponding partition
#monotoneFarkas <- c(FALSE,TRUE)

## Hold out the first two samples
#testset <- datcenFarkas[,1:2]
#resptest <- respFarkas[1:2]
#trainingset <- datcenFarkas[,-(1:2)]
#resptraining <- respFarkas[-(1:2)]

#grFarkas <- grridge(trainingset,resptraining,optl=5.680087,
#                    partitionsFarkas,monotone=monotoneFarkas)

## Prediction of the grridge model for the test samples.
## Standardize variables in testset, because by default standardizeX = TRUE in the grridge function.
## Here, the test set is small, so we use the summaries of the training set. If the test set
## is large, one may opt to standardize with respect to the test set itself.
#sds <- apply(trainingset,1,sd)
#sds2 <- sapply(sds,function(x) max(x,10^{-5}))
#teststd <- (testset-apply(trainingset,1,mean))/sds2

#yhat <- predict.grridge(grFarkas,teststd)
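The standardization step in the example above can be run in isolation. The sketch below uses simulated matrices (the dataFarkas objects are not needed): test-set rows (features) are centered and scaled with the training set's means and standard deviations, flooring tiny standard deviations at 10^-5 as in the example.

```r
# Generic sketch of standardizing a test set with training-set summaries
# (simulated data; rows are features, columns are samples).
set.seed(2)
train <- matrix(rnorm(50 * 10), nrow = 50)   # 50 features x 10 training samples
test  <- matrix(rnorm(50 * 2),  nrow = 50)   # 50 features x 2 test samples

mns <- apply(train, 1, mean)                 # per-feature training means
sds <- pmax(apply(train, 1, sd), 1e-5)       # per-feature sds, floored at 10^-5

# Column-wise recycling matches features because rows index features here.
teststd <- (test - mns) / sds
```

The resulting `teststd` has the same features-by-samples layout that `predict.grridge` expects in `datanew`.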