perf_eval {diffuStats}    R Documentation
Function perf_eval

Description:

Function perf_eval directly compares a desired output with the scores
from diffusion. It handles the possible shapes of the scores (named
vector, matrix, list of matrices; sketched below) and computes the
desired metrics.
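For orientation, the three accepted shapes can be built as plain R
objects. This is a sketch with made-up node names and values, not
package output:

# 1. Named numeric vector: one score per node
v <- c(A = 0.9, B = 0.4, C = 0.1)

# 2. Matrix: rownames are nodes, colnames are independent score sets
m <- cbind(set1 = c(A = 0.9, B = 0.4, C = 0.1),
           set2 = c(A = 0.2, B = 0.7, C = 0.5))

# 3. Named list of such matrices
l <- list(background1 = m, background2 = m)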
Usage:

perf_eval(prediction, validation, metric = list(auc = metric_fun(curve = "ROC")))
Arguments:

prediction: smoothed scores; either a named numeric vector, a
    column-wise matrix whose rownames are nodes and colnames are
    different scores, or a named list of such matrices.

validation: target scores to which the smoothed scores will be
    compared. Must have the same format as the input scores, although
    the number of rows may vary; only the matching rows yield a
    performance measure.

metric: named list of metrics to apply. Each metric should accept the
    form f(actual, predicted); see the sketch after this table.
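A metric list can mix the package's metric_fun helper with a
user-defined function of the form f(actual, predicted). A minimal
sketch; top10_precision is a hypothetical metric, not part of
diffuStats, and the "PRC" curve option is assumed to be accepted by
metric_fun:

library(diffuStats)

# Hypothetical metric: precision among the 10 highest-scoring nodes
top10_precision <- function(actual, predicted) {
    mean(actual[order(predicted, decreasing = TRUE)[1:10]])
}

metric_list <- list(
    auc   = metric_fun(curve = "ROC"),  # area under the ROC curve
    auprc = metric_fun(curve = "PRC"),  # assumed: precision-recall curve
    top10 = top10_precision             # custom f(actual, predicted)
)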
Value:

A data frame containing the metrics for each comparable pair of
prediction and validation scores.
Examples:

# Using a matrix with four sets of scores,
# called Single, Row, Small_sample, Large_sample
library(diffuStats)
data(graph_toy)
diff <- diffuse(
    graph = graph_toy,
    scores = graph_toy$input_mat,
    method = "raw")
df_perf <- perf_eval(
    prediction = diff,
    validation = graph_toy$input_mat)
df_perf
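Because the scores may also be plain named vectors, a single column
can be evaluated on its own. A sketch reusing the objects from the
example above; the column name Single comes from the toy data:

# Compare one score set as a named vector (names carry the node ids)
pred_vec <- diff[, "Single"]
val_vec  <- graph_toy$input_mat[, "Single"]
perf_eval(prediction = pred_vec, validation = val_vec)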