
gmean() estimates the Geometric Mean score for a nominal/categorical predicted-observed dataset.

Usage

gmean(
  data = NULL,
  obs,
  pred,
  pos_level = 2,
  atom = FALSE,
  tidy = FALSE,
  na.rm = TRUE
)

Arguments

data

(Optional) data frame containing the observed and predicted values.

obs

Vector with observed values (character | factor).

pred

Vector with predicted values (character | factor).

pos_level

Integer indicating, for binary cases, the order (1|2) of the level corresponding to the positive class. Generally, the positive level is the second (2), since in alpha-numeric order the most common pairs are (Negative | Positive), (0 | 1), and (FALSE | TRUE). Default: 2.

atom

Logical operator (TRUE/FALSE) to decide whether the estimate is made for each class (atom = TRUE) or at a global level (atom = FALSE). Default: FALSE. When the dataset is binomial, atom does not apply.

tidy

Logical operator (TRUE/FALSE) to decide the type of return: TRUE returns a data.frame, FALSE returns a list. Default: FALSE.

na.rm

Logical argument to remove rows with missing values (NA). Default: na.rm = TRUE.

Value

An object of class numeric within a list (if tidy = FALSE) or within a data frame (if tidy = TRUE).
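
For illustration, a minimal sketch (using short hypothetical two-class vectors and assuming the package is already loaded) of how tidy switches the return type:

obs_vec  <- c("True", "False", "True", "True", "False")   # hypothetical observed labels
pred_vec <- c("True", "True", "True", "False", "False")   # hypothetical predicted labels
gmean(obs = obs_vec, pred = pred_vec)               # list with element $gmean
gmean(obs = obs_vec, pred = pred_vec, tidy = TRUE)  # one-row data.frame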

Details

The gmean is a metric especially useful for imbalanced classes because it measures the balance between classification performance on the major (over-represented) and the minor (under-represented) classes. It is therefore particularly useful when the number of observations belonging to each class is uneven.

The gmean score is equivalent to the square root of the product of recall (a.k.a. sensitivity) and specificity.

\(gmean = \sqrt{recall \times specificity}\)

It is bounded between 0 and 1. The closer to 1 the better the classification performance, while zero represents the worst.

For the formula and more details, see the online documentation.
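
As a hand-check of the formula above, recall and specificity can be computed from hypothetical 2x2 confusion-matrix counts and combined with a square root (a sketch, not package code):

TP <- 40; FN <- 10   # hypothetical counts for the positive class
FP <- 20; TN <- 30
recall      <- TP / (TP + FN)    # sensitivity
specificity <- TN / (TN + FP)
sqrt(recall * specificity)       # gmean = sqrt(recall * specificity)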

References

De Diego, I.M., Redondo, A.R., Fernández, R.R., Navarro, J., Moguerza, J.M. (2022). General Performance Score for classification problems. _Appl. Intell._ doi:10.1007/s10489-021-03041-7

Examples

# \donttest{
set.seed(123)
# Two-class
binomial_case <- data.frame(labels = sample(c("True","False"), 100, replace = TRUE),
                            predictions = sample(c("True","False"), 100, replace = TRUE))
# Get gmean estimate for two-class case
gmean(data = binomial_case, obs = labels, pred = predictions)
#> $gmean
#> [1] 0.4939454
#> 

# }
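
A further sketch for a multiclass case, assuming (as the atom argument describes) that gmean() also handles multinomial data; the class labels and seed here are hypothetical:

set.seed(321)
# Multiclass
multinomial_case <- data.frame(labels = sample(c("A", "B", "C"), 100, replace = TRUE),
                               predictions = sample(c("A", "B", "C"), 100, replace = TRUE))
# Global gmean estimate across the three classes
gmean(data = multinomial_case, obs = labels, pred = predictions)
# Per-class estimates, returned as a data.frame
gmean(data = multinomial_case, obs = labels, pred = predictions, atom = TRUE, tidy = TRUE)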