The **k-medoids** algorithm works like k-means, with the important difference that each cluster centre must at all times be one of the input data items (a *medoid*). In the **Voronoi iteration** subtype, each round replaces a cluster's medoid with the cluster member that minimises the total dissimilarity to the other members, which is then used as the cluster centre in the subsequent round. The **Partitioning around medoids** (PAM) subtype instead searches for medoid/non-medoid swaps that reduce the overall clustering cost. The two subtypes are thus different ways of achieving the same goal mathematically.

K-medoids is less sensitive to the presence of outliers than k-means, because medoids minimise sums of dissimilarities rather than sums of squared distances, and the cluster centres are always real data items rather than averages that an outlier can drag away.
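The Voronoi-iteration procedure described above can be sketched as follows. This is a minimal illustration, not a reference implementation; the function and parameter names (`k_medoids`, `dist`, `max_iter`) are illustrative, and any dissimilarity function can be plugged in.

```python
import random

def k_medoids(points, k, dist, max_iter=100, seed=0):
    """Voronoi-iteration k-medoids: alternate assignment and medoid update.

    `points` is a list of data items, `dist` a pairwise dissimilarity
    function. Illustrative sketch; not from any particular library.
    """
    rng = random.Random(seed)
    medoids = rng.sample(points, k)
    for _ in range(max_iter):
        # Assignment step: each point joins the cluster of its nearest medoid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: dist(p, medoids[j]))
            clusters[i].append(p)
        # Update step: the new medoid is the cluster member that minimises
        # the total dissimilarity to the others -- always an input item.
        new_medoids = [
            min(c, key=lambda m: sum(dist(m, q) for q in c)) if c else medoids[i]
            for i, c in enumerate(clusters)
        ]
        if new_medoids == medoids:  # no medoid moved: converged
            break
        medoids = new_medoids
    return medoids, clusters
```

Because the update step only ever selects existing data items, an extreme outlier can at worst become its own medoid; it cannot pull a centre into empty space the way an averaged mean can.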

- alias
- subtype
- Partitioning around medoids
- Voronoi iteration
- has functional building block
- FBB_Classification
- has input data type
- IDT_Vector of quantitative variables
- has internal model
- has output data type
- ODT_Vector of quantitative variables
- ODT_Classification
- has learning style
- LST_Unsupervised
- has parametricity
- PRM_Nonparametric with hyperparameter(s)
- has relevance
- REL_Relevant
- uses
- sometimes supports
- ALG_Nearest Neighbour
- mathematically similar to