Abstract: Traditional machine learning and pattern recognition techniques are intimately linked to the notion of feature spaces. Adopting this view, each object is described by a vector of numerical attributes and is therefore mapped to a point in a Euclidean (geometric) vector space, so that the distances between the points reflect the observed (dis)similarities between the respective objects. This kind of representation is attractive because geometric spaces offer powerful analytical as well as computational tools that are simply not available in other representations. Indeed, classical machine learning methods are tightly related to geometric concepts, and numerous powerful tools have been developed over the last few decades, starting from the maximum likelihood method in the 1920s, to perceptrons in the 1960s, and, more recently, to kernel machines and deep learning architectures.