Empirical data analyses often require complete data sets. In the presence of incompletely observed data, methods that generate plausible values (imputations) for the unobserved entries are therefore attractive, because the completed data set can then be analyzed with standard techniques. Accordingly, various imputation techniques have been proposed and evaluated. Popular measures for evaluating these techniques, applied in simulation studies, are based on distances between the true and the imputed values. In this paper we show through a theoretical example and a simulation study that these measures can be misleading: although they equal zero when every imputed value coincides with the true but unobserved value, and are usually larger than zero otherwise, it does not follow that the smaller the value of such a measure, the `closer' the inference based on the imputed data set is to the inference based on the complete data set without missing values. Moreover, since these measures are usually only applied in simulations, the corresponding findings cannot be generalized.
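The phenomenon described above can be sketched in a small, hypothetical simulation (not one from the paper): conditional-mean imputation minimizes the squared distance to the true values, yet it shrinks the variance of the completed data, whereas imputing draws from the conditional distribution scores worse on the distance measure but preserves the variance. All parameter choices below (correlation, sample size, missingness rate) are illustrative assumptions.

```python
import numpy as np

# Hypothetical setup: bivariate normal data with Var(Y) = 1,
# where Y = rho*X + sqrt(1 - rho^2)*eps.
rng = np.random.default_rng(0)
n, rho = 100_000, 0.8
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

# Delete half of the Y values completely at random.
miss = rng.random(n) < 0.5

# Method A: conditional-mean imputation E[Y|X] = rho*X
# (minimizes the expected squared distance to the true values).
y_mean = y.copy()
y_mean[miss] = rho * x[miss]

# Method B: draws from the conditional distribution of Y given X.
y_draw = y.copy()
y_draw[miss] = rho * x[miss] + np.sqrt(1 - rho**2) * rng.standard_normal(miss.sum())

# Distance-based evaluation: RMSE between imputed and true values.
def rmse(imp):
    return np.sqrt(np.mean((imp[miss] - y[miss]) ** 2))

print(f"RMSE     mean imp.: {rmse(y_mean):.3f}   draw imp.: {rmse(y_draw):.3f}")
print(f"Var(Y)   mean imp.: {y_mean.var():.3f}   draw imp.: {y_draw.var():.3f}   true: 1.0")
```

Method A wins clearly on the distance measure, but any analysis of the completed data that involves the variance of Y (e.g., standard errors) is biased under Method A, while Method B, despite its larger RMSE, leaves that inference essentially intact.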