Wide differences in methods

The methods used vary widely. The differences concern the definition of quality, its criteria and indicators, the measurement methods and the presentation format; they result in very different ranking formulae, which in turn produce very different results.
Rankings are based on weighted indicators

The media seek to produce league tables assigning a rank to each university: the higher the rank, the better the quality, and vice versa. How is this achieved? A definition of the quality of universities is developed and its various aspects are measured using indicators. The overall score is obtained by weighting the score on each indicator. Each aspect of quality, such as the impact of research or the quality of teaching for the THES, is measured by specific indicators, such as a citation index drawn from the Thomson Scientific database, or the staff–student ratio. In the THES case, each indicator is given the same weighting factor: 20 %. The same grid is applied identically to all universities.
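The weighted-sum computation described above can be sketched in a few lines of Python. The indicator names, the score scale and the sample scores are invented for illustration; only the equal 20 % weights follow the THES example in the text:

```python
# Hypothetical indicator names and scores; equal 20% weights as described
# for the THES. Purely illustrative, not the actual THES methodology.

indicators = ["peer_review", "citations", "staff_student_ratio",
              "international_staff", "international_students"]
weights = {name: 0.20 for name in indicators}  # same weighting factor: 20 %

def overall_score(scores: dict) -> float:
    """Overall score = weighted sum of per-indicator scores (0-100 scale)."""
    return sum(weights[name] * scores[name] for name in indicators)

uni = {"peer_review": 80, "citations": 65, "staff_student_ratio": 70,
       "international_staff": 55, "international_students": 60}
print(overall_score(uni))  # 66.0
```

The overall score is thus entirely determined by two editorial choices: which indicators enter the grid, and what weight each receives.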

What is the problem with this method?

The problem with this method is that the definition of the quality of a university, the criteria and indicators used to measure it, and the weightings adopted vary enormously from one ranking to another, and with them the results obtained. The results cannot reasonably be interpreted without knowing what was measured and how. Very different outcomes are obtained if the indicator of teaching quality is the number of alumni who have won a Nobel Prize (Shanghai ranking) or the staff–student ratio (THES ranking); or if research is given a weight of 20 % (THES) or 40 % (Shanghai). In addition, the definition of quality and the ways in which it is measured are chosen by the organization producing the ranking: for the media rankings, the press organ itself. There is no clear explanation of why a given definition was adopted, what its rationale is, where the decision originated, or how openly and reflectively it was made – although all of this has major implications for measuring the quality of universities.
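The sensitivity to weighting choices can be shown with a minimal sketch: two hypothetical universities, scored on two indicators, swap places when only the weight given to research changes from 20 % to 40 %, as in the THES/Shanghai contrast above. All names and numbers are invented:

```python
# Two invented universities scored on two invented indicators (0-100).
scores = {
    "Univ A": {"teaching": 90, "research": 40},
    "Univ B": {"teaching": 70, "research": 80},
}

def rank(research_weight: float) -> list:
    """Order universities by a two-indicator weighted sum."""
    teaching_weight = 1.0 - research_weight
    total = {u: teaching_weight * s["teaching"] + research_weight * s["research"]
             for u, s in scores.items()}
    return sorted(total, key=total.get, reverse=True)

print(rank(0.20))  # ['Univ A', 'Univ B']  (research weighted at 20 %)
print(rank(0.40))  # ['Univ B', 'Univ A']  (research weighted at 40 %)
```

Identical data, a different weighting, a reversed league table: the ranking measures the formula as much as the universities.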
The nature and quality of the data used are highly variable

Moreover, the nature and quality of the data used have little in common. The Shanghai ranking is based on objective, quantifiable data. The THES, on the other hand, makes extensive use of subjective expert assessments, whose reliability and accuracy are unclear – although both would be important to know. This definition of quality is then applied to all universities, whatever their missions and goals, and the results are presented in a score table as if they derived from precise measurements. But such precision is in practice unattainable, and therefore illusory: it is not realistic to claim to measure the quality of academic work properly and accurately in all institutions and for all relevant stakeholders.

Principles for more satisfactory methods

Some ranking methods seem more appropriate than others. They are based on a number of principles, such as:

  • ranking disciplines or departments rather than entire institutions;
  • a multidimensional approach to the quality of a university instead of a single pattern, taking into account the diversity of institutional forms, missions and goals, as well as the linguistic and other particularities of each institution;
  • a separate presentation of each indicator, measured separately, allowing users to customize the ranking to their own needs rather than relying on an overall score;
  • grouping into bands (leading, middle, trailing) rather than listing in sequential order.
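The last principle, publishing results in bands rather than as an ordered list, can be sketched as follows; the band thresholds and the scores are invented for illustration:

```python
# Hypothetical thresholds on a 0-100 scale; real banded rankings would
# justify their cut-offs, which is exactly the transparency at stake.
def band(score: float) -> str:
    if score >= 75:
        return "leading group"
    if score >= 50:
        return "middle group"
    return "trailing group"

results = {"Univ A": 82.0, "Univ B": 74.0, "Univ C": 41.0}
groups = {}
for uni, score in results.items():
    groups.setdefault(band(score), []).append(uni)
print(groups)
# {'leading group': ['Univ A'], 'middle group': ['Univ B'],
#  'trailing group': ['Univ C']}
```

Within a band no false precision is claimed: universities whose scores differ by less than the method can reliably measure are simply reported as peers.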