1. What are the features of this ranking system?
The features of this ranking system are that quality indicators account for 75% of the score; that it is the first ranking system to employ the h-index, an indicator reflecting both the quality and quantity of research papers; and that 50% of the score represents a university’s short-term progress in research. It should be noted that this ranking system is based exclusively on the qualitative and quantitative performance of scientific papers. It does not assess overall university performance in teaching, research, and administration. It also de-emphasizes performance indices used in other ranking systems that represent subjectively perceived reputations and extraordinary achievements. As such, this ranking system serves as an objective and informative benchmarking tool for research universities in transitional and newly developed countries in assessing their achievement in scientific research.
2. What universities are the targets of evaluation for this ranking system?
This ranking system is designed for research universities, especially those in transitional and newly developed countries. The objective indicators used in this ranking system measure both the long-term and short-term research performance of each university. The ranking serves as a good benchmark from which universities around the world can map their relative positions among peer institutions. It also allows each research university to track its annual progress in its output of scientific papers.
3. Why do you use WOS and ESI databases of Thomson Reuters?
The WOS and ESI databases of Thomson Reuters were chosen because of their good, consistent data quality. The WOS and ESI databases have the following characteristics:
(1) The data cover the longest time span and the widest range among peer products: the Thomson Reuters databases contain the longest run of citation data of any similar product. The earliest articles indexed in the databases date back to 1900, and citations to 1800. By 2018, the databases contained 40 million article records from 12,000 journals.
(2) The data are of high quality: WOS employs strict evaluation procedures to select and index only the core journals that form the basis of any given scientific discipline. Each year, WOS reviews over 2,000 journals, with only 10%-12% selected for inclusion. Journals included in WOS also receive constant review for continuing inclusion.
4. When do you extract data from the WOS and ESI databases?
We extract cumulative data for the last 11 years from the ESI database as soon as it updates each March. As for the WOS database, we used to extract data every January; however, we observed that research articles published in 2012 continued to be indexed by the database into early 2013. Accordingly, we decided to postpone the data extraction to April so that the data would be more complete.
5. How did the revision of ESI in March 2014 affect the NTU Ranking?
The publication counting method for some institutions in ESI changed in 2014. Some universities and their affiliated institutions are now considered single institutions, where they were considered separate institutions in the past. Moreover, journal articles are now indexed by publication year instead of database year. These two changes have led to an increase in the number of counted journal articles.
6. How do you define what qualifies as an “affiliated institution” of a university?
In principle, we follow ESI’s definition of affiliated institutions for each university. Affiliated institutions can include cooperating institutions, hospitals, research centers, and so on.
7. Has the classification of subjects changed?
Yes. In 2014, we placed Engineering Industrial (which was uncategorized in 2013) under Mechanical Engineering. In 2017, we placed Green & Sustainable Science & Technology under Chemical Engineering.
8. Why are there two overall performance-based rankings?
The 2015 ranking offers two sets of ranking results indicating the overall performance of the universities. The first set is based on the original scores of the universities, and the second is the ranking result where the original scores have been further adjusted by university faculty numbers. After the 2007 ranking was released, many suggestions were made concerning the factor of faculty size, and therefore in 2008 this ranking system began offering adjusted rankings based on faculty number. However, the ranking based on the original scores is still considered the official result of the annual ranking project. Readers are reminded that the faculty number information used for the adjusted calculation was drawn from various sources, and thus it does not constitute a concrete basis for cross-institution or cross-domain comparisons.
9. What distinguishes this ranking system from the Academic Ranking of World Universities published by Shanghai Jiao Tong University?
The Academic Ranking of World Universities published by Shanghai Jiao Tong University uses certain indicators that measure extraordinary research achievement, including the number of Nobel laureates affiliated with an institution, the number of highly cited scholars, and the number of papers in Nature and Science. However, these types of research achievement are usually beyond the reach of most universities and are therefore of limited use for most institutions. The HEEACT ranking, meanwhile, uses a set of indicators that are sensitive to short-term excellence in scientific papers, which is achievable for many universities. A university’s annual progress will readily result in a change in its rank under this system, thus offering timely, accurate information for university evaluation.
10. What distinguishes this ranking system from the QS World University Rankings by the UK’s Quacquarelli Symonds?
The QS World University Rankings emphasizes peer review, which accounts for 40% of the overall ranking. The nature of peer review is subjective, particularly when conducted in questionnaire-type ranking with the allotment of points, which could be confounded with assessment of university reputation. In contrast, the NTU Ranking is more objective in its data analysis.
11. What distinguishes this ranking system from the THE World University Rankings by the UK’s Times Higher Education?
In Times Higher Education’s ranking, peer review, research performance, and learning environment account for 33%, 38.5%, and 28.5% of the overall score respectively. The results evaluate various aspects of universities objectively; however, the average number of citations per paper accounts for 30% of the overall score, which may introduce bias. For instance, if several universities have similar numbers of citations, the university with fewer papers will have the higher average citation count. In contrast, the NTU Ranking utilizes more objective methods and statistics to conduct its university ranking.
12. Can this ranking system replace other university rankings?
No. This ranking system evaluates performances of scientific papers only. The indicators are designed to compare the quality and quantity of each university’s scientific papers (including sciences and social sciences) from both the long-term and short-term perspectives. The ranking does not indicate universities’ overall performances in teaching, research, and administrative activities.
13. Does NTU ranking represent the overall academic performance of a university?
NTU Ranking evaluates a university's academic performance by comparing the quality and quantity of its scientific papers with those of other universities. Although scientific papers are the major part of a university's academic output, other dimensions, such as published books, research projects, and patents, can also represent a university's academic performance. Generally, the quantity and quality of a university's scientific papers are widely used to estimate its academic performance, except in the humanities.
14. Does this ranking system take into consideration the size of the universities?
Yes. The number of papers from a university is naturally related to the size and history of that institution. This ranking system uses certain indicators, such as the average citations per paper, to balance the influences of university size and history. Even with this precaution, the overall ranking results suggest that our ranking methodology is rather sensitive to university size. To offer an alternative view of the ranking, since 2008 the project has offered an additional ranking in which the original ranking is adjusted by faculty numbers. However, readers are reminded that the adjusted ranking draws its faculty number information from a range of sources, and therefore the results must be treated with caution.
*About Reference Ranking
The adjusted reference ranking is presented to balance the overall ranking, which favors universities with a greater number of faculty members. Four indicators significantly affected by university size are normalized by each university’s number of full-time faculty: the number of articles in the last 11 years, the number of articles in the current year, the number of citations in the last 11 years, and the number of citations in the last 2 years. This ranking system employs faculty numbers obtained from the following sources (listed by priority of use): numbers of full-time faculty obtained from official university websites, numbers of faculty registered with each higher education administration, and numbers of academic staff of each university obtained from university websites.
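The adjustment described above can be sketched as a simple division of each size-dependent indicator by the full-time faculty count. This is a minimal illustration only: the function name, indicator keys, and all numbers are hypothetical, not actual NTU Ranking data or methodology.

```python
# Indicators normalized by faculty size in the adjusted reference ranking.
SIZE_DEPENDENT = [
    "articles_11y",      # number of articles in the last 11 years
    "articles_current",  # number of articles in the current year
    "citations_11y",     # number of citations in the last 11 years
    "citations_2y",      # number of citations in the last 2 years
]

def adjust_by_faculty(indicators, faculty):
    """Divide each size-dependent indicator by the full-time faculty count."""
    adjusted = dict(indicators)
    for key in SIZE_DEPENDENT:
        adjusted[key] = indicators[key] / faculty
    return adjusted

# Illustrative figures for a hypothetical university with 2,000 faculty.
example = {
    "articles_11y": 30000,
    "articles_current": 3200,
    "citations_11y": 500000,
    "citations_2y": 40000,
    "avg_citations": 16.7,  # per-paper indicators are left unchanged
}
print(adjust_by_faculty(example, faculty=2000))
```

Per-paper indicators such as average citations are already size-neutral, so only the four count-based indicators are divided.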
15. Why are papers from the humanities disciplines not calculated in this ranking system?
In the humanities, research tends to take on regional characteristics and research output is often published in non-English journals or via monographic publications. Since A&HCI indexes mainly papers in English journals, it only weakly represents the worldwide scholarly performance and achievement of humanities researchers. It is for this reason that this ranking system excludes humanities disciplines from the analyses.
16. Why does the h-index indicator calculate performance data of the last two years only?
The h-index is a highly sensitive indicator. The number of papers published in two years is usually sufficient for h-index analyses (for example, Harvard University produced 28,951 papers in 2005-2006). Results from several recent studies also confirm that a two-year window is sufficient for evaluating institutions. To determine the effective range of years for h-index measurement, we took a sample of 47 universities, including Harvard University and the University of Tokyo, and analyzed their h-index performances over windows of 2 (2005-2006) and 11 (1996-2006) years. The results showed that the 2-year and 11-year h-index values are highly correlated (.967), supporting our use of 2-year h-index values in this ranking project.
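The h-index itself has a simple definition: an institution (or author) has index h if h of its papers have each received at least h citations. A minimal sketch with illustrative citation counts:

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the rank-th most-cited paper still has >= rank citations
        else:
            break
    return h

# Five papers with 10, 8, 5, 4, and 3 citations: four papers have
# at least 4 citations each, but not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```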
17. Some universities are presumed to rank higher than they actually do in NTU Ranking. Why is that?
NTU Ranking is based exclusively on the quantitative and qualitative performance of scientific papers. Its results can naturally differ from those of other ranking programs, which evaluate different aspects of university performance. Further, the NTU Ranking is rather sensitive to certain factors, including university size, whether the university has a medical school, and how large a share of the university the social sciences departments account for. These aspects can affect universities’ ranks in NTU Ranking and yield unexpected results.
18. Do universities with similar scores perform equally well in their scientific papers production?
Yes. One feature of NTU Ranking is that the score of the first-place university stands out from all the others; the score differences among the remaining universities are rather small. In other words, the scores are relative and indicative rather than absolute and rigid. A university with a slightly lower score is not necessarily inferior to peer institutions with higher scores.
19. Why is there such a huge difference between the scores of the first- and second-place universities?
The huge gap between the first two universities in the ranking result is a prevalent phenomenon in academic rankings. For example, in the 2011 Shanghai Jiao Tong University ranking the difference between the first two institutions’ scores reaches 27.4 points. Some ranking systems may try to lessen the gap in scores. For instance, the THE ranking has used the Z-scores to adjust the original scores in order to reduce the gap.
The first-placed Harvard University is large and has a medical school with a strong performance record (it published 41,895 papers between 2000 and 2011, which received 1,182,589 citations); it also performs exceptionally in terms of paper quantity and quality in most academic disciplines. Thus, among the eight indicators used in the 2011 NTU Ranking, Harvard ranks highest in all except the average number of citations. The second-placed Johns Hopkins University is also a large university with an outstanding medical school (the school published 26,264 papers from 2000 to 2011, which received 664,976 citations). However, its quantitative and qualitative paper performance, examined from both long-term and short-term perspectives, fell significantly behind Harvard’s. There are also significant gaps between the two universities in highly cited papers and high-impact journal papers. All these factors contributed to the great difference in the two universities’ scores.
20. Why does NTU Ranking standardize scores by T-score?
The T-score has been employed since 2013 to address a problem with linear standardization methods, which caused lower-ranked universities to receive very similar scores. This made it difficult to distinguish between the performances of these universities. By standardizing scores with the T-score, we can more effectively differentiate the rankings of the universities.
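A T-score is the standard linear transform of a z-score onto a scale with mean 50 and standard deviation 10. A minimal sketch of how such standardization spreads out closely bunched raw scores (the function name and sample values are illustrative, not actual NTU Ranking data):

```python
from statistics import mean, pstdev

def t_scores(values):
    """Map raw scores to T-scores: mean 50, standard deviation 10."""
    m, s = mean(values), pstdev(values)
    return [50 + 10 * (v - m) / s for v in values]

# Raw scores only 0.3 apart are spread across several T-score points,
# making adjacent universities easier to tell apart.
print(t_scores([92.1, 91.8, 91.5, 90.9]))
```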
21. Do lower ranks indicate poor academic performance of universities?
Not necessarily. Scientific paper performance constitutes only a part of a university’s overall academic performance, although a significant one. Any interpretation of poorer academic performance or reputation from a university’s lower rank in this ranking system is an over-inference. As explained elsewhere, certain factors, such as university size, whether a university has a medical school, and how large a share of the university’s scholarly research the social science disciplines account for, can significantly affect this ranking. These factors must be taken into consideration when interpreting the ranking results.
22. Are the indicators used in NTU Ranking too quantitative and thereby fail to assess the qualitative performance?
No. NTU Ranking strongly emphasizes the qualitative performance of scientific papers; 75% of the indicators are designed to measure qualitative performance.
In other words, although NTU Ranking employs exclusively objective statistical data, it conceptually assesses both the quantity and the quality of each university’s scientific papers.
23. Will universities focusing largely on humanities and social sciences fall behind in the ranking or fail to be included in the ranking at all?
Very likely. The ranking is based exclusively on journal papers indexed in the SCI and SSCI databases; humanities papers (indexed in A&HCI) are outside its scope. Further, a large gap exists between SCI and SSCI in the number of journals they include: SCI indexes 8,600 journals, while SSCI indexes 3,100. This discrepancy in the numbers of indexed journals naturally results in less favorable rankings for universities specializing in the humanities and social sciences.
24. Will a university with a medical school rank higher?
Yes. Papers published in medical sciences journals, and their citations, significantly outnumber those of many other disciplines. Between 2000 and 2010, 2,116,193 clinical medicine papers were published and cited 27,355,596 times, while in engineering 817,334 papers were published and cited 3,887,615 times. This huge discrepancy in paper production and citations between the medical sciences and other disciplines naturally leads to favorable rankings for universities with medical schools. To avoid high rankings caused by discipline differences, NTU Ranking has also offered field-ranking results since 2008.
25. Are larger universities more likely to rank higher?
Yes. The ranking has attempted to neutralize the influence of university size through indicators such as the average citations per paper. However, the ranking is still prone to favoring larger institutions. Recognizing the potential bias resulting from university size, NTU Ranking offers an additional ranking that incorporates faculty numbers in the calculation of scores.
26. How should we interpret NTU Ranking results which might be very different from people’s expectations?
Readers are reminded again that NTU Ranking is not a reputation ranking or an overall university ranking. It is based exclusively on objectively obtained data and measures scientific paper performance only. Thus the results can very likely differ from people’s subjective perceptions of how well certain universities should do relative to others. Although there may be gaps between NTU Ranking results and readers’ expectations, the relative positions of universities from the same country in NTU Ranking are generally consistent with social expectations.
27. Why are there so many indicators? Can they be combined or simplified?
In this ranking system, each indicator represents a different criterion for measuring the performance of scientific papers. Although incorporating short-term indicators increases the complexity of the ranking, it also enhances the sensitivity of the methodology and allows the ranking to highlight universities with recent research progress. Therefore, combining or simplifying these indicators could compromise the quality of the ranking results. Furthermore, all 8 indicators passed the significance test in regression analysis, signifying their necessity.
28. In compiling and analyzing scientific papers for the ranking process, was authority control used?
Yes. Generally speaking, project staff conduct authority control on the various forms of a university’s name to ensure the accuracy and completeness of the data. For university systems with several campuses, this project differentiates each campus by labeling it with its city name. For example, the various campuses of the University of Texas are marked by their locations - Austin, M. D. Anderson Cancer Center, Dallas, Southwestern Medical Center, etc. - as a basis for analysis.
29. For universities that have consolidated or changed their names, were there adjustments in authority control?
Universities often change names, consolidate, or reorganize. That is why, each year before conducting the ranking analysis, project staff verify the universities selected as targets to ensure the accuracy and objectivity of the rankings. For example, appropriate adjustments in authority control were made in the database after Louis Pasteur University, Marc Bloch University, and Robert Schuman University merged into the University of Strasbourg in January 2009.
30. How do you calculate the number of articles in high impact journal?
NTU Ranking ranks the journals in each subject according to their impact factors and defines the top 5% as high-impact journals. The number of each university’s journal articles published in these high-impact journals is then counted.
Note: The impact factor (IF) of a journal is the average number of citations received in a given year by the articles the journal published in the two preceding years. The higher a journal’s impact factor, the greater the influence its articles may have on the related discipline.
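Under this definition, the impact factor is a simple ratio. A minimal sketch with hypothetical figures (the journal and its numbers are illustrative only):

```python
def impact_factor(citations, citable_items):
    """Citations received this year to articles published in the two
    preceding years, divided by the number of those articles."""
    return citations / citable_items

# A hypothetical journal: 450 citations this year to the 150 articles
# it published over the two preceding years.
print(impact_factor(450, 150))  # -> 3.0
```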
31. Why does NTU Ranking adjust the indicator “h-index of the last 2 years” for subject-based and field-based rankings?
The h-index scores are more concentrated in the rankings by subject and field because the numbers of papers at the subject and field levels are comparatively small, so many universities receive identical h-index scores. For instance, among the 570 sample universities in the field of Mechanical Engineering, 139 have identical h-index scores. Therefore, since 2013 the NTU Ranking has employed a new calculation that adjusts the scores for “h-index of the last two years” to differentiate such universities.
32. Is the ranking biased towards English-speaking countries, since the databases NTU Ranking employs mainly index English journals?
English-speaking countries do have an advantage in ranking performance. The Thomson Reuters databases employed in our ranking to analyze the qualitative and quantitative performance of scientific papers consist mostly of English journals. However, non-English-speaking countries such as France, Germany, the Netherlands, and China showed substantial scientific results in the 2011 ranking. Non-English-speaking countries have also recently been gaining on English-speaking countries: during the last 5 years, 12 universities from English-speaking countries fell out of the top 500, and 105 dropped in rank.