The "Performance Ranking of Scientific Papers for World Universities" is released by National Taiwan University, and is also known as NTU Ranking. NTU Ranking provides overall ranking, rankings by six fields, and rankings by 14 selected subjects.
The Performance Ranking of Scientific Papers for World Universities is hosted by Dr. Mu-Hsuan Huang, professor of library and information science at National Taiwan University. The ranking was first published in 2007 by the Higher Education Evaluation and Accreditation Council of Taiwan (HEEACT), which utilized more objective methods and statistics to rank universities. This ranking system is designed to evaluate research universities’ achievements in scientific research by using objective indicators. (Further information about objective indicators can be found in the Methodology section.)
This ranking of the top 500 universities has been announced annually since 2007. In 2008 and 2009, the ranking of the top 300 universities in six fields was also released. In addition to the overall ranking and the rankings by field, since 2010 the rankings of the top 300 universities in 10 subjects have also been presented. In 2011, HEEACT announced the overall ranking, while the performance rankings by field and subject were separately released by National Taiwan University. In the same year, three subjects in agriculture were added to the subject rankings, and the name of the ranking system was changed from "HEEACT" to "Taiwan Ranking". Since 2012, the Performance Ranking of Scientific Papers for World Universities has been independently and officially executed and released by National Taiwan University, with overall, six-field, and 14-subject rankings provided. (Pharmacology and Toxicology has been included in the performance ranking since 2012.)
This ranking system evaluates the performance of scientific papers, and the indicators are designed to compare both the quality and the quantity of scientific papers at each university. By examining both long-term performance and short-term research efforts, we aim to provide a more objective ranking as a reference for the diverse research performance of universities worldwide.
Background
This ranking system evaluates and ranks the scientific paper performance of the top 500 universities worldwide. Three criteria represented by eight indicators were used to assess a university’s overall scientific paper performance: research productivity (accounting for 25% of the score), research impact (35%), and research excellence (40%).
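The three-criterion weighting described above can be sketched as follows. The weights (25%, 35%, 40%) are taken from the text; the criterion scores, function name, and example numbers are purely illustrative assumptions.

```python
# Sketch of the three-criterion weighting: productivity 25%,
# impact 35%, excellence 40% (weights from the text).
# Criterion scores are assumed to be pre-normalized; all names
# and example values here are hypothetical.
WEIGHTS = {
    "productivity": 0.25,
    "impact": 0.35,
    "excellence": 0.40,
}

def overall_score(criterion_scores):
    """Combine normalized criterion scores into a weighted overall score."""
    return sum(WEIGHTS[name] * score for name, score in criterion_scores.items())

example = {"productivity": 80.0, "impact": 70.0, "excellence": 90.0}
print(overall_score(example))  # 0.25*80 + 0.35*70 + 0.40*90 = 80.5
```

The weights sum to 1, so the overall score stays on the same scale as the criterion scores.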
This annual ranking system began in 2007. Besides this ranking system, there are various other major university rankings, such as the Academic Ranking of World Universities by Shanghai Jiao Tong University (ARWU), the THE World University Rankings by Times Higher Education (THE), the QS World University Rankings by Quacquarelli Symonds (QS), and the Webometrics Ranking of World Universities by the Consejo Superior de Investigaciones Científicas (CSIC) in Spain. Among these, the ARWU and THE rankings are the best known and most commonly discussed.
In contrast with the ARWU’s focus on academic ranking and the THE ranking’s focus on university ranking, NTU Ranking focuses on scientific paper performance ranking. The emphasis on current research performance makes the indicators used in this ranking system more objective than traditional indicators such as a university’s reputation reflected by peer reviews, or the number of Nobel laureates affiliated with that university, which tend to favor universities with longer histories or universities in developed countries.
This ranking system employs quantitative data extracted from the Science Citation Index (SCI) and the Social Sciences Citation Index (SSCI) to evaluate the scientific paper performance of world universities. Today, publishing in international academic journals has become the predominant mode of scientific research output. Statistics on the articles published in international academic journals provide an objective representation of each university’s research performance.
In addition to the overall performance-based ranking, this ranking system began in 2008 to provide a field-based ranking of world universities to balance potential biases. A subject-based ranking of world universities is also provided to give more information on individual universities’ unique strengths. NTU Ranking thus provides an overall ranking of the top 500 universities and rankings of the top 300 universities in six fields and 14 subjects; all subjects are derived from four of the six fields.
Features
Since the Institute for Scientific Information (ISI) first published SCI and SSCI in 1961, the two databases have grown to include a good number of academic journals that are both international in scope and comprehensive in subject representation. However, it should be noted that the results of NTU Ranking may favor universities with better performances in sciences and social sciences, and under-represent performances in arts and humanities research. The database Arts & Humanities Citation Index (A&HCI) mainly indexes English-language journals, while arts and humanities researchers’ publications take various forms (such as books) in their native languages.
Therefore, this ranking system does not include the A&HCI database because it may fail to objectively and accurately represent the research performance of arts and humanities researchers. Focusing on data obtained from SCI and SSCI allows for fairer comparisons across universities globally. The indicators used in this ranking system have the following three characteristics:
1. Emphasize the quality of research - the indicators assessing research quality (research impact and research excellence) account for 75% of the performance score.
Research impact and research excellence evaluate the quality of a university’s research output. The calculation of each university’s score is based on the number of citations to its published articles, h-index of the last 2 years, number of Highly Cited Papers, and number of articles published in high-impact journals (Hi-Impact journal articles). These indicators will be explained further in the Indicators section.
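Among the quality indicators above, the h-index has a standard definition: the largest h such that at least h of a university's papers have each received at least h citations. A minimal sketch of that computation (the function name and example citation counts are illustrative):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:  # the rank-th paper still has at least `rank` citations
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers each have at least 4 citations
print(h_index([25, 8, 5, 3, 3]))  # 3: a single highly cited paper cannot raise h alone
```

The second example shows why the h-index rewards sustained impact rather than one exceptional paper, which is why it complements raw citation counts in a quality assessment.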
2. Neutralize biases caused by the university size or faculty number.
Traditionally, the size of a university affects its ranking when the number of articles is used as the sole indicator of research output. Because the number of articles is closely tied to the number of faculty members, rankings employing the number of articles often favor larger universities. This ranking system corrects that flaw by incorporating the average number of citations in the last 11 years and the h-index of the last 2 years into the calculation of universities’ performance scores (explained below). The inclusion of these two indicators, which together account for 20% of the total score, balances the assessment of the quality and quantity of research and provides a fairer representation of a university’s performance regardless of its size.
To further show the possible influences of university size on ranking, in addition to the original ranking, this ranking system also provides an adjusted ranking based on university size. Four indicators significantly affected by university size are normalized by each university’s number of full-time faculty; these include the number of articles in the last 11 years, number of articles in the current year, number of citations in the last 11 years, and the number of citations in the last 2 years. This ranking system employs faculty numbers obtained from the following sources (listed by priority in usage): numbers of full-time faculty obtained from university websites, numbers of faculty registered at each country’s higher education administration, and numbers of faculty/staff obtained from university websites.
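The size adjustment described above divides the four size-sensitive indicators by the number of full-time faculty. A minimal sketch, assuming simple per-faculty division; the field names and figures are hypothetical, not NTU Ranking's actual data schema:

```python
# Illustrative size adjustment: the four indicators the text names as
# size-sensitive are expressed per full-time faculty member.
# All field names and numbers below are hypothetical.
SIZE_SENSITIVE = [
    "articles_11y",       # number of articles in the last 11 years
    "articles_current",   # number of articles in the current year
    "citations_11y",      # number of citations in the last 11 years
    "citations_2y",       # number of citations in the last 2 years
]

def normalize_by_faculty(indicators, faculty_count):
    """Return a copy with size-sensitive indicators divided by faculty count."""
    adjusted = dict(indicators)
    for key in SIZE_SENSITIVE:
        adjusted[key] = indicators[key] / faculty_count
    return adjusted

raw = {"articles_11y": 22000, "articles_current": 2400,
       "citations_11y": 450000, "citations_2y": 60000}
per_faculty = normalize_by_faculty(raw, 2000)
print(per_faculty["articles_11y"])  # 22000 / 2000 = 11.0 articles per faculty member
```

Under this adjustment, a small university with highly productive faculty can outrank a much larger one whose raw totals are bigger.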
3. Take into account a university’s short-term research performance (constituting 50% of the score), which ensures a more objective comparison between universities with histories of varying lengths.
The indicators used in this ranking system seek to represent both the long-term and short-term research performance of a university. The inclusion of indicators evaluating short-term performance corrects the flaws that result from undifferentiated indicators favoring universities with longer histories. These short-term performance indicators include: the number of articles in the current year, the number of citations in the last 2 years, the h-index in the last 2 years, and the number of articles in the current year in high-impact journals.
Through the use of these indicators, this ranking system attempts to objectively compare the research performance and achievement of universities worldwide. The relative strengths and weaknesses of a university as revealed in the ranking provide insights into higher education administration and resource allocation.