Measuring the academic reputation through citation networks via PageRank

Open access preprint Measuring the academic reputation through citation networks via PageRank, Massucci and Docampo, arXiv (2018).

Abstract:

The objective assessment of the prestige of an academic institution is a difficult and hotly debated task. In the last few years, different types of University Rankings have been proposed to quantify the excellence of different research institutions in the world. Albeit met with criticism in some cases, the relevance of university rankings is being increasingly acknowledged: indeed, rankings are having a major impact on the design of research policies, both at the institutional and governmental level.

Yet, the debate on what rankings are exactly measuring is enduring. Here, we address the issue by measuring a quantitative and reliable proxy of the academic reputation of a given institution and by evaluating its correlation with different university rankings. Specifically, we study citation patterns among universities in five different Web of Science Subject Categories and use the PageRank algorithm on the five resulting citation networks. The rationale behind our work is that scientific citations are driven by the reputation of the reference, so that the PageRank algorithm is expected to yield a rank which reflects the reputation of an academic institution in a specific field.

Our results allow us to quantify the prestige of a set of institutions in a certain research field based only on hard bibliometric data. Given the volume of the data analysed, our findings are statistically robust and less prone to bias, at odds with the ad hoc surveys often employed by ranking bodies to attain similar results. Because our findings correlate extremely well with the ARWU Subject rankings, the approach we propose in our paper may open the door to new academic ranking methodologies that go beyond current methods by reconciling the qualitative evaluation of academic prestige with its quantitative measurement via publication impact.

[Figure: cross-citations-engineering]
The institutional network of cross-citations in the Telecommunication Engineering WoS category. Each node of the network is an academic institution featured both in the Telecommunications ARWU GRAS and as an affiliation in at least one publication of the Telecommunication Engineering WoS category. Edges are citations from a publication produced by an institution to those authored by another one (10% of the total edges are plotted). The node size is proportional to the number of publications.
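The paper's core idea is standard power-iteration PageRank applied to an institution-level citation graph: an institution gains reputation when it is cited by other reputable institutions. A minimal sketch is below; the institution names, citation lists, damping factor 0.85, and iteration count are illustrative assumptions, not data or parameters from the paper.

```python
# Toy PageRank over a hypothetical institutional citation network.
# Repeated entries in a citation list act as integer edge weights.

def pagerank(edges, damping=0.85, iters=100):
    """Power-iteration PageRank. `edges` maps a citing institution to
    the list of institutions it cites (repeats = multiple citations)."""
    nodes = set(edges)
    for targets in edges.values():
        nodes.update(targets)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        # Teleportation term: (1 - d) / n to every node.
        new = {v: (1.0 - damping) / n for v in nodes}
        for src in nodes:
            targets = edges.get(src, [])
            if targets:
                # Split this node's rank equally over its outgoing citations.
                share = damping * rank[src] / len(targets)
                for dst in targets:
                    new[dst] += share
            else:
                # Dangling node (no outgoing citations): spread uniformly.
                for v in nodes:
                    new[v] += damping * rank[src] / n
        rank = new
    return rank

# Hypothetical cross-citation lists: "A cites B" once per entry.
citations = {
    "Uni A": ["Uni B", "Uni C", "Uni B"],
    "Uni B": ["Uni C"],
    "Uni C": ["Uni B"],
    "Uni D": ["Uni A", "Uni B"],
}

ranks = pagerank(citations)
for uni, r in sorted(ranks.items(), key=lambda kv: -kv[1]):
    print(f"{uni}: {r:.3f}")
```

In this toy graph, Uni D cites but is never cited, so it ends up last regardless of how many papers it produces; that is exactly the reputation signal the authors argue raw publication counts miss.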

China’s science, technology, engineering, and mathematics (STEM) research environment

China’s science, technology, engineering, and mathematics (STEM) research environment: A snapshot, by Xueying Han, and Richard P. Appelbaum, PLOS One (2018).

Abstract (emphasis mine):

In keeping with China’s President Xi Jinping’s “Chinese Dream,” China has set a goal of becoming a world-class innovator by 2050. China’s higher education Science, Technology, Engineering, and Math (STEM) research environment will play a pivotal role in influencing whether China is successful in transitioning from a manufacturing-based economy to an innovation-driven, knowledge-based economy. Past studies on China’s research environment have been primarily qualitative in nature or based on anecdotal evidence. In this study, we surveyed STEM faculty from China’s top 25 universities to get a clearer understanding of how faculty members view China’s overall research environment. We received 731 completed survey responses, 17% of which were from individuals who received terminal degrees from abroad and 83% of which were from individuals who received terminal degrees from domestic institutions of higher education. We present results on why returnees decided to study abroad, returnees’ decisions to return to China, and differences in perceptions between returnees and domestic degree holders on the advantages of having a foreign degree. The top five challenges to China’s research environment identified by survey respondents were: a promotion of short-term thinking and instant success (37% of all respondents); research funding (33%); too much bureaucratic or governmental intervention (31%); the evaluation system (27%); and a reliance on human relations (26%). Results indicated that while China has clearly made strides in its higher education system, there are numerous challenges that must be overcome before China can hope to effectively produce the kinds of innovative thinkers that are required if it is to achieve its ambitious goals. We also raise questions about the current direction of education and inquiry in China, particularly indications that government policy is turning inward, away from openness that is central to innovative thinking.

Measuring Student Success: A Value-Added Approach

Book chapter Pounder J.S. (2018) Measuring Student Success: A Value-Added Approach. In: Fardoun H., Downing K., Mok M. (eds) The Future of Higher Education in the Middle East and Africa. Springer, Cham.

Abstract:

The notion of what constitutes a ‘quality’ university has been challenged by the 2014 Gallup-Purdue Survey (Great Jobs, Great Lives: The 2014 Gallup-Purdue Index Report, Gallup, Inc., 2014). This survey of 30,000 US university alumni revealed that engagement and feelings of well-being beyond the university and into the workplace have little to do with the prestige of the university and much to do with having caring professors and being afforded opportunities for experiential learning. The Survey has shifted the focus from what university professors value to what students value. Assuming universities are interested in what students think, the issue then becomes one of assessing ‘value added’, and this paper examines one university’s approach to addressing this issue.

Are university rankings useful to improve research? A systematic review

Open access Are university rankings useful to improve research? A systematic review, by Vernon, Balas, and Momani, PLOS One (2018).

Abstract (emphasis mine):

Introduction
Concerns about reproducibility and impact of research urge improvement initiatives. Current university ranking systems evaluate and compare universities on measures of academic and research performance. Although often useful for marketing purposes, the value of ranking systems when examining quality and outcomes is unclear. The purpose of this study was to evaluate usefulness of ranking systems and identify opportunities to support research quality and performance improvement.

Methods
A systematic review of university ranking systems was conducted to investigate research performance and academic quality measures. Eligibility requirements included: inclusion of at least 100 doctoral granting institutions, be currently produced on an ongoing basis and include both global and US universities, publish rank calculation methodology in English and independently calculate ranks. Ranking systems must also include some measures of research outcomes. Indicators were abstracted and contrasted with basic quality improvement requirements. Exploration of aggregation methods, validity of research and academic quality indicators, and suitability for quality improvement within ranking systems were also conducted.

Results
A total of 24 ranking systems were identified and 13 eligible ranking systems were evaluated. Six of the 13 rankings are 100% focused on research performance. For those reporting weighting, 76% of the total ranks are attributed to research indicators, with 24% attributed to academic or teaching quality. Seven systems rely on reputation surveys and/or faculty and alumni awards. Rankings influence academic choice, yet research performance measures are the most heavily weighted indicators. There are no generally accepted academic quality indicators in ranking systems.

Discussion
No single ranking system provides a comprehensive evaluation of research and academic quality. Utilizing a combined approach of the Leiden, Thomson Reuters Most Innovative Universities, and the SCImago ranking systems may provide institutions with a more effective feedback for research improvement. Rankings which extensively rely on subjective reputation and “luxury” indicators, such as award winning faculty or alumni who are high ranking executives, are not well suited for academic or research performance improvement initiatives. Future efforts should better explore measurement of the university research performance through comprehensive and standardized indicators. This paper could serve as a general literature citation when one or more of university ranking systems are used in efforts to improve academic prominence and research performance.

[Figure: university-ranking-comparison]
Conflicting global rankings of an illustrative research university (per most recent published results, 2016).

Implementation of preference ranking organization method for enrichment evaluation on selection system of student’s achievement

Open access Implementation of preference ranking organization method for enrichment evaluation (Promethee) on selection system of student’s achievement, by Karlitasari, Suhartini, and Nurrosikawati, IOP Conference Series: Materials Science and Engineering (2018) Volume 332, conference 1.

Abstract:

The Selection of Student Achievement is conducted every year, starting at the level of the Study Program, then the Faculty, and finally the University, after which the first-ranked student is sent on to the Kopertis level. The selection criteria are academic achievement and scientific work, organizational activity, personality, and English. To keep the selection objective, decision-support methods are expected to complement the jury so that the determination of student achievement is more optimal. One such method is the Promethee method. Preference Ranking Organization Method for Enrichment Evaluation (Promethee) is a ranking method in Multi-Criteria Decision Making (MCDM). PROMETHEE has the advantage that preference functions over the criteria allow each alternative to be compared with every other alternative on the same criterion; the dominance of one alternative over another on a criterion is expressed through preference values in the pairwise relationships between alternatives. Based on the calculation results for 7 applicants, the manual and Promethee rankings agreed on positions 1, 2, and 3, while positions 4 through 7 changed. However, after a sensitivity test, almost all criteria showed a high level of sensitivity. Although this does not affect which students are sent to the next level, it can have a psychological impact on the candidate students.
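The pairwise comparison the abstract describes can be sketched as a minimal PROMETHEE II computation. The sketch below assumes the five criteria named in the abstract with the simplest "usual" (Type I) preference function, i.e. P(a, b) = 1 if a beats b on a criterion and 0 otherwise; the criterion weights and student scores are hypothetical, since the paper does not publish them.

```python
# PROMETHEE II sketch: pairwise preference indices, then net outranking
# flows phi = (phi_plus - phi_minus) / (n - 1), ranked descending.
# Weights and scores are illustrative assumptions.

weights = {"academic": 0.40, "scientific_work": 0.30,
           "organization": 0.15, "personality": 0.10, "english": 0.05}

scores = {
    "Student 1": {"academic": 85, "scientific_work": 80,
                  "organization": 70, "personality": 75, "english": 90},
    "Student 2": {"academic": 90, "scientific_work": 70,
                  "organization": 80, "personality": 70, "english": 85},
    "Student 3": {"academic": 75, "scientific_work": 65,
                  "organization": 60, "personality": 80, "english": 70},
}

def net_flows(scores, weights):
    """Net outranking flow for each alternative (PROMETHEE II)."""
    students = list(scores)
    n = len(students)
    phi = {}
    for a in students:
        plus = minus = 0.0
        for b in students:
            if a == b:
                continue
            # Aggregated preference index pi(a, b): summed weights of the
            # criteria on which a strictly beats b (Type I preference).
            plus += sum(w for c, w in weights.items()
                        if scores[a][c] > scores[b][c])
            minus += sum(w for c, w in weights.items()
                         if scores[b][c] > scores[a][c])
        phi[a] = (plus - minus) / (n - 1)
    return phi

flows = net_flows(scores, weights)
ranking = sorted(flows, key=flows.get, reverse=True)
print(ranking)   # complete order by net flow
```

With these toy numbers, Student 2 outranks Student 1 despite losing on three of the five criteria, because the two criteria it wins carry more weight; this sensitivity of the final order to the weights is precisely what the authors' sensitivity test probes.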