The contemporary higher education environment is dominated by uncertainty. Institutions rarely disappear overnight in this industry, but study programmes can decline dramatically. Ranking methodologies and indicators now contribute to a differentiated and dynamic positioning of institutions at the national and international levels, whether based on an overall approach or a field-based one. Building a sound development strategy is therefore a complex task for academic leadership. This chapter argues that the information provided by rankings needs to be integrated into the decisions and actions of higher education institutions in order to achieve sustainable development. Its main objectives are to understand the dynamism of the contemporary competitive environment in the higher education sector, to clarify the differentiation strategy as a solution for remaining stable in the educational market, and to identify the role of rankings in defining an effective strategy. The topic is relevant for students, contributing to their knowledge of differentiation strategy in general and of its applications in higher education in particular; they will not only become more aware of the many possibilities for implementing a differentiation strategy, but also become better decision-makers when choosing among educational providers.
Therefore, considering all the aforementioned connections between ranking dimensions and institutional missions, the steps to follow in generating change towards differentiation are:
determine the higher education option for the ranking dimension
assess the current state of the ranking dimension
define possible institutional changes
predict the competitor’s changes related to the chosen dimension
implement the change.
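The five steps above can be sketched as a toy decision loop: pick the ranking dimension where the institution's feasible improvement, net of the competitors' predicted moves on the same dimension, is largest. All dimension names and scores below are hypothetical, purely for illustration.

```python
# Hypothetical current positions and projected changes per ranking dimension.
current = {"teaching": 55.0, "research": 62.0, "internationalization": 40.0}
achievable_gain = {"teaching": 5.0, "research": 3.0, "internationalization": 15.0}
predicted_competitor_gain = {"teaching": 4.0, "research": 6.0, "internationalization": 2.0}

def choose_dimension():
    # Net advantage = our feasible improvement minus the competitors'
    # predicted improvement on the same dimension (steps 1-4 above).
    return max(current, key=lambda d: achievable_gain[d] - predicted_competitor_gain[d])

best = choose_dimension()  # the dimension to target in step 5
```

The sketch only formalizes the comparison implicit in the steps; in practice each number would come from the assessment and forecasting work the steps describe.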
A differentiation strategy is a way of competing in which institutions look for uniqueness by selecting one or several ranking dimensions. Higher education institutions become able to perform better on the market, but only if students, or the other stakeholders defined by the specific objectives, are aware of the difference. If students do not know or do not trust rankings, building and investing in a differentiation strategy is equivalent to having no differentiation at all. In other words, a differentiation strategy is worth building and developing only if the students, as its beneficiaries, are aware of it and understand it properly. In this context, communication to the public is paramount. The media and the institutional press office both contribute to building the strategy. If communication is direct, continuous and clear, the strategy is effective; where communication is lacking, the differentiation does not reach its potential public and its impact remains minor.
The objective assessment of the prestige of an academic institution is a difficult and hotly debated task. In recent years, different types of university rankings have been proposed to quantify the excellence of research institutions around the world. Albeit met with criticism in some cases, the relevance of university rankings is being increasingly acknowledged: indeed, rankings are having a major impact on the design of research policies, at both the institutional and the governmental level.
Yet the debate on what exactly rankings are measuring endures. Here, we address the issue by measuring a quantitative and reliable proxy of the academic reputation of a given institution and by evaluating its correlation with different university rankings. Specifically, we study citation patterns among universities in five different Web of Science Subject Categories and apply the PageRank algorithm to the five resulting citation networks. The rationale behind our work is that scientific citations are driven by the reputation of the reference, so the PageRank algorithm is expected to yield a rank that reflects the reputation of an academic institution in a specific field.
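The core computation can be illustrated with a minimal PageRank by power iteration on a toy inter-university citation network. All institution names and citation counts here are hypothetical; an edge (u, v, w) means papers from u cite papers from v w times, so each citation acts as a weighted "vote" of reputation for the cited institution.

```python
# Toy weighted citation network between (hypothetical) universities.
citations = [
    ("Univ A", "Univ B", 120),
    ("Univ A", "Univ C", 30),
    ("Univ B", "Univ C", 80),
    ("Univ C", "Univ B", 60),
    ("Univ D", "Univ B", 200),
]

nodes = sorted({n for u, v, _ in citations for n in (u, v)})
out_weight = {n: 0.0 for n in nodes}
for u, v, w in citations:
    out_weight[u] += w

def pagerank(alpha=0.85, iters=100):
    """PageRank by power iteration; alpha is the damping factor."""
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - alpha) / n for v in nodes}
        # Dangling nodes (no outgoing citations) spread their rank uniformly.
        dangling = sum(rank[v] for v in nodes if out_weight[v] == 0)
        for v in nodes:
            new[v] += alpha * dangling / n
        # Each citing institution passes rank proportionally to citation weight.
        for u, v, w in citations:
            new[v] += alpha * rank[u] * w / out_weight[u]
        rank = new
    return rank

scores = pagerank()
ranking = sorted(nodes, key=scores.get, reverse=True)
```

In this toy network the most-cited institution ends up on top; on real Web of Science data the same recursion rewards citations coming from institutions that are themselves highly cited, which is what makes it a reputation proxy rather than a raw citation count.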
Our results make it possible to quantify the prestige of a set of institutions in a certain research field based only on hard bibliometric data. Given the volume of the data analysed, our findings are statistically robust and less prone to bias than the ad hoc surveys often employed by ranking bodies to attain similar results. Because our results correlate extremely well with the ARWU Subject rankings, the approach we propose may open the door to new academic ranking methodologies that go beyond current methods by reconciling the qualitative evaluation of academic prestige with its quantitative measurement via publication impact.
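The agreement between two orderings of the same institutions is typically measured with a rank correlation. As a sketch, the following computes Spearman's rho (for scores without ties) between a PageRank-style ordering and a subject-ranking score list; both score lists are hypothetical.

```python
def spearman(xs, ys):
    """Spearman rank correlation for paired scores without ties."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i], reverse=True)
        r = [0] * len(vals)
        for pos, i in enumerate(order):
            r[i] = pos + 1
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

pagerank_scores = [0.31, 0.24, 0.18, 0.15, 0.12]   # hypothetical
subject_scores  = [95.0, 88.0, 90.0, 70.0, 65.0]   # hypothetical

rho = spearman(pagerank_scores, subject_scores)
```

A rho close to 1 indicates that the two methods order the institutions nearly identically, which is the kind of agreement with the ARWU Subject rankings reported above.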
Concerns about the reproducibility and impact of research urge improvement initiatives. Current university ranking systems evaluate and compare universities on measures of academic and research performance. Although often useful for marketing purposes, the value of ranking systems for examining quality and outcomes is unclear. The purpose of this study was to evaluate the usefulness of ranking systems and identify opportunities to support research quality and performance improvement.
A systematic review of university ranking systems was conducted to investigate research performance and academic quality measures. Eligibility requirements included: covering at least 100 doctoral-granting institutions, being currently produced on an ongoing basis, including both global and US universities, publishing the rank calculation methodology in English, and calculating ranks independently. Ranking systems also had to include some measure of research outcomes. Indicators were abstracted and contrasted with basic quality improvement requirements. Aggregation methods, the validity of research and academic quality indicators, and suitability for quality improvement were also explored for each ranking system.
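The eligibility screen described above can be encoded as a simple conjunctive filter: a ranking system is retained only if every criterion holds. System names and field values below are hypothetical.

```python
# Hypothetical candidate ranking systems and their screening attributes.
systems = [
    {"name": "System X", "doctoral_institutions": 500, "ongoing": True,
     "global_and_us": True, "methodology_in_english": True,
     "independent_calculation": True, "research_outcomes": True},
    {"name": "System Y", "doctoral_institutions": 80, "ongoing": True,
     "global_and_us": True, "methodology_in_english": True,
     "independent_calculation": True, "research_outcomes": True},
]

def eligible(s):
    # All criteria from the review's eligibility requirements must hold.
    return (s["doctoral_institutions"] >= 100 and s["ongoing"]
            and s["global_and_us"] and s["methodology_in_english"]
            and s["independent_calculation"] and s["research_outcomes"])

kept = [s["name"] for s in systems if eligible(s)]
```

Here System Y fails the 100-institution threshold and is excluded, mirroring how 24 identified systems were narrowed to 13 eligible ones.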
A total of 24 ranking systems were identified and 13 eligible systems were evaluated. Six of the 13 rankings focus entirely on research performance. Among those reporting weightings, 76% of the total rank is attributed to research indicators and 24% to academic or teaching quality. Seven systems rely on reputation surveys and/or faculty and alumni awards. Rankings influence academic choice, yet research performance measures are the most heavily weighted indicators. There are no generally accepted academic quality indicators in ranking systems.
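The weighted aggregation such systems perform can be sketched as a composite score: each indicator is normalized to a common scale and combined with fixed weights. The indicator names, weights (0.76 research vs 0.24 teaching, mirroring the split reported above) and scores here are hypothetical.

```python
# Hypothetical indicator weights; research indicators sum to 0.76,
# teaching indicators to 0.24, and all weights to 1.0.
weights = {
    "citations_per_paper": 0.40,   # research
    "research_income": 0.36,       # research
    "student_staff_ratio": 0.14,   # teaching
    "teaching_reputation": 0.10,   # teaching
}

def composite(indicators, weights):
    """Weighted sum of indicators already normalized to [0, 100]."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * indicators[k] for k in weights)

univ = {  # hypothetical normalized scores for one institution
    "citations_per_paper": 82.0,
    "research_income": 64.0,
    "student_staff_ratio": 55.0,
    "teaching_reputation": 70.0,
}
score = composite(univ, weights)
```

The sketch makes the point of the finding concrete: with such a weighting, two institutions with very different teaching scores can receive nearly identical composite ranks, because research indicators dominate the sum.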
No single ranking system provides a comprehensive evaluation of research and academic quality. A combined approach using the Leiden, Thomson Reuters Most Innovative Universities, and SCImago ranking systems may provide institutions with more effective feedback for research improvement. Rankings that rely extensively on subjective reputation and "luxury" indicators, such as award-winning faculty or alumni who are high-ranking executives, are not well suited for academic or research performance improvement initiatives. Future efforts should better explore the measurement of university research performance through comprehensive and standardized indicators. This paper could serve as a general literature citation when one or more university ranking systems are used in efforts to improve academic prominence and research performance.
The selection of high-achieving students (Student Achievement) is conducted every year, proceeding from the study programme level through the faculty to the university level; the first-ranked student is then sent on to the Kopertis level. The selection criteria are academic performance and scientific writing, organizational activity, personality, and English. For the selection to be objective, the jury is expected to use, in addition to its own judgment, methods that support a more optimal decision in determining the winning student. One such method is PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluation), a ranking method in multi-criteria decision making (MCDM). PROMETHEE has the advantage of defining a preference type for each criterion, which allows alternatives to be compared with one another on the same criterion; dominance of one alternative over another on a criterion is expressed through the values of the relations between the alternatives' rankings. Based on the calculations for the 7 applicants, ranks 1, 2 and 3 did not change between the manual and PROMETHEE matrices; only positions 4 to 7 changed. However, after the sensitivity test, almost all criteria showed a high level of sensitivity. Although this does not affect which students are sent to the next level, it can have a psychological impact on prospective high-achieving students.
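The mechanics of PROMETHEE II can be sketched in a few lines using the "usual" preference function (preference is 1 when one alternative strictly beats another on a criterion, 0 otherwise). This is a simplified variant: the actual study may use other preference types, and the candidate names, scores and weights below are hypothetical; all criteria are treated as maximized.

```python
# Hypothetical candidates with scores on four criteria
# (academic, scientific writing, organization, English).
candidates = {
    "S1": [85, 78, 90, 70],
    "S2": [80, 82, 75, 60],
    "S3": [90, 80, 80, 75],
}
weights = [0.4, 0.3, 0.2, 0.1]  # hypothetical criterion weights

def pref(a, b):
    """Aggregated preference of a over b under the usual criterion."""
    return sum(w for w, x, y in zip(weights, a, b) if x > y)

names = list(candidates)
n = len(names)
net_flow = {}
for a in names:
    # Positive flow: how strongly a outranks the others.
    plus = sum(pref(candidates[a], candidates[b]) for b in names if b != a)
    # Negative flow: how strongly the others outrank a.
    minus = sum(pref(candidates[b], candidates[a]) for b in names if b != a)
    net_flow[a] = (plus - minus) / (n - 1)

# PROMETHEE II: complete ranking by descending net flow.
ranking = sorted(names, key=net_flow.get, reverse=True)
```

Sensitivity testing, as mentioned above, would repeat this computation while perturbing `weights` and check whether `ranking` changes, which is how one detects criteria the final order is highly sensitive to.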