The research impact of National Higher Education Systems

January 31, 2011

Félix de Moya Anegón. Research is not the only activity carried out at universities, but Higher Education Institutions’ (HEIs) ability to generate scientific knowledge is a clear indicator of their overall performance. The presence of renowned researchers at a university is not, by itself, a sign of global quality; nowadays, however, it is hard to imagine advanced human capital training disconnected from high-quality knowledge generation processes.

In this regard, bibliometric indicators measuring the average quality of HEIs’ research outputs are valuable reference marks of research capability when it comes time to disseminate the acquired knowledge and pass it on to students. For this reason, it makes sense to put HEIs through benchmarking processes based on the Normalized Impact (NI)* achieved by their research outputs. It also makes sense to compare the impact distributions of National Higher Education Systems in order to assess aspects such as the heterogeneity of universities within a country or its relative average level, so that policymakers can consider whether it is better to sacrifice homogeneity to promote top-class institutions, or whether excellence can be achieved by the system as a whole without giving up equity.

The following chart, generated from the World Report SIR 2010 (based on Scopus data), shows, in a comparative way, the Normalized Impact distributions for all the universities belonging to the world’s 50 most productive countries in terms of scientific output.

[Chart: Normalized Impact distributions of universities in the world’s 50 most productive countries]

[Download data: Microsoft Excel | Open Office]

Universities within each country are distributed in quartiles ranging from higher to lower NI. As can be seen in the chart, the 50 countries can be divided into two groups: on one hand, those with more than 75% of their universities above the world average; on the other, those with the same percentage below it. We find 24 countries in the first group, featuring the USA, the UK, Germany, France, Canada, Italy and Spain, and 26 in the second, including China, Japan, Korea, India, Brazil and Russia.
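
As a rough illustration of this grouping, here is a minimal sketch with invented NI values (the real figures come from the SIR 2010 data behind the chart): a country falls into the first group when more than 75% of its universities score above the world-average NI of 1.

```python
# Illustrative only: the NI values below are invented, not taken from SIR 2010.
# By construction, the world-average Normalized Impact equals 1.0.
ni_by_country = {
    "Country A": [1.4, 1.2, 1.1, 1.05, 0.9],  # 80% of universities above 1.0
    "Country B": [1.1, 0.8, 0.7, 0.6, 0.5],   # 20% of universities above 1.0
}

WORLD_AVERAGE = 1.0

for country, values in ni_by_country.items():
    share_above = sum(v > WORLD_AVERAGE for v in values) / len(values)
    group = "first group" if share_above > 0.75 else "second group"
    print(f"{country}: {share_above:.0%} above the world average -> {group}")
```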

The presence of Japan or Korea in the second group might seem surprising, but it is not. We must bear in mind that the NI distributions show the average international visibility achieved by universities’ research outputs in those countries. The fact that these are technologically developed countries is therefore not incompatible with the relatively low research visibility shown by their academic institutions. The reasons for this are varied, and this may not be the place to deal with them, but it can be pointed out briefly that, according to the SIR World Report 2010, while one can easily find large Japanese and Korean companies carrying out high-profile research amongst the world’s Top 100 ordered by NI (e.g. Toyota, Nippon, Samsung, Toshiba, Fujitsu, Hitachi and Mitsubishi), it is otherwise hard to find a Japanese or Korean university in the Top 400 of the same ranking.

It is also worth analyzing the degree of homogeneity shown by different National Higher Education Systems. The chart shows that some countries have similar NI values across all their universities; these countries, which belong to either of the two groups mentioned above, show small differences between their extreme NI values (e.g. Belgium and Ukraine). By contrast, the most heterogeneous countries, from an NI perspective, are those in which universities reaching very high impact values coexist with others showing very low ones. This phenomenon is related not only to the size or complexity of the educational system, but also to the public vs. private balance and to the degree of openness of each National Higher Education System to the globalized market.

To conclude, I would point out that a system’s ability to attract talent is one of the main factors driving research reputation and, with it, the scientific visibility achieved by universities around the world.

* Normalized Impact scores indicate the scientific impact that institutions have on the scientific community. In order to obtain a fair measurement of that impact, the calculation removes the influence of institutions’ size and research profile, making the indicator well suited to comparing research performance. [More]

[Photo: Félix de Moya Anegón]

Félix de Moya Anegón is a Research Professor at the Institute of Public Goods and Policies (IPP) of the Spanish National Research Council (CSIC). His academic interests include scientometrics, bibliometrics, research evaluation and science policy, fields in which he has published around 100 papers. He is the SCImago Research Group’s main researcher and has led renowned bibliometric projects including the Scimago Journal & Country Rank, the Scimago Institution Rankings and The Atlas of Science. Prof. de Moya also advises national science and technology organizations and research institutions around the world on science policy issues.

9 Responses

  1. Congratulations on your new blog! This is a suitable tool for receiving alerts on this topic; however, I am missing an RSS button to integrate the blog with my personal feeds.

  2. yusef says:

    Thanks, M. Luisa! We have just placed an RSS icon prominently in the top right corner.

  3. Dear colleagues,

    Thank you for opening this new window for debate in the area, and allow me to write in English too in order to widen the feedback opportunities. I just wanted to briefly comment on this statement from the very interesting Jan 31st post: “while one can easily find large Japanese and Korean companies carrying out high-profile research amongst the world’s Top 100 ordered by NI (…) it is otherwise hard to find a Japanese or Korean university in the Top 400 of the same ranking”.

    When one checks university rankings other than the SIR World Report 2010, such as the January 2011 Web Ranking of World Universities published by the CCHS Cybermetrics Lab at http://www.webometrics.info/top12000.asp, one may find two Japanese universities among the best 25, making Japan the best-placed country after the US (Tokyo and Kyoto are also among the top 60 in the THE university ranking for 2010, http://www.timeshighereducation.co.uk/world-university-rankings/2010-2011/top-200.html).

    Even if methodologies vary among the different rankings, looking at them one gets a slight impression of what, back in my physics studies, we used to call ‘the big-point approach in statistical analysis’, meaning that statistical data can easily be twisted, if desired, to support practically any statistical hypothesis. In this particular case my point would be that there is still quite a way to go before assessment standards for university performance are sound and reliable enough, although great progress has recently been made in the field. At this point, however, basing management strategies on such information alone is probably quite risky. Thank you.

  4. Félix de Moya says:

    Thank you for your interest. You are quite right in your comment, and I could not agree more with your last sentence: “basing management strategies on such information alone is probably quite risky”. In this post I analyze NI as a tool to assess research institutions’ ability to efficiently generate knowledge that has an impact on the scientific community. I assume this is just one of the dimensions that research managers must take into account; my post must not be understood as a holistic assessment that equates NI ranking position with overall quality. On the other hand, the rankings you mention are rather different from SIR, not only because they do not measure “the same thing” (Webometrics focuses on web presence, while the THE ranking assesses institutions globally), but also because they pursue a different evaluation approach (i.e. one-indicator-fits-all), whereas the SIR rankings measure research performance and offer several sound indicators that allow users to carry out their own evaluation exercises.
    To be specific about Japanese and Korean universities, the data show a trend that excludes them from the top positions in the NI ranking (the ability to produce relatively high-impact papers), where they seem to have been replaced by companies from those countries.
    Finally, I am leaving aside the discussion about the high rank correlations exhibited among different rankings and ranking systems, which could help us understand what a good university is if we look at the big picture instead of at specific positions.

  5. Andrea Santos says:

    You use “Normalized Impact” in this interesting post. Can you clarify whether it is the CROWN indicator or Karolinska’s “item-oriented field-normalized citation score average”? Thanks.

  6. Andrea, Prof. de Moya states that he uses the Karolinska method to compute the Normalized Impact. Both this method and CROWN, the latter developed at Leiden University, lead to very similar scientific impact indicators. In both, the number of citations a paper receives is put into context by comparing it with the citations received by papers of the same type, published in the same field and within the same period. The difference lies in how this comparison across fields is aggregated: in Karolinska’s method every item contributes equally to the indicator value, whereas in the CROWN method items belonging to fields and/or years with higher citation density carry a higher weight in the final score.
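
    To make the difference concrete, here is a minimal sketch in Python; the citation counts and expected values are invented for illustration and are not taken from either system.

    ```python
    # Illustrative only: invented (citations, expected citations for the same
    # field, year and document type) pairs for two hypothetical papers.
    papers = [
        (10, 5.0),  # paper in a high-citation-density field
        (1,  2.0),  # paper in a low-citation-density field
    ]

    # Item-oriented (Karolinska-style): average the per-paper ratios,
    # so every item contributes equally to the indicator.
    item_oriented = sum(c / e for c, e in papers) / len(papers)

    # CROWN-style (Leiden): total citations divided by total expected citations,
    # so items from fields/years with higher citation density weigh more.
    crown = sum(c for c, _ in papers) / sum(e for _, e in papers)

    print(f"item-oriented: {item_oriented:.2f}")  # 1.25
    print(f"CROWN-style:   {crown:.2f}")          # 1.57
    ```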

  7. Teresa says:

    In the NI indicator you compare the number of citations of each journal with the average impact of publications of the same time frame, document type and subject area.

    For this comparison, how do you choose the subject area? How many subject areas do you use? And how do you deal with “general” journals with a high average number of citations, like Nature or Science?

    Thank you in advance

  8. Teresa, we use about 300 subject categories to build the NI indicator (http://www.scimagojr.com/journalrank.php). These categories belong to an in-house classification scheme, inherited from the Scopus database, which has 27 major areas and about 300 subject categories (speaking from memory). You can see the scheme in the SJR website forms, under the labels “Subject Area” and “Subject Category”. As for journals such as Science or Nature, which you mention in your question, these fall into the Multidisciplinary subject category (which in turn falls into the homonymous area, Multidisciplinary), so they are computed within that category.
    Items are also divided by type into “articles”, “reviews” and “conference papers” (only primary production is considered), and the normalization is performed against items of the same type.
    I will give you an example to clarify things. Let’s say you have a conference paper published in the subject category “Analytical Chemistry” in 2009. Then its number of citations is normalized against all the conference papers published in Analytical Chemistry in 2009.
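
    A minimal sketch of that example in Python follows; the reference citation counts are invented, since the real expected value is computed from the full Scopus database.

    ```python
    # Illustrative only: hypothetical citation counts for the reference set, i.e. all
    # conference papers published in the "Analytical Chemistry" category in 2009.
    reference_citations = [3, 0, 7, 2, 5, 1, 4]
    expected = sum(reference_citations) / len(reference_citations)

    # The item to normalize: one 2009 conference paper from the same category.
    paper_citations = 6
    normalized_impact = paper_citations / expected  # values above 1 exceed the reference average

    print(f"Expected citations for the reference set: {expected:.2f}")        # 3.14
    print(f"Normalized impact of the paper:           {normalized_impact:.2f}")  # 1.91
    ```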
