Kraker, P. & Lex, E.: A Critical Look at the ResearchGate Score as a Measure of Scientific Reputation

Abstract: In this paper, we present an assessment of the ResearchGate Score as a measure of a researcher’s scientific reputation. This assessment is based on well-established bibliometric guidelines for research metrics. In our evaluation, we find that the ResearchGate Score has three serious shortcomings: (1) the score is intransparent and irreproducible, (2) the score incorporates the journal impact factor to evaluate individual researchers, and (3) changes in the score cannot be reconstructed. Therefore, we conclude that the ResearchGate Score should not be considered in the evaluation of academics in its current form.

DOI: 10.5281/zenodo.35401

License: Creative Commons Attribution 4.0 International (CC-BY 4.0)

File: ASCW15_kraker-lex-a-critical-look-at-the-researchgate-score_v1-1.pdf (v1.1)

7 Comments



  1. This is important work. The academic metrics business worldwide is becoming more important, and bodies such as RG will (if they aren’t already) be marketing various data to universities and government bodies. They will undoubtedly bring lots of arguments for why their metric is a good one, so independent analysis is essential.

    One of the things that is potentially worrying is that even if the RG metric is not explicitly used, it will affect behaviour – if RG uses the metric to influence which publications it favours in email notifications etc., then this may well change what we read and hence what we cite.

    Arguably this is already true of journals and repositories such as the DL: an equally good paper may exist elsewhere, but if it is in a ‘good’ and visible place it will be viewed more easily. The core difference is, as you point out, transparency in process.

  2. The paper rejects the ResearchGate Score as a tool for measuring scientific reputation by pointing out its incompatibility with bibliometric standards. The arguments presented are convincing and sufficient for the purpose of the article.

    For a better understanding of the nature of this metric, it might be helpful to contextualize it a bit more. Obviously, the RG Score is tightly connected to the platform ResearchGate, a social network that largely depends on a high number of active users who contribute on a regular basis by adding different types of data (from uploading papers to interacting with others). Taking this into account, it becomes clear that the purpose of the RG Score is apparently not so much to take “all the research” of a researcher and turn it “into a source of reputation” (as the self-description claims) but rather to encourage interaction on the platform in order to make it more successful. This can be argued because the RG Score draws on publications and interactions within the platform while largely ignoring other possible indicators (the JIF is a relatively new exception, which is itself problematic, as described in the article).
    Therefore, it is not only unsuitable for measuring scientific reputation but also a manipulative marketing tool that tries to put pressure on academics, who are by default concerned about their reputation.

    I find this an important point because, while I agree with the authors that “[i]ncluding research outputs other than papers (e.g. data, slides) is definitely a step into the right direction and the idea of considering interactions when thinking about academic reputation has some merit”, pressuring scholars to interact within the confines of a platform with its very own agenda and interests clearly is not. The flaws of the RG Score are only flaws from a bibliometric point of view. As a marketing tool, they make perfect sense.

    Forgive me for the self-promotion, but I also made these points in a talk at the Science 2.0 conference in Hamburg in 2014 (https://www.youtube.com/watch?v=D8oP6C7n42k, skip to 10:45) in case you are interested. See also the (German) paper I published with Michael Nentwich in kommunikation@gesellschaft:

    König, René; Nentwich, Michael: Cyberscience 2.0: Wissenschaftskommunikation in der Beta-Gesellschaft. In: kommunikation@gesellschaft 15 (2014), Sonderausgabe.
    http://www.ssoar.info/ssoar/bitstream/handle/document/37844/ssoar-ketg-2014-Koenig_Nentwich-Cyberscience2.0.pdf?sequence=3

  3. I found the paper quite interesting and would like to follow up on Graham’s comments with mine. I would never use the RG Score to evaluate researchers, since it not only tries to reflect research quality but also tries to increase usage of the platform. A clear indication of this is that interactions with other RG members are taken into account by the indicator.

    Maybe you could also add some more information about RG in a revised/extended version of the paper:

    – number of users

    – their status: junior researchers, senior researchers

    – distribution of disciplines

    – …

  4. Interesting paper providing a critical view of the RG Score. The authors rightly point to some of the most important drawbacks of the composite indicator provided by the online platform ResearchGate, namely the lack of transparency in the construction and calculation of the indicator, the impossibility of replicating its calculation, its composite nature (which reduces the complex and multidimensional nature of research activities to a single number), and its reliability problems related to data quality and the (unexplained) changes in the algorithm, which apparently have substantial effects on the scores of individuals (as the authors point out in their Figure 3), among others.

    Perhaps the paper would benefit from a broader discussion of the use and application of metrics at the individual level. This has been a recurrent topic in the bibliometric field and is still quite controversial (debates and discussions that seem to be ignored by ResearchGate). For example, the authors could use the ‘Dos and don’ts in individual-level bibliometric indicators’ (http://es.slideshare.net/paulwouters1/issi2013-wg-pw) to frame some of their criticisms, as well as other research results in the field of bibliometrics (e.g. research around the h-index, individual-level bibliometrics, and related indicators).

  5. Is there any indication that ResearchGate Scores are being used to evaluate researchers anywhere? A larger data study would be the next step. Then machine learning tools could be used to try to identify which factors the score actually depends on.

  6. This makes sense to me.
