Evaluative informetrics is defined as the study of evaluative aspects of science and scholarship using informetric data and methodologies, such as citation analysis and altmetrics. Following the main lines of an article by the Dutch philosopher O.D. Duintjer, nine interfaces are distinguished between quantitative science studies, especially evaluative informetrics, and the domain of values, including scientific, socio-historical, political, ethical and personal norms and objectives. Special attention is given to the “principle of value neutrality” at the meta-level of methodological rules guiding scientific inquiry and to the crucial, independent role of evaluative frameworks in research evaluation. The implications of the various relationships between science and values for research practices in evaluative informetrics and for its application in research assessment are considered.
The key notion that the author wishes to put forward is that of a “principle of value neutrality”, positioned at the meta-level of methodological rules or values guiding scientific-scholarly empirical inquiry. The principle of value neutrality aims to avoid the tacit or hidden influence of political, managerial or other extra-scientific values in informetric research and, equally importantly, in the actual use of informetric methods in the domain of evaluation and politics.
An article published by O.D. Duintjer provides a framework not merely to highlight the practical consequences of the methodological principle of value neutrality but also to illustrate other, extra-scientific ways in which scientific inquiry relates to human values. Published in Dutch several decades ago, it was written for a broad scientific-scholarly audience and served as course material in teaching programs aimed at stimulating master’s students in science to reflect on the role of modern science and technology in society.
The current author is not equipped to offer a comprehensive overview of the long-standing theoretical-philosophical debate on the distinction between ‘facts’ and ‘values’. Readers who expect a thorough foundation for the philosophical assumptions underlying this essay will be disappointed. In addition, the current paper makes no statements at the level of informetric or any other type of empirical research itself. Readers who hope to learn about new informetric concepts or about critical sociological analyses of the use of informetrics in the evaluation of researchers, important as these may be, will not find what they are looking for.
The current article is of a more theoretical-philosophical than empirical nature. What it does offer is a series of theoretical distinctions that aim to help informetricians, evaluators using informetric indicators and researchers subjected to assessment to obtain insight into the potential and limits of informetrics. These distinctions discern between issues that can be solved informetrically and issues relating to evaluative or political premises that, relevant as they may be, play a key role in the application of informetric indicators yet fall outside the jurisdiction of informetrics.
Recent discussions in the science policy domain on responsible research and responsible metrics focus on guidelines for evaluation in general and for the use of informetric indicators in particular (e.g., Hicks et al., 2015; Bibliomagician, n.d.). The current author fully acknowledges the relevance of these discussions for the development of a set of adequate guidelines on how to carry out research assessment processes and aims to contribute to these discussions. The current article deals with the setting of guidelines in the following manner: it draws the consequences from the methodological value neutrality principle for the type of decisions that professional informetricians, as informetric experts, are allowed to make.
Citation analysis and related tools of evaluative informetrics, and their application in research assessment, relate to values in many ways. As a scientific activity, evaluative informetrics is subject to internal-scientific principles and criteria, raising questions such as ‘which methodological rules should one obey?’, and also to external-scientific issues such as ‘what is the wider impact and societal relevance of my research?’ When informetric results are used in a research assessment exercise, things become even more complex. Informetric experts and other participants are confronted with the procedural rules of the assessment process, for instance, whether or not the principle of ‘hearing both sides’ is ensured, and, as regards the formation and justification of a quality judgment, with questions such as ‘what are the overall objectives of the assessment?’, ‘what has to be assessed?’ and ‘which yardsticks should be applied?’
Adopting a theoretical-philosophical approach, this contribution gives an overview of the various ways in which evaluative informetrics and ‘values’ are related. Moreover, it discusses the implications of this overview for the role of evaluative informetrics in its research practices and in research assessment.
The term informetrics refers to the quantitative-empirical study of information, especially, though not exclusively, related to science and technology. Bibliometrics, focusing on the analysis of scientific-scholarly journals and books; citation analysis, studying patterns in citations from one scientific document to another; and altmetrics, including the analysis of usage of documents and of the role of social media platforms, represent important approaches in informetrics. Evaluative informetrics can be described as the study of evaluative aspects of science and scholarship using informetric data and methodologies. The adjective ‘evaluative’ refers to research assessment as its primary application domain and delineates the context in which informetric indicators are used. As will be clarified below, this term does not mean that evaluative informetrics by itself evaluates.
Evaluation or assessment can be defined as “a systematic determination of a subject’s merit, worth and significance, using criteria governed by a set of standards” (“Evaluation”, n.d.). In his monograph Applied Evaluative Informetrics, Moed (2017) makes an analytical distinction between four domains of intellectual activity in an assessment process: (1) Policy and management: the formulation of a policy issue and assessment objectives; (2) Evaluation: the specification of an evaluative framework, i.e., a set of evaluation criteria, in agreement with the policy issue and assessment objectives; (3) Analytics: collecting, analyzing and reporting empirical knowledge on the subjects of assessment; and finally (4) Data collection: the collection of relevant data for analytical purposes. Informetrics is positioned at the level of analytics and data collection.1
The following example may clarify the difference between citation impact indicators as a research tool in quantitative science studies and as an assessment tool in research evaluation. In the empirical study of value perceptions, informetric methods can be used as a research tool. As Stephen Cole stated, “A crucial distinction must be made between using citations as a rough indicator of quality among a relatively large sample of scientists and in using citations to measure the quality of a particular individual’s work” (Cole, 1989, p. 9). “In sociological studies our goal is not to examine individuals but to examine the relationships among variables” (ibid., p. 11). “Citations are a very good measure of the quality of scientific work for use in sociological studies of science; but because the measure is far from perfect it would be an error to reify it and use it to make individual decisions” (ibid., p. 12).
Following the main lines of an article by the Dutch philosopher O.D. Duintjer, the current contribution distinguishes nine interfaces between modern science and values (Duintjer, 1974). The term “value” is used as a generic term covering “fundamental ethical principles; norms, rules, regulations, and objectives, including the objectives themselves insofar as they have as normative function; and evaluative judgments or valuations” (Duintjer, 1974, p. 21).2 “Modern science” denotes a type of science that has emerged in the natural sciences, but that has permeated almost all empirical sciences. According to Duintjer, “its aim is to explain, predict and thereby make manageable empirical phenomena” (Duintjer, 1974, p. 21). In the current contribution, informetrics, and the wider field of quantitative science studies are considered as domains of modern science in this sense.
Special attention is given to a “principle of value neutrality”, positioned at the meta-level of methodological rules guiding scientific inquiry. The Sociology Group (n.d.) introduces the notion of value neutrality as follows: “The concept of value-neutrality was proposed by Max Weber. It refers to the duty and responsibility of the social researcher to overcome his personal biases while conducting any research. It aims to separate fact and emotion and stigmatize people less. It is not only important in sociology but outlines the basic ethics of many disciplines. […] A sociologist can become value-neutral by being aware of his own values, beliefs, or moral judgements. He should be able to identify the values he held and prevents them from influencing his research, its findings, or conclusions.” (Sociology Group, n.d.).
Weber’s concept was further developed by the German philosopher Hans Albert, who, building upon ideas of Karl Popper (1966), positioned and discussed it at the meta-level of rules that direct scientific argumentation (Albert, 1965). Duintjer’s article builds further on Albert’s work. The methodological rules have an internal-scientific character and constitute scientific practice. However, Duintjer also underlines the normative function of political, moral and cultural values related to scientific research. In line with this, the current contribution considers not only intra-scientific values but also these extra-scientific values with respect to informetrics and its role in research assessment. Table 1 gives an overview of the nine interfaces distinguished by Duintjer, with typical examples from the domains of quantitative science studies and evaluative informetrics devised by the current author.
| # | Interfaces between science and values | Further explanation and typical examples from evaluative informetrics chosen by the current author |
|---|---|---|
| 1 | Values as the object of scientific research | Robert K. Merton’s study of the norms of science. Empirical investigation of how scientists perceive research quality. |
| 2 | Values in the language of scientific research about the object | Personal judgments can be mistakenly conceived as representing knowledge of the object validated in empirical research. Use of the term ‘closed’ as opposite to ‘open’ access to scientific journals seems suggestive. |
| 3 | Values that lie at the basis of scientific research | Value neutrality as a methodological principle: scientific statements should be free of any value judgment. Evaluation criteria and policy objectives cannot be founded informetrically. |
| 4 | Values behind selective viewpoints and core concepts of scientific theories | Extra-scientific values behind selective viewpoints should be made explicit. The assumption that generating impact is per se valuable; assessment must be formative, not summative. |
| 5 | Values when setting up investigations, the results of which are foreseeably relevant to certain societal powers | Institutions may commission informetric studies with the aim of solving an internal conflict or enhancing their international status. Critical informetric studies may aim at questioning funding policies. |
| 6 | Values as a limit on what is permissible in scientific experiments with laboratory animals and test subjects | The publication of outcomes of ‘experimental’ bibliometric studies may harm the prestige of the study subjects, especially individual researchers and departments. |
| 7 | Modern science produces, by virtue of its structure, results that necessitate an overall discussion about the ends pursued in society | Within the academic community, an overall discussion is needed about the objectives and criteria in research assessment processes and the role of informetric indicators therein. |
| 8 | Extension of elements from the ethos of science to moral values outside of science | Extending the principles of openness and adopting a critical attitude toward the formation of an evaluative judgment in a research assessment process. |
| 9 | The value area of modern science as part of a wider lifeworld | In research assessment, distinct value domains come together, both from science, social science and humanities and from extra-scientific areas, such as politics. |
The interfaces presented in the next section are interpreted in terms of the value configuration underlying the field of quantitative science studies, especially evaluative informetrics. In this way, the current contribution not only explains Duintjer’s interfaces between modern science and values but also aims to enlighten the role of values in evaluative informetrics. The final section of this contribution discusses some consequences of the principles and distinctions outlined by Duintjer for research in evaluative informetrics and its application in research assessment.
A well-known example of the study of values is the work by Robert K. Merton on social norms in scientific-scholarly research communities, conceived by him as ideals that are dictated by the goals and methods of science and are binding on scientists (Merton, 1942; 1979).
A typical example from the field of quantitative science studies is empirical research into how researchers perceive “research quality”. This type of research does not aim at drawing conclusions as to whether such perceptions are valid or not, but at examining how researchers form quality judgments and which decisions they make in a particular assessment process. One may also investigate the empirical conditions under which certain quality perceptions emerge or under which one perception is more likely to be shared than another. Research on biases in peer review is a good example of this type of investigation. A research analyst may find that females are underrepresented in the permanent academic staff of a particular university. Such a conclusion should be empirically and statistically justified, independently of whether the analyst finds discrimination against females in promotion processes unfair.
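The kind of statistical justification meant here can be sketched in a minimal form (all figures are hypothetical, and a one-sided binomial test is assumed as the model): one tests whether the observed share of women among permanent staff is implausibly low relative to a reference share, regardless of the analyst’s own valuation of the outcome.

```python
from math import comb

def binomial_tail(k, n, p):
    """One-sided tail probability P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

# Hypothetical figures: 12 women among 60 permanent staff members,
# against a reference share of 35% (e.g., the share among eligible candidates).
p_value = binomial_tail(12, 60, 0.35)

# A small p-value justifies the empirical statement of underrepresentation;
# whether the underlying practice is unfair remains a separate value judgment.
print(p_value)
```

The point of the sketch is the separation it enforces: the p-value grounds the factual claim, while the normative appraisal of that fact lies outside the statistical analysis.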
In scientific-scholarly research practice, at the level of theoretical-empirical statements about objects, value judgments may enter, reflecting the valuation or appreciation of the analyst toward the object. Such personal judgments can be mistakenly conceived as representing knowledge of the object validated in the research itself. A typical example is the use of the term ‘closed access’ to indicate subscription-based access to scientific journals. Additionally, the use of the term ‘peripheral journal’—even if it has a well-defined meaning in network analysis—may give rise to a negative impression of a journal among non-experts.
The methodological requirement of value neutrality at the level of statements on objects based on empirical research dictates that such personal valuations of the object of research should be avoided. A researcher who wants to make them known to the reader should make clear that the qualifications are not based on the research itself, but are a priori assumptions of the investigator. The requirement of value neutrality also has the following implication: it is methodologically incorrect to neglect, on the basis of a value interest, relevant facts that fall within the intended field of research but contradict a preconceived conclusion and to one-sidedly emphasize facts that do support it.
This section is not about values as research object, nor about scientifically unfounded or hidden prescriptions and valuations by the analyst, but about the value basis on which science itself operates, including rules that standardize and direct scientific activity. These values can be explicated and discussed at a level higher than that on which scientific statements are made. It can be denoted as a meta-level.
The principle of value neutrality is itself a normative requirement that should be analytically positioned at this meta-level. Although the methodological rules at this level do have an ethical component, it is not their ethical dimension that is at stake in this section. These rules are constitutive for scientific inquiry. It is not so much that researchers who violate these rules are doing ‘bad science’, but rather that they are not doing science at all, in the same way as, for instance, two chess players who allow pawns to move three squares forward are not playing chess.
The principle of value neutrality has strong implications for the position of an informetric analyst contributing to an assessment process toward the policy objectives underlying the process and the evaluation criteria applied in it. As stated by Moed, “A basic notion holds that from what is cannot be inferred what ought to be. Evaluation criteria and policy objectives are not informetrically demonstrable values. Of course, empirical informetric research may study quality perceptions, user satisfaction, the acceptability of policy objectives, or effects of particular policies, but they cannot provide a foundation of the validity of the quality criteria or the appropriateness of policy objectives. Informetricians should maintain in their informetric work a neutral position towards these values” (Moed, 2017, p. x).
The methodological rules stated at the meta-level constitute scientific practice and have an internal-scientific character. However, the methodological principle of value neutrality does not preclude that also political, moral and cultural values have a normative function toward scientific-scholarly research. Following Duintjer (1974), the next six sections show, from different viewpoints, the role of the social-political-cultural context in which research is being carried out.
Duintjer underlines that each problem statement is guided by selective points of view that are incorporated in theoretical core concepts and that determine the selection of empirical data. Extra-scientific values often hide behind these selective viewpoints and core concepts. “The fact that scientific research is guided by selective points of view, which in turn are inspired by social values, is not formally the same as a normative setting of these values. In research, these values can be assigned a hypothetical status” (Duintjer, 1970, p. 29).
Against the dependence on extra-scientific values, he recommends taking the following precautions: “making the concealed or implicit value background of selective viewpoints explicit and recognition of its non-scientific character; insight in the limits of the empirical knowledge acquired within such points of view; receptivity to investigations adopting different or wider viewpoints” (Duintjer, 1970, p. 32).
In quantitative science studies and applied evaluative informetrics, the concept of research impact plays a key role. In many studies, a tacit assumption seems to be that generating impact is per se valuable and hence worth studying in bibliometric analyses and assessing in research evaluation. However, the validity of this assumption cannot be grounded in informetric or sociological research, even though it determines which data are to be collected and analyzed (namely, citation data). Impact as a value should be given a hypothetical status.
A second example relates to the distinction between formative and summative evaluation. In summative evaluation, the focus is on the outcome of a particular activity. This outcome can, for instance, be used in a funding decision. Formative evaluation assesses the development in an activity at a particular time and focuses on improving its performance. Whether an assessment process should be formative or summative cannot be decided on the basis of informetric grounds.
In empirical research into the effects of the application of indicators upon researchers’ behavior, the analyst’s view on whether this application is valuable and whether or not citations indicate performance should not play a role in the justification of empirical statements. However, this view may influence the selective viewpoints adopted during the set-up of the study. The principle of value neutrality does not imply that such influence is inadmissible, but rather that it should be made explicit in the presentation of the study set-up and discussion of the outcomes.
As Duintjer states, “The choice of research fields in science is not only related to conceptual points of view, but also to the extent to which the expected results of the research can foreseeably be of importance for certain social power groups, which are sometimes involved in conflicts and who will use these results in their favor and to the detriment of their opponents” (p. 32). Conversely, value decisions are also at stake when investigations are launched to unmask ideological facade decorations and hidden power relations and can therefore foreseeably contribute to public awareness or political “Aufklärung” (p. 33).
For instance, managers at a research institution or research funding organization may commission informetric researchers to conduct a study with the aim of using its outcomes to solve an internal conflict or to enhance the status of their organization. They may even commission informetricians to develop a strategy to artificially increase the bibliometric indicator score of their institution, with the objective to enhance its position in world university rankings. This type of applied evaluative informetrics is directly serving certain vested interests.
However, critical informetric studies can also question a particular situation or practice. For instance, the work of the Leiden team headed by Anthony van Raan in the early 1980s underlined the critical potential of bibliometric indicators, namely their capacity to raise questions to academic department managers and funding organizations about their quality assessment and funding policies. In a study of research groups in their own university, they obtained evidence of conservatism in funding policies, favoring groups with a long publication and funding track record at the expense of emerging groups headed by young researchers exploring new approaches (Moed et al., 1985).
“Scientific explanations and predictions are often sought with the help of experiments, looking for behaviors of an isolated class of phenomena under self-determined conditions. But when organizing an experiment with laboratory animals or test subjects, one also has to deal with moral values that regulate, also outside of science, our relationship to animals and humans” (Duintjer, p. 34).
When developing and testing new assessment methodologies or indicators, their validity can be examined only if they are applied to “real” performing entities, such as individual researchers, groups or institutions. Therefore, application experiments or ‘try outs’ are organized, in which the outcomes of a method are discussed with the subjects of the assessment. However, if these outcomes become public—for instance, published in a journal article—or are shared with those who commissioned the experiment—e.g., a managing director of the commissioning organization—they may harm the prestige of the subjects under assessment, even if the final conclusion holds that the experiment has been only partially successful.
The environment in which evaluative-informetric experiments take place is not a scientific testing ground. Their effects upon the outside world may be irreversible. The experimental setting has not only a theoretical-methodological dimension but also a moral one: how to protect subjects under assessment against loss of status during and after a bibliometric experiment.
Duintjer emphasizes a so-called “structural equivalence” of science and technology. All theoretical statements or explanations in modern science can be converted into technological statements that answer the question “what could be done?” “All pure theoretical research in modern science provides society with means to steer nature and man, without indicating in which direction one should steer” (p. 38). Duintjer advocates an “overall, democratic value discussion regarding the direction, goals and standards to which society directs itself and is directed” (p. 38). “Obviously, not merely scientists should decide on the direction of the whole of society, especially not because value judgments cannot be derived from theoretical-empirical knowledge itself” (p. 38).
As outlined in the introduction section, Moed (2017) distinguished four levels of intellectual activity in a research assessment process: policy and management, evaluation, analytics and data collection. On the one hand, he stated that a view of what is valuable in science and scholarship and its corresponding evaluation criteria and standards cannot be grounded in empirical research conducted at the analytics level. In this respect, evaluative informetrics is and should be value neutral. On the other hand, he argued that “in the proper use of informetric tools an evaluative framework and an assessment model are indispensable. To the extent that in a practical application an evaluative framework is absent or implicit, there is a vacuum, that may be easily filled either with ad-hoc arguments of evaluators and policy makers, or with un-reflected assumptions underlying informetric tools” (Moed, 2017, p. xi).
Following Duintjer’s statements quoted above, one can maintain first of all that not merely informetricians should decide on the values, standards and political objectives underlying an assessment, especially not because value judgments cannot be derived from informetric research itself. What is more, one can argue that an overall discussion is needed within the academic community and the research policy domain about the objectives and criteria in research assessment processes at various levels of aggregation—e.g., individuals, institutions—and about appropriate conditions for the use of informetric indicators in these processes.
Duintjer introduces this interface between science and values as follows: “As already said, the theoretical-empirical knowledge of science does contain technological information about what we can do, but it does not yet answer the normative question of what we should do. The latter question was precisely the subject of the advocated value discussion (see previous section, HM). One may wonder, however, whether one can perhaps derive from the ethos that underlies scientific practice also elements that can be presented to society as moral and political values” (p. 39/40).
He proposes a possible extension not only of norms regulating the scientific attitude toward colleague-researchers but also of methodological rules constituting a researcher’s relationship with the object of research. He refers to, among other authors, Karl Popper (1966) and Hans Albert (1965), who, building upon the tradition of ‘critical rationalism’, propose that a moral belief in reason as a method to solve problems in science could function as a norm also in society.
Within the context of the current paper, the question emerges as to whether and how science-internal norms could provide guidance in science assessments, especially in the further theoretical and practical development of evaluative frameworks. Following Duintjer’s line of reasoning, one could, for instance, propose to extend the intra-scientific principles of openness and adopting a critical attitude, or the ‘Mertonian’ norm of disinterestedness toward the formation of an evaluative judgment in a research assessment process (Biagetti and Gedutis, 2019).
As outlined in Section 3, scientific statements and practices are regulated by specific values in the form of objectives and rules. According to Duintjer, these values define what could be termed as a particular access road into reality. The value field of modern science is not the only value sphere, but rather forms part of a comprehensive whole of values next to other value spheres, such as those of politics, art, philosophy of life and existential experience.
In research assessment, distinct value domains come together, making it a complex activity that is difficult to grasp. The term ‘research performance’ relates to an agglomerate of internal-scientific values, such as ‘methodological soundness’, or notions as complex as ‘contribution to scientific-scholarly knowledge in a discipline’ or, moving outside the boundaries of a particular discipline, ‘to the advancement of scientific-scholarly knowledge in general’. Reaching beyond the domain of science and scholarship, values in research assessment may refer to ‘improving the state of mankind’ or to technological, economical or societal merit.
The domain that Duintjer denoted as “modern science” and that is outlined in the introduction section does not embrace other forms of scholarship that he denotes as “hermeneutic”3—philosophy is a typical example—and “socio-critical”.4 It is good practice to denote these disciplines as ‘scholarly’ rather than ‘scientific’. The current author defends the position that these domains of scholarship could contribute to the formation of the overarching evaluative framework mentioned above, in which they could have an enlightening rather than a technological function. Such contributions can be marked as scholarly but fall outside the value area of modern science and indeed cover “other parts of a wider lifeworld”. However, it should be noted that there is no explicit reference to this interpretation in Duintjer’s article.
This concluding section aims to draw conclusions from Duintjer’s framework and the notions outlined above for the values and limits of the use of informetric methods in the evaluation of scientific-scholarly research. It proposes a series of practical guidelines that may help practitioners in evaluative informetrics to reflect upon these values and limits and to define their role both as researchers in quantitative science studies and as consultants in actual assessment processes. A first series of guidelines relates to evaluative informetrics as a research activity and emerges from the principle of value neutrality as a methodological requirement outlined above. A second series concerns the application of evaluative informetrics in research assessment and its evaluative framework and refers to non-scientific criteria, principles and objectives from other value domains.
1The notion of an evaluative framework has a practical dimension and a theoretical dimension. The first relates to the process during which research assessment takes place: how it is organized, which evaluation model is chosen and which rules and principles regulate the process. The theoretical dimension relates to the assessment objectives and to an articulation of what has to be evaluated, which criteria and yardsticks are to be applied and how these are justified. The current contribution focuses on this second dimension.
2Duintjer states, “I start from the assumption that the distinction between facts and values is legitimate in the sense of being a logical distinction between types of language use. But this logical distinction should not be conceived as a metaphysical separation of two worlds. Facts are being searched and found in a context that comprises also value components, while on the other hand facts may give rise to the design of new values that can in turn be related to real possibilities” (p. 27).
3“Hermeneutics is the theory and methodology of interpretation, especially the interpretation of biblical texts, wisdom literature, and philosophical texts” (https://en.wikipedia.org/wiki/Hermeneutics).
4“Critical theory is the reflective assessment and critique of society and culture by applying knowledge from the social sciences and the humanities to reveal and challenge power structures” (https://en.wikipedia.org/wiki/Critical_theory).
The author has no competing interests to declare.
Biagetti, M. T., & Gedutis, A. (2019). Towards Ethical Principles of Research Evaluation in SSH. Lecture presented at the Third Conference on Research Evaluation in the Social Sciences and Humanities (RESSH 2019), Valencia, Spain, 19–20 September 2019. Available at: https://ressh2019.webs.upv.es/.
Bibliomagician. (n.d.). The Bibliomagician. Comment & practical guidance from the LIS-Bibliometrics community. https://thebibliomagician.wordpress.com/.
Cole, S. (1989). Citations and the evaluation of individual scientists. Trends in Biochemical Sciences, pp. 9 ff. DOI: https://doi.org/10.1016/0968-0004(89)90078-9
Duintjer, O. D. (1974). Moderne Wetenschap en Waardevrijheid. In Th. De Boer & A. J. F. Köbben (Eds.), Waarden en Wetenschap (pp. 20–45). Bilthoven: Ambo. Originally published in Algemeen Nederlands Tijdschrift voor Wijsbegeerte, 62(1), 2–44, 1970.
Evaluation. (n.d.). Evaluation. https://en.wikipedia.org/wiki/Evaluation.
Hicks, D., Wouters, P., Waltman, L., de Rijcke, S., & Rafols, I. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature, 520, 429–431. DOI: https://doi.org/10.1038/520429a
Merton, R. K. (1942). The Normative Structure of Science. Reprinted in R. K. Merton (1979), The Sociology of Science: Theoretical and Empirical Investigations. Chicago, IL: University of Chicago Press. ISBN 978-0-226-52092-6.
Moed, H. F. (2017). Applied Evaluative Informetrics. Cham: Springer. ISBN 978-3-319-60521-0 (hardcover); 978-3-319-60522-7 (e-book). DOI: https://doi.org/10.1007/978-3-319-60522-7
Moed, H. F., Burger, W. J. M., Frankfort, J. G., & Van Raan, A. F. J. (1985). A comparative study of bibliometric past performance analysis and peer judgement. Scientometrics, 8, 149–159. DOI: https://doi.org/10.1007/BF02016933
Sociology Group. (n.d.). Value Neutrality: Explained with Examples. https://www.sociologygroup.com/value-neutrality-meaning-examples/