How Evaluative Informetrics Relates to Scientific, Socio-Historical, Political, Ethical and Personal Values

Policy highlights:
• If evaluation is defined as “a systematic determination of a subject’s merit, worth and significance, using criteria governed by a set of standards”, there is no evaluation without an evaluative framework specifying these criteria and standards.
• On the other hand, evaluative informetrics itself, defined as the study of evaluative aspects of science and scholarship using citation analysis, altmetrics and other indicators, does not evaluate.
• However, informetric indicators are often used in research assessment processes. To obtain a better understanding of their role, the links between evaluative informetrics and ‘values’ are investigated, and a series of practical guidelines are proposed.
• Informetricians should maintain in their evaluative informetric studies a neutral position toward the policy issues addressed and the criteria specified in an evaluative framework.
• As professional experts, informetricians’ competence lies primarily in the development and application of analytical models within the context of a given evaluative framework.
• Informetric researchers could propose that evaluators and policy makers incorporate fundamental scientific values, such as openness and adopting a critical attitude, in assessment processes.
• Informetricians could also promote and participate in an overall discussion within the academic community and the research policy domain about the objectives and criteria in research assessment processes and the role of informetric tools therein.


Introduction
Evaluative informetrics, with tools such as citation analysis, and its applications in research assessment deal in many ways with values. As a scientific activity, informetrics is subject to internal-scientific principles and criteria, dealing with questions such as 'which methodological rules should one obey?', and also to external-scientific issues such as 'what is the wider impact and societal relevance of my research?' When informetric results are used in a research assessment exercise, things become even more complex. Informetric experts and other participants are confronted with the procedural rules of the assessment process, for instance, whether or not the principle of 'hearing both sides' is ensured, and, as regards the formation and justification of a quality judgment, with questions such as 'what are the overall objectives of the assessment?', 'what has to be assessed?' and 'which yardsticks should be applied?' Adopting a theoretical-philosophical approach, this contribution gives an overview of the various ways in which evaluative informetrics and 'values' are related. Moreover, it discusses the implications of this overview for the role of evaluative informetrics in its research practices and in research assessment.
The term informetrics refers to the quantitative-empirical study of information, especially, though not exclusively, related to science and technology. Bibliometrics, focusing on the analysis of scientific-scholarly journals and books; citation analysis, studying patterns in citations from one scientific document to another; and altmetrics, including the analysis of usage of documents and of the role of social media platforms, represent important approaches in informetrics. Evaluative informetrics can be described as the study of evaluative aspects of science and scholarship using informetric data and methodologies. The adjective 'evaluative' refers to research assessment as its primary application domain and delineates the context in which informetric indicators are used. As will be clarified below, this term does not mean that evaluative informetrics by itself evaluates.
Evaluation or assessment can be defined as "a systematic determination of a subject's merit, worth and significance, using criteria governed by a set of standards" ("Evaluation", n.d.). In his monograph Applied Evaluative Informetrics, Moed (2017) makes an analytical distinction between four domains of intellectual activity in an assessment process: (1) Policy and management: the formulation of a policy issue and assessment objectives; (2) Evaluation: the specification of an evaluative framework, i.e., a set of evaluation criteria, in agreement with the policy issue and assessment objectives; (3) Analytics: collecting, analyzing and reporting empirical knowledge on the subjects of assessment; and finally (4) Data collection: the collection of relevant data for analytical purposes. Informetrics is positioned at the level of analytics and data collection. The following example may clarify the difference between citation impact indicators as a research tool in quantitative science studies and as an assessment tool in research evaluation. In the empirical study of value perceptions, informetric methods can be used as a research tool. As Stephen Cole stated, "A crucial distinction must be made between using citations as a rough indicator of quality among a relatively large sample of scientists and in using citations to measure the quality of a particular individual's work" (Cole, 1989, p. 9). "In sociological studies our goal is not to examine individuals but to examine the relationships among variables" (ibid., p. 11). "Citations are a very good measure of the quality of scientific work for use in sociological studies of science; but because the measure is far from perfect it would be an error to reify it and use it to make individual decisions" (ibid., p. 12).
Following the main lines of an article by the Dutch philosopher O.D. Duintjer, the current contribution distinguishes nine interfaces between modern science and values (Duintjer, 1974).
The term "value" is used as a generic term covering "fundamental ethical principles; norms, rules, regulations, and objectives, including the objectives themselves insofar as they have a normative function; and evaluative judgments or valuations" (Duintjer, 1974, p. 21). "Modern science" denotes a type of science that has emerged in the natural sciences, but that has permeated almost all empirical sciences. According to Duintjer, "its aim is to explain, predict and thereby make manageable empirical phenomena" (Duintjer, 1974, p. 21). In the current contribution, informetrics and the wider field of quantitative science studies are considered as domains of modern science in this sense.
Special attention is given to a "principle of value neutrality", positioned at the meta-level of methodological rules guiding scientific inquiry. The Sociology Group (n.d.) introduces the notion of value neutrality as follows: "The concept of value-neutrality was proposed by Max Weber. It refers to the duty and responsibility of the social researcher to overcome his personal biases while conducting any research. It aims to separate fact and emotion and stigmatize people less. It is not only important in sociology but outlines the basic ethics of many disciplines. […] A sociologist can become value-neutral by being aware of his own values, beliefs, or moral judgements. He should be able to identify the values he held and prevents them from influencing his research, its findings, or conclusions."
Weber's concept was further developed by the German philosopher Hans Albert, who, building upon ideas by Karl Popper (1966), positioned and discussed this concept at the meta-level of rules that direct scientific argumentation (Albert, 1965). Duintjer's article builds further on Albert's work. The methodological rules have an internal-scientific character and constitute scientific practice. However, Duintjer also underlines the normative function of political, moral and cultural values related to scientific research. In line with this, the current contribution not only considers intra-scientific values but also discusses these extra-scientific values with respect to informetrics and its role in research assessment. Table 1 gives an overview of the nine interfaces distinguished by Duintjer, together with typical examples invented by the current author from the domains of quantitative science studies and evaluative informetrics.
The interfaces presented in the next sections are interpreted in terms of the value configuration underlying the field of quantitative science studies, especially evaluative informetrics. In this way, the current contribution not only explains Duintjer's interfaces between modern science and values but also aims to enlighten the role of values in evaluative informetrics. The final section of this contribution discusses some of the consequences of the principles outlined and the distinctions made by Duintjer for research in evaluative informetrics and its application in research assessment.

Table 1: Nine interfaces between modern science and values (Duintjer, 1974), with typical examples invented by the current author from the domains of quantitative science studies and evaluative informetrics.
1. Values as the object of scientific research. Example: empirical research into how researchers perceive "research quality".
2. Values in the language of scientific research about the object. Example: the use of the term 'closed access' to indicate subscription-based access to scientific journals.
3. Values that lie at the basis of scientific research. Example: the principle of value neutrality as a methodological requirement at the meta-level of rules guiding scientific inquiry.
4. Values behind selective viewpoints and core concepts of scientific theories. Example: the tacit assumption that generating impact is per se valuable and worth being assessed.
5. Values when setting up investigations, the results of which are foreseeably relevant to certain societal powers. Example: institutions may commission informetric studies with the aim of solving an internal conflict or enhancing their international status; critical informetric studies may aim at questioning funding policies.
6. Values as a limit on what is permissible in scientific experiments with laboratory animals and test subjects. Example: the publication of outcomes of 'experimental' bibliometric studies may harm the prestige of the study subjects, especially individual researchers and departments.
7. Modern science produces, by virtue of its structure, results that necessitate an overall discussion about the ends pursued in society. Example: within the academic community, an overall discussion is needed about the objectives and criteria in research assessment processes and the role of informetric indicators therein.
8. Extension of elements from the ethos of science to moral values outside of science. Example: extending the principles of openness and adopting a critical attitude toward the formation of an evaluative judgment in a research assessment process.
9. The value area of modern science as part of a wider lifeworld. Example: in research assessment, distinct value domains come together, both from science, social science and humanities and from extra-scientific areas, such as politics.

Values as the object of scientific research
A well-known example of the study of values is the work by Robert K. Merton on social norms in scientific-scholarly research communities, conceived by him as ideals that are dictated by the goals and methods of science and are binding on scientists (Merton, 1942; 1979). A typical example from the field of quantitative science studies is empirical research into how researchers perceive "research quality". This type of research does not aim at drawing conclusions as to whether such perceptions are valid or not, but at examining how researchers form quality judgments and which decisions they make in a particular assessment process. One may also investigate the empirical conditions under which certain quality perceptions emerge or under which one perception is more likely to be shared than another. Research on biases in peer review is a good example of this type of investigation. A research analyst may find that females are underrepresented in the permanent academic staff of a particular university. Such a conclusion should be empirically and statistically justified, independently of whether the analyst finds discrimination against females in promotion processes unfair.

Values in the language of scientific research about the object
In scientific-scholarly research practice, at the level of theoretical-empirical statements about objects, value judgments may enter, reflecting the valuation or appreciation of the analyst toward the object. Such personal judgments can be mistakenly conceived as representing knowledge of the object validated in the research itself. A typical example is the use of the term 'closed access' to indicate subscription-based access to scientific journals. Additionally, the use of the term 'peripheral journal', even if it has a well-defined meaning in network analysis, may give rise to a negative impression of a journal among non-experts.
The methodological requirement of value neutrality at the level of statements on objects based on empirical research dictates that such personal valuations of the object of research should be avoided. A researcher who wants to make them known to the reader should make clear that the qualifications are not based on the research itself, but are a priori assumptions of the investigator. The requirement of value neutrality also has the following implication: it is methodologically incorrect to neglect, on the basis of a value interest, relevant facts that fall within the intended field of research but contradict a preconceived conclusion and to one-sidedly emphasize facts that do support it.

Values that lie at the basis of scientific research
This section is not about values as research object, nor about scientifically unfounded or hidden prescriptions and valuations by the analyst, but about the value basis on which science itself operates, including rules that standardize and direct scientific activity. These values can be explicated and discussed at a level higher than that on which scientific statements are made. It can be denoted as a meta-level.
The principle of value neutrality is itself a normative requirement that should be analytically positioned at this meta-level. Although the methodological rules at this level do have an ethical component, it is not their ethical dimension that is at stake in this section. These rules are constitutive of scientific inquiry. It is not so much that researchers who violate these rules are doing 'bad science', but rather that they are not doing science at all, in the same way as, for instance, two chess players who allow pawns to move three steps forward do not play chess.
The principle of value neutrality has strong implications for the position that an informetric analyst contributing to an assessment process takes toward the policy objectives underlying the process and the evaluation criteria applied in it. As stated by Moed, "A basic notion holds that from what is cannot be inferred what ought to be. Evaluation criteria and policy objectives are not informetrically demonstrable values. Of course, empirical informetric research may study quality perceptions, user satisfaction, the acceptability of policy objectives, or effects of particular policies, but they cannot provide a foundation of the validity of the quality criteria or the appropriateness of policy objectives. Informetricians should maintain in their informetric work a neutral position towards these values" (Moed, 2017, p. x).
The methodological rules stated at the meta-level constitute scientific practice and have an internal-scientific character. However, the methodological principle of value neutrality does not preclude political, moral and cultural values from also having a normative function toward scientific-scholarly research. Following Duintjer (1974), the next six sections show, from different viewpoints, the role of the social-political-cultural context in which research is carried out.

Values behind selective viewpoints and core concepts of scientific theories
Duintjer underlines that each problem statement is guided by selective points of view that are incorporated in theoretical core concepts and that determine the selection of empirical data. Extra-scientific values often hide behind these selective viewpoints and core concepts. "The fact that scientific research is guided by selective points of view, which in turn are inspired by social values, is not formally the same as a normative setting of these values. In research, these values can be assigned a hypothetical status" (Duintjer, 1974, p. 29).
Against the dependence on extra-scientific values, he recommends taking the following precautions: "making the concealed or implicit value background of selective viewpoints explicit and recognition of its non-scientific character; insight into the limits of the empirical knowledge acquired within such points of view; receptivity to investigations adopting different or wider viewpoints" (Duintjer, 1974, p. 32).
In quantitative science studies and applied evaluative informetrics, the concept of research impact plays a key role. In many studies, a tacit assumption seems to be that generating impact is per se valuable, and that it is therefore worth studying in bibliometric analyses and assessing in research evaluation. However, the validity of this assumption cannot be grounded in informetric or sociological research, although it determines which data are to be collected and analyzed (namely, citation data). Impact as a value should be given a hypothetical status.
A second example relates to the distinction between formative and summative evaluation. In summative evaluation, the focus is on the outcome of a particular activity. This outcome can, for instance, be used in a funding decision. Formative evaluation assesses the development of an activity at a particular time and focuses on improving its performance. Whether an assessment process should be formative or summative cannot be decided on informetric grounds.
In empirical research into the effects of the application of indicators upon researchers' behavior, the analyst's view on whether this application is valuable and whether or not citations indicate performance should not play a role in the justification of empirical statements. However, this view may influence the selective viewpoints adopted during the setup of the study. The principle of value neutrality does not imply that such influence is inadmissible, but rather that it should be made explicit in the presentation of the study set-up and discussion of the outcomes.

Values when setting up investigations, the results of which are foreseeably relevant to certain societal powers
As Duintjer states, "The choice of research fields in science is not only related to conceptual points of view, but also to the extent to which the expected results of the research can foreseeably be of importance for certain social power groups, which are sometimes involved in conflicts and who will use these results in their favor and to the detriment of their opponents" (p. 32). Conversely, value decisions are also at stake when investigations are launched to unmask ideological facade decorations and hidden power relations and can therefore foreseeably contribute to public awareness or political "Aufklärung" (p. 33).
For instance, managers at a research institution or research funding organization may commission informetric researchers to conduct a study with the aim of using its outcomes to solve an internal conflict or to enhance the status of their organization. They may even commission informetricians to develop a strategy to artificially increase the bibliometric indicator scores of their institution, with the objective of enhancing its position in world university rankings. This type of applied evaluative informetrics directly serves certain vested interests.
However, critical informetric studies can also question a particular situation or practice. For instance, the work of the Leiden team headed by Anthony van Raan in the early 1980s strongly underlined the potential of bibliometric indicators to raise critical questions to academic department managers and funding organizations about their quality assessment and funding policies. In a study of research groups in their university, they obtained evidence of conservatism in funding policies, favoring groups with a long publication and funding track record at the expense of emerging groups headed by young researchers exploring new approaches (Moed et al., 1985).

Values as a limit on what is permissible in scientific experiments with laboratory animals and test subjects
"Scientific explanations and predictions are often sought with the help of experiments, looking for behaviors of an isolated class of phenomena under self-determined conditions. But when organizing an experiment with laboratory animals or test subjects, one also has to deal with moral values that also regulate, outside of science, our relationship to animals and humans" (Duintjer, 1974, p. 34).
When developing and testing new assessment methodologies or indicators, their validity can be examined only if they are applied to "real" performing entities, such as individual researchers, groups or institutions. Therefore, application experiments or 'try-outs' are organized, in which the outcomes of a method are discussed with the subjects of the assessment. However, if these outcomes become public (for instance, published in a journal article) or are shared with those who commissioned the experiment (e.g., a managing director of the commissioning organization), they may harm the prestige of the subjects under assessment, even if the final conclusion holds that the experiment has been only partially successful.
The environment in which evaluative-informetric experiments take place is not a scientific testing ground. Their effects upon the outside world may be irreversible. The experimental setting has not only a theoretical-methodological dimension but also a moral one: how to protect subjects under assessment against loss of status during and after a bibliometric experiment.

Modern science produces, by virtue of its structure, results that necessitate an overall discussion with respect to the ends pursued in society
Duintjer emphasizes a so-called "structural equivalence" of science and technology. All theoretical statements or explanations in modern science can be converted into technological statements that answer the question "what could be done?" "All pure theoretical research in modern science provides society with means to steer nature and man, without indicating in which direction one should steer" (p. 38). Duintjer advocates an "overall, democratic value discussion regarding the direction, goals and standards to which society directs itself and is directed" (p. 38). "Obviously, not merely scientists should decide on the direction of the whole of society, especially not because value judgments cannot be derived from theoretical-empirical knowledge itself" (p. 38).
As outlined in the introduction section, Moed (2017) distinguished four levels of intellectual activity in a research assessment process: policy and management, evaluation, analytics and data collection. On the one hand, he stated that a view of what is valuable in science and scholarship and its corresponding evaluation criteria and standards cannot be grounded in empirical research conducted at the analytics level. In this respect, evaluative informetrics is and should be value neutral. On the other hand, he argued that "in the proper use of informetric tools an evaluative framework and an assessment model are indispensable. To the extent that in a practical application an evaluative framework is absent or implicit, there is a vacuum, that may be easily filled either with ad-hoc arguments of evaluators and policy makers, or with un-reflected assumptions underlying informetric tools" (Moed, 2017, p. xi).
Following Duintjer's statements quoted above, one can maintain first of all that not merely informetricians should decide on the values, standards and political objectives underlying an assessment, especially not because value judgments cannot be derived from informetric research itself. What is more, one can argue that an overall discussion is needed within the academic community and the research policy domain about the objectives and criteria in research assessment processes at various levels of aggregation (e.g., individuals, institutions), and about appropriate conditions for the use of informetric indicators in these processes.

Extension of elements from the ethos of science to moral values outside of science
Duintjer introduces this interface between science and values as follows: "As already said, the theoretical-empirical knowledge of science does contain technological information about what we can do, but it does not yet answer the normative question of what we should do. The latter question was precisely the subject of the advocated value discussion (see previous section, HM). One may wonder, however, whether one can perhaps derive from the ethos that underlies scientific practice also elements that can be presented to society as moral and political values" (pp. 39-40).
He proposes a possible extension not only of norms regulating the scientific attitude toward colleague-researchers but also of methodological rules constituting a researcher's relationship with the object of research. He refers to, among other authors, Karl Popper (1966) and Hans Albert (1965), who, building upon the tradition of 'critical rationalism', propose that a moral belief in reason as a method to solve problems in science could also function as a norm in society.
Within the context of the current paper, the question emerges as to whether and how science-internal norms could provide guidance in science assessments, especially in the further theoretical and practical development of evaluative frameworks. Following Duintjer's line of reasoning, one could, for instance, propose to extend the intra-scientific principles of openness and adopting a critical attitude, or the 'Mertonian' norm of disinterestedness toward the formation of an evaluative judgment in a research assessment process (Biagetti and Gedutis, 2019).

The value area of modern science as part of a wider lifeworld
As outlined in Section 3, scientific statements and practices are regulated by specific values in the form of objectives and rules. According to Duintjer, these values define what could be termed as a particular access road into reality. The value field of modern science is not the only value sphere, but rather forms part of a comprehensive whole of values next to other value spheres, such as those of politics, art, philosophy of life and existential experience.
In research assessment, distinct value domains come together, making it a complex activity that is difficult to grasp. The term 'research performance' relates to an agglomerate of internal-scientific values, such as 'methodological soundness', or notions as complex as 'contribution to scientific-scholarly knowledge in a discipline' or, moving outside the boundaries of a particular discipline, 'contribution to the advancement of scientific-scholarly knowledge in general'. Reaching beyond the domain of science and scholarship, values in research assessment may refer to 'improving the state of mankind' or to technological, economic or societal merit.
The domain that Duintjer denoted as "modern science" and that is outlined in the introduction section does not embrace other forms of scholarship, which he denotes as "hermeneutic" (philosophy is a typical example) and "sociocritical". It is good practice to denote these disciplines as 'scholarly' rather than 'scientific'. The current author defends the position that these domains of scholarship could contribute to the formation of the overarching evaluative framework mentioned above, in which they could have an enlightening rather than a technological function. Such contributions can be marked as scholarly but fall outside the value area of modern science and indeed cover "other parts of a wider lifeworld". However, it should be noted that there is no explicit reference to this interpretation in Duintjer's article.

Conclusions
This concluding section aims to draw conclusions from Duintjer's framework and notions outlined above for the values and limits of the use of informetric methods in the evaluation of scientific-scholarly research. It proposes a series of practical guidelines that may help practitioners in evaluative informetrics reflect upon these values and limits and define their role both as researchers in quantitative science studies and as consultants in actual assessment processes. A first series of guidelines relates to evaluative informetrics as a research activity and emerges from the principle of value neutrality as a methodological requirement outlined above. A second series concerns the application of evaluative informetrics in research assessment and its evaluative framework and refers to non-scientific criteria, principles and objectives from other value domains.

Research-oriented considerations
• When informetric investigators empirically examine value perceptions of researchers, they should maintain a neutral position toward these perceptions and acknowledge that their validity cannot be grounded in informetric research.
• In their scientific statements, informetricians should avoid the use of suggestive or insinuating terms that evoke an impression or sentiment not supported by the presented empirical evidence.
• Empirical informetric research related to research quality almost inevitably makes certain assumptions related to the political or evaluative context. The principle of value neutrality requires that informetricians make the assumptions of their tools as explicit as they can to their colleagues and to the users of their information.
• Making the evaluative assumptions of informetric methods explicit is a shared responsibility on which researchers should keep each other focused. In a sense, this is a never-ending activity. Rather than ignoring this principle, informetricians should accept it and give it a permanent place in their work.
• When outcomes of informetric assessments are made public, a theoretical justification of the methodology should highlight both its potential and its limits.
• In experimental informetric studies of the validity or usefulness of informetric tools, outcomes may damage the prestige of assessed subjects when they are made public. Research reports on such experiments should anonymize the investigated individuals and departments.

Research-assessment-oriented considerations
• An evaluative informetric analysis should be carried out within a well-articulated evaluative framework, i.e., a set of evaluation criteria to be applied that are in agreement with the policy issue and assessment objectives. • An important element in the definition of an evaluative framework is distinguishing categories of performance values, including extra-scientific merits as well, each with its own assessment methods and indicators. An evaluative framework should justify the choice of a particular combination of approaches. • Informetricians should make clear that the informetric approach to research assessment represents only one particular way of analyzing research performance, next to other approaches, and that none of these has a priori a preferred status. • Humanities and social sciences could contribute to the development of assessment methods and to the definition of an assessment's evaluative framework, having an enlightenment function rather than a technological function. • Informetricians should maintain in their applied evaluative studies a neutral position toward the policy issues addressed and the criteria specified in an evaluative framework. They should refrain from advocating particular political or evaluative a prioris. • As professional experts, informetricians' competence lies primarily in the development and application of analytical models within the context of a given evaluative or political framework. • Informetric researchers could propose that evaluators and policy makers incorporate fundamental scientific values such as openness and adopting a critical attitude in the set-up and implementation of research assessment processes. • Informetricians could promote and participate in an overall discussion within the academic community and the research policy domain about the objectives and criteria in research assessment processes and the role of informetric tools therein.