Meta-choices in ranking knowledge-based organizations

Cinzia Daraio (Computer, Control and Management Engineering, University of Rome La Sapienza, Rome, Italy)
Gianpaolo Iazzolino (Mechanical, Energy and Management Engineering, University of Calabria, Rende, Italy)
Domenico Laise (Computer, Control and Management Engineering, University of Rome La Sapienza, Rome, Italy)
Ilda Maria Coniglio (Mechanical, Energy and Management Engineering, University of Calabria, Rende, Italy)
Simone Di Leo (Computer, Control and Management Engineering, University of Rome La Sapienza, Rome, Italy)

Management Decision

ISSN: 0025-1747

Article publication date: 17 August 2021

Issue publication date: 21 March 2022


Abstract

Purpose

The purpose of this paper is to address the issue of knowledge visualization and its connection with performance measurement from an epistemological point of view, considering quantification and measurement not just as technical questions but showing their relevant implications for the management decision-making of knowledge-based organizations.

Design/methodology/approach

This study proposes a theoretical contribution that combines two lines of research for identifying the three main meta-choices problems that arise in the multidimensional benchmarking of knowledge-based organizations. The first is the meta-choice problem related to the choice of the algorithm used (Iazzolino et al., 2012; Laise et al., 2015; Daraio, 2017a). The second refers to the choice of the variables to be included in the model (Daraio, 2017a). The third concerns the choice of the data on which the analyses are carried out (Daraio, 2017a).

Findings

The authors show the interplay existing among the three meta-choices in multidimensional benchmarking, considering as key performance indicators intellectual capital, including Human Capital, Structural Capital and Relational Capital, and performances, evaluated in financial and non-financial terms. This study provides an empirical analysis on Italian Universities, comparing the ranking distributions obtained by several efficiency and multi-criteria methods.

Originality/value

This study demonstrates the difficulties of the “implementation problem” in performance measurement, related to the subjectivity of results of the evaluation process when there are many evaluation criteria, and proposes the adoption of the technologies of humility related to the awareness that we can only achieve “satisficing” results.

Citation

Daraio, C., Iazzolino, G., Laise, D., Coniglio, I.M. and Di Leo, S. (2022), "Meta-choices in ranking knowledge-based organizations", Management Decision, Vol. 60 No. 4, pp. 995-1016. https://doi.org/10.1108/MD-01-2021-0069

Publisher

Emerald Publishing Limited

Copyright © 2021, Cinzia Daraio, Gianpaolo Iazzolino, Domenico Laise, Ilda Maria Coniglio and Simone Di Leo

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction and contribution of the paper

The article deals with the problem of evaluating business performance in knowledge-based organizations. In particular, we analyze the problem of ranking knowledge-based organizations on performance-related criteria.

This problem is very important when a ranking has to be carried out on a set of different organizations, for example, to select the "best in class." It also occurs when ranking firms to give them awards: think, for example, of the Deming Prize, the Baldrige Award and the European Foundation for Quality Management Good Practice Competition, or of the problem of evaluating top managers when a company decides to assign them incentives or rewards. Likewise, the problem is important with reference to knowledge organizations and, in particular, universities. There are many rankings in this area, such as the Shanghai Academic Ranking of World Universities (ARWU), the QS World University Rankings and the Times Higher Education World University Rankings.

An important element to underline is that measuring organizational performance is a genuine multi-criteria problem. Contrary to the traditional way of measuring performance on a single criterion, namely value creation for the shareholder (Jensen, 2002), the performance of the modern enterprise cannot be measured along one dimension only. In the wake of the Balanced Scorecard approach (Kaplan and Norton, 1996) and of Sustainability, it is necessary to adopt a stakeholder point of view in which many actors have to be considered to evaluate organizational performance correctly (Iazzolino et al., 2012; Laise et al., 2015).

For knowledge organizations and, in particular, for universities, the capability of existing rankings to measure knowledge creation and provide an effective measure of multidimensional performance has been questioned (Olcay and Bulu, 2017; Vernon et al., 2018), and the main methodological limitations of existing rankings have been outlined (see e.g. Valmorbida and Ensslin, 2017; Fauzi et al., 2020). Daraio et al. (2015) showed that advanced efficiency analysis techniques can help overcome the main limitation of existing rankings, that is, their mono-dimensional nature (or their consideration of only a few variables), allowing performance to be evaluated in a multidimensional way.

The theme of strategic decisions and of the methods of knowledge visualization is at the heart of our work. In fact, we will show that in making their decisions, managers are influenced by three main meta-choices, which interact with each other and influence the way in which knowledge is displayed and interpreted. This topic is particularly important for knowledge-based organizations like universities, on which we will carry out an empirical analysis.

Knowledge visualization is an emerging and interdisciplinary research area (Bertschi et al., 2011). There are many different concepts of knowledge visualization in diverse fields. A comprehensive survey of existing concepts is reported in Eppler (2013). Table 1 of Eppler (2013, p. 7) summarizes the different approaches to knowledge visualization proposed in the literature and ranks the epistemological approach initiated by Knorr-Cetina (2003) second in terms of the number of citations received. The focus of Knorr-Cetina's (2003) investigation is to understand how science creates knowledge and how we know what we know.

We address the issue of knowledge visualization following an epistemological approach. A recent contribution describing the epistemological point of view we are interested in is presented in Carson (2020) and, more generally, in the history of quantification on which Carson relies. As Carson (2020, p. 1) states: "quantification and measurement should be seen not just as technical pursuits, but also as normative ones. Every act of seeing, whether through sight or numbers, is also an act of occlusion, of not-seeing. And every move to make decisions more orderly and rational by translating a question into numerical comparisons is also a move to render irrelevant and often invisible the factors that were not included. The reductions and simplifications quantifications rely on can without question bring great and important clarity, but always at a cost." In the managerial context, this gives rise to the phenomenon of "decisional myopia," in which the manager decides "to see" only a few dimensions, perhaps not the most important ones but those most easily available and understandable.

In this work, we propose a theoretical contribution that brings together two lines of research in which the authors have worked in recent years. The first strand concerns the complexity of evaluating the activities of knowledge organizations, which include universities (Daraio, 2017a, b, 2019, 2020). The second refers to the measurement of the performance of companies with multi-criteria methods and the so-called meta-choice problem that always arises in a multi-criteria benchmarking analysis, which can be synthesized as follows: "how to choose an algorithm to choose?" (Iazzolino et al., 2012; Laise et al., 2015). By putting these two contributions together, we propose a new model for evaluating the performance of knowledge organizations that includes the different components of knowledge capital as inputs and the Value Added produced by the institutions among the outputs. By applying this model to the case study of Italian universities, we highlight how the evaluation of the business performance of knowledge organizations is affected by three different meta-choice problems that interact with each other, influencing the obtained results.

The main result we show is that behind rankings there is no perfect measurement or, in economic terms, a maximization of performance is not feasible (or reachable), owing to the existence of the three meta-choice problems. The first is the meta-choice problem recalled above and investigated by Iazzolino et al. (2012) and Laise et al. (2015), which relates to the methodology dimension in Daraio (2017a) and underlies the choice of the algorithm used to compute the ranking. The second underlies the theoretical dimension of the modeling, called theory in Daraio (2017a), and relates to the choice of which variables to include in the model or, better, "how to choose the theory that identifies the variables to consider in the model and in the empirical analysis?" The third relates to the data dimension in the framework of Daraio (2017a) and consists in the choice, and the main limitations, of the data on which the analyses are carried out; in other words, what data have to be used in the analysis, and how do data problems and limitations affect the empirical analysis?

As we will see in the empirical illustration on Italian universities, these three meta-choice problems are important and are able to affect the rankings of the knowledge institutions considered; furthermore, they interact with each other, as witnessed by the complexity of the evaluation related to its implementation (Daraio, 2017b).

When studying universities and, in general, knowledge-based organizations, we have to consider an important element. The main characteristic that drives performance is knowledge and, in particular, the so-called intellectual capital (IC). The inclusion of knowledge and IC in the assessment of performance is not straightforward, partly owing to the data problems related to its measurement.

In this paper, we focus on multidimensional benchmarking analysis applied to key performance indicators (KPIs). KPIs are related, on the one hand, to the IC (divided into the three dimensions of Human Capital, Structural Capital and Relational Capital) and, on the other hand, to performance, evaluated in both financial (Revenues, Value added and EBITDA) and non-financial (number of publications and number of patents) terms.

The paper provides several practical implications for all cases in which a ranking has to be assigned to a group of organizations based on performance.

The adoption of a set of criteria is certainly an advantage, as it avoids myopic mono-criterion (or mono-dimensional) evaluation. However, it also creates some methodological problems. The paper demonstrates the difficulties of the so-called "implementation problem" in performance measurement, related to the "relativity" (subjectivity) of the results of the evaluation process when there are many evaluation dimensions, as is the case in a benchmarking context.

2. Complexity of the assessment: the meta-choice problems of knowledge organizations

In this paper, we want to highlight the meta-choice problems that always arise in a multidimensional benchmarking analysis.

The authors of this paper argue that any multidimensional benchmarking evaluation implies the development of a model involving choices made from a theoretical, a methodological and an empirical (data) point of view. Figure 1 shows these main dimensions of an assessment, which coincide with the meta-choice problems.

From a methodological point of view, a meta-choice problem arises because multi-criteria (or multidimensional) ranking algorithms cannot themselves be selected using a multi-criteria algorithm: the choice of an algorithm is ultimately determined by the subjective preference of the policymaker. The meta-choice solution to the benchmarking problem is therefore, in accordance with Simon's satisficing principle, a non-maximizing performance measurement methodology.

To perform a benchmark analysis, a set of dimensions and criteria must be chosen. The choice of the main conceptual references and conceptual dimensions to be considered in a multidimensional benchmark highlights the theoretical meta-choice problem. To decide what the main conceptual references of the benchmark model are, we cannot appeal to a purely theoretical justification; we need to make explicit the subjective preferences of the policymaker and/or the analyst who carries out the analysis.

The third meta-choice problem arises when an empirical analysis has to be carried out. It concerns the data dimension: the problems of choosing the data, the variables that proxy them, their availability and their quality interact with the two previously described meta-choice problems, showing the complexity of benchmarking exercises, particularly when the focus is multi-criteria benchmarking applied to a set of knowledge organizations.

Figure 2 illustrates the decision-making problem that managers have to face in multi-criteria benchmarking analysis.

3. Case study: analysis of the performance of Italian universities

In this section, we illustrate the case study: an empirical analysis of the performance of Italian universities.

3.1 Sample and data collected

The analyzed sample consists of 64 Italian universities of different types and sizes: 11 mega-universities, 15 large universities, 17 medium-sized universities, 12 small universities, 4 polytechnics, 2 doctoral institutes and 3 schools of advanced study, all part of the Italian higher education system. Data collection covers three years, from 2016 to 2018.

The indicators of inputs and outputs to evaluate the performance of Italian universities will be illustrated in detail in the next section.

3.2 Selected indicators: intellectual capital and performance

This paper proposes a set of indicators especially designed for universities and related to the IC dimension.

The term intellectual capital (IC) was first introduced by John Kenneth Galbraith: the concept incorporated a degree of "intellectual action" rather than "intellect as pure intellect" (Edvinsson and Sullivan, 1996). Dumay (2016) defines IC as the collection of intangible resources, knowledge, experience and intellectual property that an organization, community, country or society has and uses to create economic, utility, social and environmental values. Intangible assets and IC constitute the largest proportion of universities' assets (Ramírez et al., 2011; Secundo et al., 2010). When related to a university, IC is a term used to cover all the institution's nontangible or nonphysical assets, including processes, innovation, patents, the tacit knowledge of its members and their capacities, talents and skills, the recognition of society, its network of collaborators and contacts, etc. (Ramírez Córcoles et al., 2013).

At an international level, it is generally accepted that there are three basic components of IC: (1) Human capital, (2) Structural capital and (3) Relational capital (Ramezan, 2011; Stewart, 1994; Johnson, 1999; Smith and Parr, 2000; Edvinsson and Malone, 1997; Secundo et al., 2017). The three dimensions of IC have a positive and significant influence on organizational performance (Ibarra-Cisneros et al., 2020). The components of a university's IC have been categorized in various ways, although the tripartite classification is undoubtedly the most widely accepted in the specialized literature (Secundo et al., 2010; Leitner, 2004; Bezhani, 2010; Paloma Sánchez et al., 2009). Specifically, the three components can be read as follows (Ramírez Córcoles et al., 2011):

  1. Human capital: The sum of the explicit and tacit knowledge of the university staff (teachers, researchers, managers, administration and service staff) acquired through formal and non-formal education and refresher processes included in their activities.

  2. Structural capital: The explicit knowledge related to the internal process of dissemination, communication and management of the scientific and technical knowledge at the university.

  3. Relational capital: The extensive collection of economic, political and institutional relations developed and upheld between the university and its non-academic partners, i.e. enterprises, non-profit organizations, local government and society in general.

The input indicators were selected by the authors from the set of indicators generally accepted in the literature (Paloma Sánchez et al., 2009; Córcoles, 2013; Di Berardino and Corsi, 2018; Frutos-Belizón et al., 2019) and are grouped into the three components of IC as follows:

Input indicators:

Human Capital (HC)

  1. Number of Academic Staff

  2. Number of Technical and administrative staff

  3. Academic Staff costs

  4. Technical and administrative staff costs

Structural Capital (SC)

  1. Number of Departments

  2. Patents and similar intellectual property rights

  3. Licenses and trademarks

  4. Scientific equipment

Relational Capital (RC)

  1. Grants from others (private)

  2. Grants from others (public)

  3. Revenues from research projects

The main goals for universities are generally accepted to be the production, diffusion, transfer and preservation of knowledge (Young Chu et al., 2006); for this reason, university performance assessments are defined by the output indicators listed below.

Output indicators:

  1. Total Revenues

  2. Value Added

  3. EBITDA

  4. Number of patents

  5. Number of spin-offs

  6. Number of journal articles

The indicators were selected according to the criteria of feasibility of data gathering and comparability across universities: most of the indicators can be valued from the items of the university's income statement and balance sheet, the others from online portals.

Some important theories (Kaplan and Norton, 1996; Sveiby, 1989) suggest that non-financial measures provide a means of complementing financial measures and should also be present at the strategic level of the firm; therefore, we consider both financial indicators on the amount of resources devoted to a given activity and non-financial indicators, such as number of academic staff or number of spin-offs.

4. Results

4.1 Descriptive analyses

On the basis of the discussion and the literature described in the previous section, we identified the variables reported in Table 1 to be considered, respectively, as inputs and outputs to assess the performance of the Italian universities.

The tables reported in the Appendix show some descriptive statistics on the data available for the 64 Italian universities over the years 2016-2018. As emerges from inspecting the values reported in the tables, there is high heterogeneity among the Italian universities considered (high standard deviations and high interquartile ranges), and most of the input and output dimensions considered show skewed distributions (average and median values differ for almost all variables over the whole period). For the variables X11 and Y6, data are available only for 2018.

The following Tables 2 and 3 show the correlations among inputs and outputs, respectively.

The analysis of the correlations is particularly useful for a preliminary assessment of the relationships among the variables, because the methods that will be used to compute the rankings of the Italian universities, and in particular Data Envelopment Analysis (DEA), suffer from the so-called "curse of dimensionality": the algorithms involved in the DEA efficiency score computation require many (even thousands of) observations when the number of variables used is high, as is the case here (for further details, see, e.g., Daraio and Simar, 2007).

For this reason, the least correlated variables were chosen, excluding variables that showed a correlation greater than 0.9 and were redundant for the analyses. In the final selection of variables, we also considered the theoretical significance of the indicators. Hence, given the high correlation among variables X1, X2, X6, X7 and X11, and given the theoretical significance of the represented indicators, only X1 and X2 were retained as indicators for personnel. Moreover, given the high correlation between X1 and X2, these two indicators were aggregated into a single variable, "personnel costs" (I1). X3 and X4 were also aggregated into the variable I4 because they express similar concepts, despite their low correlation. X10 was excluded because, despite having a low correlation with the other variables, it has a high percentage of missing information (31%, 60 units out of 192). The same kind of reasoning was applied to the outputs. Given the high correlation of Y1 with Y3, Y4 and Y5, and the low correlation among the other variables, Y1 was excluded. Y4 and Y5 have a very high correlation between them and almost the same correlation with the other variables, so we decided to exclude Y5 and use Y4 for our analysis.
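As an illustration of this screening step, the sketch below applies the same 0.9 cut-off to a generic indicator table. It is a minimal reconstruction, assuming the indicators sit in a pandas DataFrame with columns named as in Table 1; note that a purely mechanical rule is only a starting point, since the final choices above also rest on the theoretical significance of the indicators, and the aggregations (X1 + X2 into I1, X3 + X4 into I4) are shown as plain column sums.

```python
import numpy as np
import pandas as pd

def drop_highly_correlated(df: pd.DataFrame, threshold: float = 0.9) -> pd.DataFrame:
    """Drop the later variable of each pair whose absolute correlation exceeds the threshold."""
    corr = df.corr().abs()
    # Keep only the upper triangle so each pair is inspected once.
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
    return df.drop(columns=to_drop)

# inputs: DataFrame with columns X1..X11 (one row per university-year).
# screened = drop_highly_correlated(inputs, threshold=0.9)
# screened["I1"] = inputs["X1"] + inputs["X2"]   # personnel costs aggregate
# screened["I4"] = inputs["X3"] + inputs["X4"]   # grants aggregate
```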

At this stage of the analysis, then, we see that the selection of the dimensions of performance to calculate the ranking of universities is influenced by a methodological problem (the curse of dimensionality).

Table 4 shows the variables that will be finally used to calculate the rankings of Italian universities.

The indicator I1, total personnel cost, is the sum of the variables X1 and X2 in Table 1. By "Scientific equipment" (I2), we mean the instruments used mainly in laboratories; they relate to scientific and research activities and may have a high technological content. Indicator I3 contains licenses and trademarks, which indicate the granting of rights on goods owned by the granting institution. I4 is the sum of the variables X3 and X4 and represents all external contributions to universities. Indicator I5 captures the resources obtained by the university from research projects commissioned by external parties.

“Value Added” (O1) is the difference between Production and External costs. O2 represents the number of patents owned by the university. O3 indicates the number of scientific papers published in peer-reviewed journals. “Spin-offs” (O4) are firms founded by academics.

4.2 Methods applied to compute the rankings of knowledge organizations

As described in the introduction, the aim of this work is to show how the three meta-choice problems emerge in the evaluation of the performance of knowledge organizations and influence the results obtained.

The concept of performance we pursue in this paper is multidimensional. Measuring performance means having a representation model of the output/outcome process connected to the inputs (resources) needed to produce it. It also requires the availability of data to apply mathematical–statistical methods to assess performance. Performance measurement methods include quantitative frontier benchmarking methods and multi-criteria methods.

For the comparison of performance results, we chose two types of methods: (1) methods based on the estimation of a multidimensional best-practice frontier, based on multiple inputs and multiple outputs, called Efficiency analysis methods; (2) multi-criteria methods, based on different criteria, specified as benefits and costs, to measure the performance of knowledge organizations, called Multi-Criteria Decision methods (MCDM).

Efficiency analysis methods are based on the estimation of an efficient benchmarking frontier against which to compare the performance of a sample of units. The efficiency scores, based on the estimated distance of each unit in the sample from the efficient frontier, allow us to rank the units according to the performance score obtained. In the efficiency analysis literature, the nonparametric approach has received considerable interest in the context of multiple performance measurement, both from a theoretical and an applied perspective. This is mainly because it requires few assumptions and, in particular, does not need the specification of a functional form for the frontier: the parameters of a functional form do not have to be estimated, whence the name "nonparametric"; in the parametric approach, by contrast, the parameters of the efficient frontier must be estimated.

DEA (Charnes et al., 1978; Banker et al., 1984) and Free Disposal Hull (FDH; Deprins et al., 1984) are among the best-known and most widely applied nonparametric techniques for measuring efficiency in many service activities, including those of universities. DEA uses mathematical programming techniques to estimate a set of efficiency scores that measure the distance of a set of units from an efficient or best-practice frontier. DEA is based on two main assumptions: the convexity and the free disposability of the attainable set. In a DEA setting, it is also possible to choose the returns to scale of the best-practice frontier, considering for instance Constant Returns to Scale (CRS) or Variable Returns to Scale (VRS). In CRS production processes, an increase of 10% in all the inputs produces an increase of 10% in the output, while VRS production processes admit constant, increasing or decreasing returns to scale. It is also possible to use, instead of DEA, the FDH estimator, which assumes only the free disposability of the production set, whence the name Free Disposal Hull. An illustration of these different nonparametric efficiency estimators is reported in Figure 3.

Figure 3 shows a simplified efficient frontier estimated using one input factor and one output factor, which aggregate, respectively, all the inputs and all the outputs according to the factorial method described in Daraio and Simar (2007, pp. 148-149). DMU stands for decision-making unit and identifies each observation reported in the plot. CRS, the black line in Figure 3, is the efficient frontier estimated with DEA under the hypothesis of Constant Returns to Scale; VRS, the dashed line, is the frontier estimated with DEA under Variable Returns to Scale; FDH, the gray line, is the frontier estimated with Free Disposal Hull. The plot shows that the CRS frontier is the furthest from the cloud of points of the DMUs, while the VRS frontier envelops the observations more closely. However, the VRS frontier relies on the hypothesis of convexity, which could be violated by the observed data and should therefore be tested before adopting the DEA approach. As Figure 3 clearly shows, the three estimators provide different frontiers, and the efficiency scores, which are the distances of each DMU from the best-practice frontier, change according to the selected estimator (CRS, VRS or FDH).
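To make the FDH computation concrete, here is a minimal numpy sketch of output-oriented FDH scores (the enumeration formula of Deprins et al., 1984, under free disposability alone); the function name and data layout are ours, not the code actually used to produce the paper's results.

```python
import numpy as np

def fdh_output_scores(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Output-oriented FDH efficiency scores.

    X: (n, p) inputs, Y: (n, q) outputs, one row per DMU.
    Returns scores >= 1; a score of 1 marks an FDH-efficient unit,
    while, say, 1.2 means all outputs could be expanded by 20%.
    """
    n = X.shape[0]
    scores = np.empty(n)
    for k in range(n):
        # Peers using no more of any input than unit k (free disposability).
        dominating = np.all(X <= X[k], axis=1)
        # Proportional output expansion attainable against each such peer.
        expansions = (Y[dominating] / Y[k]).min(axis=1)
        scores[k] = expansions.max()
    return scores
```

Since every unit dominates itself, the score is always at least 1, and no convexity assumption is needed.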

Multi-criteria decision analysis is a discipline aimed at supporting the decision-making process when there are numerous evaluation criteria, allowing a compromise solution to be obtained in a transparent way. It allows the decision maker to analyze and evaluate different alternatives, monitoring their impact on the different players in the decision-making process. There are various methods for multi-criteria analysis (Vincke, 1992; Figueira et al., 2005). MCDM methods make it possible to find satisfactory measures of performance that balance multiple criteria, offering solid support to the decision-making process.

The MCDM methods considered in this paper are those implemented in the R package "Multi-Criteria Decision Making Methods for Crisp Data" by Ceballos Martin (2016). These methods consider two kinds of dimension, benefits (which we treat as equivalent to the outputs of the efficiency analysis) and costs (which we treat as equivalent to the inputs), and differ in various technicalities, including the normalization adopted. The methods applied, with the labels used below to present the results, are the following (a sketch of one of them is given after the list):

  1. Multi-Objective Optimization by Ratio Analysis, labelled as RSM (Brauers and Zavadskas, 2010).

  2. Multi-Objective Optimization by Reference Point, labelled as RPM (Brauers and Zavadskas, 2010).

  3. Multiplicative Form, labelled as MFM (Brauers and Zavadskas, 2010).

  4. Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) with the linear transformation of maximum as normalization, labelled as TSL (Garcia Cascales and Lamata, 2012).

  5. Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) with the vectorial normalization procedure, labelled as TSV (Hwang and Yoon, 1981).

  6. Weighted Sum Model, labelled as WSM (Zavadskas et al., 2012).

  7. Weighted Product Model, labelled as WPM (Zavadskas et al., 2012).
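As an example of the technicalities mentioned above, here is a minimal sketch of the TSV variant (TOPSIS with vectorial normalization; Hwang and Yoon, 1981). It is an illustrative re-implementation, not the code of the R MCDM package, and the function name is ours.

```python
import numpy as np

def topsis_vector(decision: np.ndarray, weights: np.ndarray,
                  is_benefit: np.ndarray) -> np.ndarray:
    """TOPSIS closeness scores with vectorial normalization.

    decision: (n, c) matrix of alternatives by criteria.
    weights: (c,) criterion weights summing to 1.
    is_benefit: (c,) boolean mask; True for benefits, False for costs.
    Returns scores in [0, 1]; higher means closer to the ideal solution.
    """
    R = decision / np.sqrt((decision ** 2).sum(axis=0))   # vector normalization
    V = R * weights                                       # weighted normalized matrix
    ideal = np.where(is_benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(is_benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))       # distance to ideal
    d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))        # distance to anti-ideal
    return d_neg / (d_pos + d_neg)
```

In our setting, the inputs I1-I5 would enter as costs and the outputs O1-O4 as benefits, with the same equal-weight vector used for all methods (see Section 4.3).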

4.3 Comparative analysis of the obtained results

Both efficiency analysis methods and MCDM thus provide as a result a ranking of the knowledge-based organizations from a multidimensional perspective.

The comparative analysis of the rankings of the Italian universities is therefore based on the calculation of different rankings according to a few variants of the efficiency analysis and MCDM methods. We then compute the Spearman rank correlations among the results obtained to check how the rankings vary according to the method applied.

Moreover, to run a balanced comparative analysis among all the methods, we chose the same vector of weights for all the dimensions considered. For the same purpose (balanced comparison), all the scores were normalized to obtain comparable values between 0 and 1.

Before comparing the results obtained by efficiency analysis and MCDM, a natural question arises: which of the efficiency methods introduced earlier should be chosen? We know that DEA relies on the convexity assumption while FDH does not. To answer this question, we tested the convexity assumption by applying a recently introduced test (Kneip et al., 2016; Daraio et al., 2018), and we did not accept convexity on our data. This means that the application of DEA methods is not appropriate for our dataset, and in the remainder of the analysis we will use only the FDH efficiency scores.

There is an additional warning to take into account. The FDH estimator of efficiency scores is deterministic by nature: all the deviations observed from the efficient frontier are attributed to inefficiency, and no noise is allowed. As a consequence, the efficiency scores calculated by FDH suffer from the influence of outliers and/or errors in the data. For this reason, before the comparison with MCDM, we also considered a robust nonparametric efficiency method that is not influenced by outliers: an order-m efficient frontier estimator, where m is the number of random peers selected to compute the robust frontier, which does not envelop all the DMUs, leaving out the most extreme and outlying observations (for more details, see Daraio and Simar, 2007).
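A Monte Carlo sketch of the order-m output score is given below, following the general resampling idea described in Daraio and Simar (2007) and reusing the FDH logic from the sketch above; m and the number of replications B are tuning choices, and this is a didactic implementation rather than the one used for the paper's results.

```python
import numpy as np

def order_m_output_scores(X: np.ndarray, Y: np.ndarray,
                          m: int = 25, B: int = 200, seed: int = 0) -> np.ndarray:
    """Monte Carlo order-m output efficiency scores.

    For each unit k, draw m peers (with replacement) among the units
    using no more input than k, take the best attainable proportional
    output expansion within the draw, and average over B replications.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    scores = np.empty(n)
    for k in range(n):
        pool = np.flatnonzero(np.all(X <= X[k], axis=1))  # includes k itself
        draws = np.empty(B)
        for b in range(B):
            peers = rng.choice(pool, size=m, replace=True)
            draws[b] = (Y[peers] / Y[k]).min(axis=1).max()
        scores[k] = draws.mean()
    return scores
```

Because each draw uses only m peers, extreme observations enter only occasionally, which is what makes the resulting frontier robust to outliers; as m grows, the order-m scores converge to the FDH scores.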

A further question may now arise for the reader. Of course, by applying different methods we obtain different results, and so different values for the efficiency scores of the Italian universities; but are the units at the top of one ranking also at the top of the other rankings? In other words, are the rank correlations among the efficiency scores high or low?

To answer this question, we computed the Spearman rank correlations between the efficiency scores obtained by applying FDH in an output orientation (which means that, given the inputs, we look at the maximum attainable expansion of the outputs; indicated as FDH O in the following) and the efficiency scores obtained by applying an order-m frontier estimation with m = 25 and with m = 100.

A Spearman rank correlation close to 1 means that the two sets of efficiency scores, although they take different values, correspond to the same ranking of the universities analyzed. A rank correlation close to −1 indicates that the scores calculated with one method are ordered almost inversely to those calculated with the other, meaning that the universities at the top of one ranking are at the bottom of the other and vice versa. Intermediate values indicate varying levels of agreement among the rankings obtained by the different methods.

The rank correlation between the FDH O efficiency scores and order-m with m = 25 is 0.95, while that between FDH O and order-m with m = 100 is 0.99.
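For given score vectors, such values can be computed with scipy's rank-correlation routine; the arrays below are random stand-ins, while in the actual analysis the scores come from the FDH and order-m estimators sketched above.

```python
import numpy as np
from scipy.stats import spearmanr

# Stand-ins for the 64 universities' scores; in the real analysis these
# come from fdh_output_scores and order_m_output_scores.
rng = np.random.default_rng(1)
fdh_o = rng.uniform(1.0, 3.0, size=64)
order_m25 = fdh_o + rng.normal(0.0, 0.05, size=64)

rho, pvalue = spearmanr(fdh_o, order_m25)
print(f"Spearman rank correlation: {rho:.2f}")
```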

On the basis of these high rank correlations, we can consider the FDH O efficiency scores as not significantly affected by outliers in the data, and in the following we will use only FDH O in the comparison with the MCDM methods.

The results of the comparison among FDH O efficiency scores and MCDM methods are reported in Figure 4, which shows the boxplots of normalized FDH O and MCDM scores calculated on the sample of Italian universities.

A boxplot is a graphical representation used to describe the distribution of a sample using simple dispersion and position indices. It is represented by a rectangle divided into two parts, from which two segments come out. The rectangle (the “box”) is delimited by the first and third quartiles and divided inside by the median. The segments are delimited by the minimum and maximum of the values.
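A Figure 4-style comparison can be sketched as follows, with min-max normalization bringing all the scores into [0, 1] as described above; the score values here are random stand-ins, not the paper's results.

```python
import numpy as np
import matplotlib.pyplot as plt

def minmax(scores):
    """Normalize a score vector to [0, 1] for a balanced comparison."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

# Random stand-ins for the eight methods' scores (64 universities each).
rng = np.random.default_rng(2)
labels = ["FDH O", "WSM", "WPM", "TSV", "TSL", "RSM", "RPM", "MFM"]
data = [minmax(rng.uniform(size=64)) for _ in labels]

plt.boxplot(data, labels=labels)
plt.ylabel("Normalized score")
plt.show()
```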

By inspecting Figure 4, we can see that the scores obtained by the different methods are all different.

Table 5 reports Spearman’s correlations calculated among the normalized scores obtained by the different methods implemented.

Inspecting the values reported in Table 5, we can observe that the ranks or positions obtained by the units according to the different methods differ, and so the choice of the method to apply should be carefully discussed before using the obtained ranking for decision-making purposes.

A final meta-choice problem that arises in the evaluation of the rankings of knowledge organizations relates to the choice of the theory, considering how the results are affected by including or excluding some variables. To illustrate this meta-choice problem, we analyze what happens if we include in the FDH O model as inputs all the intellectual capital components (namely, HC, SC and RC) or only each of them separately, keeping the outputs constant (the same as in the previously described overall model). The results are reported in Figure 5 and Table 6. From Figure 5, we see that choosing all the components of IC as inputs or selecting a single component affects the results obtained (the boxplots are all different). When considering HC and RC, we observe several outliers (the points that fall outside the boxes) that differ from the bulk of the observations. The Spearman correlations reported in Table 6 confirm the large variation of the results, showing very low rank correlations. Again, these results show that this meta-choice problem is in place and that the selection of the variables to be included in the analysis should be carefully motivated by existing theory (the literature). In our case study, the choice of considering all the different components of IC is motivated by the literature review carried out and confirmed by the results reported in Table 6 and Figure 5, which show how each component of IC plays a role (determining different results); for this reason, it is advisable to include all the components of intellectual capital in the empirical analysis. A sketch of how these component models can be run is given below.
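The component models of Figure 5 can be obtained simply by column-subsetting the input matrix before recomputing the FDH scores. The sketch below reuses fdh_output_scores from Section 4.2; the mapping of I1-I5 to HC, SC and RC is our reading of Tables 1 and 4, and the data are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(1.0, 10.0, size=(64, 5))  # stand-in inputs I1..I5
Y = rng.uniform(1.0, 10.0, size=(64, 4))  # stand-in outputs O1..O4

# Our reading of the IC grouping: I1 -> HC; I2, I3 -> SC; I4, I5 -> RC.
models = {"FDH O": [0, 1, 2, 3, 4], "FDH O_HC": [0],
          "FDH O_SC": [1, 2], "FDH O_RC": [3, 4]}
scores = {name: fdh_output_scores(X[:, cols], Y)
          for name, cols in models.items()}
```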

5. Discussion and conclusion

In this paper, we tackle the issue of knowledge visualization and its connection with performance measurement from an epistemological point of view. Following Carson (2020) and, more generally, the history of quantification on which Carson relies, we consider quantification and measurement not just as technical questions but show their relevant implications for the management decision-making of knowledge-based organizations. This is because quantification and measurement produce empirical results, such as rankings, that can be used to inform and support the decision-making process of knowledge organizations. If the results obtained by the analysis are not carefully considered in terms of reliability and robustness, they can provide unreliable and biased support to the decision-making process.

The main conclusions of this paper are the following:

  1. In the evaluation of knowledge organizations (as universities are), when the evaluation is based on genuinely different criteria and on a set of different dimensions including IC, three meta-choice problems arise and interact with each other. Multi-criteria ranking algorithms cannot be selected using a multi-criteria algorithm (methodological choice); so they rely on the subjective choice of the analyst/policymaker, who also has to choose the conceptual background of the benchmarking (theoretical choice) and to address the empirical issues that arise (data choice).

  2. The choice of an algorithm, of the conceptual reference and of the data is ultimately determined by the subjective preference of the analyst/policymaker, which should be explicitly described, highlighting the "fitness for purpose" strategy (satisficing principle) followed, inevitably based on the compromise choices made to address the existing meta-choice problems.

Our proposed solution to the benchmarking problem is in accordance with Simon's satisficing solution, describing a non-maximizing performance measurement methodology (Simon, 1955, 1978, 1997). It may be worth emphasizing that Simon's point of view is adopted by the Managerial Accounting multi-criteria approach: "we believe however that achieving satisfactory profit is a better way of stating corporation goals" (Anthony, 1966). As is well known, such a perspective has a long tradition in the managerial and accounting literature (Anthony, 1966; Cyert and March, 1963; Drucker, 1966; March and Simon, 1958; March, 1966a, 1996b). This approach is strictly related to the multidimensionality of measuring firm performance. The traditional way of measuring the performance of an organization mainly focuses on value creation for the shareholder (shareholder point of view) (Jensen, 2002); according to this approach, the only variable to be considered is related to profit and, ultimately, to the dividend for shareholders. But in the modern enterprise, performance cannot be measured on one dimension only. The approach based on the Balanced Scorecard (Kaplan and Norton, 1996) and the theme of Sustainability point out that measuring organizational performance is a genuine multi-criteria problem. A sustainable strategy has to create value not only for the shareholder but also for the other stakeholders, such as the employees and the external environment. It is necessary to switch from the shareholder point of view to the stakeholders' point of view, in which many actors have to be considered to evaluate the performance of an organization correctly. The evaluation of organizational performance cannot generally be conducted by means of a unique criterion but requires a multi-criteria approach (Iazzolino et al., 2012; Laise et al., 2015).

Our suggestion to the analyst who conducts performance assessments is therefore to be transparent with stakeholders (including policymakers and managers of universities that use rankings), describing all the crucial meta-choices that underlie their analyses and highlighting the role these choices play in the results. This behavior, which Jasanoff (2007) defines as "technologies of humility," could be achieved through a checklist, as proposed by Daraio (2019), in which the analyst describes all the choices made in the analysis and the impact these choices have on the results obtained, possibly identifying lines for further improving the analyses. This approach corresponds to the awareness of having achieved a "satisficing" result, à la Simon, in terms of rankings of the institutions, one that takes into account the hypotheses and compromises made, rather than an objective measure of the institutions that fits all purposes.

This research has some limitations that we suggest addressing in future studies from both a theoretical and an empirical point of view.

From a theoretical perspective, the authors have already started to explore in depth the relationships between IC and the performance of knowledge organizations, with the aim of producing a systematic review of the topic. The review can be carried out from a strictly scientific/academic point of view, mainly analyzing journal papers, but also by considering the methodologies most used in managerial practice. It would be interesting to map the existing information and relationships between IC and performance with a focus on knowledge organizations, including universities but not limited to them.

From the empirical point of view, the main issue will be related to the sample used for the analysis. Our research was carried out using a sample of Italian universities. The sample could be enlarged to include other Universities, also belonging to other countries.

Figures

Figure 1: Main dimensions of the meta-choice problems of knowledge organizations

Figure 2: An illustration of the decision-making problem with existing meta-choices

Figure 3: Plot of the efficient frontiers estimated according to different methods

Figure 4: Boxplots of normalized FDH and MCDM scores calculated on the sample of Italian universities

Figure 5: Boxplots of results obtained by the different FDH models: FDH O includes all IC inputs, i.e. HC, RC and SC; FDH O_HC includes only HC as the IC input; FDH O_RC includes only RC; FDH O_SC includes only SC

Table 1: Inputs and outputs selected according to the literature

Inputs:
X1: Technical and administrative staff costs
X2: Academic staff costs
X3: Grants from others (private)
X4: Grants from others (public)
X5: Revenues from research projects
X6: Number of technical and administrative staff
X7: Number of academic staff
X8: Scientific equipment
X9: Licenses and trademarks
X10: Patents and similar intellectual property rights
X11: Number of departments

Outputs:
Y1: Total revenues
Y2: Number of patents
Y3: Number of journal articles
Y4: Value added
Y5: EBITDA
Y6: Number of spin-offs

Table 2: Input correlations matrix

      X1     X2     X3     X4     X5     X6     X7     X8     X9     X10    X11
X1    1
X2    0.962  1
X3    0.66   0.721  1
X4    0.435  0.44   0.292  1
X5    0.539  0.586  0.723  0.103  1
X6    0.968  0.943  0.622  0.43   0.477  1
X7    0.933  0.968  0.716  0.393  0.592  0.961  1
X8    0.49   0.521  0.433  0.208  0.292  0.48   0.502  1
X9    0.204  0.242  0.354  −0.001 0.076  0.209  0.233  0.13   1
X10   0.465  0.448  0.461  0.026  0.493  0.419  0.448  0.261  0.259  1
X11   0.906  0.905  0.586  0.443  0.381  0.948  0.93   0.517  0.212  0.369  1

Table 3: Output correlations matrix

      Y1     Y2     Y3     Y4     Y5     Y6
Y1    1
Y2    0.725  1
Y3    0.931  0.675  1
Y4    0.996  0.713  0.923  1
Y5    0.921  0.677  0.835  0.932  1
Y6    0.488  0.584  0.479  0.483  0.405  1

Table 4: Inputs and outputs finally chosen for the empirical analysis

Input variables:
I1: Total cost of employees
I2: Scientific equipment
I3: Licenses and trademarks
I4: Grants from others (public + private)
I5: Revenues from research projects

Output variables:
O1: Value added (VA)
O2: Number of patents
O3: Number of journal articles
O4: Number of spin-offs

Table 5: Spearman's correlations of normalized FDH and MCDM methods' scores

       FDH O  WSM    WPM    TSV    TSL    RSM    RPM    MFM
FDH O  1
WSM    0.55   1
WPM    0.59   −0.03  1
TSV    0.39   −0.05  0.84   1
TSL    0.54   1      −0.06  −0.07  1
RSM    0.54   0.09   0.87   0.95   0.07   1
RPM    −0.21  −0.4   −0.22  −0.46  −0.4   −0.43  1
MFM    0.59   −0.03  1      0.84   −0.06  0.87   −0.22  1

Table 6: Spearman's correlations calculated on the results obtained by the different FDH models

          FDH O  FDH O_HC  FDH O_RC  FDH O_SC
FDH O     1
FDH O_HC  0.47   1
FDH O_RC  0.43   0.26      1
FDH O_SC  0.32   0.19      0.12      1

Note(s): FDH O includes all intellectual capital inputs, i.e. HC, RC and SC; FDH O_HC includes only HC as the intellectual capital input; FDH O_RC includes only RC; FDH O_SC includes only SC

Table A1: Descriptive statistics on the analyzed data, 2016 (average, standard deviation, minimum, first quartile, median, third quartile and maximum for inputs X1-X10 and outputs Y1-Y5)

Table A2: Descriptive statistics on the analyzed data, 2017 (average, standard deviation, minimum, first quartile, median, third quartile and maximum for inputs X1-X10 and outputs Y1-Y5)

Table A3: Descriptive statistics on the analyzed data, 2018 (average, standard deviation, minimum, first quartile, median, third quartile and maximum for inputs X1-X11 and outputs Y1-Y6)

Appendix. Descriptive statistics on the analyzed data

Tables A1-A3

References

Anthony, R.N. (1966), “The trouble with profit maximization”, in Wadia, M.S. (Ed.), The Nature and Scope of Management, Scott Foresman and Company, Chicago, pp. 47-54.

Banker, R.D., Charnes, A. and Cooper, W.W. (1984), “Some models for estimating technical and scale inefficiencies in data envelopment analysis”, Management Science, Vol. 30 No. 9, pp. 1078-1092.

Bertschi, S., Bresciani, S., Crawford, T., Goebel, R., Kienreich, W., Lindner, M. and Moere, A.V. (2011), “What is knowledge visualization? Perspectives on an emerging discipline”, IEEE 2011 15th International Conference on Information Visualisation, pp. 329-336.

Bezhani, I. (2010), “Intellectual capital reporting at UK universities”, Journal of Intellectual Capital, Vol. 11 No. 2, pp. 179-207.

Brauers, W.K.M. and Zavadskas, E.K. (2010), “Project management by MULTIMOORA as an instrument for transition economies”, Technological and Economic Development of Economy, Vol. 16 No. 1, pp. 5-24.

Carson, J. (2020), “Quantification – affordances and limits”, Scholarly Assessment Reports, Vol. 2 No. 1, p. 8.

Ceballos Martin, B.A. (2016), "MCDM: Multi-Criteria Decision Making Methods for Crisp Data", R package, available at: https://cran.r-project.org/web/packages/MCDM/MCDM.pdf.

Charnes, A., Cooper, W.W. and Rhodes, E. (1978), “Measuring the efficiency of decision making units”, European Journal of Operational Research, Vol. 2 No. 6, pp. 429-444.

Córcoles, Y.R. (2013), “Importance of intellectual capital disclosure in Spanish universities”, Intangible Capital, Vol. 9 No. 3, pp. 931-944.

Cyert, R.M. and March, J.G. (1963), A Behavioral Theory of the Firm, Prentice-Hall, Englewood Cliffs, NJ.

Daraio, C. (2017a), “A framework for the assessment of research and its impacts”, Journal of Data and Information Science, Vol. 2 No. 4, pp. 7-42.

Daraio, C. (2017b), “Assessing research and its impacts: the generalized implementation problem and a doubly-conditional performance evaluation model”, ISSI 2017 - 16th International Conference on Scientometrics and Informetrics, Conference Proceedings, pp. 1546-1557.

Daraio, C. (2019), "Econometric approaches to the measurement of research productivity", in Glänzel, W., Moed, H.F., Schmoch, U. and Thelwall, M. (Eds), Springer Handbook of Science and Technology Indicators, pp. 633-666.

Daraio, C. (2020), “A framework for the assessment and consolidation of productivity stylized facts”, in Christopher, P. and Robin, S. (Eds), Methodological Contributions to the Advancement of Productivity and Efficiency Analysis, Springer.

Daraio, C. and Simar, L. (2007), Advanced Robust and Nonparametric Methods in Efficiency Analysis: Methodology and Applications, Springer, New York, NY.

Daraio, C., Bonaccorsi, A. and Simar, L. (2015), “Rankings and university performance: a conditional multidimensional approach”, European Journal of Operational Research, Vol. 244, pp. 918-930.

Daraio, C., Simar, L. and Wilson, P.W. (2018), “Central limit theorems for conditional efficiency measures and tests of the ‘separability’ condition in non-parametric, two-stage models of production”, Econometrics Journal, Vol. 21, pp. 170-191.

Deprins, D., Simar, L. and Tulkens, H. (1984), "Measuring labor inefficiency in post offices", in Marchand, M., Pestieau, P. and Tulkens, H. (Eds), The Performance of Public Enterprises: Concepts and Measurements, North-Holland, Amsterdam, pp. 243-267.

Di Berardino, D. and Corsi, C. (2018), “A quality evaluation approach to disclosing third mission activities and intellectual capital in Italian universities”, Journal of Intellectual Capital, Vol. 19 No. 1, pp. 178-201.

Drucker, P.F. (1966), “Business objectives and survival needs”, in Wadia, M.S. (Ed.), The Nature and Scope of Management, Scott Foresman and Company, Chicago.

Dumay, J. (2016), “A critical reflection on the future of intellectual capital: from reporting to disclosure”, Journal of Intellectual Capital, Vol. 17, pp. 168-184.

Edvinsson, L. and Malone, M.S. (1997), Intellectual Capital, Harper Business, New York.

Edvinsson, L. and Sullivan, P. (1996), “Developing a model for managing intellectual capital”, European Management Journal, Vol. 14 No. 4, pp. 356-364.

Eppler, M.J. (2013), “What is an effective knowledge visualization? Insights from a review of seminal concepts”, in Marchese, F.T. and Banissi, E. (Eds), Knowledge Visualization Currents, Springer-Verlag, London, pp. 3-12.

Fauzi, M.A., Tan, C.N.L., Daud, M. and Awalludin, M.M.N. (2020), “University rankings: a review of methodological flaws”, Issues in Educational Research.

Figueira, J., Greco, S. and Ehrgott, M. (Eds) (2005), Multiple Criteria Decision Analysis: State of the Art Surveys, International Series in Operations Research and Management Science, Vol. 78, Springer, Boston, MA.

Frutos-Belizón, J., Martín-Alcázar, F. and Sánchez-Gardey, G. (2019), “Conceptualizing academic intellectual capital: definition and proposal of a measurement scale”, Journal of Intellectual Capital, Vol. 20 No. 3, pp. 306-334.

Garcia Cascales, M.S. and Lamata, M.T. (2012), “On rank reversal and TOPSIS method”, Mathematical and Computer Modelling, Vol. 56 Nos 5-6, pp. 123-132.

Hwang, C.L. and Yoon, K. (1981), “Multiple attribute decision making”, Lecture Notes in Economics and Mathematical Systems, Springer-Verlag, Berlin, Vol. 186.

Iazzolino, G., Laise, D. and Marraro, L. (2012), “Business multicriteria performance analysis: a tutorial”, Benchmarking: An International Journal, Vol. 19 No. 3, pp. 395-411.

Ibarra-Cisneros, M., Hernández-Perlines, F. and Rodríguez-García, M. (2020), “Intellectual capital, organisational performance and competitive advantage”, European Journal of International Management, Vol. 14 No. 6, pp. 955-975.

Jasanoff, S. (2007), “Technologies of humility”, Nature, Vol. 450 No. 7166, p. 33.

Jensen, M.C. (2002), “Value maximization, stakeholder theory, and the corporate objective function”, Business Ethics Quarterly, Vol. 12 No. 2, pp. 235-256.

Johnson, W. (1999), “An integrative taxonomy of intellectual capital: measuring the stock and flow of intellectual capital components in the firm”, International Journal of Technology Management, Vol. 18, pp. 562-575.

Kaplan, R. and Norton, D. (1996), “Using the balanced scorecard as a strategic management system”, Harvard Business Review, pp. 1-13.

Kneip, A., Simar, L. and Wilson, P.W. (2016), "Testing hypotheses in nonparametric models of production", Journal of Business and Economic Statistics, Vol. 34, pp. 435-456.

Knorr-Cetina, K. (2003), Epistemic Cultures: How the Sciences Make Knowledge, Harvard University Press, Cambridge, MA.

Laise, D., Marraro, L. and Iazzolino, G. (2015), “Metachoice for Benchmarking: a case study”, Benchmarking: An International Journal, Vol. 22 No. 3, pp. 338-353.

Leitner, K.H. (2004), “Intellectual capital reporting for universities: conceptual background and application for Austrian universities”, Research Evaluation, Vol. 13 No. 2, pp. 129-140.

March, J.G. (1966a), “Business decision making”, in Wadia, M.S. (Ed.), The Nature and Scope of Management, Scott Foresman and Company, Chicago.

March, J.G. (1996b), "A preface to understanding how decisions happen in organizations", in Shapira, Z. (Ed.), Organizational Decision Making, Cambridge University Press, New York, NY.

March, J.G. and Simon, H.A. (1958), Organizations, Wiley, New York, NY.

Olcay, G.A. and Bulu, M. (2017), “Is measuring the knowledge creation of universities possible?: a review of university rankings”, Technological Forecasting and Social Change, Vol. 123, pp. 153-160.

Paloma Sánchez, M., Elena, S. and Castrillo, R. (2009), “Intellectual capital dynamics in universities: a reporting model”, Journal of Intellectual Capital, Vol. 10 No. 2, pp. 307-324.

Ramezan, M. (2011), “Intellectual capital and organizational organic structure in knowledge society: how are these concepts related?”, International Journal of Information Management, Vol. 31 No. 1, pp. 88-95.

Ramírez Córcoles, Y., Peñalver, J.F. and Ponce, Á.T. (2011), “Intellectual capital in Spanish public universities: stakeholders' information needs”, Journal of Intellectual Capital, Vol. 12 No. 3, pp. 356-376.

Ramírez Córcoles, Y., Tejada, Á. and Gordillo, S. (2013), "Recognition of intellectual capital importance in the university sector", International Journal of Business and Social Research, Vol. 3 No. 4, pp. 27-41.

Secundo, G., Margherita, A., Elia, G. and Passiante, G. (2010), “Intangible assets in higher education and research: mission, performance or both?”, Journal of Intellectual Capital, Vol. 11 No. 2, pp. 140-157.

Secundo, G., Perez, S., Martinaitis, Ž. and Leitner, H.K. (2017), “An Intellectual Capital framework to measure universities' third mission activities”, Technological Forecasting and Social Change, Vol. 123, pp. 229-239.

Simon, H. (1955), “A behavioural model of rational choice”, Quarterly Journal of Economics, Vol. 69, pp. 99-118.

Simon, H.A. (1978), “Rationality as process and as product of thought”, American Economic Review, Vol. 68, pp. 1-16.

Simon, H.A. (1997), Models of Bounded Rationality, Vol. 3: Empirically Grounded Economic Reason, The MIT Press, Cambridge, MA, pp. 87-110.

Smith, G.V. and Parr, R.L. (2000), Valuation of Intellectual Property and Intangible Assets, 3rd ed., John Wiley & Sons, New York, NY.

Stewart, T.A. (1994), "Your company's most valuable asset: intellectual capital", Fortune, Vol. 3, pp. 28-33.

Sveiby, K. (1989), Den Osynliga Balansräkningen, Affärsvärlden, Stockholm.

Valmorbida, S.M.I. and Ensslin, S.R. (2017), “Performance evaluation of university rankings: literature review and guidelines for future research”, International Journal of Business Innovation and Research, Vol. 14 No. 4, pp. 479-501.

Vernon, M.M., Balas, E.A. and Momani, S. (2018), “Are university rankings useful to improve research? A systematic review”, PloS One, Vol. 13 No. 3, p. e0193762, doi: 10.1371/journal.pone.0193762.

Vincke, P. (1992), Multicriteria Decision-Aid, Wiley, New York, NY.

Young Chu, P., Ling Lin, Y., Hwa Hsiung, H. and Yar Liu, T. (2006), “Intellectual capital: an empirical study of ITRI”, Technological Forecasting and Social Change, Vol. 73 No. 7, pp. 886-902.

Zavadskas, E.K., Turskis, Z., Antucheviciene, J. and Zakarevicius, A. (2012), “An optimization of weighted aggregated sum product assessment”, Electronics and Electrical Engineering, Vol. 122 No. 6, pp. 3-6.

Acknowledgements

A previous version of this paper was presented at the IFKAD 2020 Conference held in Matera (Italy) on 9–11 September 2020. The authors thank the conference participants for their useful comments and discussion.

Corresponding author

Gianpaolo Iazzolino is the corresponding author and can be contacted at: gp.iazzolino@unical.it

About the authors

Cinzia Daraio is Full Professor at the Department of Computer, Control and Management Engineering at the University of Rome “La Sapienza,” Italy. Her main interests are in Science and Technology Indicators, Higher Education Microdata and Methodological and Empirical Studies in Productivity and Efficiency Analysis.

Gianpaolo Iazzolino is Associate Professor in Business Economics at the Department of Mechanical, Energy and Management Engineering at the University of Calabria, Italy. He is currently a member of the Faculty of Management Engineering at the same University. His research interests are in Business Performance Evaluation Models, Evaluation of Innovation and Intangibles, Knowledge Management Systems.

Domenico Laise is Adjunct Professor (formerly Associate Professor) at the Department of Computer, Control and Management Engineering at the University of Rome "La Sapienza," Italy. His main interests are in Bounded Rationality Decision Models, Multi-criteria Decision-Making, Management Control Models and Organizational Design.

Ilda Maria Coniglio is research fellow at the Department of Mechanical, Energy and Management Engineering at the University of Calabria, Italy. She received her M.Sc. in Management Engineering from the same University.

Simone Di Leo is a PhD student at the Department of Computer, Control and Management Engineering at the University of Rome "La Sapienza," Italy. He received his M.Sc. in Management Engineering from the same University.
