Search results
Alex Copping, Noorullah Kuchai, Laura Hattam, Natalia Paszkiewicz, Dima Albadra, Paul Shepherd, Esra Sahin Burat and David Coley
Abstract
Purpose
Understanding the supply network of construction materials used to construct shelters in refugee camps, or during the reconstruction of communities, is important as it can reveal the intricate links between different stakeholders and the volumes and speeds of material flows to the end-user. Using social network analysis (SNA) enables another dimension to be analysed – the role of commonalities. This is likely to be particularly important when attempting to replace vernacular materials with higher-performing alternatives or when encouraging the use of non-vernacular methods. This paper aims to analyse the supply networks of four different disaster-relief situations.
Design/methodology/approach
Data were collected from interviews with 272 displaced (or formerly displaced) families in Afghanistan, Bangladesh, Nepal and Turkey, often in difficult conditions.
Findings
The results show that the form of the supply networks was highly influenced by the nature/cause of the initial displacement, the geographical location, the local availability of materials and the degree of support/advice given by aid agencies and/or governments. In addition, it was found that SNA could be used to indicate which strategies might work in a particular context and which might not, thereby potentially speeding up the delivery of novel solutions.
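For readers unfamiliar with SNA, the sketch below builds a small, entirely hypothetical shelter-material supply network and computes two standard centrality measures with networkx; the node names, flows and chosen measures are illustrative assumptions, not data or methods reported in the study.

```python
# Illustrative sketch only: a toy shelter-material supply network.
# Node names, flows and the chosen centrality measures are assumptions,
# not data or methods taken from the study.
import networkx as nx

G = nx.DiGraph()
edges = [
    ("timber_merchant", "local_market", 120),   # flow volumes are hypothetical
    ("local_market", "household", 80),
    ("aid_agency", "household", 60),
    ("brick_kiln", "local_market", 40),
    ("aid_agency", "local_market", 30),
]
G.add_weighted_edges_from(edges)

# Degree centrality highlights heavily connected actors;
# betweenness centrality highlights brokers that many material flows pass through.
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G, weight="weight")

for node in G.nodes:
    print(f"{node:16s} degree={degree[node]:.2f} betweenness={betweenness[node]:.2f}")
```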
Research limitations/implications
This study represents the first attempt at theorising and empirically investigating supply networks using SNA in a post-disaster reconstruction context. It is suggested that future studies might map the upstream supply chain to include manufacturers and higher-order, out-of-country suppliers. This would provide a complete picture of the origins of all materials and components in the supply network.
Originality/value
This is original research, and it aims to produce new knowledge.
Abstract
Purpose
Despite the general recommendation of using a combination of multiple criteria for research assessment and faculty promotion decisions, the rise of quantitative indicators is generating an emerging trend in Business Schools to use single journal impact factors (IFs) as key (unique) drivers for those relevant school decisions. This paper aims to investigate the effects of using single Web of Science (WoS)-based journal impact metrics when assessing research from two related disciplines: Business and Economics, and its potential impact for the strategic sustainability of a Business School.
Design/methodology/approach
This study collected impact indicator data for Business and Economics journals from the Clarivate Web of Science database, concentrating on the IF indicators, the Eigenfactor and the article influence score (AIS). It then examined the correlations between these indicators and ranked disciplines and journals using the different impact metrics.
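For a rough illustration of this kind of comparison, the snippet below computes Spearman rank correlations between hypothetical IF, Eigenfactor and AIS values and compares the resulting journal rankings with pandas; the figures are invented and do not come from the study's WoS data.

```python
# Hypothetical journal-level metrics; all values are illustrative only.
import pandas as pd

df = pd.DataFrame(
    {
        "journal": ["J1", "J2", "J3", "J4", "J5"],
        "IF": [6.2, 3.1, 4.8, 2.0, 5.5],
        "Eigenfactor": [0.012, 0.004, 0.020, 0.003, 0.008],
        "AIS": [2.4, 0.9, 3.1, 0.7, 1.6],
    }
).set_index("journal")

# Spearman correlations between the three impact metrics.
print(df.corr(method="spearman"))

# Rank journals under each metric (rank 1 = highest value) and compare.
ranks = df.rank(ascending=False).astype(int)
print(ranks)
```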
Findings
Consistent with previous findings, this study finds positive correlations among these metrics. Ranking the disciplines and journals under each impact metric, however, reveals relevant and substantial differences depending on the metric used: using AIS instead of the IF raises the relative ranking of Economics, while Business retains essentially the same rank.
Research limitations/implications
This study contributes to the research assessment literature by adding substantial evidence that, given the sensitivity of journal rankings to particular indicators, the selection of a single impact metric for assessing research and for hiring/promotion and tenure decisions is risky and too simplistic. This research shows that biases may be larger when assessment involves researchers from related disciplines – like Business and Economics – but with different research foundations and traditions.
Practical implications
Consistent with the literature, given the sensitivity of journal rankings to particular indicators, the selection of a single impact metric for assessing research, assigning research funds and making hiring/promotion and tenure decisions is risky and simplistic. However, this research shows that risks and biases may be larger when assessment involves researchers from related disciplines – like Business and Economics – but with different research foundations and trajectories. The use of multiple criteria is advised for such purposes.
Originality/value
This is an applied work using real data from WoS that addresses a practical case of comparing the use of different journal IFs to rank-related disciplines like Business and Economics, with important implications for faculty tenure and promotion committees and for research funds granting institutions and decision-makers.
Santo Raneri, Fabian Lecron, Julie Hermans and François Fouss
Abstract
Purpose
Artificial intelligence (AI) has started to receive attention in the field of digital entrepreneurship. However, few studies propose AI-based models aimed at assisting entrepreneurs in their day-to-day operations. In addition, extant models from the product design literature, while technically promising, fail to propose methods suitable for opportunity development with a high level of uncertainty. This study develops and tests a predictive model that provides entrepreneurs with a digital infrastructure for automated testing. Such an approach aims at harnessing AI-based predictive technologies while keeping the ability to respond to the unexpected.
Design/methodology/approach
Based on effectuation theory, this study identifies an AI-based, predictive phase in the “build-measure-learn” loop of Lean startup. The predictive component, based on recommendation algorithm techniques, is integrated into a framework that considers both prediction (causal) and controlled (effectual) logics of action. The performance of the so-called active learning build-measure-predict-learn algorithm is evaluated on a data set collected from a case study.
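The abstract does not disclose the algorithm's internals, but a minimal sketch of the predictive step might look as follows: a model trained on already-measured product design decisions scores new candidates, and an uncertainty check decides which predictions to act on and which decisions still require real measurement. The feature encoding, model choice and confidence threshold are illustrative assumptions, not the paper's algorithm.

```python
# Minimal sketch of a "predict before you build" step; not the paper's algorithm.
# Features, model and the 0.7 confidence threshold are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Past product design decisions: feature vectors with label 1 = found desirable.
X_measured = rng.random((40, 5))
y_measured = (X_measured[:, 0] + X_measured[:, 3] > 1.0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_measured, y_measured)

# New candidate decisions that have not been built yet.
X_candidates = rng.random((5, 5))
proba = model.predict_proba(X_candidates)[:, 1]

for i, p in enumerate(proba):
    if max(p, 1 - p) < 0.7:
        print(f"candidate {i}: prediction too uncertain -> build and measure it")
    else:
        print(f"candidate {i}: predicted desirability {p:.2f} -> act on the prediction")
```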
Findings
The results show that the algorithm can predict the desirability level of newly implemented product design decisions (PDDs) in the context of a digital product. The main advantages, in addition to the prediction performance, are the ability to detect cases where predictions are likely to be less precise and an easy-to-assess indicator for product design desirability. The model is found to deal with uncertainty in a threefold way: epistemological expansion through accelerated data gathering, ontological reduction of uncertainty by revealing prior “unknown unknowns” and methodological scaffolding, as the framework accommodates both predictive (causal) and controlled (effectual) practices.
Originality/value
Research about using AI in entrepreneurship is still in a nascent stage. This paper can serve as a starting point for new research on predictive techniques and AI-based infrastructures aiming to support digital entrepreneurs in their day-to-day operations. This work can also encourage theoretical developments, building on effectuation and causation, to better understand Lean startup practices, especially when supported by digital infrastructures accelerating the entrepreneurial process.
Sofia Baroncini, Bruno Sartini, Marieke Van Erp, Francesca Tomasi and Aldo Gangemi
Abstract
Purpose
In the last few years, the size of Linked Open Data (LOD) describing artworks, in general or domain-specific Knowledge Graphs (KGs), has been gradually increasing. This provides (art-)historians and Cultural Heritage professionals with a wealth of information to explore. Specifically, structured data about iconographical and iconological (icon) aspects, i.e. information about the subjects, concepts and meanings of artworks, are extremely valuable for state-of-the-art computational tools, e.g. content recognition through computer vision. Nevertheless, a data quality evaluation for art domains, fundamental for data reuse, is still missing. The purpose of this study is to fill this gap with an overview of art-historical data quality in current KGs, with a focus on icon aspects.
Design/methodology/approach
This study’s analyses are based on established KG evaluation methodologies, adapted to the domain by addressing requirements from art historians’ theories. The authors first select several KGs according to Semantic Web principles. Then, the authors evaluate (1) their structures’ suitability to describe icon information through quantitative and qualitative assessment and (2) their content, qualitatively assessed in terms of correctness and completeness.
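Purely by way of example, the snippet below probes one completeness aspect by counting how many paintings in Wikidata carry at least one "depicts" (P180) statement; Wikidata, the painting class (Q3305213) and the P180 property are convenient choices for this sketch and are not necessarily among the KGs or criteria the authors evaluated.

```python
# Illustrative completeness probe against Wikidata's public SPARQL endpoint.
# The choice of Wikidata, paintings (Q3305213) and P180 ("depicts") is an
# assumption for this example, not the study's actual evaluation setup.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("https://query.wikidata.org/sparql",
                         agent="icon-data-quality-sketch/0.1")
endpoint.setReturnFormat(JSON)
endpoint.setQuery("""
SELECT (COUNT(DISTINCT ?p) AS ?total) (COUNT(DISTINCT ?pd) AS ?withDepicts)
WHERE {
  { SELECT ?p WHERE { ?p wdt:P31 wd:Q3305213 } LIMIT 5000 }   # sample of paintings
  OPTIONAL { ?p wdt:P180 ?subject . BIND(?p AS ?pd) }          # has a "depicts" statement
}
""")

row = endpoint.query().convert()["results"]["bindings"][0]
total = int(row["total"]["value"])
with_depicts = int(row["withDepicts"]["value"])
print(f"{with_depicts}/{total} sampled paintings have at least one 'depicts' statement")
```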
Findings
This study’s results reveal several issues on the current expression of icon information in KGs. The content evaluation shows that these domain-specific statements are generally correct but often not complete. The incompleteness is confirmed by the structure evaluation, which highlights the unsuitability of the KG schemas to describe icon information with the required granularity.
Originality/value
The main contribution of this work is an overview of the actual landscape of icon information expressed in LOD. It is therefore valuable to cultural institutions, providing them with a first domain-specific data quality evaluation. Since this study’s results suggest that the selected domain information is underrepresented in Semantic Web datasets, the authors highlight the need for the creation and fostering of such information to provide a more thorough art-historical dimension to LOD.
Oscar Claveria and Petar Sorić
Abstract
Purpose
The purpose of this paper is to investigate the adjustment of government redistributive policies in Scandinavian and Mediterranean countries following changes in income inequality over the period 1980–2021.
Design/methodology/approach
The authors first modelled the time-varying dynamics between income inequality and redistribution and then used a non-linear framework to test for the existence of asymmetries and cointegration in their long-run relationship. The authors used two complementary measures of inequality – the share of total income accruing to top percentile income holders, and the ratio of the share of total income accruing to top decile income holders to that accumulated by the bottom 50% – and computed redistribution as the difference between the two inequality indicators before and after taxes and transfers.
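To make the variable construction concrete, the sketch below computes redistribution as pre-tax minus post-tax inequality and splits changes in inequality into cumulative positive and negative partial sums, the building blocks of an asymmetric (non-linear) specification; the series are synthetic, not the study's data, and the indicator shown (a top-1% share) is only one of the two measures the authors use.

```python
# Synthetic illustration of the variable construction; not the study's data.
import numpy as np
import pandas as pd

years = pd.RangeIndex(1980, 2022, name="year")
rng = np.random.default_rng(1)

# Hypothetical top-1% income shares before and after taxes and transfers.
ineq_pre = pd.Series(0.10 + np.cumsum(rng.normal(0, 0.002, len(years))), index=years)
ineq_post = ineq_pre - 0.03 + rng.normal(0, 0.001, len(years))

# Redistribution = pre-tax inequality minus post-tax inequality.
redistribution = ineq_pre - ineq_post

# Asymmetric decomposition: cumulative sums of positive and negative changes,
# as used in non-linear (partial-sum) cointegration frameworks.
d = ineq_pre.diff().fillna(0)
ineq_increases = d.clip(lower=0).cumsum()
ineq_decreases = d.clip(upper=0).cumsum()

print(pd.DataFrame({"redistribution": redistribution,
                    "ineq_increases": ineq_increases,
                    "ineq_decreases": ineq_decreases}).tail())
```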
Findings
The authors found that the sign of the relationship between income inequality and redistribution is mostly positive and time-varying. Overall, the authors also found evidence that the impact of increases in inequality on redistributive measures is higher than that of decreases. Finally, the authors obtained a significant long-run relationship between both variables in all countries except Denmark and Spain. These results hold for both Scandinavian and Mediterranean countries.
Originality/value
To the best of the authors’ knowledge, this is the first paper to account for the potential existence of non-linearities and to examine the asymmetries in the adjustment of redistributive policies to increases in income inequality using alternative income inequality metrics.
Basma Makhlouf Shabou, Julien Tièche, Julien Knafou and Arnaud Gaudinat
Abstract
Purpose
This paper aims to describe interdisciplinary and innovative research conducted in Switzerland, at the Geneva School of Business Administration HES-SO and supported by the State Archives of Neuchâtel (Office des archives de l'État de Neuchâtel, OAEN). The problem addressed is one of the most classical ones: how to extract and discriminate relevant data from a huge amount of diversified and complex record formats and contents. The goal of this study is to provide a framework and a proof of concept for software that helps take defensible decisions on the retention and disposal of records and data proposed to the OAEN. For this purpose, the authors designed two axes: the archival axis, to propose archival metrics for the appraisal of structured and unstructured data, and the data mining axis, to propose algorithmic methods as complementary and/or additional metrics for the appraisal process.
Design/methodology/approach
Based on these two axes, this exploratory study designs and tests the feasibility of archival metrics that are paired with data mining metrics, to advance the digital appraisal process in a systematic or even automatic way as far as possible. Under Axis 1, the authors took three steps: first, the design of a conceptual framework for records and data appraisal with a detailed three-dimensional approach (trustworthiness, exploitability, representativeness); the authors also defined the main principles and postulates to guide the operationalization of the conceptual dimensions. Second, the operationalization stage proposed metrics expressed as variables, supported by a quantitative method for their measurement and scoring. Third, the authors shared this conceptual framework, with its dimensions and operationalized variables (metrics), with experienced professionals to validate them. The experts' feedback finally gave the authors an indication of the relevance and feasibility of these metrics; those two aspects may demonstrate the acceptability of such a method in real-life archival practice. In parallel, Axis 2 proposes functionalities to cover not only macro analysis of the data but also the algorithmic methods enabling the computation of digital archival and data mining metrics. On this basis, three use cases were proposed to imagine plausible and illustrative scenarios for the application of such a solution.
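Purely as an illustration of how dimension-level metrics might be rolled up into a record-level appraisal score, the snippet below aggregates hypothetical scores for the three conceptual dimensions named above; the metric names, scores and weights are assumptions of this sketch, not values proposed by the authors.

```python
# Hypothetical roll-up of appraisal metrics into a single score.
# Metric names, scores and dimension weights are illustrative assumptions.
from statistics import mean

record_metrics = {
    "trustworthiness": {"source_reliability": 0.9, "integrity_check": 1.0},
    "exploitability": {"format_openness": 0.6, "metadata_completeness": 0.4},
    "representativeness": {"coverage_of_function": 0.8},
}
weights = {"trustworthiness": 0.4, "exploitability": 0.3, "representativeness": 0.3}

dimension_scores = {dim: mean(metrics.values()) for dim, metrics in record_metrics.items()}
appraisal_score = sum(weights[dim] * score for dim, score in dimension_scores.items())

print(dimension_scores)
print(f"overall appraisal score: {appraisal_score:.2f}")  # e.g. retain if above a cut-off
```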
Findings
The main results demonstrate the feasibility of measuring the value of data and records with a reproducible method. More specifically, for Axis 1, the authors applied the metrics in a flexible and modular way. The authors also defined the main principles needed to enable a computational scoring method. The results obtained through the experts' consultation on the relevance of the 42 metrics indicate an acceptance rate above 80%. In addition, the results show that 60% of all metrics can be automated. Regarding Axis 2, 33 functionalities were developed and proposed under six main types: macro analysis, microanalysis, statistics, retrieval, administration and, finally, decision modeling and machine learning. The relevance of the metrics and functionalities rests on the theoretical validity and computational character of their method. These results are largely satisfactory and promising.
Originality/value
This study offers a valuable aid to improve the validity and performance of archival appraisal processes and decision-making. Transferability and applicability of these archival and data mining metrics could be considered for other types of data. An adaptation of this method and its metrics could be tested on research data, medical data or banking data.
James Christopher Westland and Jian Mou
Abstract
Purpose
Internet search is a $120bn business that answers lists of search terms or keywords with relevant links to Internet webpages. Only a few companies have sufficient scale to compete and thus economics of the process are paramount. This study aims to develop a detailed industry-specific modeling of the economics of internet search.
Design/methodology/approach
The current research develops a stochastic model of the process of Internet indexing, search and retrieval in order to predict expected costs and revenues of particular configurations and usages.
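The abstract does not reveal the model's parameterization, but a toy Monte Carlo of the cost and revenue sides of search might look like the sketch below; every distribution and figure in it is an invented assumption for illustration, not a value from the study's model.

```python
# Toy Monte Carlo of search costs and ad revenue; every parameter below is an
# invented assumption for illustration, not a figure from the study's model.
import numpy as np

rng = np.random.default_rng(42)
n_days = 10_000

queries = rng.poisson(lam=8.5e9, size=n_days)                  # daily query volume
cost_per_query = rng.normal(0.0007, 0.0001, n_days)            # serving + indexing cost ($)
ad_query_share = rng.beta(2, 3, n_days)                        # share of queries showing ads
click_through = rng.beta(2, 60, n_days)                        # clicks per ad impression
revenue_per_click = rng.lognormal(mean=0.3, sigma=0.4, size=n_days)  # $ per click

revenue = queries * ad_query_share * click_through * revenue_per_click
cost = queries * cost_per_query
profit = revenue - cost

print(f"expected daily revenue: ${revenue.mean() / 1e6:,.0f}M")
print(f"expected daily cost:    ${cost.mean() / 1e6:,.0f}M")
print(f"expected daily profit:  ${profit.mean() / 1e6:,.0f}M")
```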
Findings
The models describe the behavior and economics of parameters that are not directly observable and whose distributions are difficult to determine empirically.
Originality/value
The model may be used to guide the economics of large search engine operations, including the advertising platforms that depend on them and largely fund them.
Carlos Alexander Grajales and Santiago Medina Hurtado
Abstract
Purpose
This paper measures different market risk impacts on options portfolios under the new Fundamental Review of the Trading Book (FRTB) regulation, issued in Basel and coming into effect in 2023.
Design/methodology/approach
This paper first suggests an algorithm for implementing the FRTB standardised approach via the sensitivities-based method to estimate a portfolio's risk capital, and presents an illustration applied to an option position. Second, it proposes a methodology to estimate the expected shortfall in options portfolios under the FRTB internal models approach. In this regard, an application is developed to measure expected shortfall (ES) and value at risk (VaR) impacts under FRTB versus conventional VaR in a currency option position, considering stress scenarios from the 2007–2009 and 2020–2021 crises and back-testing procedures.
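The sensitivities-based algorithm is too involved to reproduce here, but the internal-models side rests on historical VaR and expected shortfall, which the following sketch computes from a synthetic P&L series at the conventional 99% (VaR) and FRTB 97.5% (ES) levels; the data are simulated, and FRTB's liquidity-horizon scaling and stressed-calibration adjustments are deliberately omitted.

```python
# Historical VaR and expected shortfall from a synthetic daily P&L series.
# The simulated losses are illustrative; FRTB's liquidity-horizon scaling
# and stressed calibration are omitted in this sketch.
import numpy as np

rng = np.random.default_rng(7)
pnl = rng.standard_t(df=4, size=2500) * 10_000   # heavy-tailed daily P&L in $

def var_es(pnl, alpha):
    """Historical VaR and ES (positive numbers = losses) at confidence alpha."""
    losses = -pnl
    var = np.quantile(losses, alpha)
    es = losses[losses >= var].mean()
    return var, es

var99, _ = var_es(pnl, 0.99)       # conventional 99% VaR
_, es975 = var_es(pnl, 0.975)      # FRTB 97.5% expected shortfall

print(f"99% VaR:  {var99:,.0f}")
print(f"97.5% ES: {es975:,.0f}")
```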
Findings
The suggested algorithm satisfactorily captures impacts via the sensitivities-based method, and higher risk capital demands are expected for emerging economies. The proposed FRTB methodology to measure ES and VaR is also appropriate; in particular, the historical metrics perform well. Strikingly, the revealed impacts are greater under the 2020–2021 pandemic crisis than under the 2007–2009 financial crisis.
Originality/value
The proposals developed weave a communication bridge between the standardised and internal approaches of FRTB regulation, which can be scaled up technologically and institutionally.
Abstract
Purpose
This study analyzed Korea's relations table through network analysis. In particular, among the centrality measures, eigenvector centrality, PageRank centrality and degree were used. The author studied which network characteristics affected the value-added rate.
Design/methodology/approach
A network analysis method was used.
Findings
It is the inward relationships that affect the value-added ratio of Korea's industries, while the outward relationships have less influence. In particular, the inward relationships not only act as a cost but have also recently had an effect on the value-added rate.
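To illustrate the distinction between inward and outward relationships, the sketch below builds a toy directed inter-industry network and computes weighted in-degree, out-degree and PageRank centrality with networkx (eigenvector centrality could be added analogously); the sectors and flow values are invented and do not come from Korea's relations table.

```python
# Toy inter-industry network; sectors and flow values are invented for illustration.
import networkx as nx

flows = {
    ("agriculture", "food"): 30,
    ("chemicals", "electronics"): 25,
    ("electronics", "autos"): 40,
    ("steel", "autos"): 35,
    ("autos", "services"): 10,
}
G = nx.DiGraph()
for (src, dst), value in flows.items():
    G.add_edge(src, dst, weight=value)

in_degree = dict(G.in_degree(weight="weight"))    # "inward" links: purchased inputs
out_degree = dict(G.out_degree(weight="weight"))  # "outward" links: sales to other sectors
pagerank = nx.pagerank(G, weight="weight")

for sector in G.nodes:
    print(f"{sector:12s} in={in_degree[sector]:5.1f} out={out_degree[sector]:5.1f} "
          f"pagerank={pagerank[sector]:.2f}")
```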
Research limitations/implications
Since only the three years 2010, 2015 and 2019 are covered, the data are somewhat insufficient to generalize from.
Practical implications
For the value-added ratio of an industry, inputs are more important than outputs (sales). Therefore, where inputs are sourced from is very important.
Social implications
The study improves understanding of the determinants of the value-added rate of Korean industries.
Originality/value
(1) It was clarified whether the inward or the outward side matters in determining industry value added in Korea. (2) The relationship between PageRank centrality, eigenvector centrality and degree was analyzed for the Korean case. (3) Input is a cost but also acts to increase added value.
H. Bello-Salau, A.M. Aibinu, A.J. Onumanyi, E.N. Onwuka, J.J. Dukiya and H. Ohize
Abstract
This paper presents a new algorithm for detecting and characterizing potholes and bumps directly from noisy signals acquired using an accelerometer. A wavelet-transformation-based filter was used to decompose the signals into multiple scales. These coefficients were correlated across adjacent scales and filtered using a spatial filter. Road anomalies were then detected based on a fixed-threshold system, while characterization was achieved using unique features extracted from the filtered wavelet coefficients. Our analyses show that the proposed algorithm detects and characterizes road anomalies with high accuracy and precision and low false alarm rates.
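A rough sketch of the detection idea, using PyWavelets, is given below: the acceleration signal is decomposed with a stationary wavelet transform, detail coefficients at adjacent scales are multiplied to suppress noise that does not persist across scales, and samples exceeding a fixed threshold are flagged. The wavelet, decomposition depth and threshold here are assumptions of the sketch rather than the paper's parameters, and the spatial filtering and characterization steps are omitted.

```python
# Sketch of cross-scale wavelet anomaly detection; wavelet, levels and the
# threshold below are illustrative choices, not the paper's exact parameters.
import numpy as np
import pywt

rng = np.random.default_rng(3)
n = 1024                                      # length must be divisible by 2**level for swt
signal = rng.normal(0, 0.1, n)                # vertical acceleration noise
signal[400:410] += 1.5                        # synthetic "pothole" spike
signal[800:815] -= 1.2                        # synthetic "bump" dip

# Stationary wavelet transform keeps every scale at full signal length.
coeffs = pywt.swt(signal, wavelet="db4", level=3)
details = [d for (_, d) in coeffs]            # detail coefficients per scale

# Multiply details at adjacent scales: road anomalies persist across scales,
# while uncorrelated sensor noise tends to cancel out.
cross_scale = np.abs(details[0] * details[1])

threshold = 5 * np.median(cross_scale)        # fixed, data-driven threshold
anomaly_idx = np.flatnonzero(cross_scale > threshold)
print("anomalous sample indices:", anomaly_idx)
```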