Search results
1 – 10 of over 5000

Warattaya Chinnakum, Laura Berrout Ramos, Olugbenga Iyiola and Vladik Kreinovich
Abstract
Purpose
In real life, we only know the consequences of each possible action with some uncertainty. A typical example is interval uncertainty, when we only know the lower and upper bounds on the expected gain. A usual way to compare such interval-valued alternatives is to use the optimism–pessimism criterion developed by Nobelist Leo Hurwicz. In this approach, a weighted combination of the worst-case and the best-case gains is maximized. There exist several justifications for this criterion; however, some of the assumptions behind these justifications are not 100% convincing. The purpose of this paper is to find a more convincing explanation.
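The criterion itself fits in a few lines. A minimal sketch (the alternative names and interval bounds are hypothetical): for an interval [l, u] of possible gains and an optimism coefficient alpha in [0, 1], Hurwicz's criterion ranks alternatives by alpha·u + (1 − alpha)·l.

```python
# Hurwicz optimism-pessimism criterion: for an interval [l, u] of possible
# gains, score H = alpha * u + (1 - alpha) * l, where alpha in [0, 1] is the
# decision maker's optimism coefficient, and pick the highest-scoring action.

def hurwicz(lower, upper, alpha):
    """Weighted combination of best-case (upper) and worst-case (lower) gains."""
    return alpha * upper + (1 - alpha) * lower

# Hypothetical interval-valued alternatives.
alternatives = {"A": (10, 50), "B": (25, 30), "C": (0, 80)}

choice = max(alternatives, key=lambda k: hurwicz(*alternatives[k], 0.5))
print(choice)  # alpha = 0.5 scores the midpoints 30, 27.5, 40 -> "C"
```

Note how the choice depends on alpha: a pure pessimist (alpha = 0) compares only lower bounds and would pick "B" instead.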
Design/methodology/approach
The authors used the utility-based approach to decision-making.
Findings
The authors proposed new, hopefully more convincing, justifications for Hurwicz’s approach.
Originality/value
This is a new, more intuitive explanation of Hurwicz’s approach to decision-making under interval uncertainty.
Miguel Jerez, Alejandra Montealegre-Luna and Alfredo Garcia-Hiernaux
Abstract
Purpose
The purpose of this paper is to estimate the impact of the 2008 and 2020 economic crises on employment in Spain.
Design/methodology/approach
The authors perform a counterfactual analysis, combining intervention (interrupted time series) analysis and conditional forecasting to estimate a “crisis-free” scenario. These counterfactual estimates are used as a synthetic control, to be compared with the observed values of the main variables of the Spanish Labor Force Survey (EPA).
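The counterfactual logic can be illustrated in a stripped-down form, with hypothetical data and a simple linear trend standing in for the paper's intervention-analysis and conditional-forecasting machinery: fit the pre-crisis sample only, extrapolate it as the "crisis-free" scenario, and read the crisis effect as observed minus counterfactual.

```python
import numpy as np

# Hypothetical quarterly employment index; the "crisis" hits at t = 8.
observed = np.array([100, 101, 102, 103, 104, 105, 106, 107,   # pre-crisis
                     103, 100, 98, 97, 98, 100], dtype=float)  # crisis
break_t = 8

# Fit a linear trend to the pre-crisis sample only ...
slope, intercept = np.polyfit(np.arange(break_t), observed[:break_t], 1)

# ... extrapolate it as the "crisis-free" counterfactual (synthetic control) ...
counterfactual = intercept + slope * np.arange(len(observed))

# ... and read the crisis effect as observed minus counterfactual.
effect = observed - counterfactual
print(effect[break_t:].round(1))  # -5.0 at the onset, widening to -13.0
```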
Findings
The authors measure the effect on Spanish employment of the 2008 recession and the ongoing COVID/Ukraine crisis, as well as the speed of recovery, which yields a rigorous dating of the beginning and end of the crises studied. Finally, the authors estimate what share of the employed and unemployed population was on furlough (ERTE), based on microdata provided by the Spanish Institute of Statistics.
Originality/value
To the best of the authors’ knowledge, there are no counterfactual studies covering all the basic variables in the EPA and no estimates of the effect of ERTEs on the basic employment variables. Finally, the authors combine well-known intervention and forecasting techniques into an integrated framework to assess the effects of both past and ongoing crises.
Yuhan Liu, Linhong Wang, Ziling Zeng and Yiming Bie
Abstract
Purpose
The purpose of this study is to develop an optimization method for charging plans under a time-of-day (TOD) electricity tariff, to reduce electricity bills.
Design/methodology/approach
Two optimization models for charging plans, with fixed and stochastic trip travel times respectively, are developed to minimize the electricity costs of the daily operation of an electric bus. The charging time is taken as the optimization variable. The TOD electricity tariff is considered, and the energy consumption model is developed based on real operation data. An optimal charging plan specifies charging times at the bus's idle times during operation hours over the whole day (a charging time of zero means the bus is not charged at that idle time), while ensuring the regular operation of every trip served by the bus.
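As a toy illustration of the underlying idea (not the authors' models, which also handle trip timing and stochastic travel times), the fixed-travel-time case can be sketched as a greedy fill of the cheapest idle slots; slot names, tariffs and capacities below are hypothetical.

```python
# Greedy charging plan under a time-of-day tariff: fill the cheapest idle
# slots first. The greedy rule is optimal here because slot costs are linear
# and slot capacities are independent in this simplified model.

def plan_charging(slots, energy_needed):
    """slots: list of (name, price_per_kwh, max_kwh_chargeable_in_slot)."""
    plan, remaining = {}, energy_needed
    for name, price, cap in sorted(slots, key=lambda s: s[1]):
        take = min(cap, remaining)
        plan[name] = take          # 0 means the bus is not charged in this slot
        remaining -= take
    if remaining > 0:
        raise ValueError("idle-time capacity cannot cover the requirement")
    return plan

slots = [("morning idle", 0.9, 40), ("midday idle", 0.5, 30),
         ("evening idle", 1.2, 50), ("overnight", 0.3, 60)]
plan = plan_charging(slots, energy_needed=100)
prices = {name: price for name, price, _ in slots}
cost = sum(kwh * prices[name] for name, kwh in plan.items())
print(plan, cost)  # the cheap overnight and midday slots are filled first
```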
Findings
The electricity costs of the bus route can be reduced by applying the optimal charging plans.
Originality/value
This paper produces a viable option for transit agencies to reduce their operation costs.
Marco Botta and Luca Vittorio Angelo Colombo
Abstract
Purpose
It is widely believed that deviating from the “one share-one vote” principle leads to corporate inefficiencies. To measure the market appraisal of this potential inefficiency, this study aims to analyse the market reaction to a change from the “one head-one vote” to the “one share-one vote” mechanism by means of a quasi-natural experiment: a 2015 Italian reform forcing all listed cooperative banks to transform into joint-stock companies.
Design/methodology/approach
To investigate the market reaction around the regulatory change, this study uses both a traditional event study and a novel methodology based on the synthetic control method as well as on Bayesian statistical techniques.
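The event-study half of the methodology can be sketched in a few lines: estimate a market model on a pre-event window, then cumulate abnormal returns over the event window. All numbers below are simulated, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated daily returns: a market index and one bank stock following a
# market model, with a positive shock injected over the event window.
market = rng.normal(0.0, 0.01, 120)
stock = 0.001 + 1.2 * market + rng.normal(0.0, 0.005, 120)
event = slice(100, 110)
stock[event] += 0.013              # hypothetical reaction to the reform

# Estimate the market model on the pre-event window only.
est = slice(0, 100)
beta, alpha = np.polyfit(market[est], stock[est], 1)

# Abnormal returns and the cumulative abnormal return (CAR).
ar = stock - (alpha + beta * market)
car = float(ar[event].sum())
print(round(car, 3))               # close to the injected 10 * 0.013 = 0.13
```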
Findings
This study estimates the market valuation of the effects of the governance change around the event date to be a cumulative average increase in market value of about 14 per cent using the event study methodology, and of about 13 per cent using Bayesian techniques.
Originality/value
This study provides evidence on the fact that the voting mechanism significantly affects the market values of companies. The study also introduces a novel statistical technique that can be extremely useful in analysing single-firm event studies.
Elisa Verna, Gianfranco Genta and Maurizio Galetto
Abstract
Purpose
The purpose of this paper is to investigate and quantify the impact of product complexity, including architectural complexity, on operator learning, productivity and quality performance in both assembly and disassembly operations. This topic has not been extensively investigated in previous research.
Design/methodology/approach
An extensive experimental campaign involving 84 operators was conducted to repeatedly assemble and disassemble six different products of varying complexity to construct productivity and quality learning curves. Data from the experiment were analysed using statistical methods.
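Learning curves of this kind are commonly summarized by a Wright-type power law T_n = T_1 · n^(−b). A minimal sketch with hypothetical assembly times (the paper's own data and functional form may differ):

```python
import numpy as np

# Hypothetical mean assembly times (minutes) over ten repetitions.
trials = np.arange(1, 11)
times = np.array([12.0, 9.8, 8.9, 8.1, 7.7, 7.3, 7.0, 6.8, 6.6, 6.4])

# Fit T_n = T_1 * n**(-b) by ordinary least squares in log-log space.
slope, log_t1 = np.polyfit(np.log(trials), np.log(times), 1)
b = -slope                    # learning exponent (positive)
learning_rate = 2.0 ** (-b)   # time ratio per doubling of cumulative output

print(round(b, 2), round(learning_rate, 2))
```

A learning rate of, say, 0.83 means each doubling of repetitions cuts the task time to 83 per cent of its previous value.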
Findings
The human learning factor of productivity increases superlinearly with the increasing architectural complexity of products, i.e. from centralised to distributed architectures, both in assembly and disassembly, regardless of the level of overall product complexity. On the other hand, the human learning factor of quality performance decreases superlinearly as the architectural complexity of products increases. The intrinsic characteristics of product architecture are the reasons for this difference in learning factor.
Practical implications
The results of the study suggest that considering product complexity, particularly architectural complexity, in the design and planning of manufacturing processes can optimise operator learning, productivity and quality performance, and inform decisions about improving manufacturing operations.
Originality/value
While previous research has focussed on the effects of complexity on process time and defect generation, this study is amongst the first to investigate and quantify the effects of product complexity, including architectural complexity, on operator learning using an extensive experimental campaign.
Kanak Meena, Devendra K. Tayal, Oscar Castillo and Amita Jain
Abstract
The scalability of similarity joins is threatened by data skewness, a pervasive characteristic of scientific data. Skewness produces an uneven distribution of attribute values, which can cause a severe load-imbalance problem, and when database join operations are applied to such datasets the skew grows exponentially. All the join algorithms developed to date are highly skew-sensitive. This paper presents a new approach for handling data skewness in a character-based string similarity join using the MapReduce framework. No prior work in the literature handles data skewness in character-based string similarity joins, although work on set-based string similarity joins exists. The proposed work is divided into three stages, and every stage is further divided into mapper and reducer phases, each dedicated to a specific task. The first stage finds the lengths of the strings in a dataset. For valid candidate-pair generation, the MR-Pass Join framework is used in the second stage. In the third stage, which is further divided into four MapReduce phases, MRFA concepts are incorporated for the string similarity join, named “MRFA-SSJ” (MapReduce Frequency Adaptive – String Similarity Join). Hence, MRFA-SSJ is proposed to handle skewness in the string similarity join. The experiments were run on three different datasets, namely DBLP, Query log and a real dataset of IP addresses and cookies, by deploying the Hadoop framework. The proposed algorithm was compared with three known algorithms; all three fail when data are highly skewed, whereas the proposed method handles highly skewed data without any problem. A 15-node cluster was used in the experiments, and the Zipf distribution law was followed for the analysis of the skewness factor.
A comparison among the existing and proposed techniques is also shown. The existing techniques survive only up to a Zipf factor of 0.5, whereas the proposed algorithm survives up to a Zipf factor of 1. Hence, the proposed algorithm is skew-insensitive and ensures scalability with a reasonable query-processing time for string similarity database joins. It also ensures an even distribution of attributes.
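The load-imbalance problem that motivates the paper is easy to reproduce: under plain hash partitioning, Zipf-distributed join keys pile up on a few reducers. A small sketch (the modulo "hash" and parameters are illustrative; this is not the MRFA-SSJ algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(42)

# 100,000 join keys drawn from a Zipf distribution, hashed onto 15 reducers
# with a toy modulo "hash". Skew concentrates work on a few reducers.
n_reducers = 15
keys = rng.zipf(a=1.5, size=100_000)
keys = keys[keys <= 10_000]            # drop the extreme tail for bincount

loads = np.bincount(keys % n_reducers, minlength=n_reducers)
imbalance = loads.max() / loads.mean()
print(round(float(imbalance), 1))      # far above the balanced value of 1.0
```

The reducer that receives the most frequent key ends up with several times the average load, which is exactly the straggler effect a skew-insensitive join must avoid.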
Hung T. Nguyen, Olga Kosheleva and Vladik Kreinovich
Abstract
Purpose
In 1951, Kenneth Arrow proved that it is not possible to have a group decision-making procedure that satisfies reasonable requirements like fairness. From the theoretical viewpoint, this is a great result – well deserving the Nobel Prize that was awarded to Professor Arrow. However, from the practical viewpoint, the question remains: how should we make group decisions? A usual way to address this problem is to propose reasonable heuristic ideas, but different seemingly reasonable ideas often lead to different group decisions – this is known, e.g. for different voting schemes.
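That reasonable schemes can disagree is easy to demonstrate: on the hypothetical seven-voter profile below, plurality elects A while the Borda count elects B.

```python
# A hypothetical seven-voter profile over candidates A, B, C
# (each ballot ranks best to worst).
ballots = [("A", "B", "C")] * 3 + [("B", "C", "A")] * 2 + [("C", "B", "A")] * 2

def plurality(ballots):
    """Winner by first-place votes only."""
    tally = {}
    for b in ballots:
        tally[b[0]] = tally.get(b[0], 0) + 1
    return max(tally, key=tally.get)

def borda(ballots):
    """Winner by Borda count: m-1 points for first place, down to 0 for last."""
    score = {}
    for b in ballots:
        for rank, cand in enumerate(b):
            score[cand] = score.get(cand, 0) + len(b) - 1 - rank
    return max(score, key=score.get)

print(plurality(ballots), borda(ballots))  # the two schemes disagree
```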
Design/methodology/approach
In this paper, we analyze this problem from the viewpoint of decision theory, the basic theory underlying all our activities, including economic ones.
Findings
We show how, starting from the first principles of decision theory, we can extract explicit recommendations for group decision-making.
Originality/value
Most of the resulting recommendations have been proposed earlier. The main novelty of this paper is that it provides a unified coherent narrative that leads from the fundamental first principles to practical recommendations.
Abstract
Purpose
Currently, there is a conflict in developing countries between the requirements of self-sustaining forestry development and insufficient investment in the forestry sector; the forest ticket system is an innovative forestry management method designed to resolve this contradiction. Within research on the forest ticket system, the study of its price formation mechanism is particularly important. The key issues are how the forest ticket price is formed and whether the forest ticket pricing methods are reasonable. Solving these problems is the purpose of this study.
Design/methodology/approach
This study uses three methods – the forest ecosystem service value evaluation index method, the ecosystem service value based on per unit area evaluation method and the contingent valuation method – to study the forest ticket price formation mechanism, filling a gap in current research on forest ticket pricing methods. It analyzes how each of these three methods prices the forest ticket and evaluates whether the pricing methods are reasonable. The study then summarizes the forest ticket price formation mechanism and provides policy recommendations for decision-making departments.
Findings
The contingent valuation method and the forest ecosystem service value evaluation index method should be mainly used and given priority in the forest ticket pricing process. When the forest ticket is mainly issued for local residents' willingness to compensate for the forestry ecological value, the contingent valuation method should be mainly considered; when the forest ticket is mainly issued for compensating for the ecological value of local used forest land, the forest ecosystem service value evaluation index method should be mainly considered. The ecosystem service value based on per unit area evaluation method does not need to be the focus.
Originality/value
Compared with existing studies, which focus more on the forest ticket system itself and the definition of the forest ticket, this study focuses on the forest ticket price formation mechanism – how the forest ticket price is formed and whether the forest ticket pricing methods are reasonable. This focus has a degree of innovation and research value and can partially fill the gap in related fields. At the same time, the study helps to enrich the forest ticket system and to extend related research.
Vicente Esteve and María A. Prats
Abstract
Purpose
This paper aims to analyze the dynamics of the Spanish public debt–gross domestic product ratio during the period 1850–2020.
Design/methodology/approach
This study uses a recent procedure to test for recurrent explosive behavior (Phillips et al., 2011; Phillips et al., 2015a, 2015b) to identify episodes of explosive public debt dynamics and also the episodes of fiscal adjustments over this long period.
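The Phillips et al. procedure rests on right-tailed Dickey–Fuller regressions over forward-expanding windows, whose supremum (the SADF statistic) signals explosive episodes. A simplified numpy sketch on simulated series follows; the cited papers also develop the generalized sup ADF test and its critical values, which this sketch omits.

```python
import numpy as np

def df_stat(y):
    """t-statistic of rho in  dy_t = mu + rho * y_{t-1} + e_t.
    Large positive values (right tail) signal explosive behaviour."""
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones_like(ylag), ylag])
    coef = np.linalg.lstsq(X, dy, rcond=None)[0]
    resid = dy - X @ coef
    s2 = resid @ resid / (len(dy) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return coef[1] / np.sqrt(cov[1, 1])

def sadf(y, min_window=30):
    """Supremum of DF statistics over forward-expanding windows."""
    return max(df_stat(y[:k]) for k in range(min_window, len(y) + 1))

rng = np.random.default_rng(1)
random_walk = np.cumsum(rng.normal(size=150))   # non-explosive benchmark
explosive = np.empty(150)
explosive[0] = 1.0
for t in range(1, 150):                          # mildly explosive root 1.05
    explosive[t] = 1.05 * explosive[t - 1] + rng.normal()

print(round(sadf(random_walk), 2), round(sadf(explosive), 2))
```

The explosive series produces a much larger supremum than the random walk, which is the basis for dating episodes of explosive public debt dynamics.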
Findings
The identified episodes of explosive behavior of public debt coincided with fiscal stress events, whereas fiscal adjustments and changes in economic policies stabilized public finances after periods of explosive dynamics of public debt.
Originality/value
The longer-than-usual span of the data should allow the authors to obtain more robust results than in most previous analyses of long-run sustainability.
Mohammed Belal Uddin and Bilkis Akhter
Abstract
Purpose
The purpose of this paper is to investigate the institutional and significant competences that have allowed organizations to employ supply chain management (SCM) practices, the practices of SCM and the benefits of SCM practices for both buyers and suppliers.
Design/methodology/approach
A theoretical model (including hypotheses) is proposed regarding the antecedents, practices and outcomes of SCM. Using a purposive sampling method, data were collected from different manufacturing, distributing, wholesaling and retailing organizations. The collected data were analyzed using principal component analysis and structural equation modeling, including confirmatory factor analysis and path analysis.
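The principal component step can be sketched with simulated one-factor survey data (the items are hypothetical, not the authors' dataset): standardize the responses and read component variances off an SVD.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated responses: five survey items loading on one latent construct.
latent = rng.normal(size=200)
items = np.column_stack([0.8 * latent + 0.4 * rng.normal(size=200)
                         for _ in range(5)])

# Principal components via SVD of the standardized data matrix.
z = (items - items.mean(axis=0)) / items.std(axis=0)
s = np.linalg.svd(z, compute_uv=False)
explained = s**2 / (s**2).sum()
print(explained.round(2))   # the first component captures most variance
```

A dominant first component like this is what justifies treating correlated items as a single construct before fitting the structural model.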
Findings
The empirical results provided supportive evidence in favor of the hypotheses and theoretical arguments, except for one hypothesis: the study did not find a positive relationship between organizational compatibility and SCM practices. The relationships between mutual trust and SCM practices, communication and SCM practices, and cooperation and SCM practices were positive and significant. Likewise, the relationships between SCM practices and competitive advantage, and between SCM practices and long-term orientation and growth, were positive and significant.
Practical implications
Practitioners could also use the findings to align SCM with business strategy and gain an insight for better utilization of the available resources and technology to perform better.
Originality/value
This study will provide guidance as to the preconditions that need to be in place in order for a company to implement SCM with its suppliers and customers. It will remind practitioners to stay focused on the ultimate goals of SCM – lower costs, increased customer value and satisfaction, and, ultimately, competitive advantage.