Search results
1 – 10 of 176

Ellen Goddard, Albert Boaitey, Getu Hailu and Kenneth Poon
Abstract
Purpose
The purpose of this paper is to evaluate cow-calf producers' incentives to adopt innovations in traits with important environmental and economic implications for the beef supply chain.
Design/methodology/approach
A multi-year, whole-farm optimization model is developed that tracks changes in discounted net returns and methane emissions from the use of newer DNA-related technologies to breed feed-efficient cattle. The analysis is situated within the context of the whole beef cattle supply chain. This allows for the derivation of the entire value and environmental impact of the innovation, and the decomposition of that value among different participants. The impact of different policies that can stimulate producer uptake and the diffusion of the innovation is also addressed.
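As a rough illustration of the discounting step in a model of this kind, the sketch below computes the present value of a stream of annual net returns. The discount rate and the return figures are placeholders, not values from the paper.

```python
def discounted_net_returns(net_returns, rate=0.05):
    """Present value of a stream of annual net returns.
    The rate is an assumed placeholder, not taken from the paper."""
    return sum(r / (1 + rate) ** t for t, r in enumerate(net_returns, start=1))

# Hypothetical three-year stream of net returns, discounted at 10%
npv = discounted_net_returns([100.0, 110.0, 121.0], rate=0.10)
```

A whole-farm model would embed a calculation like this inside an optimization over herd and breeding decisions, with methane emissions tracked alongside the monetary stream.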
Findings
The results of the study showed that whilst the use of the breeding technology yielded positive economic and environmental benefits to all producers in the supply chain, primary adopters were unlikely to adopt. This paper finds evidence of the misalignment in incentives within the supply chain with a significant proportion of the additional value going to producers who do not incur any additional cost from the adoption of the innovation. The study also highlighted the role of both public and market-based mechanisms in the innovation diffusion process.
Originality/value
This paper is unique as it is the first study that addresses producer incentive to adopt genomic selection for feed efficiency across the entire beef cattle supply chain, and incorporates both economic and environmental outcomes.
Amr S. Allam, Hesham Bassioni, Mohammed Ayoub and Wael Kamel
Abstract
Purpose
This study aims to compare the performance of two nature-inspired metaheuristics inside Grasshopper in optimizing daylighting and energy performance against brute force, in terms of resemblance to the ideal solution and calculation time.
Design/methodology/approach
The simulation-based optimization process was controlled using two population-based metaheuristic algorithms, namely, the genetic algorithm (GA) and particle swarm optimization (PSO). The objectives of the optimization routine were optimizing daylighting and energy consumption of a standard reference office while varying the urban context configuration in Alexandria, Egypt.
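The simulation-based loop described above can be sketched with a minimal particle swarm optimizer over a weighted-sum objective. Everything here is a stand-in: the two quadratic penalty terms substitute for the actual daylighting and energy simulations, and the PSO parameters are generic defaults, not the study's settings.

```python
import random

def pso(objective, bounds, n_particles=20, n_iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization (minimization) over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp to the design-variable bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

def weighted_sum(x, w_daylight=0.5, w_energy=0.5):
    """Hypothetical weighted-sum objective: both penalty terms are
    stand-ins for daylighting and energy simulation outputs."""
    daylight_penalty = (x[0] - 0.6) ** 2
    energy_penalty = (x[1] - 0.3) ** 2
    return w_daylight * daylight_penalty + w_energy * energy_penalty

best, val = pso(weighted_sum, bounds=[(0.0, 1.0), (0.0, 1.0)])
```

In the study, each objective evaluation corresponds to a full building-performance simulation run, which is why converging in a fraction of the brute-force runs matters.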
Findings
The results from the GA and PSO were compared to those from brute force. The GA and PSO converged to a design solution much faster, after conducting only 25% and 43% of the required simulation runs, respectively. Also, the average ratio of the resulting weighted-sum optimization (WSO) value per case from the GA and PSO to that from the brute-force algorithm was 85% and 95%, respectively.
Originality/value
The work of this paper goes beyond the current practices for showing that the performance of the optimization algorithm can differ by changing the urban context configuration while solving the same problem under the same design variables and objectives.
Mahmoud Al-Ayyoub, Ahmed Alwajeeh and Ismail Hmeidi
Abstract
Purpose
The authorship authentication (AA) problem is concerned with correctly attributing a text document to its corresponding author. Historically, this problem has been the focus of various studies focusing on the intuitive idea that each author has a unique style that can be captured using stylometric features (SF). Another approach to this problem, known as the bag-of-words (BOW) approach, uses keywords occurrences/frequencies in each document to identify its author. Unlike the first one, this approach is more language-independent. This paper aims to study and compare both approaches focusing on the Arabic language which is still largely understudied despite its importance.
Design/methodology/approach
Being a supervised learning problem, the authors start by collecting a very large data set of Arabic documents to be used for training and testing purposes. For the SF approach, they compute hundreds of SF, whereas, for the BOW approach, the popular term frequency-inverse document frequency technique is used. Both approaches are compared under various settings.
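The term frequency-inverse document frequency technique mentioned for the BOW approach can be sketched in a few lines. This is a generic TF-IDF computation, not the authors' released code; tokenization is simple whitespace splitting for illustration.

```python
import math
from collections import Counter

def tf_idf(docs):
    """TF-IDF vectors for a list of tokenized documents.
    Uses plain tf = count/length and idf = log(N / df)."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))                      # document frequency per term
    idf = {t: math.log(n / df[t]) for t in df}
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        vectors.append({t: (tf[t] / total) * idf[t] for t in tf})
    return vectors

docs = [
    "the cat sat on the mat".split(),
    "the dog sat on the log".split(),
]
vecs = tf_idf(docs)
```

Note how terms that occur in every document (here "the") receive zero weight, which is what makes the representation discriminative between authors.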
Findings
The results show that the SF approach, which is much cheaper to train, can generate more accurate results under most settings.
Practical implications
Efficiently solving the AA problem has numerous advantages in different fields of academia as well as industry, including literature, security, forensics, and electronic markets and trading. Another practical implication of this work is the public release of its sources. Specifically, some of the SF can be very useful for other problems such as sentiment analysis.
Originality/value
This is the first study of its kind to compare the SF and BOW approaches for authorship analysis of Arabic articles. Moreover, many of the computed SF are novel, while other features are inspired by the literature. As SF are language-dependent and most existing papers focus on English, extra effort must be invested to adapt such features to Arabic text.
Xia Li, Ruibin Bai, Peer-Olaf Siebers and Christian Wagner
Abstract
Purpose
Many transport and logistics companies nowadays use raw vehicle GPS data for travel time prediction. However, they face difficult challenges in terms of the costs of information storage, as well as the quality of the prediction. This paper aims to systematically investigate various meta-data (features) that require significantly less storage space but provide sufficient information for high-quality travel time predictions.
Design/methodology/approach
The paper systematically studied the combinatorial effects of features and different model fitting strategies with two popular decision tree ensemble methods for travel time prediction, namely, random forests and gradient boosting regression trees. First, the investigation was conducted using pseudo travel time data that were generated using a pseudo travel time sampling algorithm, which allows generating travel time data using different noise processes so that the prediction performance under different travel conditions and noise characteristics can be studied systematically. The results and findings were then further compared and evaluated through a real-life case.
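The core idea of reducing raw GPS records to a compact feature vector can be sketched as follows. The scheme shown (mean travel time per departure interval plus the preceding intervals' means) is a hypothetical simplification for illustration, not the paper's exact feature set.

```python
from collections import defaultdict
from statistics import mean

def build_features(trips, interval_minutes=30, n_prev=3):
    """Reduce raw trip records (departure_minute, travel_time) to a small
    feature vector per departure interval: the interval's mean travel time
    plus the n_prev preceding intervals' means (hypothetical scheme)."""
    by_interval = defaultdict(list)
    for dep, tt in trips:
        by_interval[dep // interval_minutes].append(tt)
    means = {k: mean(v) for k, v in by_interval.items()}
    features = {}
    for k in means:
        prev = [means.get(k - j) for j in range(1, n_prev + 1)]
        # fall back to the current mean when a previous interval is missing
        features[k] = [means[k]] + [p if p is not None else means[k] for p in prev]
    return features

# Toy trips: (departure minute, observed travel time in minutes)
features = build_features([(5, 10.0), (20, 12.0), (40, 20.0), (70, 15.0)])
```

Vectors of this form could then be fed to random forests or gradient boosting regression trees in place of the full GPS traces, which is the storage-versus-accuracy trade-off the paper investigates.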
Findings
The paper provides empirical insights and guidelines about how raw GPS data can be reduced into a small-sized feature vector for the purposes of vehicle travel time prediction. It suggests that adding travel time observations from previous departure time intervals is beneficial to the prediction, particularly when no other types of real-time information (e.g. traffic flow, speed) are available. It was also found that modular model fitting does not improve the quality of the prediction in all experimental settings used in this paper.
Research limitations/implications
The findings are primarily based on empirical studies on limited real-life data instances, and the results may lack generalisability. Researchers are therefore encouraged to test them further on more real-life data instances.
Practical implications
The paper includes implications and guidelines for the development of efficient GPS data storage and high-quality travel time prediction under different types of travel conditions.
Originality/value
This paper systematically studies the combinatorial feature effects for tree-ensemble-based travel time prediction approaches.
Lilia Inés Stubrin, Anabel Marin, Lara Yeyati Preiss and Rocío Palacín Roitbarg
Abstract
Purpose
The purpose of this paper is to expand understanding of the types of strategies that can enable firms located in the South to integrate into and compete in modern fruit export markets.
Design/methodology/approach
To achieve the research purpose of the paper, the authors carry out an in-depth case study, analyzing the export strategy of Patagonian Fruits Trade, a leading Argentinean exporter of apples, pears and kiwis.
Findings
Results revealed that Patagonian Fruits Trade developed a strategy focused on decommoditization to compete in modern fruit export markets. A key aspect of the firm's business model relies on its capability to meet the demand of high-income markets by providing conventional, organic and biodynamic club varieties. However, the sustainability of the strategy heavily relies on the firm's capability to fund club variety licenses and on its ability to negotiate with clients and suppliers.
Research limitations/implications
Adopting a case study method limits the generalization of results. However, it provides new insights into the types of export strategies that can be successful in modern fruit markets, as well as their main limitations.
Originality/value
Results of the study, based on original empirical evidence, shed light on key factors for the integration of Southern fruit producers into modern fruit markets.
Meseret Getnet Meharie, Wubshet Jekale Mengesha, Zachary Abiero Gariy and Raphael N.N. Mutuku
Abstract
Purpose
The purpose of this study is to apply a stacking ensemble machine learning algorithm for predicting the cost of highway construction projects.
Design/methodology/approach
The proposed stacking ensemble model was developed by automatically and optimally combining three distinct base predictive models, namely linear regression, support vector machine and artificial neural network models, using a gradient boosting algorithm as the meta-regressor.
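The stacking pattern itself can be sketched with toy components: two simple base learners fitted on one half of the data, and a meta-learner that weights their held-out predictions on the other half. The inverse-error weighting here is a stand-in for the paper's gradient-boosting meta-regressor, and the synthetic cost data is purely illustrative.

```python
from statistics import mean

class MeanModel:
    """Baseline that always predicts the training mean."""
    def fit(self, xs, ys):
        self.m = mean(ys)
        return self
    def predict(self, xs):
        return [self.m] * len(xs)

class LinReg1D:
    """Closed-form simple linear regression on one feature."""
    def fit(self, xs, ys):
        mx, my = mean(xs), mean(ys)
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        self.b = sxy / sxx if sxx else 0.0
        self.a = my - self.b * mx
        return self
    def predict(self, xs):
        return [self.a + self.b * x for x in xs]

def fit_stack(xs, ys, models):
    """Fit base models on the first half of the data, then weight them by
    held-out accuracy on the second half (toy meta-learner standing in
    for a gradient-boosting meta-regressor)."""
    half = len(xs) // 2
    for m in models:
        m.fit(xs[:half], ys[:half])
    weights = []
    for m in models:
        preds = m.predict(xs[half:])
        mse = mean((p - t) ** 2 for p, t in zip(preds, ys[half:]))
        weights.append(1.0 / (mse + 1e-9))
    total = sum(weights)
    weights = [w / total for w in weights]

    def predict(xs_new):
        per_model = [m.predict(xs_new) for m in models]
        return [sum(w * p for w, p in zip(weights, col))
                for col in zip(*per_model)]
    return predict

xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [2 * x + 1 for x in xs]          # synthetic cost data: cost = 2x + 1
predict = fit_stack(xs, ys, [MeanModel(), LinReg1D()])
```

The point of the combination is that the meta-learner leans on whichever base model generalizes best on held-out data, which is why a stacked ensemble can outperform each base model on its own.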
Findings
The findings reveal that the proposed model predicted the final project cost with a very small prediction error value. This implies that the difference between predicted and actual cost was quite small. A comparison of the results of the models revealed that the stacking ensemble model outperforms the sole ones in all performance metrics. Based on root mean square error values, the stacking ensemble cost model produces 86.8, 87.8 and 5.6 percent more accurate results than the linear regression, support vector machine and neural network models, respectively.
Research limitations/implications
The study shows how a stacking ensemble machine learning algorithm can be applied to predict the cost of construction projects. Estimators and practitioners can use the new model as an effective and reliable tool for predicting the cost of Ethiopian highway construction projects at the preliminary stage.
Originality/value
The study provides insight into the machine learning algorithm application in forecasting the cost of future highway construction projects in Ethiopia.
Yue Zhang, Derek Baker and Garry Griffith
Abstract
Purpose
This paper aims to address the association between the quality and quantity of information in supply chains and the costs and benefits of generating, using and sharing it.
Design/methodology/approach
The authors’ conceptual framework draws on multiple disciplines and theories of the value and use of product information. Controllable aspects of information, its quality and quantity, are the focus of the study as drivers of firm and chain performance. Structural equation models of constructs at two stages of the Australian red meat supply chain are employed, using data from a survey of 81 sheep and cattle breeders and commercial producers.
Findings
Information quality influences performance more for some product attributes than others and is more influential than is information quantity. Information sharing for many attributes generates benefits only at high cost. Investment in measurement and transmission technologies is supported for intrinsic and extrinsic measures of quality. Differences in respondents' evaluation of information quality are interpreted as evidence of persistent chain failure.
Originality/value
To the authors' knowledge, this is the first attempt at quantifying and comparing the benefits and costs of information sharing across multiple stages of a supply chain and the first to assess quantitatively the role played by information quality and quantity in generating costs and benefits.
Soukaina Laabadi, Mohamed Naimi, Hassan El Amri and Boujemâa Achchab
Abstract
Purpose
The purpose of this paper is to provide an improved genetic algorithm to solve 0/1 multidimensional knapsack problem (0/1 MKP), by proposing new selection and crossover operators that cooperate to explore the search space.
Design/methodology/approach
The authors first present a new sexual selection strategy that significantly improves the one proposed by Varnamkhasti and Lee (2012), while working in phenotype space. They then propose two variants of the two-stage recombination operator of Aghezzaf and Naimi (2009), adapting the latter to the context of the 0/1 MKP. The authors evaluate the efficiency of both proposed operators on a large set of 0/1 MKP benchmark instances. The obtained results are compared against those of conventional selection and crossover operators, in terms of solution quality and computing time.
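For orientation, a plain genetic algorithm for the 0/1 MKP is sketched below. This is a generic baseline with binary tournament selection, one-point crossover and a greedy repair; it does not reproduce the paper's sexual selection or two-stage recombination operators, and all parameter values are illustrative.

```python
import random

def ga_mkp(values, weights, capacities, pop_size=40, generations=120, seed=1):
    """Generic GA for the 0/1 multidimensional knapsack problem.
    weights is a list of constraint rows; capacities the per-constraint limits."""
    rng = random.Random(seed)
    n = len(values)

    def feasible(sol):
        return all(sum(row[i] for i in range(n) if sol[i]) <= cap
                   for row, cap in zip(weights, capacities))

    def repair(sol):
        # drop lowest-value items until every capacity constraint holds
        for i in sorted(range(n), key=lambda i: values[i]):
            if feasible(sol):
                break
            sol[i] = 0
        return sol

    def fitness(sol):
        return sum(v for v, bit in zip(values, sol) if bit)

    pop = [repair([rng.randint(0, 1) for _ in range(n)]) for _ in range(pop_size)]
    for _ in range(generations):
        children = []
        for _ in range(pop_size):
            p1 = max(rng.sample(pop, 2), key=fitness)   # binary tournament
            p2 = max(rng.sample(pop, 2), key=fitness)
            cut = rng.randrange(1, n)                   # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.1:                      # bit-flip mutation
                j = rng.randrange(n)
                child[j] ^= 1
            children.append(repair(child))
        # elitist truncation to the original population size
        pop = sorted(pop + children, key=fitness, reverse=True)[:pop_size]
    best = max(pop, key=fitness)
    return best, fitness(best)

# Tiny single-constraint instance: optimum packs items 0, 1 and 2 (value 30)
best, value = ga_mkp([10, 5, 15, 7], [[2, 3, 5, 7]], [10])
```

The paper's contribution sits in the selection and crossover steps of a loop like this one, which is where exploration and exploitation are balanced.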
Findings
The paper shows that the proposed selection respects the two major factors of any metaheuristic: exploration and exploitation aspects. Furthermore, the first variant of the two-stage recombination operator pushes the search space towards exploitation, while the second variant increases the genetic diversity. The paper then demonstrates that the improved genetic algorithm combining the two proposed operators is a competitive method for solving the 0/1 MKP.
Practical implications
Although only 0/1 MKP standard instances were tested in the empirical experiments in this paper, the improved genetic algorithm can be used as a powerful tool to solve many real-world applications of 0/1 MKP, as the latter models several industrial and investment issues. Moreover, the proposed selection and crossover operators can be incorporated into other bio-inspired algorithms to improve their performance. Furthermore, the two proposed operators can be adapted to solve other binary combinatorial optimization problems.
Originality/value
This research study provides an effective solution for a well-known non-deterministic polynomial-time (NP)-hard combinatorial optimization problem, namely the 0/1 MKP, by tackling it with an improved genetic algorithm. The proposed evolutionary mechanism is based on two new genetic operators. The first is a new and deeply different variant of the so-called sexual selection, which has rarely been addressed in the literature. The second is an adaptation of the two-stage recombination operator to the 0/1 MKP context. This adaptation results in two variants of the two-stage recombination operator that aim to improve the quality of encountered solutions, while taking advantage of the sexual selection criteria to prevent premature convergence, a classical issue with genetic algorithms.
Tim S. McLaren and David C.H. Vuong
Abstract
Purpose
This paper aims to demonstrate a more structured and useful method for evaluating the functionality of enterprise software packages such as supply chain management information systems (SCM IS). Existing taxonomies have limited utility for software selection and analysis due to the variation and overlap in functionality found in modern enterprise systems.
Design/methodology/approach
A qualitative analysis of over 1,800 pages of SCM IS documentation and independent analyst reports is used to identify relevant SCM IS functional attributes in the seven most widespread SCM IS packages. Pattern matching and coding of constructs is used to iteratively build a hierarchical taxonomy of SCM IS functionality.
Findings
The taxonomy developed describes 83 major functional attributes that form five top‐level categories: primary supply chain processes, data management, decision support, relationship management, and performance improvement. The codes representing supply chain processes agree with the widely used Supply Chain Operations Reference (SCOR) process model, although the terminology was not used consistently in vendor and analyst documents.
Research limitations/implications
The approach described enables richer classification schemes to be built that will better distinguish between the wide‐ranging functionality found in modern enterprise information systems.
Practical implications
Selection and analysis of SCM IS is difficult due to the functional overlaps in different systems. The approach described enables a more structured, detailed, and useful analysis of an organization's current or proposed information systems.
Originality/value
This paper contributes a novel approach for conceptualizing and analyzing complex information systems using hierarchical rather than traditional flat taxonomies.