Search results
1 – 10 of over 1000 results
Manuel Rossetti, Juliana Bright, Andrew Freeman, Anna Lee and Anthony Parrish
Abstract
Purpose
This paper is motivated by the need to assess the risk profiles associated with the substantial number of items within military supply chains. The scale of supply chain management processes makes the analysis complex and makes risk assessments based on manual (human analyst) methods impractical. Analysts therefore require methods that can be automated and that can incorporate ongoing operational data on a regular basis.
Design/methodology/approach
The approach taken to address the identification of supply chain risk within an operational setting is based on aspects of multiobjective decision analysis (MODA). The approach constructs a risk and importance index for supply chain elements based on operational data. These indices are commensurate in value, leading to interpretable measures for decision-making.
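The index construction described above can be sketched as a weighted sum of normalized value scores, a common MODA pattern. The metric names, weights and ranges below are illustrative assumptions, not the paper's actual model:

```python
# Hypothetical MODA-style index: each item's operational metrics are
# normalized to [0, 1] value scores and combined with weights.
# Metric names, weights and ranges are invented for this sketch.
def value_score(x, worst, best):
    """Linear value function mapping a raw metric onto [0, 1]."""
    if best == worst:
        return 0.0
    score = (x - worst) / (best - worst)
    return max(0.0, min(1.0, score))

def risk_index(item, weights, ranges):
    """Weighted sum of value scores; weights are assumed to sum to 1."""
    return sum(
        w * value_score(item[m], *ranges[m])
        for m, w in weights.items()
    )

weights = {"lead_time": 0.4, "single_source": 0.35, "demand_var": 0.25}
ranges = {"lead_time": (0, 180), "single_source": (0, 1), "demand_var": (0.0, 2.0)}
item = {"lead_time": 90, "single_source": 1, "demand_var": 0.8}
print(round(risk_index(item, weights, ranges), 3))
```

Because every metric is mapped onto the same [0, 1] scale before weighting, risk and importance indices built this way are commensurate in value, as the abstract notes.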
Findings
Risk and importance indices were developed for the analysis of items within an example supply chain. Using the data on items, individual MODA models were formed and demonstrated using a prototype tool.
Originality/value
To better prepare risk mitigation strategies, analysts require the ability to identify potential sources of risk, especially in times of disruption such as natural disasters.
Andreas Gschwentner, Manfred Kaltenbacher, Barbara Kaltenbacher and Klaus Roppert
Abstract
Purpose
When performing accurate numerical simulations of electrical drives, precise knowledge of the local magnetic material properties is of utmost importance. Due to the various manufacturing steps, e.g. heat treatment or cutting techniques, the magnetic material properties can vary strongly from place to place, and the assumption of homogenized global material parameters is no longer feasible. This paper aims to present the general methodology and two different solution strategies for determining the local magnetic material properties using reference and simulation data.
Design/methodology/approach
The general methodology combines measurement, numerical simulation and the solution of an inverse problem. A sensor-actuator system is used to characterize electrical steel sheets locally. Based on the measurement data and results from the finite element simulation, the inverse problem is solved with two different solution strategies. The first is a quasi-Newton method (QNM) using Broyden's update formula to approximate the Jacobian; the second is an adjoint method. To compare the two methods in terms of convergence and efficiency, an artificial example with a linear material model is considered.
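The quasi-Newton strategy named above can be illustrated generically: Broyden's "good" update refines an approximate Jacobian of the residual so each iteration avoids recomputing derivatives. The toy residual and starting values below are invented for the sketch and have nothing to do with the paper's magnetic material model:

```python
# Generic Broyden quasi-Newton sketch (not the paper's implementation):
# solve F(p) = 0 while updating an approximate Jacobian B with
# Broyden's formula  B += ((y - B s) s^T) / (s^T s).
def solve2(a, b):
    """Solve a 2x2 linear system a x = b by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - b[1] * a[0][1]) / det,
            (a[0][0] * b[1] - a[1][0] * b[0]) / det]

def broyden(F, p, B, tol=1e-10, max_iter=50):
    f = F(p)
    for _ in range(max_iter):
        if max(abs(v) for v in f) < tol:
            break
        s = solve2(B, [-f[0], -f[1]])          # Newton-like step: B s = -F(p)
        p = [p[0] + s[0], p[1] + s[1]]
        f_new = F(p)
        y = [f_new[0] - f[0], f_new[1] - f[1]]
        ss = s[0] * s[0] + s[1] * s[1]
        Bs = [B[0][0] * s[0] + B[0][1] * s[1],
              B[1][0] * s[0] + B[1][1] * s[1]]
        for i in range(2):                      # rank-one Broyden update
            for j in range(2):
                B[i][j] += (y[i] - Bs[i]) * s[j] / ss
        f = f_new
    return p

# Toy residual with root (1, 2); B0 is a Jacobian estimate at the start.
F = lambda p: [p[0] ** 2 + p[1] - 3.0, p[0] + p[1] ** 2 - 5.0]
root = broyden(F, [1.1, 2.1], [[2.2, 1.0], [1.0, 4.2]])
```

On this toy problem the iterates converge to (1, 2); the appeal in an inverse-problem setting is that only residual evaluations, not exact Jacobians, are needed after the initial estimate.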
Findings
The QNM and the adjoint method show similar convergence behavior for two different cutting-edge effects. Furthermore, considering a priori information improved the convergence rate. However, no impact on the stability and the remaining error is observed.
Originality/value
The presented methodology enables a fast and simple determination of the local magnetic material properties of electrical steel sheets without the need for a large number of samples or special preparation procedures.
Maria Angela Butturi, Francesco Lolli and Rita Gamberini
Abstract
Purpose
This study presents the development of a supply chain (SC) observatory, which is a benchmarking solution to support companies within the same industry in understanding their positioning in terms of SC performance.
Design/methodology/approach
A case study is used to demonstrate the set-up of the observatory. Twelve experts on automatic equipment for the wrapping and packaging industry were asked to select a set of performance criteria taken from the literature and to evaluate their importance for the chosen industry using multi-criteria decision-making (MCDM) techniques. To handle the high number of criteria without demanding excessive time and effort from decision-makers (DMs), five subjective, parsimonious criteria-weighting methods are applied and compared.
Findings
A benchmarking methodology aimed at DMs in the considered industry is presented and discussed. Ten companies were ranked with regard to SC performance. The ranking was on average robust, since its general structure was very similar across all five weighting methodologies. Simplified analytic hierarchy process (AHP), however, was the method with the greatest ability to discriminate among the criteria of importance, and it was also faster to carry out and more quickly understood by the DMs.
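For readers unfamiliar with AHP-style weighting, the idea can be sketched generically: criteria weights are derived from a pairwise comparison matrix, here via the row geometric mean, a common approximation of the principal eigenvector. The 3x3 judgment matrix is made up for the example and is not the paper's data:

```python
# Generic AHP-style weighting sketch: derive criteria weights from a
# pairwise comparison matrix using the row geometric mean.
# The Saaty-scale judgments below are invented for illustration.
import math

def ahp_weights(pairwise):
    n = len(pairwise)
    gmeans = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Criterion A judged 3x as important as B and 5x as important as C.
pairwise = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]
w = ahp_weights(pairwise)   # descending weights for A, B, C
```

A "simplified" or parsimonious variant reduces the number of pairwise judgments the DMs must supply; the aggregation step stays of this form.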
Originality/value
Developing an SC observatory usually requires managing a large number of alternatives and criteria. The developed methodology uses parsimonious weighting methods, providing DMs with an easy-to-use and time-saving tool. A future research step will be to complete the methodology by defining the minimum variation required for one or more criteria to reach a specific position in the ranking through the implementation of a post-fact analysis.
Yanhao Sun, Tao Zhang, Shuxin Ding, Zhiming Yuan and Shengliang Yang
Abstract
Purpose
To overcome inaccurate calculation of index weights and the subjectivity and uncertainty of index assessment in the risk assessment process, this study aims to propose a scientific and reasonable risk assessment method for centralized traffic control (CTC) systems.
Design/methodology/approach
First, system-theoretic process analysis (STPA) is used to conduct a risk analysis of the CTC system, and risk assessment indexes are constructed based on this analysis. Then, to enhance the accuracy of weight calculation, the fuzzy analytic hierarchy process (FAHP), the fuzzy decision-making trial and evaluation laboratory (FDEMATEL) and the entropy weight method are employed to calculate the subjective, relative and objective weight of each index. These three types of weights are combined using game theory to obtain the combined weight for each index. To reduce subjectivity and uncertainty in the assessment process, the backward cloud generator is used to obtain the numerical characters (NCs) of the cloud model for each index. The NCs of the indexes are then weighted to derive the comprehensive cloud for risk assessment of the CTC system. The cloud model's similarity measurement gauges the likeness between the comprehensive risk assessment cloud and the risk standard cloud. Finally, this process yields the risk assessment results for the CTC system.
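Of the three weightings combined above, the entropy weight method is the most mechanical and can be sketched directly: indexes whose values diverge more across the assessed objects receive larger objective weights. The decision matrix below is fabricated for illustration; rows are assessed objects, columns are indexes:

```python
# Sketch of the entropy weight method (the objective weighting named
# in the text). The decision matrix is invented for this example.
import math

def entropy_weights(matrix):
    m, n = len(matrix), len(matrix[0])
    k = 1.0 / math.log(m)                 # normalizes entropy to [0, 1]
    divergences = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [x / total for x in col]      # share of each object in column j
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        divergences.append(1.0 - e)       # high spread -> high divergence
    s = sum(divergences)
    return [d / s for d in divergences]

matrix = [
    [0.2, 0.9, 0.4],
    [0.5, 0.8, 0.4],
    [0.9, 0.7, 0.4],
]
w = entropy_weights(matrix)
```

The third column is constant across objects, so its entropy is maximal and its objective weight is zero; the game-theoretic combination step would then balance such objective weights against the FAHP and FDEMATEL weights.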
Findings
The cloud model can handle the subjectivity and fuzziness in the risk assessment process well. The cloud model-based risk assessment method was applied to the CTC system risk assessment of a railway group and achieved good results.
Originality/value
This study provides a cloud model-based method for risk assessment of CTC systems, which accurately calculates the weight of risk indexes and uses cloud models to reduce uncertainty and subjectivity in the assessment, achieving effective risk assessment of CTC systems. It can provide a reference and theoretical basis for risk management of the CTC system.
Xuanhui Liu, Karl Werder, Alexander Maedche and Lingyun Sun
Abstract
Purpose
Numerous design methods are available to facilitate digital innovation processes in user interface design. Nonetheless, little guidance exists on their appropriate selection within the design process based on specific situations. Consequently, design novices with limited design knowledge face challenges when determining suitable methods. Thus, this paper aims to support design novices by guiding the situational selection of design methods.
Design/methodology/approach
Our research approach includes two phases: i) we adopted a taxonomy development method to identify dimensions of design methods by reviewing 292 potential design methods and interviewing 15 experts; ii) we conducted focus groups with 25 design novices and applied fuzzy-set qualitative comparative analysis to describe the relations between the taxonomy's dimensions.
Findings
We developed a novel taxonomy that presents a comprehensive overview of design conditions and their associated design methods in innovation processes. Thus, the taxonomy enables design novices to navigate the complexities of design methods needed to design digital innovation. We also identify configurations of these conditions that support the situational selections of design methods in digital innovation processes of user interface design.
Originality/value
The study’s contribution to the literature lies in the identification of both similarities and differences among design methods, as well as the investigation of sufficient condition configurations within the digital innovation processes of user interface design. The taxonomy helps design novices to navigate the design space by providing an overview of design conditions and the associations between methods and these conditions. By using the developed taxonomy, design novices can narrow down their options when selecting design methods for their specific situations.
Vasileios Stamatis, Michail Salampasis and Konstantinos Diamantaras
Abstract
Purpose
In federated search, a query is sent simultaneously to multiple resources, and each of them returns a list of results. These lists are merged into a single list by the results merging process. In this work, the authors apply machine learning methods to results merging in federated patent search. Although several results merging methods have been developed, none of them has been tested on patent data or has considered multiple machine learning models. Thus, the authors experiment with state-of-the-art methods on patent data, and they propose two new results merging methods that use machine learning models.
Design/methodology/approach
The methods are based on a centralized index containing samples of documents from all the remote resources, and they implement machine learning models to estimate comparable scores for the documents retrieved by different resources. The authors examine the new methods in cooperative and uncooperative settings, where document scores from the remote search engines are and are not available, respectively. For uncooperative environments, they propose two methods for assigning document scores.
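The mapping step described above can be sketched generically: documents that appear both in a remote resource and in the centralized sample index supply training pairs, a model learns to translate local scores onto the central scale, and the merged list is ranked by predicted comparable scores. A one-dimensional least-squares fit stands in here for the paper's learned models (e.g. a random forest), and all scores are fabricated:

```python
# Generic score-mapping sketch for results merging: learn a mapping
# from a resource's local scores to the centralized index's scale
# using overlap documents, then rank all results by mapped scores.
# The linear fit is a stand-in for the paper's ML models.
def fit_linear(xs, ys):
    """Least-squares fit y ~ a + b*x; returns the fitted mapping."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return lambda x: a + b * x

# Overlap documents scored by both the remote resource and the
# central sample index (fabricated values).
local = [2.0, 4.0, 6.0]
central = [0.2, 0.4, 0.6]
to_central = fit_linear(local, central)

# Merge: convert every remote score, then sort descending.
results = [("d1", 5.0), ("d2", 3.0), ("d3", 6.5)]
merged = sorted(((doc, to_central(s)) for doc, s in results),
                key=lambda t: t[1], reverse=True)
```

In an uncooperative setting the remote scores themselves are unavailable, which is why the paper additionally proposes methods for assigning document scores before any such mapping can be learned.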
Findings
The effectiveness of the new results merging methods was measured against state-of-the-art models and found to be superior to them in many cases with significant improvements. The random forest model achieves the best results in comparison to all other models and presents new insights for the results merging problem.
Originality/value
In this article the authors show that machine learning models can substitute for the standard methods and models that have been used for results merging for many years. The proposed methods outperformed state-of-the-art score estimation methods for results merging and proved more effective for federated patent search.
Abstract
Purpose
This paper aims to explore the interplay between methods and methodologies in the field of international relations (IR) over the 100 years of its lifetime reflecting on the relationship between the rise of new research methods and the rise of new methodologies.
Design/methodology/approach
This paper looks in retrospect into the field’s great debates using a historiography approach. It maps chronologically the interplay of methods and methodology throughout the stages of the development of the study of IR.
Findings
This paper argues that, in spite of the narratives of triumph common in the field, the coexistence of competing research methods and methodologies is the field's defining feature. All theories, methods and methodologies have undergone a process of criticism, self-criticism and change. New methodologies have not necessarily accompanied the rise of new research methods in the field.
Originality/value
Drawing a map of the field's methodologies and methods necessarily reveals its dynamism and plurality. An honest map of the field is one that highlights not only theoretical differences but also the ontological, epistemological and methodological differences embedded in the field's debates.
Mariam AlKandari and Imtiaz Ahmad
Abstract
Solar power forecasting will have a significant impact on the future of large-scale renewable energy plants. Predicting photovoltaic power generation depends heavily on climate conditions, which fluctuate over time. In this research, we propose a hybrid model that combines machine-learning methods with the Theta statistical method for more accurate prediction of future solar power generation from renewable energy plants. The machine learning models include long short-term memory (LSTM), gated recurrent unit (GRU), AutoEncoder LSTM (Auto-LSTM) and a newly proposed Auto-GRU. To enhance the accuracy of the proposed machine learning and statistical hybrid model (MLSHM), we employ two diversity techniques, i.e. structural diversity and data diversity. To combine the predictions of the ensemble members in the proposed MLSHM, we exploit four combining methods: simple averaging, weighted averaging using a linear approach, weighted averaging using a non-linear approach, and combination through variance using the inverse approach. The proposed MLSHM scheme was validated on two real time-series datasets, namely Shagaya in Kuwait and Cocoa in the USA. The experiments show that the proposed MLSHM, using all the combining methods, achieved higher accuracy than the predictions of the traditional individual models. The results demonstrate that a hybrid model combining machine-learning methods with a statistical method outperformed a hybrid model that combines only machine-learning models without a statistical method.
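Two of the four combining methods named above can be sketched directly: simple averaging, and combination through variance using the inverse approach, where less-variable ensemble members receive more weight. The member forecasts and variance estimates below are made up for the example:

```python
# Sketch of two ensemble combining rules: simple averaging and
# inverse-variance weighting. Forecasts and variances are invented.
def simple_average(forecasts):
    """Mean of the member predictions at each time step."""
    return [sum(col) / len(col) for col in zip(*forecasts)]

def inverse_variance(forecasts, variances):
    """Weight each member by 1/variance (normalized), then combine."""
    inv = [1.0 / v for v in variances]
    s = sum(inv)
    weights = [i / s for i in inv]
    return [sum(w * f for w, f in zip(weights, col))
            for col in zip(*forecasts)]

# Three ensemble members predicting solar power at two time steps,
# with member variances estimated on a validation set.
forecasts = [[10.0, 12.0], [11.0, 13.0], [12.0, 11.0]]
variances = [1.0, 2.0, 4.0]

avg = simple_average(forecasts)                  # -> [11.0, 12.0]
ivw = inverse_variance(forecasts, variances)
```

The inverse-variance combination pulls the result toward the first member, whose validation variance is lowest, which is the behavior that lets the hybrid exploit differences in member reliability.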
Liqun Hu, Tonghui Wang, David Trafimow, S.T. Boris Choy, Xiangfei Chen, Cong Wang and Tingting Tong
Abstract
Purpose
The authors’ conclusions are based on mathematical derivations that are supported by computer simulations and three worked examples in applications of economics and finance. Finally, the authors provide a link to a computer program so that researchers can perform the analyses easily.
Design/methodology/approach
Based on a parameter estimation goal, the present work is concerned with determining the minimum sample size researchers should collect so their sample medians can be trusted as good estimates of corresponding population medians. The authors derive two solutions, using a normal approximation and an exact method.
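The normal-approximation route can be illustrated for a plain normal population (the paper itself works under skew-normal settings, so this is a simplified stand-in): the sample median is asymptotically normal with variance pi * sigma^2 / (2n), giving a closed-form minimum sample size for a confidence interval of half-width w:

```python
# Simplified sketch of the normal-approximation route for a plain
# normal population (not the paper's skew-normal derivation):
# Var(median) ~ pi * sigma^2 / (2n), so for half-width w at z-level z,
#   n >= z^2 * pi * sigma^2 / (2 * w^2).
import math

def min_n_median_normal(sigma, half_width, z=1.96):
    """Minimum n so the sample median is within half_width of the
    population median with the confidence implied by z."""
    return math.ceil(z ** 2 * math.pi * sigma ** 2
                     / (2 * half_width ** 2))

n = min_n_median_normal(sigma=1.0, half_width=0.2)
```

The pi/2 factor makes this about 57% larger than the corresponding sample size for estimating the mean, which is consistent with the paper's point that the exact method can deliver substantial savings over the approximation.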
Findings
The exact method provides more accurate answers than the normal approximation method. The authors show that the minimum sample size necessary for estimating the median using the exact method is substantially smaller than that using the normal approximation method. Therefore, researchers can use the exact method to enjoy a sample size savings.
Originality/value
In this paper, the a priori procedure is extended to estimating the population median under skew-normal settings. The mathematical derivation of the exact method, supported by computer simulations, for using the sample median to estimate the population median is new, and a link to a free and user-friendly computer program is provided so researchers can make their own calculations.
Abstract
Purpose
Algorithmic and computational thinking are necessary skills for designers in an increasingly digital world. Parametric design, a method of constructing designs based on algorithmic logic and rules, has become widely used in architecture practice and has been incorporated into the curricula of architecture schools. However, there are few studies proposing strategies for teaching parametric design to architecture students that tackle software literacy while promoting the development of algorithmic thinking.
Design/methodology/approach
A descriptive study and a prescriptive study are conducted. The descriptive study reviews the literature on parametric design education. The prescriptive study is centered on proposing the incomplete recipe as instructional material and a new approach to teaching parametric design.
Findings
The literature on parametric design education has mostly focused on curricular discussions, descriptions of case studies or studio-long approaches; day-to-day instructional methods, however, are rarely discussed. A pedagogical strategy to teach parametric design is introduced: the incomplete recipe. The instructional method proposed provides students with incomplete recipes for parametric scripts that are increasingly pared down as the students become expert users.
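The recipe idea can be made concrete with a toy parametric script, invented for this example: each numbered comment is a recipe step, and in an incomplete recipe the later steps would be pared down to their comments for the student to fill in as their algorithmic thinking develops:

```python
# Toy parametric "recipe" (a sine-wave facade profile), invented for
# illustration. Each numbered comment is a recipe step; an incomplete
# recipe would keep the comments but remove some of the code beneath.
import math

# Step 1: declare the design parameters.
bays, height, amplitude = 12, 3.0, 0.5

# Step 2: generate one point per bay along the facade.
points = []
for i in range(bays):
    t = i / (bays - 1)                       # normalized position 0..1
    # Step 3: drive the offset with a rule (here, a sine wave).
    offset = amplitude * math.sin(2 * math.pi * t)
    points.append((float(i), offset, height))

# Step 4: inspect the result before generating geometry from it.
print(len(points), round(points[3][1], 3))
```

The pedagogical point is the scaffolding, not the geometry: as students advance, steps 2 and 3 would be left blank, forcing them to express the rule themselves rather than edit a finished script.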
Originality/value
The article contributes to the existing literature by proposing the incomplete recipe as a strategy for teaching parametric design. The recipe as a pedagogical tool provides a means for both software skill acquisition and the development of algorithmic thinking.