Abhishek Das and Mihir Narayan Mohanty
Abstract
Purpose
Timely and accurate detection of cancer can save the life of the affected person. According to the World Health Organization (WHO), breast cancer has the most frequent incidence among all cancers, while it ranks fifth in mortality. Among the many image processing techniques, certain works have focused on convolutional neural networks (CNNs) for processing these images. However, deep learning models remain to be explored more thoroughly.
Design/methodology/approach
In this work, multivariate statistics-based kernel principal component analysis (KPCA) is used to extract the essential features; KPCA is simultaneously helpful for denoising the data. These features are processed through a heterogeneous ensemble model that consists of three base models: a recurrent neural network (RNN), long short-term memory (LSTM) and a gated recurrent unit (GRU). The outcomes of these base learners are fed to a fuzzy adaptive resonance theory mapping (ARTMAP) model for decision making; nodes are added to its F_2^a layer only when the winning criteria are fulfilled, which makes the ARTMAP model more robust.
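As a rough illustration of the pipeline described above, the sketch below chains KPCA feature extraction with three recurrent base learners in Keras. The layer sizes, kernel choice and the simple probability-averaging combiner are assumptions made for illustration; the paper's actual decision layer is the fuzzy ARTMAP model, which is not reproduced here.

```python
# Minimal sketch of the described pipeline, assuming flattened image patches
# as input; sizes, kernel and the final combiner are illustrative placeholders.
import numpy as np
from sklearn.decomposition import KernelPCA
from tensorflow import keras
from tensorflow.keras import layers

def make_base_model(cell, n_features, n_classes=2):
    """One base learner (SimpleRNN, LSTM or GRU) over the KPCA features,
    treating the feature vector as a length-n_features sequence of scalars."""
    model = keras.Sequential([
        keras.Input(shape=(n_features, 1)),
        cell(64),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# X: (n_samples, n_pixels) flattened patches, y: 0/1 labels (stand-in data)
X = np.random.rand(256, 2500).astype("float32")
y = np.random.randint(0, 2, 256)

# 1) KPCA for feature extraction / denoising (kernel, n_components assumed)
kpca = KernelPCA(n_components=100, kernel="rbf")
X_k = kpca.fit_transform(X)[..., np.newaxis]      # -> (n, 100, 1) sequences

# 2) Heterogeneous ensemble of recurrent base learners
bases = [make_base_model(c, X_k.shape[1])
         for c in (layers.SimpleRNN, layers.LSTM, layers.GRU)]
for m in bases:
    m.fit(X_k, y, epochs=3, batch_size=32, verbose=0)

# 3) The paper feeds the base outputs to a fuzzy ARTMAP decision layer;
#    averaging the class posteriors is used here only as a stand-in combiner.
probs = np.mean([m.predict(X_k, verbose=0) for m in bases], axis=0)
pred = probs.argmax(axis=1)
```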
Findings
The proposed model is verified using the breast histopathology image dataset publicly available on Kaggle. The model provides 99.36% training accuracy and 98.72% validation accuracy. It utilizes data processing in all aspects: image denoising to reduce data redundancy, training by ensemble learning to provide higher results than single models, and final classification by a fuzzy ARTMAP model that controls the number of nodes depending upon performance, making the classification robust and accurate.
Research limitations/implications
Research in the field of medical applications is an ongoing process. More advanced algorithms are being developed for better classification, and scope remains to design models with better performance, practicability and cost efficiency. The ensemble may also be built from different combinations of base models with different characteristics, and signals instead of images may be verified with the proposed model. Experimental analysis shows the improved performance of the proposed model, but the method still needs to be verified on practical systems, and practical implementation will be carried out to evaluate its real-time performance and cost efficiency.
Originality/value
KPCA is utilized for denoising and for reducing data redundancy, so that feature selection is performed on the essential components. Training and classification are performed by a heterogeneous ensemble model designed with RNN, LSTM and GRU as base classifiers to provide higher results than single models. Use of the adaptive fuzzy mapping model makes the final classification accurate. The effectiveness of combining these methods into a single model is analyzed in this work.
Abstract
In this paper we study a class of complexity measures, induced by a new data structure for representing k-valued functions (operations), called the minor decision diagram. When assigning values to some variables in a function, the resulting functions are called subfunctions, and when identifying some variables, the resulting functions are called minors. The sets of essential variables in subfunctions of
We examine the maximal separable subsets of variables and their conjugates, introduced in the paper, proving that each such set has at least one conjugate. The essential arity gap
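To make the two operations defined above concrete, here is a minimal sketch, using an arbitrary 2-variable 4-valued example function, of how subfunctions (fixing a variable) and minors (identifying variables) can be computed, together with the usual essentiality test. Everything here is illustrative rather than drawn from the paper.

```python
# Subfunctions, minors and essential variables of a 2-variable 4-valued
# function; the example function is arbitrary.
from itertools import product

K = 4                                   # number of logic values
f = {(a, b): (a + 2 * b) % K for a, b in product(range(K), repeat=2)}

def subfunction(f, var, value):
    """Fix variable `var` (0 or 1) to `value`; the result is unary."""
    if var == 0:
        return {b: f[(value, b)] for b in range(K)}
    return {a: f[(a, value)] for a in range(K)}

def minor(f):
    """Identify the two variables (x0 = x1); the result is unary."""
    return {a: f[(a, a)] for a in range(K)}

def is_essential(f, var):
    """x_var is essential if, for some value of the other variable,
    the resulting subfunction is not constant in x_var."""
    if var == 0:
        return any(len({f[(a, b)] for a in range(K)}) > 1 for b in range(K))
    return any(len({f[(a, b)] for b in range(K)}) > 1 for a in range(K))

print(subfunction(f, 0, 1))             # subfunction with x0 := 1
print(minor(f))                         # minor obtained from x0 = x1
print(is_essential(f, 0), is_essential(f, 1))
```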
Ahm Shamsuzzoha, Sujan Piya and Mohammad Shamsuzzaman
Abstract
Purpose
This study aims to propose a method known as the fuzzy technique for order preference by similarity to ideal solution (fuzzy TOPSIS) for complex project selection in organizations. To fulfill the study objectives, the factors responsible for making a project complex are collected through a literature review and then analyzed by fuzzy TOPSIS, based on three decision-makers' opinions.
Design/methodology/approach
The selection of complex projects is a multi-criteria decision-making (MCDM) process for global organizations. Traditional procedures for selecting complex projects are not adequate due to the limitations of linguistic assessment. To overcome this limitation, this study proposes the fuzzy MCDM method for selecting complex projects in organizations.
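A minimal sketch of the fuzzy TOPSIS core is shown below: alternatives rated with triangular fuzzy numbers are compared against fuzzy positive and negative ideal solutions, and a closeness coefficient is computed for ranking (in this study, a higher coefficient indicates a more complex project). The ratings, the two-criterion set-up and the vertex distance are illustrative assumptions, not the case-study data.

```python
# Fuzzy TOPSIS closeness coefficients with triangular fuzzy numbers (TFNs);
# ratings are invented for illustration.
import numpy as np

# ratings[alternative][criterion] = aggregated TFN (l, m, u), already
# averaged over the decision-makers and normalised to [0, 1]
ratings = np.array([
    [[0.5, 0.7, 0.9], [0.3, 0.5, 0.7]],     # project A
    [[0.1, 0.3, 0.5], [0.5, 0.7, 0.9]],     # project B
])

fpis = np.array([1.0, 1.0, 1.0])    # fuzzy positive ideal solution
fnis = np.array([0.0, 0.0, 0.0])    # fuzzy negative ideal solution

def d(a, b):
    """Vertex distance between two TFNs."""
    return np.sqrt(np.mean((a - b) ** 2))

for name, alt in zip("AB", ratings):
    d_pos = sum(d(tfn, fpis) for tfn in alt)
    d_neg = sum(d(tfn, fnis) for tfn in alt)
    cc = d_neg / (d_pos + d_neg)     # closeness coefficient in [0, 1]
    print(f"project {name}: CC = {cc:.3f}")  # higher CC = closer to ideal
```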
Findings
A large-scale engine manufacturing company, engaged in the energy business, is studied to validate the suitability of the fuzzy TOPSIS method and to rank eight projects of the case company by complexity. Of these eight projects, the closeness coefficient of the most complex project is found to be 0.817 and that of the least complex project 0.274. The paper concludes with the study's limitations and directions for future work.
Research limitations/implications
The outcomes from this research may not be generalized sufficiently due to the subjectivity of the interviewers. The study outcomes support project managers in optimizing their project selection processes, especially in selecting complex projects. The presented methodology can be used extensively by project planners and managers to find the driving factors related to project complexity.
Originality/value
The presented study explains how complex projects in an organization can be selected efficiently. This selection methodology supports top management in maintaining their proposed projects with optimum resource allocation and maximum productivity.
Petteri Annunen, Erno Mustonen, Janne Harkonen and Harri Haapasalo
Abstract
Purpose
This study aims to focus on creating sales capability as part of new product development (NPD). The aim is to define generic requirements for building sales capability as a part of NPD and to propose a necessary process by defining key activities for sales readiness.
Design/methodology/approach
An inductive and qualitative research method was used to construct a sales capability creation process based on a current state analysis in seven companies.
Findings
The results indicate that the status of companies’ sales-related planning varies during the NPD, and the related activities are not systematically managed. Considering sales early is necessary to enable a smooth and cost-efficient start of sales, and to avoid unnecessary delays and problems in other functions. At the same time, the companies recognise the need for improvement.
Originality/value
This paper presents a potential process, including systematic activities, for creating sales capability in conjunction with product development, which is novel in the literature. The proposed process can be applied and aligned to the needs of industrial companies.
Mostafa Abd-El-Barr, Kalim Qureshi and Bambang Sarif
Abstract
Ant colony optimization (ACO) and particle swarm optimization (PSO) represent two widely used swarm intelligence (SI) optimization techniques. Information processing using multiple-valued logic (MVL) is carried out using more than two discrete logic levels. In this paper, we compare the two SI-based algorithms in synthesizing MVL functions. A benchmark consisting of 50,000 randomly generated 2-variable 4-valued functions is used to assess the performance of the algorithms. Simulation results show that PSO outperforms ACO in terms of the average number of product terms (PTs) needed. We also compare the results obtained using both ACO-MVL and PSO-MVL with those obtained using the Espresso-MV logic minimizer; on average, both SI-based techniques produce better results than Espresso-MV. We show that the SI-based techniques also outperform conventional direct-cover (DC) techniques in terms of the average number of product terms required.
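As an illustration of the underlying search problem, the sketch below encodes a candidate MVL realization as a set of MIN-of-window-literal product terms combined by MAX, and scores it by term count with a penalty for mismatches; this is the kind of fitness function an ACO or PSO search would minimize. The encoding and penalty weight are assumptions, not the paper's exact formulation.

```python
# Fitness function for SI-based synthesis of a 2-variable 4-valued function.
from itertools import product

K = 4                                      # 4-valued logic, 2 variables

def literal(x, a, b):
    """Window literal: K-1 inside the window [a, b], else 0."""
    return K - 1 if a <= x <= b else 0

def evaluate(terms, x0, x1):
    """MAX over product terms; each term (c, (a0, b0), (a1, b1))
    contributes MIN(c, literal(x0), literal(x1))."""
    return max((min(c, literal(x0, a0, b0), literal(x1, a1, b1))
                for c, (a0, b0), (a1, b1) in terms), default=0)

def fitness(terms, target):
    """Primary objective: few product terms; covers that fail to
    reproduce the target exactly are heavily penalised."""
    errors = sum(evaluate(terms, x0, x1) != target[(x0, x1)]
                 for x0, x1 in product(range(K), repeat=2))
    return len(terms) + 100 * errors

target = {(a, b): min(a, b) for a, b in product(range(K), repeat=2)}
terms = [(v, (v, K - 1), (v, K - 1)) for v in range(1, K)]  # MIN(x0, x1)
print(fitness(terms, target))   # 3 -> exact cover with three product terms
```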
Sumit Gupta, Deepika Joshi, Sandeep Jagtap, Hana Trollman, Yousef Haddad, Yagmur Atescan Yuksek, Konstantinos Salonitis, Rakesh Raut and Balkrishna Narkhede
Abstract
Purpose
The paper proposes a framework for the successful deployment of Industry 4.0 (I4.0) principles in the aerospace industry, based on identified success factors. The paper challenges the perception of I4.0 being aligned with de-skilling and personnel reduction and instead promotes a route to successful deployment centred on upskilling and retaining personnel for future role requirements.
Design/methodology/approach
The research methodology involved a literature review and industrial data collection via questionnaires to develop and validate the framework. The questionnaire was sent to a purposive sample of 50 respondents working in operations, and a response rate of 90% was achieved. Content analysis was used to identify patterns, themes, or biases, and the data were tabulated based on specific common attributes. The proposed framework consists of a series of gates and criteria that must be met before progressing to the next gate.
Findings
The proposed framework provides a feedback mechanism to review minimum standards for successful deployment, aligned with new developments in capability and technology, and ensures quality assessment at each gate. The paper highlights the potential benefits of I4.0 implementation in the aerospace industry, including reducing operational costs and improving competitiveness by eliminating variation in manufacturing processes. The identified success factors were used to define the framework, and the identified failure points were used to form mitigation actions or controls for inclusion in the framework.
Originality/value
The paper provides a framework for the successful deployment of I4.0 principles in the aerospace industry, based on identified success factors. The framework challenges the perception of I4.0 as being aligned with de-skilling and personnel reduction and instead promotes a route to successful deployment centred on upskilling and retaining personnel for future role requirements. The framework can be used as a guideline for organizations to deploy I4.0 principles successfully and improve competitiveness.
Federico Paolo Zasa and Tommaso Buganza
Abstract
Purpose
This study aims to investigate how configurations of boundary objects (BOs) support innovation teams in developing innovative product concepts. Specifically, it explores the effectiveness of different artefact configurations in facilitating collaboration and bridging knowledge boundaries during the concept development process.
Design/methodology/approach
The research is based on data from ten undergraduate innovation teams working with an industry partner in a creative industry. Six categories of BOs are identified, which serve as tools for collaboration. The study applies fsQCA (fuzzy-set qualitative comparative analysis) to analyse the configurations employed by the teams to bridge knowledge boundaries and support the development of innovative product concepts.
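For readers unfamiliar with fsQCA, the sketch below computes the standard consistency and coverage measures for one candidate configuration (the fuzzy AND of two conditions against an outcome), following Ragin's formulas; the membership scores and condition names are invented purely for illustration.

```python
# fsQCA consistency and coverage for one configuration; data are invented.
import numpy as np

# fuzzy membership of each team in two BO-category conditions and the outcome
visioning = np.array([0.9, 0.7, 0.2, 0.8, 0.4])
prototyping = np.array([0.6, 0.8, 0.3, 0.9, 0.2])
innovative_concept = np.array([0.8, 0.9, 0.3, 0.9, 0.3])

# configuration "visioning AND prototyping" via the fuzzy AND (minimum)
config = np.minimum(visioning, prototyping)

# consistency of "config is sufficient for the outcome":
# sum of min(x, y) over sum of x; coverage normalises by the outcome instead
overlap = np.sum(np.minimum(config, innovative_concept))
consistency = overlap / np.sum(config)
coverage = overlap / np.sum(innovative_concept)
print(f"consistency = {consistency:.2f}, coverage = {coverage:.2f}")
```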
Findings
The findings of the study reveal two distinct groups of configurations: product envisioning and product design. The configurations within the “product envisioning” group support the activities of visioning and pivoting, enabling teams to innovate the product concept by altering the product vision. On the other hand, the configurations within the “product design” group facilitate experimenting, modelling and prototyping, allowing teams to design the attributes of the innovative product concept while maintaining the product vision.
Originality/value
This research contributes to the field of innovation by providing insights into the role of BOs and their configurations in supporting innovation teams during concept development. The results suggest that configurations of “product envisioning” support bridging semantic knowledge boundaries, while configurations within “product design” bridge pragmatic knowledge boundaries. This understanding contributes to the broader field of knowledge integration and innovation in design contexts.
Weiwei Zhu, Jinglin Wu, Ting Fu, Junhua Wang, Jie Zhang and Qiangqiang Shangguan
Abstract
Purpose
Efficient traffic incident management is needed to alleviate the negative impact of traffic incidents. Accurate and reliable estimation of traffic incident duration is of great importance for traffic incident management. Previous studies have proposed models for traffic incident duration prediction; however, most of these studies focus on the total duration and cannot update prediction results in real time. From a traveler's perspective, the relevant factor is the residual duration of the impact of the traffic incident. Moreover, few (if any) studies have used dynamic traffic flow parameters in their prediction models. This paper aims to propose a framework to fill these gaps.
Design/methodology/approach
This paper proposes a framework based on the multi-layer perceptron (MLP) and long short-term memory (LSTM) models. The proposed methodology integrates traffic incident-related factors and real-time traffic flow parameters to predict the residual traffic incident duration. To validate the effectiveness of the framework, traffic incident data and traffic flow data from the Shanghai Zhonghuan Expressway are used for model training and testing.
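A hedged sketch of such a two-branch architecture is given below: static incident attributes pass through an MLP branch, a window of traffic flow observations (volume and speed) through an LSTM branch, and the merged representation predicts the residual duration. All dimensions, the loss and the regression formulation are illustrative assumptions, not the paper's exact configuration.

```python
# Two-branch MLP + LSTM model for residual incident duration (illustrative).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

N_STATIC, T_STEPS, N_FLOW = 8, 6, 2     # 6 x 5-min intervals, 2 flow series

static_in = keras.Input(shape=(N_STATIC,), name="incident_factors")
flow_in = keras.Input(shape=(T_STEPS, N_FLOW), name="traffic_flow")

h_static = layers.Dense(32, activation="relu")(static_in)   # MLP branch
h_flow = layers.LSTM(32)(flow_in)                           # LSTM branch

h = layers.concatenate([h_static, h_flow])
out = layers.Dense(1, activation="relu", name="residual_minutes")(h)

model = keras.Model([static_in, flow_in], out)
model.compile(optimizer="adam", loss="mae")

# stand-in data: 128 incidents
Xs = np.random.rand(128, N_STATIC).astype("float32")
Xf = np.random.rand(128, T_STEPS, N_FLOW).astype("float32")
y = np.random.rand(128, 1).astype("float32") * 60.0
model.fit([Xs, Xf], y, epochs=2, batch_size=16, verbose=0)
```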
Findings
Results show that the model with a 30-min time window, taking both traffic volume and speed as inputs, performed best. The area-under-the-curve values exceed 0.85 and the prediction accuracies exceed 0.75, demonstrating that the model is appropriate for this study context. The model provides new insights into traffic incident duration prediction.
Research limitations/implications
The incident sample used in this study may not be large enough, and the variables are not abundant. The number of injuries and casualties, a more detailed description of the incident location and other variables are expected to be used to characterize traffic incidents comprehensively. The framework needs to be further validated with a sufficiently large set of variables and locations.
Practical implications
Once implemented in intelligent transport systems and traffic management systems, the framework can help reduce the impact of incidents on the safety and efficiency of road traffic.
Originality/value
This study uses two artificial neural network methods, MLP and LSTM, to establish a framework that provides accurate and timely information on traffic incident duration for transportation operators and travelers. The study will contribute to the deployment of emergency management and urban traffic navigation planning.
Chang Liu, Shiwu Yang, Yixuan Yang, Hefei Cao and Shanghe Liu
Abstract
Purpose
In the continuous development of high-speed railways, ensuring the safety of the operation control system is crucial. Electromagnetic interference (EMI) faults in signaling equipment may cause transportation interruptions, delays and even threaten the safety of train operations. Exploring the impact of disturbances on signaling equipment and establishing evaluation methods for the correlation between EMI and safety is urgently needed.
Design/methodology/approach
This paper elaborates on the necessity and significance of studying the impact of EMI, an unavoidable and widespread risk factor in the external environment of high-speed railway operations and continuous development. The current status of research methods and achievements is examined layer by layer from the perspectives of standard systems, reliability analysis and safety assessment. Additionally, the paper offers prospects for innovative ideas for exploring the quantitative correlation between EMI and signaling safety.
Findings
Despite certain innovative achievements in domestic and international standard systems and in related research for ensuring and evaluating railway signaling safety, there is a lack of quantitative, strategic research on the degradation of safety performance in signaling equipment due to EMI, and a quantitative correlation between EMI and safety has yet to be established. On this basis, this paper proposes considerations for research methods pertaining to the correlation between EMI and safety.
Originality/value
This paper overviews a series of methods and outcomes derived from domestic and international studies regarding railway signaling safety, encompassing standard systems, reliability analysis and safety assessment. Recognizing the necessity for quantitatively describing and predicting the impact of EMI on high-speed railway signaling safety, an innovative approach using risk assessment techniques as a bridge to establish the correlation between EMI and signaling safety is proposed.
Taknaz Alsadat Banihashemi, Jiangang Fei and Peggy Shu-Ling Chen
Abstract
Purpose
The implementation of reverse logistics (RL) as a strategic decision has gained significant attention amongst organisations due to its benefits to sustainable development. The purpose of this paper is to provide a comprehensive review of the literature to evaluate the performance of the RL process based on the three dimensions of sustainability including environmental, economic and social aspects.
Design/methodology/approach
Content analysis was adopted to collect and analyse the information.
Findings
The findings of this research show that most studies have focused on evaluating RL performance by considering factors associated with economic and environmental performance. The social aspect of RL has been overlooked and requires investigation due to its contribution to positive social outcomes. In addition, no single study has assessed the impact of each disposition option on triple-bottom-line sustainability performance.
Originality/value
Although RL can make a significant contribution to improving the sustainability performance of firms, little research has been undertaken on the relationship between RL and sustainability performance. This paper provides practitioners, academics and researchers a broad and complete view of that relationship, with suggestions for future research.