Search results

1 – 10 of over 2000
Article
Publication date: 22 January 2024

Chen Wang, Yan Zhang and Ran Zhang

Abstract

Purpose

This study investigated the impacts of the interaction experiential customization (IEC) mode on consumers' information processing fluency and green customization intention (GCI) as well as the moderating effect of consumers' self-construal.

Design/methodology/approach

This study conducted an online field experiment, a questionnaire study and a between-subjects laboratory experiment to test the hypotheses.

Findings

It was found that IEC had a significant positive effect on consumers' GCI. Moreover, consumer retrieval processing fluency played a partial mediating role in the relationship between IEC and GCI. In addition, consumers' self-construal moderated the "IEC → three dimensions of processing fluency" relationships.
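
To make the mediation and moderation structure concrete, here is a minimal regression sketch using statsmodels with hypothetical column names (iec, fluency, gci, independent_sc); the paper's actual estimation strategy is not reproduced here.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey extract; column names are assumptions, not the paper's.
df = pd.read_csv("survey.csv")

total = smf.ols("gci ~ iec", df).fit()              # total effect of IEC on GCI
mediator = smf.ols("fluency ~ iec", df).fit()       # path a: IEC -> fluency
partial = smf.ols("gci ~ iec + fluency", df).fit()  # paths b and c'
# Partial mediation: iec remains significant in `partial` but its
# coefficient shrinks relative to `total`.

moderation = smf.ols("fluency ~ iec * independent_sc", df).fit()
# A significant iec:independent_sc interaction would indicate that
# self-construal moderates the IEC -> processing-fluency relationship.
print(partial.summary())
```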

Practical implications

The results emphasize the importance of IEC in influencing consumers' consumption intention in a green customization setting. In practical terms, companies can use appropriate digital choice architecture designs to enhance consumer processing fluency when promoting eco-friendly products in the customized consumption process, especially for independent consumers.

Originality/value

This study focused on the effect of customization design on consumers' GCI and, drawing on fluency theory, explained the mechanism by which IEC improves consumers' processing fluency and GCI in a product customization setting. In addition, this study investigated the moderating effect of consumers' self-construal (independent vs interdependent), given the significantly different information processing modes these consumers apply to low-carbon choices.

Details

Management Decision, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0025-1747

Article
Publication date: 22 February 2024

Ranjeet Kumar Singh

Abstract

Purpose

Although the challenges associated with big data are increasing, the question of the most suitable big data analytics (BDA) platform in libraries is always significant. The purpose of this study is to propose a solution to this problem.

Design/methodology/approach

The current study identifies relevant literature and provides a review of big data adoption in libraries. It also presents a step-by-step guide for the development of a BDA platform using the Apache Hadoop Ecosystem. To test the system, an analysis of library big data was performed with Apache Pig, a tool from the Apache Hadoop Ecosystem. This establishes the effectiveness of the Apache Hadoop Ecosystem as a powerful BDA solution in libraries.
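
Where the abstract describes testing the system with Apache Pig, the following is a minimal sketch of what such an analysis might look like: it assumes a hypothetical circulation-log CSV already in HDFS and a Pig installation on the PATH, and it is not the paper's actual script or data layout.

```python
import subprocess
import tempfile

# Hypothetical Pig Latin script: count loans per subject from a CSV of
# circulation records (patron_id, item_id, subject, checkout_date).
PIG_SCRIPT = """
loans = LOAD '/library/circulation.csv'
        USING PigStorage(',')
        AS (patron_id:chararray, item_id:chararray,
            subject:chararray, checkout_date:chararray);
by_subject = GROUP loans BY subject;
counts = FOREACH by_subject GENERATE group AS subject, COUNT(loans) AS n;
ranked = ORDER counts BY n DESC;
STORE ranked INTO '/library/subject_demand';
"""

def run_pig(script: str) -> None:
    """Write the Pig Latin script to a temp file and submit it via the pig CLI."""
    with tempfile.NamedTemporaryFile("w", suffix=".pig", delete=False) as f:
        f.write(script)
        path = f.name
    subprocess.run(["pig", path], check=True)  # assumes Pig is on PATH

if __name__ == "__main__":
    run_pig(PIG_SCRIPT)
```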

Findings

It can be inferred from the literature that libraries and librarians have not taken the possibility of big data services in libraries very seriously. Also, the literature suggests that there is no significant effort made to establish any BDA architecture in libraries. This study establishes the Apache Hadoop Ecosystem as a possible solution for delivering BDA services in libraries.

Research limitations/implications

The present work suggests that libraries adopt the idea of providing various big data services by developing a BDA platform: for instance, assisting researchers in understanding big data, offering cleaning and curation of big data by skilled and experienced data managers, and providing the infrastructural support to store, process, manage, analyze and visualize big data.

Practical implications

The study concludes that Apache Hadoop's Hadoop Distributed File System (HDFS) and MapReduce components significantly reduce the complexities of big data storage and processing, respectively, and that Apache Pig, using the Pig Latin scripting language, processes big data efficiently and responds to queries quickly.

Originality/value

According to the study, few efforts have been made to analyze big data from libraries. Furthermore, acceptance of the Apache Hadoop Ecosystem as a solution to big data problems in libraries is not widely discussed in the literature, although Apache Hadoop is regarded as one of the best frameworks for big data handling.

Details

Digital Library Perspectives, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2059-5816

Article
Publication date: 24 October 2023

WenFeng Qin, Yunsheng Xue, Hao Peng, Gang Li, Wang Chen, Xin Zhao, Jie Pang and Bin Zhou

Abstract

Purpose

The purpose of this study is to design a wearable medical device as a human care platform and to introduce the design details, key technologies and practical implementation methods of the system.

Design/methodology/approach

A multi-channel data acquisition scheme based on PCI-E (Peripheral Component Interconnect Express) was proposed. The flexible biosensor is integrated with a flexible data acquisition card with monitoring capability, and the embedded STM32F103VET6 chip is used to process multiple channels of human health parameters simultaneously. The health parameters are transferred from the smart clothing to a host computer running LabVIEW over USB or wireless Bluetooth to complete the transmission and processing of clinical data, which facilitates the analysis of medical data.
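
As a rough illustration of the host side of such a multi-channel link, here is a minimal Python sketch using pyserial; the frame layout, sync word, port name, channel order and scaling are all assumptions for illustration, not the paper's actual protocol.

```python
import struct
import serial  # pyserial; the USB link to the MCU enumerates as a serial port

# Hypothetical frame: a 0xAA55 sync word followed by three little-endian
# uint16 channels (wrist motion, temperature, perspiration).
FRAME = struct.Struct("<H3H")

def read_frames(port: str = "/dev/ttyUSB0", baud: int = 115200):
    with serial.Serial(port, baud, timeout=1.0) as link:
        while True:
            raw = link.read(FRAME.size)
            if len(raw) < FRAME.size:
                continue  # read timed out before a complete frame arrived
            sync, motion, temp_raw, sweat = FRAME.unpack(raw)
            if sync != 0xAA55:
                # Discard and try again; a real reader would hunt for the
                # sync word byte-by-byte to realign the stream.
                continue
            # Hypothetical scaling from sensor counts to engineering units.
            yield motion, temp_raw / 100.0, sweat

for motion, temp_c, sweat in read_frames():
    print(f"motion={motion} temp={temp_c:.2f}C sweat={sweat}")
```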

Findings

The smart clothing provides a mobile medical cloud platform for wearable healthcare through cloud computing and can continuously monitor wrist movement, body temperature and perspiration for 24 h. The results show that every channel is reproduced accurately on the host computer display, meeting the expected requirements, and that the wearable instant care system can be applied to healthcare.

Originality/value

The smart clothing in this study is based on textiles for monitoring and diagnosis, with electronic communication devices that cooperate and interact to form a wearable textile system providing medical monitoring and prevention services to individuals in the fastest and most accurate way. Each channel of the system is precisely matched to the display screen of the host computer and meets the expected requirements. As a real-time human health protection platform, continuous monitoring of human vital signs can support applications in human motion detection, medical health monitoring and human–computer interaction. Ultimately, such an intelligent garment will become an integral part of our everyday clothing.

Details

International Journal of Clothing Science and Technology, vol. 36 no. 1
Type: Research Article
ISSN: 0955-6222

Article
Publication date: 21 March 2023

Manikandan R. and Raja Singh R.

Abstract

Purpose

To prevent faults from destroying other parts of a wind energy conversion system, the diagnosis of insulated-gate bipolar transistor (IGBT) faults has become an essential topic of study. Demand for sustainable energy sources has been prompted by rising environmental pollution and energy requirements. Renewable energy has been identified as a viable substitute for conventional fossil fuel energy generation. Because of its rapid installation time and adaptable construction-scale expenditure, wind energy has emerged as a great energy resource. Power converter failure is particularly significant for the reliable operation of wind power conversion systems because it not only has a high annual fault rate but also a prolonged downtime. Power converters may continue to operate after a failure, especially an open-circuit fault, endangering their other parts and impairing their functionality.

Design/methodology/approach

The most widely used signal processing methods for locating open-switch faults in power devices are the short-time Fourier transform (STFT) and the wavelet transform (WT), both based on time–frequency analysis. These methods, however, require intensive use of computational resources to be effective. This study suggests a fault detection technique using empirical mode decomposition (EMD) that examines the phase currents from a power inverter. Furthermore, the relative energy entropy (REE) of the intrinsic mode functions and simple logical operations are used to locate IGBT open-switch failures.
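
To illustrate the EMD-plus-relative-energy-entropy idea on a phase current, here is a minimal sketch using the PyEMD package; the synthetic signals and the informal detection rule are assumptions for illustration, not the paper's scheme.

```python
import numpy as np
from PyEMD import EMD  # pip install EMD-signal

def imf_relative_energy(phase_current: np.ndarray) -> np.ndarray:
    """Decompose one phase current into IMFs and return their relative energies."""
    imfs = EMD().emd(phase_current)
    energy = np.sum(imfs ** 2, axis=1)
    return energy / energy.sum()

def energy_entropy(p: np.ndarray) -> float:
    """Shannon entropy of the relative IMF energy distribution."""
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

# Hypothetical rule: an open-switch fault clips one half-cycle of the phase
# current, redistributing signal energy across IMFs, so the energy entropy
# drifts from its healthy baseline.
t = np.linspace(0, 0.2, 2000)
healthy = np.sin(2 * np.pi * 50 * t)
faulty = np.where(healthy > 0, healthy, 0.0)  # half-wave "missing switch" proxy

for name, sig in [("healthy", healthy), ("open-switch", faulty)]:
    print(name, round(energy_entropy(imf_relative_energy(sig)), 3))
```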

Findings

The presented scheme successfully locates and detects 21 various classes of IGBT faults that could arise in a two-level three-phase voltage source inverter (VSI). To verify the efficacy of the proposed fault diagnosis (FD) scheme, the test is performed under various operating conditions of the power converter and induction motor load. The proposed method outperforms existing FD schemes in the literature in terms of fault coverage and robustness.

Originality/value

This study introduces an EMD–IMF–REE-based FD method for VSIs in wind turbine systems, which enhances the effectiveness and robustness of the FD method.

Article
Publication date: 27 March 2024

Temesgen Agazhie and Shalemu Sharew Hailemariam

Abstract

Purpose

This study aims to quantify and prioritize the main causes of lean wastes and to apply reduction methods by employing better waste cause identification methodologies.

Design/methodology/approach

We employed the fuzzy technique for order preference by similarity to ideal solution (FTOPSIS), the fuzzy analytic hierarchy process (FAHP) and failure mode and effects analysis (FMEA) to determine the causes of defects. Time studies, checklists and process flow charts were employed to document the current defect-cause identification procedures. The study focuses on the sewing department of a clothing company in Addis Ababa, Ethiopia.
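
As a simple illustration of the FMEA portion, the sketch below ranks hypothetical defect causes by risk priority number (severity x occurrence x detectability); the causes and ratings are invented for illustration, and the fuzzy AHP/TOPSIS weighting the paper applies is not reproduced.

```python
# Hypothetical expert ratings on 1-10 scales: (severity, occurrence, detectability).
causes = {
    "inadequate operator training": (8, 7, 4),
    "machine misalignment":         (6, 5, 5),
    "fabric quality variation":     (7, 4, 6),
}

def rpn(sod: tuple[int, int, int]) -> int:
    """Risk priority number: severity x occurrence x detectability."""
    s, o, d = sod
    return s * o * d

# Rank causes so improvement effort targets the highest-risk cause first.
for cause, sod in sorted(causes.items(), key=lambda kv: rpn(kv[1]), reverse=True):
    print(f"{rpn(sod):4d}  {cause}")
```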

Findings

These techniques outperform conventional techniques and offer a better solution for challenging decision-making situations. Each lean waste's FMEA criteria, namely severity, occurrence and detectability, were examined. A pairwise comparison revealed that defects have a larger effect than the other lean wastes. Defects were mostly caused by inadequate operator training. To minimize lean waste, prioritizing its causes is crucial.

Research limitations/implications

The research focuses on a single case company, so the results cannot be generalized to the whole industry.

Practical implications

The study used quantitative approaches to quantify and prioritize the causes of lean waste in the garment industry and provides insight for practitioners to focus on these causes to improve their quality performance.

Originality/value

The new contribution is the integration of FMEA with FAHP and FTOPSIS, which yields a better solution for the decision variables by considering the severity, occurrence and detectability of the causes of waste. Data collection was based on experts' focus-group discussions rating the main causes of defects, which provides optimal values for defect-cause prioritization.

Details

International Journal of Quality & Reliability Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0265-671X

Open Access
Article
Publication date: 12 December 2023

Laura Lucantoni, Sara Antomarioni, Filippo Emanuele Ciarapica and Maurizio Bevilacqua

Abstract

Purpose

The Overall Equipment Effectiveness (OEE) is considered a standard for measuring equipment productivity in terms of efficiency. Still, Artificial Intelligence solutions are rarely used for analyzing OEE results and identifying corrective actions. Therefore, the approach proposed in this paper aims to provide a new rule-based Machine Learning (ML) framework for OEE enhancement and the selection of improvement actions.

Design/methodology/approach

Association Rules (ARs) are used as a rule-based ML method for extracting knowledge from large data sets. First, the dominant loss class is identified, and traditional methodologies are combined with ARs for anomaly classification and prioritization. Once priority anomalies have been selected, a detailed analysis is conducted to investigate their influence on the OEE loss factors using ARs and Network Analysis (NA). A Deming Cycle is then used as a roadmap for applying the proposed methodology, testing and implementing proactive actions while monitoring the OEE variation.
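
To make the rule-based step concrete, here is a minimal association-rule sketch using mlxtend on hypothetical one-hot anomaly records; the column names, sample data and thresholds are assumptions, not the paper's data set.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Hypothetical one-hot records: each row is a production shift, each column
# flags whether an anomaly or OEE loss was observed during that shift.
df = pd.DataFrame(
    {
        "micro_stop":        [1, 1, 0, 1, 1, 0],
        "speed_loss":        [1, 1, 0, 1, 0, 0],
        "quality_defect":    [0, 1, 0, 1, 1, 0],
        "availability_loss": [1, 0, 0, 1, 1, 0],
    },
    dtype=bool,
)

frequent = apriori(df, min_support=0.3, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
# Co-occurrence patterns such as {micro_stop} -> {availability_loss} point to
# the loss factor a proactive action should target first.
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])
```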

Findings

The proposed method was tested in an automotive company for framework validation and impact measurement. In particular, the results highlighted that the rule-based ML methodology for OEE improvement addressed seven anomalies within a year through appropriate proactive actions: on average, each action ensured an OEE gain of 5.4%.

Originality/value

The originality lies in applying association rules in two different ways to extract knowledge from the overall OEE. In particular, the co-occurrences of priority anomalies and their impact on asset Availability, Performance and Quality are investigated.

Details

International Journal of Quality & Reliability Management, vol. 41 no. 5
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 14 March 2024

Qiang Wen, Lele Chen, Jingwen Jin, Jianhao Huang and HeLin Wan

Abstract

Purpose

Fixed-mode noise and random-mode noise always exist in an image sensor and affect its imaging quality. Charge diffusion and color mixing between pixels in the photoelectric conversion process belong to fixed-mode noise. This study aims to improve image sensor imaging quality by processing the fixed-mode noise.

Design/methodology/approach

Through iterative training of a long short-term memory (LSTM) recurrent neural network model, the authors obtain a neural network model able to compensate for image noise crosstalk. To overcome the lack of differences among same-color pixels on each template of the image sensor under flat-field light, the data before and after compensation were used as a new data set to train the neural network further.
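
As a rough sketch of crosstalk compensation with an LSTM, the example below trains on synthetic neighborhood-to-center-pixel pairs using Keras; the architecture, data layout and crosstalk model are assumptions, and the paper's second iterative pass over pre- and post-compensation data is not reproduced.

```python
import numpy as np
import tensorflow as tf

# Each sample: the raw 3x3 neighborhood of a pixel (flattened to a 9-step
# sequence) mapped to the center pixel's crosstalk-free flat-field value.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(9, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

rng = np.random.default_rng(0)
clean = rng.uniform(0.2, 0.8, size=(1024, 1))                  # true center values
neigh = clean[:, :, None] + rng.normal(0, 0.02, (1024, 9, 1))  # neighborhood reads
neigh[:, 4, 0] += 0.05 * neigh[:, [1, 3, 5, 7], 0].sum(axis=1)  # synthetic crosstalk

model.fit(neigh, clean, epochs=5, batch_size=64, verbose=0)
compensated = model.predict(neigh, verbose=0)
print("residual RMSE:", float(np.sqrt(np.mean((compensated - clean) ** 2))))
```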

Findings

The comparison of the images compensated by the two sets of neural network models shows that the gray-value distribution is more concentrated and uniform. The middle- and high-frequency components in the spatial spectrum all increased, indicating that the compensated image edges change faster and are more detailed (Hinton and Salakhutdinov, 2006; LeCun et al., 1998; Mohanty et al., 2016; Zang et al., 2023).

Originality/value

In this paper, the authors use an iterative-learning color-image pixel crosstalk compensation method to effectively alleviate both the incomplete color mixing caused by an insufficient filter rate and the electrical crosstalk caused by lateral diffusion of photogenerated charge into adjacent pixels' potential traps.

Details

Sensor Review, vol. 44 no. 2
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 23 January 2024

Young Jin Shin, Ebrahim Farrokh, Jaehoon Jung, Jaewon Lee and Hanbyul Kang

Abstract

Purpose

Despite the many advantages the linear cutting machine (LCM) offers, it still has some major drawbacks: it cannot accurately simulate the true rock-cutting process, as (1) it does not account for the circular path along which tunnel boring machine (TBM) disk cutters cut the tunnel face, (2) it does not accurately model the position of a disk cutter on the cutterhead and (3) it cannot perfectly replicate the rotational speed of a TBM. To address these issues and mimic the real rock-cutting process, a new laboratory testing machine was developed by Hyundai Engineering and Construction.

Design/methodology/approach

A new testing machine, the rotary cutting machine (RCM), was designed to simulate the excavation process of hard-rock TBMs; it includes a TBM cutterhead, RPM simulation, a constant normal force mode and a constant penetration rate mode. Two sets of tests were conducted on Hwandeung granite using different disk cutter sizes to analyze the cutting forces in various excavation modes. The results are analyzed using statistical and dimensional analysis. A new model is generated using dimensional analysis, and its results are compared against the results of actual cases.

Findings

The effectiveness of the new RCM test was demonstrated by its ability to apply various modes of excavation. Initial analysis of chip size revealed that the thickness of the chips depends largely on the cutter spacing. Tests with varying RPM showed that an increase in RPM results in an increase in the normal force and rolling force. The cutting coefficient (CC) demonstrated a linear correlation with penetration. The optimal specific energy is achieved at a spacing-to-penetration (S/p) ratio of around 15, although a slightly lower S/p ratio can also be used in the design if the cutter specifications permit. A dimensional analysis was utilized to develop a new RCM model based on the results of approximately 1200 tests. The model's applicability was demonstrated through a comparison with TBM penetration data from 26 tunnel projects globally: the penetration rates predicted by the RCM model were in good agreement with actual rates for the majority of cases. However, further investigation is necessary for softer rock types, which will be conducted in the future using concrete blocks.
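
For readers unfamiliar with the derived quantities above, the sketch below computes the cutting coefficient and specific energy under the standard disk-cutting definitions (CC as the rolling-to-normal force ratio; specific energy as rolling force over spacing times penetration). These conventions are an assumption here, and the sketch does not reproduce the paper's dimensional-analysis model.

```python
def cutting_coefficient(rolling_kN: float, normal_kN: float) -> float:
    """CC = rolling force / normal force (grows roughly linearly with penetration)."""
    return rolling_kN / normal_kN

def specific_energy(rolling_kN: float, s_mm: float, p_mm: float) -> float:
    """Energy per unit excavated volume: SE [MJ/m^3] = 1000 * F_r [kN] / (s * p) [mm^2]."""
    return 1e3 * rolling_kN / (s_mm * p_mm)

# Hypothetical test values, chosen only to show plausible magnitudes.
print(cutting_coefficient(18.0, 120.0))             # ~0.15
print(specific_energy(18.0, s_mm=80.0, p_mm=6.0))   # ~37.5 MJ/m^3 at S/p ~ 13
```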

Originality/value

The originality of the research lies in the development of Hyundai Engineering and Construction’s advanced full-scale laboratory rotary cutting machine (RCM), which accurately replicates the excavation process of hard-rock tunnel boring machines (TBMs). The study provides valuable insights into cutting forces, chip size, specific energy, RPM and excavation modes, enhancing understanding and decision-making in hard-rock excavation processes. The research also presents a new RCM model validated against TBM penetration data, demonstrating its practical applicability and predictive accuracy.

Details

Engineering Computations, vol. 41 no. 1
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 27 February 2023

Guanxiong Wang, Xiaojian Hu and Ting Wang

Abstract

Purpose

By introducing the mass customization service mode into the cloud logistics environment, this paper studies the joint optimization of service provider selection and customer order decoupling point (CODP) positioning under that mode, so as to provide customers with more diversified and personalized service content at a lower total logistics service cost.

Design/methodology/approach

This paper addresses the general process of service composition optimization based on the mass customization mode in a cloud logistics service environment and constructs a joint decision model for service provider selection and CODP positioning. In the model, the two objective functions of minimum service cost and most satisfactory delivery time are considered, and the Pareto optimal solution of the model is obtained via the NSGA-II algorithm. Then, a numerical case is used to verify the superiority of the service composition scheme based on the mass customization mode over the general scheme and to verify the significant impact of the scale effect coefficient on the optimal CODP location.
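
To show the shape of such a bi-objective NSGA-II search, here is a minimal sketch using pymoo with hypothetical provider costs and delivery times; the decision encoding, scale-effect discount and numbers are invented for illustration and are not the paper's model.

```python
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

COST = np.array([100.0, 120.0, 90.0])  # hypothetical per-provider base cost
TIME = np.array([5.0, 3.0, 6.0])       # hypothetical per-provider delivery time

class ProviderCODP(ElementwiseProblem):
    def __init__(self):
        # x[0] indexes a candidate provider (rounded down);
        # x[1] places the CODP along a normalized service chain.
        super().__init__(n_var=2, n_obj=2, xl=[0.0, 0.0], xu=[2.999, 1.0])

    def _evaluate(self, x, out, *args, **kwargs):
        i = int(x[0])                         # provider choice
        codp = x[1]                           # 0 = fully standardized, 1 = fully custom
        scale_discount = 0.3 * (1.0 - codp)   # crude scale effect upstream of CODP
        cost = COST[i] * (1.0 - scale_discount)
        time = TIME[i] * (0.5 + codp)         # customization lengthens lead time
        out["F"] = [cost, time]

res = minimize(ProviderCODP(), NSGA2(pop_size=40), ("n_gen", 60),
               seed=1, verbose=False)
print(res.F)  # Pareto front: cost vs. delivery-time trade-offs
```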

Findings

(1) Under the cloud logistics mode, implementing the logistics service mode based on mass customization can not only reduce the total cost of logistics services by means of the scale effect of massive orders on the cloud platform but also make more efficient use of the large number of logistics service providers gathered on the cloud platform to provide customers with more customized and diversified service content. (2) The scale effect coefficient directly affects the total cost of logistics services and significantly affects the location of the CODP. Therefore, before implementing the mass customization logistics service mode, reasonable clustering of orders on the cloud logistics platform is very important for the subsequent service composition.

Originality/value

The originality of this paper includes two aspects. First, it introduces the mass customization mode into the cloud logistics service environment for the first time and summarizes the operation process of implementing this mode in the cloud logistics environment. Second, to solve the joint decision optimization model of provider selection and CODP positioning, it designs a method for solving the mixed-integer nonlinear programming model using a multi-layer coding genetic algorithm.

Details

Kybernetes, vol. 53 no. 4
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 24 October 2022

Priyanka Chawla, Rutuja Hasurkar, Chaithanya Reddy Bogadi, Naga Sindhu Korlapati, Rajasree Rajendran, Sindu Ravichandran, Sai Chaitanya Tolem and Jerry Zeyu Gao

Abstract

Purpose

The study aims to propose an intelligent real-time traffic model to address the traffic congestion problem. The proposed model assists the urban population in their everyday lives by assessing the probability of road accidents, predicting accurate traffic information and increasing overall transportation quality, while also helping to reduce overall carbon dioxide emissions in the environment.

Design/methodology/approach

This study offers a real-time traffic model based on the analysis of numerous sensor data sources. Real-time traffic prediction systems can identify and visualize current traffic conditions on a particular lane. The proposed model incorporates data from road sensors as well as a variety of other sources. Capturing and processing large amounts of sensor data in real time is difficult, so sensor data are consumed by streaming analytics platforms built on big data technologies and then processed using a range of deep learning and machine learning techniques.
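
As a rough single-machine stand-in for the pipeline described above, the sketch below fits a regression model to a hypothetical PeMS-style extract joined with weather and incident flags; the file name, column names and model choice are all assumptions, not the paper's streaming or deep learning setup.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical flat extract: one row per sensor reading with joined features.
df = pd.read_csv("pems_joined.csv")
features = ["hour", "day_of_week", "lane", "rain_mm", "incident_nearby"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["speed_mph"], test_size=0.2, random_state=42
)

model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("MAE (mph):", mean_absolute_error(y_test, model.predict(X_test)))
```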

Findings

The study fills a gap in the data analytics sector by delivering a more accurate and trustworthy model that uses internet of things sensor data alongside other data sources. Organizations such as transit agencies and public safety departments can also incorporate the model into their platforms to support strategic decision-making.

Research limitations/implications

The model has a significant limitation: its predictions for the period after January 2020 are not particularly accurate. This, however, is not a flaw in the model itself; rather, it reflects the disruption caused by the Covid-19 pandemic, which produced erratic traffic data for the period after February 2020. Once circumstances return to normal, the authors are confident in the model's ability to produce accurate forecasts.

Practical implications

This study aimed to pinpoint the causes of traffic congestion on Bay Area highways and to forecast real-time traffic speeds so that users can choose when to travel. To determine the attributes that most influence traffic speed, the authors obtained data from the Caltrans performance measurement system (PeMS), reviewed it and evaluated multiple models. The resulting model forecasts traffic speed while accounting for external variables such as weather and incident data, with decent accuracy and generalizability. A graphical user interface lets users check congestion at a given location on a specific day and has been designed to be readily extended as the project's scope and usefulness grow. The Web-based traffic speed prediction platform is useful for both municipal planners and individual travellers. Training the models on five years of data (2015–2019) and forecasting outcomes for 2020 produced excellent results, and the algorithm made highly accurate predictions when tested on data from January 2020. The model provides accurate traffic speed forecasts for California's four main freeways (Freeway 101, I-680, 880 and 280) for a specific place on a certain date, and the scalable model performs better than the vast majority of earlier models in the field. Extending this programme across the entire state of California would help the government better plan and implement new transportation projects.

Social implications

To estimate traffic congestion, the proposed model takes into account a variety of data sources, including weather and incident data. According to traffic congestion statistics, “bottlenecks” account for 40% of traffic congestion, “traffic incidents” for 25% and “work zones” for 10% (Traffic Congestion Statistics); incident data must therefore be considered in the analysis. The study uses traffic, weather and event data from the previous five years to estimate traffic congestion in any given area. As a result, the model's predictions are more accurate, and commuters who need to plan ahead for work benefit greatly.

Originality/value

The proposed work allows users to choose the optimal time and mode of transportation. The underlying idea is that the longer a car spends on the road, the more it contributes to congestion, so the system encourages users to reach their destination quickly. Congestion is also an indicator that public transportation needs to be expanded. The methodology compares the optimal route against public transit alternatives (Greenfield, 2014): if public transit commute times are comparable to those of private car transportation during peak hours, consumers should take public transportation.

Details

World Journal of Engineering, vol. 21 no. 1
Type: Research Article
ISSN: 1708-5284
