Search results

1 – 10 of 328
Open Access
Article
Publication date: 28 April 2022

Krzysztof Jakub Stojek, Jan Felba, Damian Nowak, Karol Malecha, Szymon Kaczmarek and Patryk Tomasz Andrzejak

Abstract

Purpose

This paper aims to perform thermal and mechanical characterization of silver-based sintered thermal joints. Layer quality affects thermal and mechanical performance, and it is important to obtain information about how materials and process parameters influence them.

Design/methodology/approach

The thermal investigation of the joints focused on determining thermal resistance, with temperature measurements performed using an infrared camera in two modes: steady-state analysis and dynamic analysis. The mechanical analysis was based on measurements of shear force. Additional characterizations based on X-ray image analysis (image thresholding), optical microscopy of polished cross-sections and scanning electron microscope image analysis were proposed.

Findings

Sample surface modification affects thermal resistance. Silver metallization exhibits the lowest thermal resistance and the highest mechanical strength compared to the pure Si surface. The type of dynamic analysis affects the measured thermal resistance.

Originality/value

Investigation of the layer quality influence on mechanical and thermal performance provided information about different joint types.

Details

Soldering & Surface Mount Technology, vol. 35 no. 1
Type: Research Article
ISSN: 0954-0911

Open Access
Article
Publication date: 15 December 2020

Soha Rawas and Ali El-Zaart

Abstract

Purpose

Image segmentation is one of the most essential tasks in image processing applications. It is a valuable tool in many applications such as health-care systems, pattern recognition, traffic control, surveillance systems, etc. However, accurate segmentation is a critical task since finding a correct model that fits different types of image processing applications is a persistent problem. This paper develops a novel segmentation model that aims to serve as a unified model for any kind of image processing application. The proposed precise and parallel segmentation model (PPSM) combines three benchmark distribution thresholding techniques (Gaussian, lognormal and gamma distributions) to estimate an optimum threshold value that leads to optimum extraction of the segmented region. Moreover, a parallel boosting algorithm is proposed to improve the performance of the developed segmentation algorithm and minimize its computational cost. To evaluate the effectiveness of the proposed PPSM, different benchmark data sets for image segmentation are used, such as Planet Hunters 2 (PH2), the International Skin Imaging Collaboration (ISIC), Microsoft Research in Cambridge (MSRC), the Berkeley Segmentation Benchmark Data set (BSDS) and Common Objects in COntext (COCO). The obtained results indicate the efficacy of the proposed model in achieving high accuracy with a significant reduction in processing time compared to other segmentation models, across different types and fields of benchmark data sets.

Design/methodology/approach

The proposed PPSM combines the three benchmark distribution thresholding techniques to estimate an optimum threshold value that leads to optimum extraction of the segmented region: Gaussian, lognormal and gamma distributions.

Findings

On the basis of the achieved results, the proposed PPSM–minimum cross-entropy thresholding (PPSM–MCET)-based segmentation model is observed to be a robust, accurate and highly consistent method with high performance.

Originality/value

A novel hybrid segmentation model is constructed by exploiting a combination of Gaussian, gamma and lognormal distributions using MCET. Moreover, to provide accurate, high-performance thresholding with minimal computational cost, the proposed PPSM uses parallel processing to reduce the computational effort of MCET computation. The proposed model might be used as a valuable tool in many applications such as health-care systems, pattern recognition, traffic control, surveillance systems, etc.
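
To make the thresholding step concrete, the following is a minimal, illustrative sketch of minimum cross-entropy thresholding (MCET) on a grey-level histogram. It is not the authors' PPSM implementation and omits the Gaussian/lognormal/gamma modelling and parallel boosting described above; function and variable names are hypothetical.

```python
import numpy as np

def mcet_threshold(image, nbins=256):
    """Pick the grey level minimizing the cross-entropy criterion (Li & Lee style).

    Assumes non-negative grey levels.
    """
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    levels = (edges[:-1] + edges[1:]) / 2.0 + 1e-9   # bin centres, kept strictly positive
    best_t, best_cost = levels[0], np.inf
    for k in range(1, nbins):
        h1, h2 = hist[:k], hist[k:]
        if h1.sum() == 0 or h2.sum() == 0:
            continue
        mu1 = (h1 * levels[:k]).sum() / h1.sum()     # mean grey level below the cut
        mu2 = (h2 * levels[k:]).sum() / h2.sum()     # mean grey level above the cut
        # minimizing the cross-entropy is equivalent to minimizing this quantity
        cost = -((h1 * levels[:k]).sum() * np.log(mu1) +
                 (h2 * levels[k:]).sum() * np.log(mu2))
        if cost < best_cost:
            best_cost, best_t = cost, levels[k]
    return best_t

# usage: segmented = image > mcet_threshold(image)
```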

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 16 July 2020

Loris Nanni, Stefano Ghidoni and Sheryl Brahnam

Abstract

This work presents a system based on an ensemble of Convolutional Neural Networks (CNNs) and descriptors for bioimage classification that has been validated on different datasets of color images. The proposed system represents a very simple yet effective way of boosting the performance of trained CNNs by composing multiple CNNs into an ensemble and combining scores by sum rule. Several types of ensembles are considered, with different CNN topologies along with different learning parameter sets. The proposed system not only exhibits strong discriminative power but also generalizes well over multiple datasets thanks to the combination of multiple descriptors based on different feature types, both learned and handcrafted. Separate classifiers are trained for each descriptor, and the entire set of classifiers is combined by sum rule. Results show that the proposed system obtains state-of-the-art performance across four different bioimage and medical datasets. The MATLAB code of the descriptors will be available at https://github.com/LorisNanni.
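
As a rough illustration of the fusion step (not the authors' MATLAB code, which is linked above), class-score matrices produced by independently trained classifiers can be combined by the sum rule as sketched below; the data here are random placeholders standing in for CNN softmax outputs.

```python
import numpy as np

def sum_rule_fusion(score_matrices):
    """Combine per-classifier class-score matrices (n_samples x n_classes) by the sum rule."""
    fused = np.sum(np.stack(score_matrices, axis=0), axis=0)
    return fused.argmax(axis=1)          # predicted class per sample

# toy usage: three "CNNs" scoring 4 samples over 3 classes
rng = np.random.default_rng(0)
scores = [rng.dirichlet(np.ones(3), size=4) for _ in range(3)]
print(sum_rule_fusion(scores))
```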

Details

Applied Computing and Informatics, vol. 17 no. 1
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 4 December 2019

Kyrill Goosseff

Abstract

Purpose

To identify the Transcendental Essence of Humanity, the purpose of this paper is to describe in brief what kind of research became possible when theories such as autopoiesis, Husserl’s Transcendental Consciousness and the theory of Rhodes and Thame came together to form a “transcendental” interview methodology.

Design/methodology/approach

Critical conceptual implications are drawn to form a new research method to explore a de-subjectified inner domain and to search for a possible common essence of humanity.

Findings

A Transcendental Emotional Reference was found that is practically alien to contemporary perspectives. Still, the reference governs the emotional structure of human experience. This different perspective answers basic questions of morality, organization theory and leadership.

Research limitations/implications

The findings of the new research open a new and transparent perspective answering Grey’s question, “What is it to be human?” (Grey, 2014, p. 47), a perspective that sheds new light on the humanities. A research limitation is the small number of respondents; still, being transcendental, the findings are theoretically valid for all.

Originality/value

The paper is based on unique research enabling 32+ respondents (research ongoing) to explore their own and the universally shared Transcendental domain.

Details

Journal of Organizational Change Management, vol. 33 no. 4
Type: Research Article
ISSN: 0953-4814

Open Access
Article
Publication date: 24 October 2018

Samuel Evans, Eric Jones, Peter Fox and Chris Sutcliffe

Abstract

Purpose

This paper aims to introduce a novel method for the analysis of open-cell porous components fabricated by laser-based powder bed metal additive manufacturing (AM) for the purpose of quality control. This method uses photogrammetric analysis: the extraction of geometric information from an image through the use of algorithms. By applying this technique to porous AM components, a rapid, low-cost inspection of geometric properties such as material thickness and pore size is achieved. Such measurements take on greater importance as the production of porous additive manufactured orthopaedic devices increases, making other, slower and more expensive methods of analysis impractical.
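
As a hedged illustration of the kind of geometric extraction the purpose describes (not the authors' photogrammetric pipeline), pore sizes might be estimated from a thresholded image roughly as follows; the threshold, pixel calibration and masking of the region of interest are assumptions.

```python
import numpy as np
from scipy import ndimage

def pore_equivalent_diameters(gray, threshold, pixel_size_um):
    """Estimate pore equivalent diameters (um) from a grey-level image of a porous structure.

    Assumes bright pixels are material; a real analysis would first mask the
    region of interest so the surrounding background is not counted as a pore.
    """
    material = gray > threshold
    pores = ~material
    labels, n = ndimage.label(pores)                               # connected pore regions
    areas = ndimage.sum(pores, labels, index=np.arange(1, n + 1))  # pixels per pore
    return 2.0 * np.sqrt(areas / np.pi) * pixel_size_um            # equivalent circular diameter
```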

Design/methodology/approach

Here the development of the photogrammetric method is discussed and compared to standard techniques including scanning electron microscopy, micro computed tomography scanning and the recently developed focus variation (FV) imaging. The system is also validated against test graticules and simple wire geometries of known size, prior to the more complex orthopaedic structures.

Findings

The photogrammetric method shows an ability to analyse the variability in build fidelity of AM porous structures for inspection purposes and for comparing component properties. While measured values for material thickness and pore size differed from those of other techniques, the new photogrammetric technique demonstrated low deviation when repeating measurements and was able to analyse components at a much faster rate and lower cost than the competing systems, with less requirement for specific expertise or training.

Originality/value

The advantages demonstrated by the image-based technique indicate that the system is suitable for implementation as a means of in-line process control for quality and inspection applications, particularly for high-volume production where existing methods would be impractical.

Details

Rapid Prototyping Journal, vol. 24 no. 8
Type: Research Article
ISSN: 1355-2546

Open Access
Article
Publication date: 25 February 2020

Zsolt Tibor Kosztyán, Tibor Csizmadia, Zoltán Kovács and István Mihálcz

Abstract

Purpose

The purpose of this paper is to generalize traditional risk evaluation methods and to specify a multi-level risk evaluation framework, in order to prepare customized risk evaluations and to enable the effective integration of the elements of risk evaluation.

Design/methodology/approach

A real case study of an electric motor manufacturing company is presented to illustrate the advantages of this new framework compared to the traditional and fuzzy failure mode and effect analysis (FMEA) approaches.
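
For readers unfamiliar with the baseline being compared against, a traditional FMEA ranks failure modes by a risk priority number (RPN = severity x occurrence x detection). The sketch below uses hypothetical failure modes and ratings and is not the TREF itself.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int    # 1-10
    occurrence: int  # 1-10
    detection: int   # 1-10 (10 = hardest to detect)

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

# hypothetical failure modes for an electric motor line
modes = [FailureMode("winding short circuit", 8, 3, 4),
         FailureMode("bearing wear", 5, 6, 3)]
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.name}: RPN = {m.rpn}")
```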

Findings

The essence of the proposed total risk evaluation framework (TREF) is its flexible approach that enables the effective integration of firms’ individual requirements by developing tailor-made organizational risk evaluation.

Originality/value

Increasing product/service complexity has led to increasingly complex yet unique organizational operations; as a result, their risk evaluation is a very challenging task. Distinct structures, characteristics and processes within and between organizations require a flexible yet robust approach to evaluating risks efficiently. Most recent risk evaluation approaches are considered inadequate due to their lack of flexibility and a structure inappropriate for addressing unique organizational demands and contextual factors. This paper takes a crucial step toward the customization of risk evaluation to address this challenge effectively.

Details

International Journal of Quality & Reliability Management, vol. 37 no. 4
Type: Research Article
ISSN: 0265-671X

Open Access
Article
Publication date: 4 May 2022

Andreas Edström, Beatrice Nylander, Jonas Molin, Zahra Ahmadi and Patrik Sörqvist

Abstract

Purpose

The service recovery paradox (SRP) is the phenomenon that occurs when the customer satisfaction level after service failure and recovery surpasses the customer satisfaction level achieved with error-free service. The aim of this study was to identify how large the compensation at recovery has to be for customer satisfaction to surpass that of error-free service (i.e. to identify a threshold value for the SRP). The purpose of this is to inform managers how to restore customer satisfaction yet avoid overcompensation.

Design/methodology/approach

The paper covers two studies. Study 1 used the novel approach of asking participants who had experienced a service failure in the hotel industry what amount of money (recovery) would make them more satisfied than in the case of error-free service. Study 2 then tested whether the compensation levels expressed by Study 1 participants were sufficient for the service recovery paradox to occur.

Findings

Study 1 indicated that the threshold for the SRP was (on average) around 1,204 SEK, or just over 80% of the original room reservation price of 1,500 SEK (approx. $180). Study 2 found that, on average, the customer satisfaction of participants who received 1,204 SEK in compensation for a service failure marked the point at which it surpassed that of error-free service. Participants who received 633 SEK were less satisfied; participants who received 1,774 SEK were more satisfied.

Research limitations/implications

The findings are context-specific. Future research should test the findings' generalizability.

Practical implications

The approach used in this paper could provide managers with a tool to guide their service recovery efforts. The findings could help hotel managers to make strategic decisions to restore customer satisfaction yet avoid overcompensation, given a legitimate service failure in which the organization is at fault.

Originality/value

Numerous previous studies have investigated the occurrence or absence of the SRP at predetermined compensation levels. This paper used a novel approach to find a quantitative threshold at which the magnitude of the recovery effort makes customer satisfaction surpass that of error-free service.

Details

Journal of Service Theory and Practice, vol. 32 no. 7
Type: Research Article
ISSN: 2055-6225

Open Access
Article
Publication date: 30 July 2020

Alaa Tharwat

Abstract

Classification techniques have been applied to many applications in various fields of science. There are several ways of evaluating classification algorithms, and such measures and their significance must be interpreted correctly when evaluating different learning algorithms. Most of these measures are scalar metrics and some of them are graphical methods. This paper introduces a detailed overview of the classification assessment measures, with the aim of providing the basics of these measures and showing how they work, to serve as a comprehensive source for researchers who are interested in this field. The overview starts by highlighting the definition of the confusion matrix in binary and multi-class classification problems. Many classification measures are then explained in detail, and the influence of balanced and imbalanced data on each metric is presented. An illustrative example is introduced to show (1) how to calculate these measures in binary and multi-class classification problems, and (2) the robustness of some measures against balanced and imbalanced data. Moreover, graphical measures such as receiver operating characteristic (ROC), precision-recall (PR) and detection error trade-off (DET) curves are presented in detail. Additionally, in a step-by-step approach, different numerical examples are demonstrated to explain the preprocessing steps of plotting ROC, PR and DET curves.
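
As a small worked sketch of the scalar measures derived from a binary confusion matrix (an illustration consistent with, but not taken from, the paper):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Scalar assessment measures derived from a binary confusion matrix."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0          # sensitivity / true positive rate
    specificity = tn / (tn + fp) if tn + fp else 0.0     # true negative rate
    accuracy = (tp + tn) / y_true.size
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return dict(accuracy=accuracy, precision=precision, recall=recall,
                specificity=specificity, f1=f1)

print(binary_metrics([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 0, 1]))
```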

Details

Applied Computing and Informatics, vol. 17 no. 1
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 25 April 2024

Da Huo, Rihui Ouyang, Aidi Tang, Wenjia Gu and Zhongyuan Liu

Abstract

Purpose

This paper delves into cross-border E-business, unraveling its intricate dynamics and forecasting its future trajectory.

Design/methodology/approach

This paper projects the prospective market size of cross-border E-business in China for the year 2023 using the GM (1,1) gray forecasting model. Furthermore, to enhance the analysis, the paper attempts to simulate and forecast the size of China’s cross-border E-business sector using the GM (1,3) gray model. This extended model considers not only the historical trends of cross-border E-business but also the growth patterns of GDP and the digital economy.
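
A minimal sketch of a standard GM (1,1) grey forecast is shown below for readers unfamiliar with the model; the figures fed to it are placeholders, and the GM (1,3) variant driven by GDP and the digital economy is not reproduced here.

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Fit a GM(1,1) grey model to series x0 and forecast `steps` further values."""
    x0 = np.asarray(x0, dtype=float)
    n = x0.size
    x1 = np.cumsum(x0)                                   # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
    B = np.column_stack((-z1, np.ones(n - 1)))
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # developing coefficient, grey input
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # time-response function
    x0_hat = np.concatenate(([x1_hat[0]], np.diff(x1_hat)))
    return x0_hat[n:]                                    # out-of-sample forecast

# toy usage with made-up annual market sizes (billion RMB)
print(gm11_forecast([10500, 12500, 14200, 15700, 16900], steps=1))
```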

Findings

The forecast indicates a market size of 18,760 to 18,934 billion RMB in 2023, aligning with the consistent growth observed in previous years. This suggests a sustained positive trajectory for cross-border E-business.

Originality/value

Cross-border e-commerce critically shapes China’s global integration and traditional industry development. The research in this paper provides insights beyond statistical trends, contributing to a nuanced understanding of the pivotal role played by cross-border e-commerce in shaping China’s economic future.

Details

Journal of Internet and Digital Economics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2752-6356

Open Access
Article
Publication date: 5 December 2022

Giuseppe Nicolò, Diana Ferullo, Natalia Aversano and Nadia Ardito

Abstract

Purpose

The present study aims to extend the knowledge of intellectual capital disclosure (ICD) practices in the Italian Healthcare Organisations (HCOs) context. The ultimate goal of the study is to provide fresh insight into the possible explanatory factors that may drive the extent of ICD provided by Italian HCOs via the web.

Design/methodology/approach

The present study applies a manual content analysis on the websites of a sample of 158 HCOs to determine the level of voluntary ICD. A multivariate regression model is estimated to test the association between different variables – size, gender diversity in top governance positions, financial performance and indebtedness – and the level of ICD provided by sampled HCOs through their official websites.
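
A minimal sketch of the kind of multivariate model described above, using simulated placeholder data and a statsmodels-style OLS fit (an assumption; this is not the authors' estimation code):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 158                                   # number of sampled HCOs
size = rng.normal(size=n)                 # e.g. log of total assets (placeholder)
female_gm = rng.integers(0, 2, size=n)    # female General Manager dummy (placeholder)
performance = rng.normal(size=n)          # financial performance (placeholder)
indebtedness = rng.normal(size=n)         # indebtedness (placeholder)
icd = rng.normal(size=n)                  # ICD score from content analysis (placeholder)

X = sm.add_constant(np.column_stack((size, female_gm, performance, indebtedness)))
print(sm.OLS(icd, X).fit().summary())
```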

Findings

Content analysis results reveal that – in the absence of mandatory requirements – Italian HCOs tend to use websites to disclose information about IC. Particular attention is devoted to Structural and Relational Capital. The statistical analysis pinpoints that size and indebtedness negatively influence the level of ICD. In contrast, the presence of a female General Manager (GM) positively drives ICD. It is also observed that Research and University HCOs and those located in northern Italian regions are particularly prone to discharging accountability on IC through their websites.

Originality/value

To the best of the authors’ knowledge, this is the first study that examines voluntary ICD practices through websites in the Italian HCOs' context. Also, since prior studies on IC in the healthcare context are mainly descriptive or normative, this is the first study examining the potential determinants of ICD provided by HCOs in terms of size, gender diversity in top governance positions, financial performance and indebtedness.

Details

International Journal of Public Sector Management, vol. 36 no. 1
Type: Research Article
ISSN: 0951-3558
