Search results
1 – 10 of 217
Jorge Arteaga-Fonseca, Yi (Elaine) Zhang and Per Bylund
Abstract
Purpose
In this paper, the authors suggest that Central Americans can use entrepreneurship to solve economic uncertainty in their home country and that entrepreneurship can contribute to reducing the number of undocumented migrants to the USA.
Design/methodology/approach
First, the authors illustrate the context of Central American illegal migration to the USA from a transitional entrepreneurship perspective and address the economic drivers of illegal migration from Central America, which result in marginalization in the USA. Second, the authors build a theoretical model suggesting that Central Americans can improve their entrepreneurial abilities through the entrepreneurial cognitive adjustment mechanism.
Findings
Central Americans at risk of illegally migrating to the USA have high entrepreneurial aptitudes. Entrepreneurship can help them avoid the economic uncertainty that drives them to migrate illegally to the USA and become part of a marginalized community of undocumented immigrants. This conceptual paper introduces an entrepreneurial cognitive adjustment mechanism as a tool for Central Americans to reshape their personalities and increase their entrepreneurial abilities in their home countries. In particular, entrepreneurial intentions reshape individuals' personality characteristics (toward high agreeableness and openness to experience, as well as low neuroticism) through the entrepreneurial cognitive adjustment mechanism, which consists of reflective action in sensemaking, cognitive frameworks in pattern recognition and coping in positive affect.
Originality/value
This paper studies Central Americans at risk of illegal migration through the lens of transitional entrepreneurship, which advances the understanding of the antecedents of marginalized immigrant communities in the USA and suggests a possible solution for this phenomenon. In addition, the authors build a cognitive mechanism to facilitate the transitional process, starting from entrepreneurial intention and moving to reshaping individuals' personalities, which further opens individuals' minds to entrepreneurial opportunities. Since entrepreneurial intention applies in the same way to all entrepreneurs, the authors' construction of the entrepreneurial intention unfolding process goes beyond transitional entrepreneurship and contributes to intention-action knowledge generation (Donaldson et al., 2021). Moreover, the conceptual study contributes to public policy, in that international and local agencies can better utilize resources and implement long-term solutions to the drivers of illegal migration from Central America to the USA.
Abstract
Purpose
For many pattern recognition problems, the relation between the sample vectors and the class labels is known during the data acquisition procedure. However, finding the useful rules or knowledge hidden in the data is very important and challenging. Rule extraction methods are very useful in mining the important and heuristic knowledge hidden in the original high-dimensional data. They can help construct predictive models with few attributes of the data, providing valuable model interpretability and shorter training times.
Design/methodology/approach
In this paper, a novel rule extraction method with the application of biclustering algorithm is proposed.
Findings
To choose the most significant biclusters from the huge number of detected biclusters, a specially modified information entropy calculation method is also provided. It is shown that all of the important knowledge is, in practice, hidden in these biclusters.
Originality/value
The novelty of the new method lies in the fact that the detected biclusters can be conveniently translated into if-then rules. This provides an intuitively explainable and comprehensible approach to extracting rules from high-dimensional data while keeping high classification accuracy.
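To illustrate how a class-pure bicluster can be translated into an if-then rule and scored by entropy, here is a minimal sketch. The toy data, the hand-picked bicluster and the plain Shannon entropy below are invented for the example; the paper's biclustering algorithm and its modified entropy calculation are not reproduced here.

```python
import math
from collections import Counter

def bicluster_entropy(labels, rows):
    """Shannon entropy of the class labels inside a bicluster (lower = purer)."""
    counts = Counter(labels[r] for r in rows)
    n = sum(counts.values())
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

def bicluster_to_rule(data, labels, rows, cols):
    """Translate a bicluster into an if-then rule: if each attribute j lies
    in its observed range over the bicluster, predict the majority class."""
    conditions = {j: (min(data[r][j] for r in rows),
                      max(data[r][j] for r in rows)) for j in cols}
    majority = Counter(labels[r] for r in rows).most_common(1)[0][0]
    return conditions, majority

# Invented toy data: 4 samples, 3 attributes, two classes.
data = [[1.0, 5.0, 0.2],
        [1.1, 5.2, 0.9],
        [3.0, 5.1, 0.8],
        [1.2, 4.9, 0.3]]
labels = ["A", "A", "B", "A"]

rows, cols = [0, 1, 3], [0, 1]  # a hand-picked "detected" bicluster
rule, klass = bicluster_to_rule(data, labels, rows, cols)
print(rule, "->", klass)                # attribute ranges implying class "A"
print(bicluster_entropy(labels, rows))  # 0.0, i.e. the bicluster is class-pure
```

Picking the biclusters with the lowest label entropy is one simple way to select rule candidates; the paper's selection criterion is a modified version of this idea.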
Xiaodong Zhang, Ping Li, Xiaoning Ma and Yanjun Liu
Abstract
Purpose
The operating wagon records were produced by distinct railway information systems, with the result that routing records for wagons with the same origin-destination (OD) pair differed. This phenomenon has brought considerable difficulties to railway wagon flow forecasting. Some discrepancies were because of poor data quality, which misled the actual prediction, while others reflected the existence of other actual wagon routings. This paper aims at finding all the wagon routing locus patterns in the historical records and thus puts forward an intelligent recognition method for the actual routing locus pattern of railway wagon flow based on the SST algorithm.
Design/methodology/approach
Based on the big data of railway wagon flow records, the routing metadata model is constructed, and the historical data and real-time data are fused to improve the reliability of the path forecast results in railway wagon flow forecasting. Based on the division of spatial characteristics and dimension reduction at the distributary stations, the improved Simhash algorithm is used to calculate the routing fingerprint. Combined with the Squared Error Adjacency Matrix Clustering algorithm and the Tarjan algorithm, the fingerprint similarity is calculated, the spatial characteristics are clustered and identified, the routing locus pattern is formed and the intelligent recognition of the actual wagon flow routing locus is thus realized.
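The fingerprinting step can be illustrated with a plain Simhash sketch: similar routing loci (station sequences) tend to receive fingerprints with a small Hamming distance, which the clustering step can then group. The station names below are invented, and the paper's improved Simhash, the Squared Error Adjacency Matrix Clustering step and the Tarjan step are not reproduced.

```python
import hashlib

def simhash(tokens, bits=64):
    """Plain Simhash: each token votes on every bit of the fingerprint."""
    votes = [0] * bits
    for tok in tokens:
        h = int(hashlib.md5(tok.encode("utf-8")).hexdigest(), 16)
        for i in range(bits):
            votes[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if votes[i] > 0)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Hypothetical routing loci given as station sequences.
route1 = ["Beijing", "Tianjin", "Jinan", "Nanjing", "Shanghai"]
route2 = ["Beijing", "Tianjin", "Jinan", "Hefei", "Shanghai"]  # one station differs
route3 = ["Harbin", "Shenyang", "Dalian"]                      # unrelated locus

f1, f2, f3 = simhash(route1), simhash(route2), simhash(route3)
# Near-duplicate routings tend to get close fingerprints (small Hamming
# distance); unrelated routings tend to land far apart.
print(hamming(f1, f2), hamming(f1, f3))
```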
Findings
This paper puts forward a more realistic railway wagon routing pattern recognition algorithm. The traditional railway wagon routing planning problem is converted into a routing locus pattern recognition problem, and the wagon routing patterns of all OD streams are mined from the historical data. The analysis is carried out from three aspects: routing metadata, routing locus fingerprint and routing locus pattern. Then, the intelligent SST-based recognition algorithm for railway wagon routing locus patterns is proposed, which combines historical and real-time data to improve the reliability of the wagon routing selection result. Finally, the railway wagon routing locus can be found accurately, and a case study tests the validity of the algorithm.
Practical implications
Before forecasting railway wagon flow, one needs to know how many kinds of wagon routing loci exist for a certain OD pair. Mining all the OD routing locus patterns from the railway wagon operating records helps forecast the future routing in combination with the wagon characteristics. The work of this paper is the basis of the railway wagon routing forecast.
Originality/value
As the basis of the railway wagon routing forecast, this research not only improves the accuracy and efficiency of the railway wagon routing forecast but also provides further decision-making support for the railway freight transportation organization.
M'hamed Bilal Abidine, Mourad Oussalah, Belkacem Fergani and Hakim Lounis
Abstract
Purpose
Mobile phone-based human activity recognition (HAR) consists of inferring the user's activity type from the analysis of inertial mobile sensor data. This paper mainly aims to introduce a new classification approach called adaptive k-nearest neighbors (AKNN) for intelligent HAR using smartphone inertial sensors, with a potential real-time implementation on the smartphone platform.
Design/methodology/approach
The proposed method puts forward several modifications to the KNN baseline by using kernel discriminant analysis for feature reduction and by hybridizing weighted support vector machines and KNN to tackle imbalanced class data sets.
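As a rough sketch of the imbalance-aware nearest-neighbour idea, the toy below uses a distance-weighted KNN vote reweighted by inverse class frequency. This is a hypothetical stand-in for illustration only, not the paper's KDA plus weighted-SVM/KNN hybrid; the activity labels and feature vectors are invented.

```python
import math
from collections import Counter, defaultdict

def weighted_knn_predict(X, y, query, k=3):
    """Distance-weighted KNN vote, with each class's votes divided by its
    frequency so that rare classes are not drowned out by frequent ones."""
    freq = Counter(y)
    nearest = sorted((math.dist(x, query), label) for x, label in zip(X, y))[:k]
    votes = defaultdict(float)
    for d, label in nearest:
        votes[label] += (1.0 / (d + 1e-9)) / freq[label]
    return max(votes, key=votes.get)

# Imbalanced toy set: many "walk" samples, few "run" samples.
X = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.25), (0.9, 0.8), (1.0, 0.9), (0.12, 0.18)]
y = ["walk", "walk", "walk", "run", "run", "walk"]
print(weighted_knn_predict(X, y, (0.95, 0.85)))  # prints "run"
```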
Findings
Extensive experiments on five large-scale daily activity recognition data sets have been performed to demonstrate the effectiveness of the method in terms of error rate, recall, precision, F1-score and computational/memory resources, with several comparisons with state-of-the-art methods and other hybridization modes. The results show that the proposed method achieves more than 50% improvement in the error rate metric and up to 5.6% in F1-score. The training phase is also shown to be reduced by a factor of six compared to the baseline, which provides solid assets for smartphone implementation.
Practical implications
This work builds a bridge to the already growing body of machine learning work on learning with small data sets. Besides, the availability of systems able to perform on-the-fly activity recognition on smartphones will have a significant impact in the field of pervasive health care, supporting a variety of practical applications such as elderly care, ambient assisted living and remote monitoring.
Originality/value
The purpose of this study is to build and test an accurate offline model by using only compact training data that can reduce the computational and memory complexity of the system. This provides grounds for developing new innovative hybridization modes in the context of daily activity recognition and smartphone-based implementation. This study demonstrates that the new AKNN is able to classify the data without any training step because it does not fit a model and only uses memory resources to store the corresponding support vectors.
Abstract
Purpose
The authors aim to develop a conceptual framework for longitudinal estimation of stress-related states in the wild (IW), based on the machine learning (ML) algorithms that use physiological and non-physiological bio-sensor data.
Design/methodology/approach
The authors propose a conceptual framework for longitudinal estimation of stress-related states consisting of four blocks: (1) identification; (2) validation; (3) measurement and (4) visualization. The authors implement each step of the proposed conceptual framework, using the example of Gaussian mixture model (GMM) and K-means algorithm. These ML algorithms are trained on the data of 18 workers from the public administration sector who wore biometric devices for about two months.
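The identification step can be illustrated with a plain two-cluster K-means split of a single physiological feature. The heart-rate values below are invented for the example, and the GMM variant and cross-validation used in the study are omitted.

```python
import random

def kmeans(points, k=2, iters=50, seed=0):
    """Plain 1-D K-means; here used to split sensor readings into a
    low-arousal and a high-arousal cluster, as in the identification step."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest current center.
            clusters[min(range(k), key=lambda i: abs(p - centers[i]))].append(p)
        # Recompute each center as the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Hypothetical heart-rate samples (bpm): a calm group and a stressed group.
hr = [62, 64, 66, 65, 63, 95, 98, 102, 97, 100]
low, high = kmeans(hr, k=2)
print(round(low), round(high))  # cluster centers of the two arousal states
```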
Findings
The authors confirm the convergent validity of the proposed conceptual framework IW. Empirical data analysis suggests that two-cluster models achieve five-fold cross-validation accuracy exceeding 70% in identifying stress. Accuracy decreases to around 45% for three-cluster models. The authors conclude that identification models may serve to derive longitudinal stress-related measures.
Research limitations/implications
The proposed conceptual framework may guide researchers in creating validated stress-related indicators. At the same time, physiological sensing of stress through identification models is limited because of subject-specific reactions to stressors.
Practical implications
Longitudinal stress indicators allow estimating the long-term impact of the external environment on stress-related states. Such stress-related indicators can become an integral part of mobile/web/computer applications supporting stress management programs.
Social implications
Timely identification of excessive stress may improve individual well-being and prevent the development of stress-related diseases.
Originality/value
The study develops a novel conceptual framework for longitudinal estimation of stress-related states using physiological and non-physiological bio-sensor data, given that scientific knowledge on validated longitudinal indicators of stress is in an emergent state.
Abstract
Purpose
Image segmentation is one of the most essential tasks in image processing applications. It is a valuable tool in many application domains such as health-care systems, pattern recognition, traffic control and surveillance systems. However, accurate segmentation is a critical task, since finding a correct model that fits different types of image processing applications is a persistent problem. This paper develops a novel segmentation model that aims to be a unified model for any kind of image processing application. The proposed precise and parallel segmentation model (PPSM) combines three benchmark distribution thresholding techniques (Gaussian, lognormal and gamma) to estimate an optimum threshold value that leads to optimum extraction of the segmented region. Moreover, a parallel boosting algorithm is proposed to improve the performance of the developed segmentation algorithm and minimize its computational cost. To evaluate the effectiveness of the proposed PPSM, different benchmark data sets for image segmentation are used, such as Planet Hunters 2 (PH2), the International Skin Imaging Collaboration (ISIC), Microsoft Research in Cambridge (MSRC), the Berkeley Segmentation Data Set (BSDS) and Common Objects in Context (COCO). The obtained results indicate the efficacy of the proposed model in achieving high accuracy with significant processing-time reduction compared to other segmentation models, across different types and fields of benchmarking data sets.
Design/methodology/approach
The proposed PPSM combines three benchmark distribution thresholding techniques (Gaussian, lognormal and gamma) to estimate an optimum threshold value that leads to optimum extraction of the segmented region.
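A minimal sketch of the minimum cross-entropy thresholding criterion (the MCET idea the PPSM builds on) over a toy grey-level histogram follows. The histogram is invented, and the Gaussian/lognormal/gamma variants and the parallel boosting of the actual PPSM are not reproduced.

```python
import math

def mcet_threshold(hist):
    """Minimum cross-entropy threshold over a grey-level histogram:
    pick t minimizing -m1*log(mu1) - m2*log(mu2), where m1, m2 are the
    first moments and mu1, mu2 the means of the two sides of the split."""
    levels = range(1, len(hist))  # skip level 0 so log() stays defined
    best_t, best_cost = None, float("inf")
    for t in range(2, len(hist)):
        lo = [(i, hist[i]) for i in levels if i < t and hist[i]]
        hi = [(i, hist[i]) for i in levels if i >= t and hist[i]]
        if not lo or not hi:
            continue
        mu1 = sum(i * h for i, h in lo) / sum(h for _, h in lo)
        mu2 = sum(i * h for i, h in hi) / sum(h for _, h in hi)
        cost = -(sum(i * h for i, h in lo) * math.log(mu1)
                 + sum(i * h for i, h in hi) * math.log(mu2))
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

# Bimodal toy histogram: background around level 3, object around level 12.
hist = [0] * 16
for lvl, cnt in [(2, 40), (3, 60), (4, 35), (11, 30), (12, 50), (13, 25)]:
    hist[lvl] = cnt
t = mcet_threshold(hist)
print(t)  # a threshold that falls between the two modes
```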
Findings
On the basis of the achieved results, the proposed PPSM-minimum cross-entropy thresholding (PPSM-MCET)-based segmentation model is observed to be a robust, accurate and highly consistent method with high performance.
Originality/value
A novel hybrid segmentation model is constructed by exploiting a combination of Gaussian, gamma and lognormal distributions using MCET. Moreover, to provide accurate and high-performance thresholding with minimum computational cost, the proposed PPSM uses a parallel processing method to minimize the computational effort of MCET computation. The proposed model might be used as a valuable tool in many application domains such as health-care systems, pattern recognition, traffic control and surveillance systems.
Xisto L. Travassos, Sérgio L. Avila and Nathan Ida
Abstract
Ground Penetrating Radar (GPR) is a multidisciplinary nondestructive evaluation technique that requires knowledge of electromagnetic wave propagation, material properties and antenna theory. Under some circumstances, this tool may require auxiliary algorithms to improve the interpretation of the collected data. Detection, location and definition of the target's geometrical and physical properties with a low false-alarm rate are the objectives of these signal post-processing methods. Basic approaches focus on the first two objectives, while more robust and complex techniques deal with all objectives at once. This work reviews the use of artificial neural networks and machine learning for data interpretation of GPR surveys. We show that these computational techniques have advanced GPR from locating and testing to imaging and diagnosis approaches.
David Holger Schmidt, Dirk van Dierendonck and Ulrike Weber
Abstract
Purpose
This study focuses on leadership in organizations where big data analytics (BDA) is an essential component of corporate strategy. While leadership researchers have conducted promising studies in the field of digital transformation, the impact of BDA on leadership is still unexplored.
Design/methodology/approach
This study is based on semi-structured interviews with 33 organizational leaders and subject-matter experts from various industries. Using a grounded theory approach, a framework is provided for the emergent field of BDA in leadership research.
Findings
The authors present a conceptual model comprising foundational competencies and higher-order roles: data analytical skills, data self-efficacy, problem spotter, influencer, knowledge facilitator, visionary and team leader.
Research limitations/implications
This study focuses on BDA competency research emerging as an intersection between leadership research and information systems research. The authors encourage a longitudinal study to validate the findings.
Practical implications
The authors provide a competency framework for organizational leaders. It serves as a guideline for leaders to best support the BDA initiatives of the organization. The competency framework can support recruiting, selection and leader promotion.
Originality/value
This study provides a novel BDA leadership competency framework with a unique combination of competencies and higher order roles.
Xiaomei Jiang, Shuo Wang, Wenjian Liu and Yun Yang
Abstract
Purpose
Traditional Chinese medicine (TCM) prescriptions have always relied on the experience of TCM doctors, and machine learning (ML) provides a technical means for learning these experiences and intelligently assisting in prescribing. However, a TCM prescription combines the main (Jun) herb with auxiliary (Chen, Zuo and Shi) herbs. In a prescription, there are often more types of auxiliary herbs than main herbs, and the auxiliary herbs often appear in other prescriptions as well. This leads to different frequencies of different herbs across prescriptions, namely imbalanced labels (herbs). As a result, existing ML algorithms are biased toward frequent herbs, so the less frequent main herb is difficult to predict and performance is poor. To address this problem, this paper proposes a framework for multi-label traditional Chinese medicine (ML-TCM) based on multi-label resampling.
Design/methodology/approach
In this work, a multi-label learning framework is proposed that adopts and compares three multi-label oversampling techniques to rebalance the TCM data: multi-label random resampling (MLROS), multi-label synthesized resampling (MLSMOTE) and multi-label synthesized resampling based on local label imbalance (MLSOL).
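The random-resampling flavour can be sketched as follows: clone samples carrying minority labels until each label reaches a target fraction of the most frequent label's count. The herb names, feature vectors and the 50% target ratio are invented for illustration; this MLROS-style sketch does not reproduce MLSMOTE or MLSOL, which synthesize new samples rather than clone existing ones.

```python
import random
from collections import Counter

def ml_random_oversample(X, Y, target_ratio=0.5, seed=0):
    """Clone samples carrying minority labels until every label has at
    least target_ratio times the count of the most frequent label."""
    rng = random.Random(seed)
    X, Y = list(X), [set(ls) for ls in Y]  # work on copies
    counts = Counter(l for ls in Y for l in ls)
    top = max(counts.values())
    for label in list(counts):
        carriers = [i for i, ls in enumerate(Y) if label in ls]
        while counts[label] < target_ratio * top:
            i = rng.choice(carriers)       # clone a carrier of the rare label
            X.append(X[i]); Y.append(Y[i])
            for l in Y[i]:                 # cloning updates all its labels
                counts[l] += 1
    return X, Y

# Toy prescriptions: "ginseng" is frequent, "rhubarb" is rare.
X = [[1, 0], [0, 1], [1, 1], [0, 0], [1, 2]]
Y = [{"ginseng"}, {"ginseng", "licorice"}, {"ginseng"},
     {"rhubarb"}, {"ginseng", "licorice"}]
X2, Y2 = ml_random_oversample(X, Y)
c = Counter(l for ls in Y2 for l in ls)
print(c)  # "rhubarb" is now duplicated up to half the top label's count
```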
Findings
The experimental results show that, after resampling, the less frequent but important herbs can be predicted more accurately. The MLSOL method is shown to be the best, with over 10% improvement on average, because it balances the data by considering both features and labels when resampling.
Originality/value
The authors first systematically analyze the label imbalance problem of different sampling methods in the field of TCM and provide a solution. Through analysis of the experimental results, the authors prove the feasibility of this method, which can improve performance by 10%-30% compared with state-of-the-art methods.