Kyle Dillon Feuz and Diane J. Cook
Abstract
Purpose
The purpose of this paper is to study heterogeneous transfer learning for activity recognition using heuristic search techniques. Many pervasive computing applications require information about the activities currently being performed, but activity recognition algorithms typically require substantial amounts of labeled training data for each setting. One solution to this problem is to leverage transfer learning techniques to reuse available labeled data in new situations.
Design/methodology/approach
This paper introduces three novel heterogeneous transfer learning techniques that reverse the typical transfer model by mapping the target feature space to the source feature space, and applies them to activity recognition in a smart apartment. The techniques are evaluated on data from 18 different smart apartments located in an assisted-care facility, and the results are compared against several baselines.
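As a toy illustration of what mapping a target feature space onto a source feature space can look like, the sketch below greedily pairs each target sensor with the source sensor of closest activation frequency. This heuristic and all sensor names are hypothetical, not the authors' three techniques.

```python
# Hedged sketch: map target features to source features by activation
# frequency, so a classifier trained on source data can read target events.
# The greedy matching rule and the example frequencies are invented.

def learn_mapping(source_freq, target_freq):
    """Greedily map each target feature to the closest-frequency source feature."""
    mapping = {}
    available = dict(source_freq)
    for t, f in sorted(target_freq.items(), key=lambda kv: -kv[1]):
        best = min(available, key=lambda s: abs(available[s] - f))
        mapping[t] = best
        del available[best]  # each source feature is used at most once
    return mapping

# Hypothetical per-sensor activation frequencies in two apartments
source_freq = {"s_kitchen": 0.40, "s_bed": 0.35, "s_door": 0.25}
target_freq = {"t_A": 0.38, "t_B": 0.24, "t_C": 0.33}
mapping = learn_mapping(source_freq, target_freq)
```

The real techniques search over such mappings heuristically; this greedy pass only shows the shape of the problem.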
Findings
The three transfer learning techniques are all able to outperform the baseline comparisons in several situations. Furthermore, the techniques are successfully used in an ensemble approach to achieve even higher levels of accuracy.
Originality/value
The techniques in this paper represent a considerable step forward in heterogeneous transfer learning by removing the need to rely on instance-to-instance or feature-to-feature co-occurrence data.
Annikki Roos and Turid Hedlund
Abstract
Purpose
The purpose of this paper is to analyze the information practices of researchers in biomedicine using the domain analytical approach.
Design/methodology/approach
The domain analytical research approach used in this study of the scientific domain of biomedicine leads to studies of the organization of the sciences. Using Whitley's dimensions of "mutual dependence" and "task uncertainty" in scientific work as a starting point, the authors were able to reanalyze previously collected data. By opening up these concepts in the context of biomedical research work, the authors analyzed the distinguishing features of the biomedical domain and the ways these features affect researchers' information practices.
Findings
Several indicators representing "task uncertainty" and "mutual dependence" in the scientific domain of biomedicine were identified. This study supports the view that in biomedicine task uncertainty is low and researchers are highly mutually dependent. Intense competition appears to be one feature behind the explosion of data and publications in this domain, and this in turn directly shapes the ways information is searched, followed, used and produced. The need for new, easy-to-use services or tools for searching and following information on so-called "hot" topics became apparent.
Originality/value
The study highlights new information about information practices in the biomedical domain. Whitley's theory enabled a thorough analysis of the cultural and social nature of the biomedical domain and proved useful in the examination of researchers' information practices.
Abstract
Purpose
This paper aims to introduce a new phenomenon related to creative motivation called creative resolve response (CRR). CRR predicts how creative motivation will vary during problem solving.
Design/methodology/approach
In total, 66 MBA students were asked to respond at random intervals during different class problem-solving activities. Participants rated, on two preset scales, their perceived certainty of solving the problem successfully and the level of creativity required. Mean creativity-required responses were calculated for subgroups with different cognitive style ranges at each outcome certainty level. Significant differences between means were assessed using t-tests.
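The subgroup comparison above can be sketched with a hand-rolled two-sample t statistic (Welch's unequal-variance form is used here; the paper does not specify which variant, and the ratings below are invented, not the study's data):

```python
# Hedged sketch of comparing mean "creativity required" ratings between two
# outcome-certainty subgroups. All numbers are illustrative.
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / sqrt(va / na + vb / nb)

# Hypothetical ratings at two outcome-certainty levels
low_certainty = [4.1, 3.8, 4.5, 4.2, 3.9]
mid_certainty = [2.9, 3.1, 2.7, 3.3, 3.0]
t = welch_t(low_certainty, mid_certainty)
```

A large |t| (here, near 7) would indicate a significant difference in mean creativity required between the two certainty levels.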
Findings
The results suggest that creative motivation varies systematically, in a wax-wane-wax pattern, as a problem solver's perception of problem-solving progress increases.
Research limitations/implications
Post hoc analysis suggested that potentially confounding effects related to problem heterogeneity, learning effects, environment, group interaction and interviewer response bias were not significant. However, the relatively small sample size and limited scope of the problem activities suggest that further research is required to establish the extent to which the findings can be generalised.
Practical implications
CRR promises a new form of extrinsic control for managers to enhance creativity via extrinsic motivation. The author makes suggestions on how managers may enhance creativity by influencing employees to reconsider their perceived level of problem‐solving progress.
Originality/value
This paper links expectancy theory, cognitive style and creative motivation, and provides an alternative approach to trying directly to motivate employees to be more creative.
Hao He, Dongfang Yang, Shicheng Wang, Shuyang Wang and Xing Liu
Abstract
Purpose
The purpose of this paper is to study the road segmentation problem of cross-modal remote sensing images.
Design/methodology/approach
First, a baseline network based on the U-net is trained on a large-scale dataset of remote sensing imagery. Then, the cross-modal training data are used to fine-tune the first two convolutional layers of the pre-trained network, adapting it to the local features of the cross-modal data. For cross-modal data from a different band, an autoencoder is designed to achieve data conversion and local feature extraction.
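The core of the strategy is to freeze most of the pre-trained network and update only the first two (low-level) layers on the cross-modal data. A minimal framework-free sketch, in which the layer names, scalar "weights" and toy gradient step are all illustrative rather than the authors' actual U-net implementation:

```python
# Hedged sketch: apply a gradient step only to the trainable low-level layers,
# leaving the rest of the pre-trained parameters frozen.

def fine_tune_step(params, grads, lr=0.1, trainable=("conv1", "conv2")):
    """Update only the layers named in `trainable`; freeze everything else."""
    return {
        name: w - lr * grads[name] if name in trainable else w
        for name, w in params.items()
    }

# Toy pre-trained parameters (scalars standing in for weight tensors)
params = {"conv1": 1.0, "conv2": 2.0, "conv3": 3.0, "out": 4.0}
grads = {name: 0.5 for name in params}
updated = fine_tune_step(params, grads)
```

In a real framework the same effect is achieved by disabling gradients on the frozen layers before training on the cross-modal set.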
Findings
The experimental results show the effectiveness and practicability of the proposed method, which achieves substantially better metrics than the ordinary method.
Originality/value
The originality lies in the transfer learning strategy of fine-tuning the low-level layers for the cross-modal data application. The proposed method achieves satisfactory road segmentation with a small amount of cross-modal training data, so it has good application value. The idea is also helpful for similar applications involving cross-modal data.
Sihem Cherif, Raoudha Ben Djemaa and Ikram Amous
Abstract
Purpose
This paper aims to propose an approach for the self-adaptation of the Web composition called SAMIWA. The SAMIWA framework helps users during the search, invocation and composition of the appropriate Web service.
Design/methodology/approach
The authors’ approach allows requirements to be expressed by taking into account the potential user’s context in addition to functional requirements.
Findings
In this paper, the authors introduce a new context-aware approach that provides a dynamic adaptation of service compositions.
Originality/value
The authors have implemented a Web application that enables the selection and composition of the most appropriate composite service.
Bilal Abu-Salih, Pornpit Wongthongtham and Chan Yan Kit
Abstract
Purpose
This paper aims to obtain the domain of the textual content generated by users of online social network (OSN) platforms. Understanding a user’s domain(s) of interest is a significant step towards addressing their domain-based trustworthiness through an accurate understanding of their content on OSNs.
Design/methodology/approach
This study uses a Twitter mining approach for domain-based classification of users and their textual content, incorporating machine learning modules. The approach comprises two analysis phases. In the first phase, a time-aware semantic analysis of users’ historical content, incorporating five commonly used machine learning classifiers, classifies users into two main categories: politics-related and non-politics-related. In the second phase, the likelihood predictions obtained in the first phase are used to predict the domain of users’ future tweets.
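A hedged sketch of the two-phase idea: score a user's historical tweets for politics-relatedness, then reuse that user-level likelihood as a prior when classifying a new tweet. The keyword scorer below stands in for the five machine learning classifiers, and all words, weights and thresholds are invented.

```python
# Phase 1: user-level likelihood from history. Phase 2: blend that prior with
# the new tweet's own score. Everything here is illustrative, not the paper's
# actual classifiers or features.

POLITICAL_TERMS = {"election", "senate", "policy", "vote"}

def tweet_score(tweet):
    """Fraction of a tweet's words that are political (stand-in classifier)."""
    words = tweet.lower().split()
    return sum(w in POLITICAL_TERMS for w in words) / max(len(words), 1)

def user_prior(history):
    """Phase 1: historical content analysis collapsed to a mean score."""
    return sum(tweet_score(t) for t in history) / len(history)

def classify_new_tweet(tweet, prior, weight=0.5, threshold=0.3):
    """Phase 2: blend the new tweet's score with the user-level prior."""
    return weight * prior + (1 - weight) * tweet_score(tweet) >= threshold

history = ["vote in the election", "senate policy debate today"]
prior = user_prior(history)
```

The design point is that the prior lets sparse or ambiguous new tweets inherit evidence from the user's history.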
Findings
Experiments were conducted to validate the mechanism proposed in the study framework. They verify the applicability of the framework to effective domain-based classification of Twitter users and their content, as evidenced by strong results on several performance evaluation metrics.
Research limitations/implications
This study is limited to an on/off domain classification for the content of OSNs. The politics domain was selected because of Twitter’s popularity as an opulent source of political deliberations; such data abundance facilitates data aggregation and improves the results of the data analysis. Furthermore, the machine learning approaches implemented here assume that uncertainty and incompleteness do not affect the accuracy of the Twitter classification, although data uncertainty and incompleteness may in fact exist. In future work, the authors will formulate data uncertainty and incompleteness as fuzzy numbers, which can be used to address imprecise, uncertain and vague data.
Practical implications
This study proposes a practical framework comprising significant implications for a variety of business-related applications, such as the voice of customer/voice of market, recommendation systems, the discovery of domain-based influencers and opinion mining through tracking and simulation. In particular, the factual grasp of the domains of interest extracted at the user level or post level enhances the customer-to-business engagement. This contributes to an accurate analysis of customer reviews and opinions to improve brand loyalty, customer service, etc.
Originality/value
This paper fills a gap in the existing literature by presenting a consolidated framework for Twitter mining that addresses the deficiencies of current state-of-the-art approaches to topic distillation and domain discovery. The overall approach is promising in the fortification of Twitter mining towards a better understanding of users’ domains of interest.
Shiva Sumanth Reddy and C. Nandini
Abstract
Purpose
The present research work is carried out to determine haemoprotozoan diseases in cattle and breast cancer in humans at an early stage. A combination of LeNet and a bidirectional long short-term memory (Bi-LSTM) model is used to classify haemoprotozoan samples into three classes: theileriosis, babesiosis and anaplasmosis. BreaKHis dataset image samples are likewise classified into two major classes, malignant and benign. Hyperparameter optimization is used to select the prominent features. The main objective of this approach is to replace the manual identification and classification of samples into different haemoprotozoan diseases in cattle: the traditional laboratory approach is time-consuming and requires human expertise, whereas the proposed methodology helps to identify and classify haemoprotozoan diseases at an early stage without much human involvement.
Design/methodology/approach
The LeNet-based Bi-LSTM model classifies pathology images into babesiosis, anaplasmosis and theileriosis, and breast images into malignant or benign. After the histopathology images are normalized, an optimization-based superpixel clustering algorithm is used for segmentation. The edge information in the normalized images is used to identify irregularly shaped regions that are structurally meaningful. This approach is compared with another segmentation approach, the circular Hough transform (CHT), which is used to separate nuclei from non-nuclei; Canny edge detection and a Gaussian filter are used to extract edges before they are passed to the CHT.
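The CHT step can be illustrated with a minimal voting scheme: each edge pixel votes for every candidate centre whose distance to it rounds to the target radius, and the accumulator peak locates the circle (here, a nucleus). The Gaussian smoothing and Canny edge detection that precede the CHT are omitted, and the synthetic edge points below are invented.

```python
# Hedged sketch of a discrete circular Hough transform on a small grid.
from collections import Counter

def hough_circle_centers(edge_points, radius, grid=21):
    """Accumulate votes over integer candidate centres on a grid x grid image."""
    votes = Counter()
    for (x, y) in edge_points:
        for cx in range(grid):
            for cy in range(grid):
                d = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
                if round(d) == radius:  # edge pixel supports this centre
                    votes[(cx, cy)] += 1
    return votes

# Twelve exact lattice points on a circle of radius 5 centred at (10, 10)
offsets = [(5, 0), (-5, 0), (0, 5), (0, -5),
           (4, 3), (4, -3), (-4, 3), (-4, -3),
           (3, 4), (3, -4), (-3, 4), (-3, -4)]
edge_points = [(10 + dx, 10 + dy) for dx, dy in offsets]
votes = hough_circle_centers(edge_points, radius=5)
center, count = votes.most_common(1)[0]
```

Real implementations vote only along the parametric circle around each edge pixel for efficiency, but the accumulator-peak idea is the same.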
Findings
Existing methods such as the artificial neural network (ANN), convolutional neural network (CNN), recurrent neural network (RNN), LSTM and Bi-LSTM models were compared with the proposed hyperparameter optimization approach using LeNet and Bi-LSTM. The proposed hyperparameter optimization-Bi-LSTM model achieved an accuracy of 98.99%, compared with 95.29% for an ensemble of deep learning models and 95.94% for the modified ReliefF algorithm.
Originality/value
In contrast to earlier research using the modified ReliefF algorithm, the proposed LeNet with Bi-LSTM model significantly improves accuracy, precision and F-score. A real-time dataset is used for the haemoprotozoan disease samples. In addition, for anaplasmosis and babesiosis, a second set of coloured datasets, obtained by adding acetone and stain, was used.
Vishakha Pareek, Santanu Chaudhury and Sanjay Singh
Abstract
Purpose
The electronic nose is an array of chemical or gas sensors coupled with a pattern-recognition framework capable of identifying and classifying odorant or non-odorant and simple or complex gases. Despite more than 30 years of research, robust e-nose devices are still limited. Most of the challenges to reliable e-nose devices are associated with the non-stationary environment and non-stationary sensor behaviour: the data distribution of the sensor array response evolves over time, which is referred to as non-stationarity. The purpose of this paper is to provide a comprehensive introduction to the challenges related to non-stationarity in e-nose design and to review the existing literature from an application, system and algorithm perspective to provide an integrated and practical view.
Design/methodology/approach
The authors discuss non-stationary data in general and the challenges related to a non-stationary environment and non-stationary sensor behaviour in e-nose design. The challenges are categorised and discussed from the perspective of learning with data obtained from sensor systems. E-nose technology is then reviewed from a system, application and algorithmic point of view to discuss its current status.
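To make "non-stationarity" concrete: the sensor response distribution drifts over time, so a learner must detect when recent data no longer match the reference distribution. A minimal, generic drift check (not from the paper) compares window means; real e-nose systems use far richer detectors.

```python
# Hedged sketch: flag drift when the mean of a recent window of sensor
# readings moves beyond a threshold from a reference window. The readings and
# threshold are invented.

def drift_detected(reference, recent, threshold=1.0):
    """Flag drift when the recent window's mean shifts beyond `threshold`."""
    mean_ref = sum(reference) / len(reference)
    mean_new = sum(recent) / len(recent)
    return abs(mean_new - mean_ref) > threshold

baseline = [10.0, 10.2, 9.8, 10.1]   # stable sensor response
drifted = [12.1, 12.3, 11.9, 12.2]   # response after sensor/environment drift
```

On detection, a practical system would recalibrate or adapt the classifier rather than keep trusting the stale model.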
Findings
The discussed challenges in e-nose design will be beneficial for researchers as well as practitioners, as the paper presents a comprehensive view of multiple aspects of non-stationary learning, systems, algorithms and applications for e-noses. The paper reviews the pattern-recognition techniques and public data sets commonly referred to in olfactory research, and presents generic techniques for learning in a non-stationary environment. The authors discuss future directions of research and major open problems related to handling non-stationarity in e-nose design.
Originality/value
The authors are the first to review the existing literature on learning with e-noses in a non-stationary environment together with generic pattern-recognition algorithms for learning in non-stationary environments, bridging the gap between the two. The authors also present details of publicly available sensor array data sets, which will benefit upcoming researchers in this field. They further emphasise several open problems and future directions that should be considered to provide efficient solutions that can handle non-stationarity and make the e-nose the next everyday device.
Rohit Pethe, Thomas Heuzé and Laurent Stainier
Abstract
Purpose
The purpose of this paper is to present a variational mesh h-adaption approach for strongly coupled thermomechanical problems.
Design/methodology/approach
The mesh is adapted by local subdivision controlled by an energy criterion. Thermal and thermomechanical problems are of interest here. In particular, steady and transient purely thermal problems, transient strongly coupled thermoelasticity and thermoplasticity problems are investigated.
Findings
Different test cases are performed to test the robustness of the algorithm for the problems listed above. It is found that better cost-effectiveness can be obtained with this approach than with a uniform refinement procedure. Because the algorithm is based on a set of tolerance parameters, parametric analyses and a study of their respective influence on the mesh adaption are carried out. This detailed analysis is performed on one-dimensional problems, and a final example is provided in two dimensions.
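The adaption loop can be sketched in one dimension: each element whose energy indicator exceeds a tolerance is split in half, and the process repeats until every element passes. The quadratic "energy" function below is illustrative, not the paper's incremental variational functional.

```python
# Hedged 1-D sketch of energy-driven local subdivision (h-adaption).

def refine(elements, energy, tol, max_passes=10):
    """Split every element (a, b) whose energy exceeds tol; repeat to a fixpoint."""
    for _ in range(max_passes):
        new = []
        changed = False
        for (a, b) in elements:
            if energy(a, b) > tol:
                mid = (a + b) / 2
                new.extend([(a, mid), (mid, b)])  # local subdivision
                changed = True
            else:
                new.append((a, b))
        elements = new
        if not changed:
            break
    return elements

# Toy indicator: "energy" proportional to element length squared
energy = lambda a, b: (b - a) ** 2
mesh = refine([(0.0, 1.0)], energy, tol=0.1)
```

Refinement stays local: only elements failing the criterion are split, which is what makes the approach cheaper than uniform refinement.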
Originality/value
This work presents an original approach for independent h-adaption of a mechanical and a thermal mesh in strongly coupled problems, based on an incremental variational formulation. The approach does not rely on (or attempt to provide) error estimation in the classical sense. It could merely be considered to provide an error indicator. Instead, it provides a practical methodology to adapt the mesh on the basis of the variational structure of the underlying mathematical problem.
Bilge Yigit Ozkan, Marco Spruit, Roland Wondolleck and Verónica Burriel Coll
Abstract
Purpose
This paper presents a method for adapting an Information Security Focus Area Maturity (ISFAM) model to the organizational characteristics (OCs) of a small- and medium-sized enterprise (SME) cluster. The purpose of this paper is to provide SMEs with a tailored maturity model enabling them to capture and improve their information security capabilities.
Design/methodology/approach
Design Science Research was followed to design and evaluate the method as a design artifact.
Findings
The method has successfully been used to adapt the ISFAM model to a group of SMEs within a regional cluster resulting in a model that is aligned with the OCs of the cluster. Areas for further investigation and improvements were identified.
Research limitations/implications
The study is based on applying the proposed method to SMEs active in the transport, logistics and packaging sector in the Port of Rotterdam. Future research can focus on different sectors and regions. The method can also be used for adapting other focus area maturity models.
Practical implications
The resulting adapted maturity model can facilitate the creation and further development of a base of common or shared knowledge in the cluster, and can cut the cost of over-implementation of information security capabilities for SMEs with scarce resources.