Search results

1 – 10 of 137
Article
Publication date: 7 August 2017

Shenglan Liu, Muxin Sun, Xiaodong Huang, Wei Wang and Feilong Wang

Abstract

Purpose

Robot vision is fundamental to human–robot interaction and to complex robot tasks. In this paper, the authors use the Kinect sensor and propose feature graph fusion (FGF) for robot recognition.

Design/methodology/approach

The feature fusion utilizes red-green-blue (RGB) and depth information from Kinect to construct a fused feature. FGF uses multi-Jaccard similarity to compute a robust graph and a word-embedding method to enhance the recognition results.

Findings

The authors also collect the DUT RGB-Depth (RGB-D) face data set and a benchmark data set to evaluate the effectiveness and efficiency of the method. The experimental results illustrate that FGF is robust and effective on face and object data sets in robot applications.

Originality/value

The authors first utilize Jaccard similarity to construct a graph of RGB and depth images, which encodes pairwise image similarity. The fused feature of the RGB and depth images is then computed from the Extended Jaccard Graph using a word-embedding method. FGF achieves better performance and efficiency with RGB-D sensors on robots.
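
The abstract only outlines FGF, so the following is merely an illustrative Python sketch of the underlying fused Jaccard-graph idea, not the authors' implementation: image features are assumed to be quantized into visual-word sets, and the fusion weight alpha is hypothetical.

```python
import numpy as np

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two quantized feature sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def build_jaccard_graph(rgb_feats, depth_feats, alpha=0.5):
    """Pairwise similarity matrix fusing RGB and depth channels.

    rgb_feats, depth_feats: lists of feature sets, one per image.
    alpha: hypothetical weight balancing the two modalities.
    """
    n = len(rgb_feats)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            s = alpha * jaccard(rgb_feats[i], rgb_feats[j]) \
                + (1 - alpha) * jaccard(depth_feats[i], depth_feats[j])
            W[i, j] = W[j, i] = s
    return W

# Toy usage: three images, features quantized to visual-word IDs.
rgb = [{1, 2, 3}, {2, 3, 4}, {7, 8}]
depth = [{10, 11}, {10, 12}, {13}]
print(build_jaccard_graph(rgb, depth))
```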

Details

Assembly Automation, vol. 37 no. 3
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 1 July 2014

Byung-Won On, Gyu Sang Choi and Soo-Mok Jung

Abstract

Purpose

The purpose of this paper is to collect and understand the nature of real cases of author name variants that have often appeared in bibliographic digital libraries (DLs) as a case study of the name authority control problem in DLs.

Design/methodology/approach

To find a sample of name variants across DLs (e.g. DBLP and ACM) and in a single DL (e.g. ACM), the approach is based on two bipartite matching algorithms: Maximum Weighted Bipartite Matching and Maximum Cardinality Bipartite Matching.
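
As a rough illustration of maximum weighted bipartite matching for linking name variants across two DLs (not the paper's code; the toy name lists and the string-similarity function are stand-ins):

```python
from difflib import SequenceMatcher
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical toy name lists from two digital libraries.
dblp = ["Soo-Mok Jung", "G. S. Choi", "Byung-Won On"]
acm = ["Byung Won On", "Gyu Sang Choi", "S. Jung"]

# Similarity matrix from a simple string ratio (the paper's actual
# similarity function is not specified here).
sim = np.array([[SequenceMatcher(None, a, b).ratio() for b in acm]
                for a in dblp])

# Maximum weighted bipartite matching = the assignment maximizing
# total similarity (minimize the negated similarity).
rows, cols = linear_sum_assignment(-sim)
for r, c in zip(rows, cols):
    print(f"{dblp[r]!r} <-> {acm[c]!r}  (sim={sim[r, c]:.2f})")
```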

Findings

First, the authors validated the effectiveness and efficiency of the bipartite matching algorithms. The authors also studied the nature of real cases of author name variants that had been found across DLs (e.g. ACM, CiteSeer and DBLP) and in a single DL.

Originality/value

To the best of the authors' knowledge, little research effort has gone into understanding the nature of author name variants in DLs. A thorough analysis can help focus research effort on the real problems that arise when duplicate detection methods are applied.

Details

Program, vol. 48 no. 3
Type: Research Article
ISSN: 0033-0337

Article
Publication date: 1 August 2002

Hassan M. Selim

Abstract

The design of a cellular manufacturing system requires that a machine population be partitioned into machine groups called manufacturing cells. A new graph partitioning heuristic is proposed to solve the manufacturing cell formation problem (MCFP). In the proposed heuristic, the MCFP is represented by a graph whose node set represents the machines and whose edge set carries the machine-pair association weights. A graph partitioning approach is used to form the manufacturing cells. This approach offers improved design flexibility by allowing a variety of design parameters to be controlled during cell formation. The effectiveness of the heuristic is demonstrated by comparing it to two published MCFP solution methods on several problems from the literature.
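
A minimal sketch of this graph-based view of the MCFP, under loud assumptions: Jaccard-style machine-pair association weights derived from a toy machine-part incidence, and off-the-shelf modularity clustering standing in for the paper's own partitioning heuristic.

```python
import networkx as nx
from networkx.algorithms import community

# Toy machine-part incidence: machine -> set of parts it processes.
machines = {
    "M1": {"p1", "p2"}, "M2": {"p1", "p2", "p3"},
    "M3": {"p4", "p5"}, "M4": {"p4", "p5", "p6"},
}

# Edge weights: Jaccard association between machine pairs (one common
# choice; the paper's exact weight definition may differ).
G = nx.Graph()
names = list(machines)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        pa, pb = machines[a], machines[b]
        w = len(pa & pb) / len(pa | pb)
        if w > 0:
            G.add_edge(a, b, weight=w)

# Partition the weighted graph into manufacturing cells.
cells = community.greedy_modularity_communities(G, weight="weight")
for k, cell in enumerate(cells, 1):
    print(f"Cell {k}: {sorted(cell)}")
```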

Details

Industrial Management & Data Systems, vol. 102 no. 6
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 23 March 2021

Ulya Bayram, Runia Roy, Aqil Assalil and Lamia BenHiba

Abstract

Purpose

The COVID-19 pandemic has sparked a remarkable volume of research literature, and scientists increasingly need intelligent tools to cut through the noise and uncover relevant research directions. In response, the authors propose a novel framework: they develop a weighted semantic graph model to compress the research studies efficiently, and they present two analyses on this graph that uncover additional aspects of COVID-19 research.

Design/methodology/approach

The authors construct the semantic graph using state-of-the-art natural language processing (NLP) techniques on COVID-19 publication texts (>100,000 texts). Next, the authors conduct an evolutionary analysis to capture the changes in COVID-19 research across time. Finally, the authors apply a link prediction study to detect novel COVID-19 research directions that are so far undiscovered.
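
A toy sketch of the pipeline's shape only, not the authors' NLP stack: a weighted term co-occurrence graph stands in for the semantic graph, and the Jaccard coefficient stands in for the link prediction model.

```python
from itertools import combinations
import networkx as nx

# Toy stand-ins for publication texts (the study uses >100,000).
docs = [
    "spike protein binds ace2 receptor",
    "ace2 receptor expression in lung tissue",
    "vaccine targets spike protein",
]

# Weighted semantic graph: terms co-occurring in a document get an
# edge whose weight counts co-occurrences.
G = nx.Graph()
for doc in docs:
    terms = set(doc.split())
    for a, b in combinations(sorted(terms), 2):
        w = G.get_edge_data(a, b, default={}).get("weight", 0)
        G.add_edge(a, b, weight=w + 1)

# Link prediction: score absent term pairs; high scores suggest
# plausible but so-far-unstudied connections.
preds = nx.jaccard_coefficient(G)
for u, v, score in sorted(preds, key=lambda t: -t[2])[:5]:
    print(f"{u} -- {v}: {score:.2f}")
```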

Findings

Findings reveal the success of the semantic graph in capturing scientific knowledge and its evolution. Meanwhile, the prediction experiments provide 79% accuracy on returning intelligible links, showing the reliability of the methods for predicting novel connections that could help scientists discover potential new directions.

Originality/value

To the authors' knowledge, this is the first study to propose a holistic framework that encodes scientific knowledge in a semantic graph, demonstrates an evolutionary examination of past and ongoing research, and offers scientists tools to generate new hypotheses and research directions through predictive modeling and deep machine learning techniques.

Details

Online Information Review, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 1 February 2016

Yuxian Eugene Liang and Soe-Tsyr Daphne Yuan

Abstract

Purpose

What makes investors tick? In contrast to the findings of most past research, this study explores the possibility that funding investors invest in companies based on social relationships, which could be positive or negative, similar or dissimilar. The purpose of this paper is to build a social network graph using data from CrunchBase, the largest public database of company profiles. The authors combine social network analysis with the study of investing behavior to explore how similarity between investors and companies affects investing behavior.

Design/methodology/approach

This study crawls and analyzes data from CrunchBase and builds a social network graph that includes people, companies, social links and funding investment links. The problem is then formalized as a link (or relationship) prediction task in a social network, to model and predict (across various machine learning methods and evaluation metrics) whether an investor will create a link to a company. Various link prediction techniques, such as common neighbors, shortest path and the Jaccard coefficient, are integrated to provide a holistic view of the social network and useful insights into how a pair of nodes may be related (i.e. whether an investor will invest in a particular company at a given time).
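
A minimal sketch of this link prediction setup on a toy graph, using networkx for the pairwise features and scikit-learn for the classifier; the feature choices and the sentinel for disconnected pairs are assumptions, not the paper's exact design.

```python
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

def pair_features(G, u, v):
    """Classic link-prediction features for one (investor, company) pair."""
    cn = len(list(nx.common_neighbors(G, u, v)))
    jc = next(nx.jaccard_coefficient(G, [(u, v)]))[2]
    try:
        sp = nx.shortest_path_length(G, u, v)
    except nx.NetworkXNoPath:
        sp = 0  # disconnected pair; 0 as a sentinel
    return [cn, jc, sp]

# Toy graph standing in for the CrunchBase network of people,
# companies and funding links.
G = nx.karate_club_graph()
edges = list(G.edges())[:20]
non_edges = list(nx.non_edges(G))[:20]

# Hide each positive edge while featurizing so the label isn't leaked.
X, y = [], []
for u, v in edges:
    G.remove_edge(u, v)
    X.append(pair_features(G, u, v)); y.append(1)
    G.add_edge(u, v)
for u, v in non_edges:
    X.append(pair_features(G, u, v)); y.append(0)

clf = LogisticRegression().fit(np.array(X), y)
print("train accuracy:", clf.score(np.array(X), y))
```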

Findings

This study finds that funding investors are more likely to invest in a particular company if they have a stronger social relationship in terms of closeness, be it direct or indirect. At the same time, if investors and companies share too many common neighbors, investors are less likely to invest in such companies.

Originality/value

The authors' study is among the first to use data from CrunchBase, the largest public company-profile database, as a social network for research purposes. The authors also identify certain social relationship factors that can help predict investor funding behavior. A prediction strategy based on these factors, modeled as a link prediction problem, generally works well across the most prominent learning algorithms, both in aggregate and within individual industries. In other words, this study encourages companies to focus on social relationship factors, in addition to other factors, when seeking external funding investments.

Details

Internet Research, vol. 26 no. 1
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 28 July 2020

Sathyaraj R, Ramanathan L, Lavanya K, Balasubramanian V and Saira Banu J

Abstract

Purpose

Big data is growing day by day, to the point that conventional software tools face several problems in managing it. Moreover, the occurrence of imbalanced data in massive data sets is a major constraint for research and industry.

Design/methodology/approach

The purpose of the paper is to introduce a big data classification technique using the MapReduce framework based on an optimization algorithm. The classification is enabled by the MapReduce framework, which utilizes the proposed optimization algorithm, named the chicken-based bacterial foraging (CBF) algorithm. The proposed algorithm is generated by integrating the bacterial foraging optimization (BFO) algorithm with the cat swarm optimization (CSO) algorithm. The model executes the process in two stages, namely training and testing. In the training phase, the big data produced from different distributed sources is processed in parallel by the mappers, which perform preprocessing and feature selection based on the proposed CBF algorithm. The preprocessing step eliminates redundant and inconsistent data, whereas the feature selection step extracts the significant features from the preprocessed data to improve classification accuracy. The selected features are fed into the reducer for data classification using the deep belief network (DBN) classifier, which is trained with the proposed CBF algorithm so that the data are classified into various classes; at the end of the training process, the individual reducers output the trained models. Incremental data are thus handled effectively based on the training model. In the testing phase, the incremental data are split into subsets and fed into the different mappers for classification. Each mapper contains a trained model obtained from the training phase, which is used to classify the incremental data. After classification, the outputs of the mappers are fused and fed into the reducer for the final classification.
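
A single-machine sketch of this map/reduce flow, under loud assumptions: an MLP stands in for the DBN, and plain univariate feature scoring stands in for CBF-driven feature selection; neither is the authors' method.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # synthetic labels

def mapper(X_part, y_part, k=5):
    """Standardize one data partition and score its features."""
    X_part = (X_part - X_part.mean(0)) / (X_part.std(0) + 1e-9)
    sel = SelectKBest(f_classif, k=k).fit(X_part, y_part)
    return X_part, y_part, sel.get_support(indices=True)

# Map phase: three mappers, one data partition each.
parts = [mapper(X[i::3], y[i::3]) for i in range(3)]

# Reduce phase: keep features chosen by any mapper, pool the data,
# and train the classifier on the reduced feature set.
keep = sorted(set(np.concatenate([p[2] for p in parts])))
Xr = np.vstack([p[0][:, keep] for p in parts])
yr = np.concatenate([p[1] for p in parts])
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(Xr, yr)
print("train accuracy:", model.score(Xr, yr))
```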

Findings

The maximum accuracy and Jaccard coefficient are obtained using the epileptic seizure recognition database. The proposed CBF-DBN produces a maximal accuracy value of 91.129%, whereas the accuracy values of the existing neural network (NN), DBN, naive Bayes classifier-term frequency–inverse document frequency (NBC-TFIDF) are 82.894%, 86.184% and 86.512%, respectively. The Jaccard coefficient of the proposed CBF-DBN produces a maximal Jaccard coefficient value of 88.928%, whereas the Jaccard coefficient values of the existing NN, DBN, NBC-TFIDF are 75.891%, 79.850% and 81.103%, respectively.

Originality/value

In this paper, a big data classification method is proposed for categorizing massive data sets under the constraints of huge data. The classification is performed on the MapReduce framework, with training and testing phases, so that the data are handled in parallel. In the training phase, the big data is partitioned into subsets and fed into the mappers, where a feature extraction step extracts the significant features. The obtained features are passed to the reducers, which classify the data using the DBN classifier, trained with the proposed CBF algorithm; the trained model is the output of this phase. In the testing phase, incremental data are split into subsets and fed into the mappers for classification using the trained models from the training phase. The classified results from each mapper are fused and fed into the reducer for the final classification of the big data.

Details

Data Technologies and Applications, vol. 55 no. 3
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 16 February 2021

Zhongjun Tang, Tingting Wang, Junfu Cui, Zhongya Han and Bo He

Abstract

Purpose

Because of their short life cycles and greatly fluctuating total sales volumes (TSV), it is difficult to accumulate enough sales data for experiential products with short life cycles (EPSLC) and to mine an attribute set reflecting the common needs of all consumers. Methods for predicting the TSV of long-life-cycle products may not suit EPSLC. Furthermore, point prediction cannot obtain satisfactory results because the information available before production is inadequate. Thus, this paper aims to propose and verify a novel interval prediction method (IPM).

Design/methodology/approach

Because interval prediction can satisfy the requirements of preproduction investment decision-making, it was adopted, and the prediction problem was converted into a classification problem. The classification was designed by comparing similarities in attribute relationship patterns between a new EPSLC and existing product groups. Because a product introduction can be written or obtained before production, it was chosen as the primary information source. The IPM was verified using data on crime movies released in China from 2013 to 2017.
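
A toy sketch of the interval-prediction-as-classification idea, with invented movie introductions and labels; tfidf plus logistic regression stands in for the paper's content and association analyses.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical product introductions labeled with the TSV interval of
# the most similar existing product group (low / mid / high).
intros = [
    "gritty undercover cop thriller with veteran cast",
    "low budget heist drama from debut director",
    "star studded crime epic with franchise potential",
    "small town noir mystery, limited release",
]
intervals = ["mid", "low", "high", "low"]

# Train a text classifier that maps an introduction to an interval.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(intros, intervals)

# Predict the TSV interval of a new product before production.
new_intro = ["ensemble crime saga led by major stars"]
print("predicted TSV interval:", clf.predict(new_intro)[0])
```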

Findings

The IPM is valid. It uses the product introduction as input, classifies existing products into three groups with different TSV intervals, mines attribute relationship patterns using content and association analyses, and compares similarities in attribute relationship patterns to predict the TSV interval of a new EPSLC before production.

Originality/value

Unlike other studies, the IPM uses the product introduction to mine attribute relationship patterns and compares similarities in those patterns to predict interval values. It is broadly applicable in terms of data content and structure and can support rolling prediction.

Details

Management Decision, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0025-1747

Article
Publication date: 7 August 2017

Roghiyeh Hajizadeh and Nima Jafari Navimipour

Abstract

Purpose

Cloud services have recently become very popular among researchers and practitioners, so identifying reliable cloud services has become very important. The trust value plays a significant role in recognizing reliable providers. The purpose of this paper is to propose a new method for evaluating the trust metric among cloud providers. The main goal is to increase the precision and accuracy of trust evaluation in cloud environments.

Design/methodology/approach

This paper evaluates the trust metric among cloud providers and entities by grouping the services and using a behavioral graph. Four parameters (availability, reliability, interaction evolution and identity) are used to evaluate the trust value. The performance of the proposed method is assessed with a simulator written in C# on the Azure 2013 platform.
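
A minimal sketch of aggregating the four trust parameters into a single score; the weights and field meanings are illustrative assumptions, not values from the paper.

```python
from dataclasses import dataclass

@dataclass
class ProviderStats:
    availability: float           # fraction of successful requests
    reliability: float            # fraction of SLA-compliant responses
    interaction_evolution: float  # trend of recent vs. older interactions
    identity: float               # identity-verification confidence

def trust(stats: ProviderStats, w=(0.3, 0.3, 0.2, 0.2)) -> float:
    """Weighted aggregate of the four trust parameters, in [0, 1].

    The weights are illustrative; the paper does not publish them here.
    """
    vals = (stats.availability, stats.reliability,
            stats.interaction_evolution, stats.identity)
    return sum(wi * v for wi, v in zip(w, vals))

# Rank two hypothetical providers by their trust value.
providers = {
    "A": ProviderStats(0.99, 0.95, 0.8, 1.0),
    "B": ProviderStats(0.90, 0.70, 0.6, 0.5),
}
for name, s in sorted(providers.items(), key=lambda kv: -trust(kv[1])):
    print(f"provider {name}: trust={trust(s):.3f}")
```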

Findings

The method is evaluated through various experiments in terms of precision, recall, error-hit, reliability and availability. The obtained results show that the proposed method has better reliability and availability than the FIFO and QoS models. Also, the results show that increasing the number of groups leads to increasing values of trust, precision and availability, and decreasing values of error-hit.

Originality/value

This paper proposes a trust evaluation method for cloud environments that groups the services and uses a behavioral graph to improve availability, error-hit, precision and reliability.

Details

Kybernetes, vol. 46 no. 7
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 10 January 2020

Khawla Asmi, Dounia Lotfi and Mohamed El Marraki

Abstract

Purpose

The state-of-the-art methods for overlapping community detection are limited by high execution time, as in CPM, or by the need to supply parameters such as the number of communities in Bigclam and Nise_sph, which is nontrivial information. Hence, there is a need to improve accuracy, the primary goal, as current state-of-the-art methods fail to achieve high correspondence with the ground truth on many networks. The paper aims to discuss this issue.

Design/methodology/approach

The authors offer a new method that explores the union of all maximum spanning trees (UMST) and models the strength of the links between nodes. Each node in the UMST is linked with its most similar neighbor. From this model, the authors extract a local community for each node and then combine the produced communities according to their number of shared nodes.
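
A rough sketch of the UMST idea on a toy graph: a single maximum spanning tree approximates the union of all of them (computing the true union needs extra bookkeeping), nearest-neighbor links yield seed communities, and the merging-by-shared-nodes step is omitted.

```python
import networkx as nx

# Toy weighted graph; weights play the role of link strengths.
G = nx.Graph()
G.add_weighted_edges_from([
    ("a", "b", 3), ("b", "c", 3), ("a", "c", 2),
    ("c", "d", 1), ("d", "e", 3), ("e", "f", 3), ("d", "f", 3),
])

# Approximate the UMST with one maximum spanning tree.
umst = nx.maximum_spanning_tree(G, weight="weight")

# Link every node to its most similar (heaviest-edge) neighbor in the
# tree and read off the connected groups as seed communities.
seed = nx.Graph()
for u in umst.nodes:
    nbrs = umst[u]
    best = max(nbrs, key=lambda v: nbrs[v]["weight"])
    seed.add_edge(u, best)
for k, comp in enumerate(nx.connected_components(seed), 1):
    print(f"local community {k}: {sorted(comp)}")
```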

Findings

The experiments on eight real-world data sets and four sets of artificial networks show that the proposed method achieves clear improvements over state-of-the-art methods (BigClam, OSLOM, Demon, SE, DMST and ST) in terms of F-score and ONMI on the networks with ground truth (Amazon, Youtube, LiveJournal and Orkut). For the other networks, it provides communities with good overlapping modularity.

Originality/value

In this paper, the authors investigate the UMST for overlapping community detection.

Article
Publication date: 30 August 2011

Naoki Shibata, Yuya Kajikawa and Ichiro Sakata

Abstract

Purpose

This paper seeks to propose a method of discovering uncommercialized research fronts by comparing scientific papers and patents. A comparative study was performed to measure the semantic similarity between academic papers and patents in order to discover research fronts that do not correspond to any patents.

Design/methodology/approach

The authors compared the structures of citation networks of scientific publications with those of patents via citation analysis, and measured the similarity between sets of academic papers and sets of patents via natural language processing. After the documents (papers/patents) in each layer were categorized by a citation-based method, the authors compared three semantic similarity measures between a set of academic papers and a set of patents: the Jaccard coefficient, the cosine similarity of term frequency-inverse document frequency (tfidf) vectors, and the cosine similarity of log-tfidf vectors. A case study was performed on solar cells.
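
The three measures are easy to reproduce on toy text; in this sketch, log-tfidf is approximated by log1p over tfidf values, which may differ from the paper's exact definition, and the two strings stand in for whole document sets.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

papers = "photovoltaic efficiency of organic solar cells"
patents = "organic solar cell module with improved efficiency"

# Jaccard coefficient on word sets.
a, b = set(papers.split()), set(patents.split())
jaccard = len(a & b) / len(a | b)

# Cosine similarity of tfidf vectors (here on a two-document corpus;
# the study computes it between clustered sets of documents).
vec = TfidfVectorizer().fit([papers, patents])
V = vec.transform([papers, patents]).toarray()
cos_tfidf = V[0] @ V[1] / (np.linalg.norm(V[0]) * np.linalg.norm(V[1]))

# Cosine similarity of log-scaled tfidf vectors.
L = np.log1p(V)
cos_log = L[0] @ L[1] / (np.linalg.norm(L[0]) * np.linalg.norm(L[1]))

print(f"jaccard={jaccard:.2f} cos_tfidf={cos_tfidf:.2f} cos_log={cos_log:.2f}")
```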

Findings

The cosine similarity of tfidf vectors was found to be the best measure for discovering corresponding relationships.

Social implications

The proposed approach makes it possible to obtain candidate unexplored research fronts, where academic research exists but patents do not. The methodology can be applied immediately to support R&D investment decision-making by both R&D managers in companies and policy makers in government.

Originality/value

This paper enables a more detailed comparison of scientific outcomes and patents, via citation analysis and natural language processing, than previous studies, which merely count direct linkages from patents to papers.
