Search results

1 – 10 of 423
Article
Publication date: 7 September 2015

M. V. A. Raju Bahubalendruni, Bibhuti Bhusan Biswal, Manish Kumar and Radharani Nayak

The purpose of this paper is to find out the significant influence of assembly predicate consideration on optimal assembly sequence generation (ASG) in terms of search space…

Abstract

Purpose

The purpose of this paper is to find out the significant influence of assembly predicate consideration on optimal assembly sequence generation (ASG) in terms of search space, computational time and the possibility of producing practically infeasible assembly sequences. An appropriate assembly sequence results in minimal lead time and low assembly cost. ASG is a complex combinatorial optimisation problem that deals with several assembly predicates to arrive at an optimal assembly sequence. The consideration of each assembly predicate strongly influences the search space and thereby the computational time needed to reach a valid assembly sequence. Ignoring an assembly predicate often leads to an inappropriate assembly sequence that may not be physically realisable, while assuming a predicate can drastically enlarge the search space and increase computational time.

Design/methodology/approach

The influence of assuming and considering different assembly predicates on optimal assembly sequence generation has been illustrated with examples using the part concatenation method.
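As a rough illustration of why predicate choice matters (not the paper's part concatenation method itself), the sketch below enumerates candidate sequences for a hypothetical four-part assembly and counts how many survive when only a liaison predicate is checked versus when a stability predicate is added as well; all part names, liaisons and predicates are invented for the example.

```python
# A minimal, hypothetical sketch of how considering an extra assembly
# predicate prunes the sequence search space. The liaison and stability
# data below are illustrative; the paper's part concatenation method is
# not reproduced here.
from itertools import permutations

PARTS = ["base", "shaft", "gear", "cover"]

# Liaison predicate: pairs of parts that physically mate.
LIAISONS = {("base", "shaft"), ("shaft", "gear"), ("base", "cover")}

# Stability predicate: subassemblies that cannot hold together unaided.
UNSTABLE = {frozenset({"shaft", "gear"})}

def connected(assembled, new_part):
    """Next part must share a liaison with something already assembled."""
    return any((a, new_part) in LIAISONS or (new_part, a) in LIAISONS
               for a in assembled)

def feasible(sequence, check_stability):
    assembled = [sequence[0]]
    for part in sequence[1:]:
        if not connected(assembled, part):
            return False
        assembled.append(part)
        if check_stability and frozenset(assembled) in UNSTABLE:
            return False
    return True

total = len(list(permutations(PARTS)))
for check_stability in (False, True):
    count = sum(feasible(list(seq), check_stability)
                for seq in permutations(PARTS))
    print(f"stability predicate={check_stability}: "
          f"{count} valid sequences of {total}")
```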

Findings

The presence of physical attachments and the type of assembly liaisons determine which assembly predicates need to be considered, reducing the complexity of the problem formulation and the overall computational time.

Originality/value

Most of the time, assembly predicates are ignored to reduce computational time, without considering their impact on the assembly sequence problem or the attributes of the assembly. The current research proposes guidance on predicate consideration based on the assembly configuration for effective and efficient ASG.

Article
Publication date: 14 July 2020

Hongjuan Yang, Jiwen Chen, Chen Wang, Jiajia Cui and Wensheng Wei

The implied assembly constraints of a computer-aided design (CAD) model (e.g. hierarchical constraints, geometric constraints and topological constraints) represent an important…

Abstract

Purpose

The implied assembly constraints of a computer-aided design (CAD) model (e.g. hierarchical constraints, geometric constraints and topological constraints) represent an important basis for product assembly sequence intelligent planning. Assembly prior knowledge contains factual assembly knowledge and experiential assembly knowledge, which are important factors for assembly sequence intelligent planning. To improve monotonous assembly sequence planning for a rigid product, this paper proposes intelligent planning of product assembly sequences based on spatio-temporal semantic knowledge.

Design/methodology/approach

A spatio-temporal semantic assembly information model is established. The internal data of the CAD model are accessed to extract spatio-temporal semantic assembly information. The knowledge system for assembly sequence intelligent planning is built using an ontology model. The assembly sequence for the sub-assembly and assembly is generated via attribute retrieval and rule reasoning of spatio-temporal semantic knowledge. The optimal assembly sequence is achieved via a fuzzy comprehensive evaluation.
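The following is a hypothetical sketch of the rule-reasoning step only: a precedence rule applied to constraint facts assumed to have been extracted from the CAD model yields a candidate assembly order. The facts, the single "supports" rule and the part names are illustrative; the paper's ontology model and fuzzy comprehensive evaluation are not reproduced.

```python
# A hypothetical sketch of the rule-reasoning step: precedence rules are
# applied to constraint facts extracted from the CAD model to order parts.
# The facts and the single rule below are illustrative only.
from graphlib import TopologicalSorter

# Facts extracted from the CAD model (geometric/hierarchical constraints):
# (a, "supports", b) means part a must be in place before part b.
FACTS = [
    ("housing", "supports", "stator"),
    ("stator", "supports", "coil"),
    ("housing", "supports", "end_cap"),
    ("coil", "supports", "end_cap"),
]

# Rule: "x supports y"  =>  x precedes y in the assembly sequence.
precedence = {}
for subj, rel, obj in FACTS:
    if rel == "supports":
        precedence.setdefault(obj, set()).add(subj)

sequence = list(TopologicalSorter(precedence).static_order())
print(sequence)  # e.g. ['housing', 'stator', 'coil', 'end_cap']
```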

Findings

The proposed spatio-temporal semantic information model and knowledge system can simultaneously express CAD model knowledge and prior knowledge for intelligent planning of product assembly sequences. Attribute retrieval and rule reasoning of spatio-temporal semantic knowledge can be used to generate product assembly sequences.

Practical implications

An assembly sequence intelligent planning example of a linear motor demonstrates the validity of intelligent planning of product assembly sequences based on spatio-temporal semantic knowledge.

Originality/value

The spatio-temporal semantic information model and knowledge system are built to simultaneously express CAD model knowledge and assembly prior knowledge. The generation algorithm via attribute retrieval and rule reasoning of spatio-temporal semantic knowledge is given for intelligent planning of product assembly sequences in this paper. The proposed method is efficient because of the small search space.

Details

Assembly Automation, vol. 40 no. 5
Type: Research Article
ISSN: 0144-5154


Article
Publication date: 17 May 2021

Guoyuan Shi, Yingjie Zhang and Manni Zeng

Workpiece sorting is a key link in industrial production lines. The vision-based workpiece sorting system is non-contact and widely applicable. The detection and recognition of…


Abstract

Purpose

Workpiece sorting is a key link in industrial production lines. The vision-based workpiece sorting system is non-contact and widely applicable. The detection and recognition of workpieces are the key technologies of the workpiece sorting system. To introduce deep learning algorithms into workpiece detection and improve detection accuracy, this paper aims to propose a workpiece detection algorithm based on the single-shot multi-box detector (SSD).

Design/methodology/approach

This study proposes a multi-feature fused SSD network for fast workpiece detection. First, multi-view CAD rendering images of the workpiece are used as the deep learning data set. Second, a Visual Geometry Group (VGG) network is trained for workpiece recognition to identify the category of the workpiece. Third, a multi-level feature fusion method is designed to improve the detection accuracy of SSD (especially for small objects); specifically, a feature fusion module is added, which uses element-wise summation and concatenation to combine the information of shallow features and deep features.
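As a rough sketch of the fusion idea (assuming PyTorch; layer sizes and the exact wiring are illustrative, not the authors' configuration), a shallow and a deep feature map can be combined by element-wise summation followed by concatenation:

```python
# A minimal sketch of a feature fusion module of the kind described:
# shallow and deep SSD feature maps are combined with an element-wise sum
# and a channel-wise concatenation. Channel counts are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureFusion(nn.Module):
    def __init__(self, shallow_ch, deep_ch, out_ch):
        super().__init__()
        # project both maps to a common channel count
        self.proj_shallow = nn.Conv2d(shallow_ch, out_ch, kernel_size=1)
        self.proj_deep = nn.Conv2d(deep_ch, out_ch, kernel_size=1)
        # 3x3 conv applied after concatenating the sum with the shallow map
        self.smooth = nn.Conv2d(out_ch * 2, out_ch, kernel_size=3, padding=1)

    def forward(self, shallow, deep):
        s = self.proj_shallow(shallow)
        # upsample the deep map to the shallow map's spatial size
        d = F.interpolate(self.proj_deep(deep), size=s.shape[-2:],
                          mode="bilinear", align_corners=False)
        fused_sum = s + d                             # element-wise sum
        fused_cat = torch.cat([fused_sum, s], dim=1)  # concatenation
        return F.relu(self.smooth(fused_cat))

# Example: fuse a 38x38 shallow map with a 10x10 deep map
fusion = FeatureFusion(shallow_ch=512, deep_ch=1024, out_ch=256)
out = fusion(torch.randn(1, 512, 38, 38), torch.randn(1, 1024, 10, 10))
print(out.shape)  # torch.Size([1, 256, 38, 38])
```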

Findings

Experimental results show that the actual workpiece detection accuracy of the method can reach 96% and the speed can reach 41 frames per second. Compared with the original SSD, the method improves the accuracy by 7% and improves the detection performance of small objects.

Originality/value

This paper innovatively introduces the SSD detection algorithm into workpiece detection in industrial scenarios and improves it. A feature fusion module has been added to combine the information of shallow features and deep features. The multi-feature fused SSD network proves the feasibility and practicality of introducing deep learning algorithms into workpiece sorting.

Details

Engineering Computations, vol. 38 no. 10
Type: Research Article
ISSN: 0264-4401


Article
Publication date: 12 January 2024

Priya Mishra and Aleena Swetapadma

Sleep arousal detection is an important factor in monitoring sleep disorders.


Abstract

Purpose

Sleep arousal detection is an important factor in monitoring sleep disorders.

Design/methodology/approach

Thus, a unique n-layer one-dimensional (1D) convolutional neural network-based U-Net model for automatic sleep arousal identification is proposed.
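A minimal sketch of such an architecture, assuming PyTorch and using illustrative channel counts, depth and number of input channels rather than the authors' configuration, might look as follows:

```python
# A minimal sketch of a 1D convolutional U-Net of the kind described:
# an encoder/decoder with skip connections over a multi-channel
# physiological signal, emitting a per-sample arousal probability.
import torch
import torch.nn as nn

def block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm1d(out_ch),
        nn.ReLU(inplace=True),
    )

class UNet1D(nn.Module):
    def __init__(self, in_ch=8, base=16):
        super().__init__()
        self.enc1 = block(in_ch, base)
        self.enc2 = block(base, base * 2)
        self.pool = nn.MaxPool1d(2)
        self.bottleneck = block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose1d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose1d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = block(base * 2, base)
        self.head = nn.Conv1d(base, 1, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                      # (N, base, L)
        e2 = self.enc2(self.pool(e1))          # (N, 2*base, L/2)
        b = self.bottleneck(self.pool(e2))     # (N, 4*base, L/4)
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.sigmoid(self.head(d1))    # per-sample arousal probability

# Example: 8 physiological channels, 4096 samples
model = UNet1D()
probs = model(torch.randn(2, 8, 4096))
print(probs.shape)  # torch.Size([2, 1, 4096])
```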

Findings

The proposed method achieved an area under the precision–recall curve (AUPRC) of 0.498 and an area under the receiver operating characteristic curve (AUROC) of 0.946.

Originality/value

No other researchers have suggested U-Net-based detection of sleep arousal.

Research limitations/implications

From the experimental results, it has been found that the U-Net achieves better accuracy than the state-of-the-art methods.

Practical implications

Sleep arousal detection is an important factor in monitoring sleep disorders. The objective of the work is to detect sleep arousal using different physiological channels of the human body.

Social implications

It will help in improving mental health by monitoring a person's sleep.

Details

Data Technologies and Applications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9288


Article
Publication date: 1 October 2003

Yves Chiricota

We propose a three‐dimensional (3D) geometrical modelling algorithm based on the mapping of 2D objects on a 3D model. Our methodology can be applied to the automatic modelling of…

Abstract

We propose a three‐dimensional (3D) geometrical modelling algorithm based on the mapping of 2D objects on a 3D model. Our methodology can be applied to the automatic modelling of many “secondary” garment parts like collars, waist bands, pockets, etc. The results obtained are accurate in relation to the original flat patterns. Our approach is oriented towards the automation of the process of 3D garment modelling from flat patterns. An underlying constraint behind our approach consists in minimizing user intervention in the modelling process. Our method leads to an intuitive interface for novice users.
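Purely as an illustration of mapping a flat pattern onto a developable body surface (not the paper's algorithm), the following sketch wraps a rectangular band onto a cylindrical segment by sending the pattern's length coordinate to arc length and its width coordinate to height; all dimensions are invented.

```python
# An illustrative sketch only: a flat rectangular pattern (e.g. a waist
# band) is wrapped onto a cylindrical body segment, preserving the
# pattern's dimensions on the developable surface.
import math

def wrap_on_cylinder(u, v, radius, start_angle=0.0):
    """Map a flat pattern point (u, v) in cm onto a cylinder of given radius."""
    theta = start_angle + u / radius      # arc length -> angle
    return (radius * math.cos(theta), radius * math.sin(theta), v)

# A 20 cm x 4 cm band sampled every 5 cm along its length
band = [(u, v) for u in range(0, 21, 5) for v in (0.0, 4.0)]
points_3d = [wrap_on_cylinder(u, v, radius=12.0) for u, v in band]
print(points_3d[:2])
```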

Details

International Journal of Clothing Science and Technology, vol. 15 no. 5
Type: Research Article
ISSN: 0955-6222


Article
Publication date: 19 October 2023

Huaxiang Song

Classification of remote sensing images (RSI) is a challenging task in computer vision. Recently, researchers have proposed a variety of creative methods for automatic recognition…

Abstract

Purpose

Classification of remote sensing images (RSI) is a challenging task in computer vision. Recently, researchers have proposed a variety of creative methods for automatic recognition of RSI, and feature fusion is a research hotspot for its great potential to boost performance. However, RSI has unique imaging conditions and cluttered scenes with complicated backgrounds. This large difference from natural images has meant that previous feature fusion methods deliver only insignificant performance improvements.

Design/methodology/approach

This work proposes a two-convolutional neural network (CNN) fusion method named the main and branch CNN fusion network (MBC-Net) as an improved solution for classifying RSI. In detail, MBC-Net employs an EfficientNet-B3 as its main CNN stream and an EfficientNet-B0 as a branch, named MC-B3 and BC-B0, respectively. In particular, MBC-Net includes a long-range derivation (LRD) module, which is specially designed to learn the dependence between different features. Meanwhile, MBC-Net also uses some unique ideas to tackle the problems arising from two-CNN fusion and the inherent nature of RSI.
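A hedged sketch of the two-stream layout only (assuming torchvision's EfficientNet implementations; the LRD module, training strategy and exact feature wiring of the paper are not reproduced) could look like this:

```python
# A sketch of the main/branch two-CNN idea: an EfficientNet-B3 main stream
# and an EfficientNet-B0 branch produce pooled features that are
# concatenated and classified. Feature sizes follow torchvision's models.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b3, efficientnet_b0

class TwoStreamFusion(nn.Module):
    def __init__(self, num_classes=45):
        super().__init__()
        self.main = efficientnet_b3(weights=None).features    # 1536-d features
        self.branch = efficientnet_b0(weights=None).features  # 1280-d features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(1536 + 1280, num_classes)

    def forward(self, x):
        f_main = self.pool(self.main(x)).flatten(1)
        f_branch = self.pool(self.branch(x)).flatten(1)
        return self.classifier(torch.cat([f_main, f_branch], dim=1))

model = TwoStreamFusion(num_classes=45)   # e.g. 45 scene classes as in NWPU-RESISC45
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 45])
```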

Findings

Extensive experiments on three RSI sets show that MBC-Net outperforms 38 other state-of-the-art (SOTA) methods published from 2020 to 2023, with a noticeable increase in overall accuracy (OA) values. MBC-Net not only presents a 0.7% higher OA value on the most confusing NWPU set but also has 62% fewer parameters than the leading approach ranked first in the literature.

Originality/value

MBC-Net is a more effective and efficient feature fusion approach than other SOTA methods in the literature. Visualizations of gradient-weighted class activation mapping (Grad-CAM) reveal that MBC-Net can learn long-range dependence between features that a single CNN cannot. The t-distributed stochastic neighbor embedding (t-SNE) results demonstrate that the feature representation of MBC-Net is more effective than that of other methods. In addition, the ablation tests indicate that MBC-Net is effective and efficient at fusing features from two CNNs.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 17 no. 1
Type: Research Article
ISSN: 1756-378X


Article
Publication date: 30 November 2018

Sudarsana Desul, Madurai Meenachi N., Thejas Venkatesh, Vijitha Gunta, Gowtham R. and Magapu Sai Baba

Ontology of a domain mainly consists of a set of concepts and their semantic relations. It is typically constructed and maintained by using ontology editors with substantial human…

Abstract

Purpose

Ontology of a domain mainly consists of a set of concepts and their semantic relations. It is typically constructed and maintained by using ontology editors with substantial human intervention. It is desirable to perform the task automatically, which has led to the development of ontology learning techniques. One of the main challenges of ontology learning from text is to identify key concepts from the documents. A wide range of techniques for key concept extraction have been proposed, but they suffer from low accuracy, poor performance, limited flexibility and applicability only to a specific domain. The purpose of this study is to explore a new method to extract key concepts and to apply it to literature in the nuclear domain.

Design/methodology/approach

In this article, a novel method for key concept extraction is proposed and applied to documents from the nuclear domain. A hybrid approach was used, combining domain knowledge, syntactic named-entity knowledge and statistics-based methods. The performance of the developed method was evaluated on 120 documents retrieved from the SCOPUS database, using two-out-of-three voting logic among three domain experts.
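As a hypothetical sketch of the hybrid idea, the snippet below scores candidate phrases by combining a frequency signal, a crude syntactic filter and a domain-vocabulary bonus; the domain terms, weights and heuristics are invented and do not reproduce the paper's named-entity knowledge or expert validation.

```python
# A hypothetical sketch: candidate phrases are scored by combining a
# statistical signal (frequency), a simple multi-word phrase heuristic and
# a domain-vocabulary bonus.
import re
from collections import Counter

DOMAIN_TERMS = {"reactor", "neutron", "fuel", "coolant"}   # illustrative

def candidate_phrases(text):
    # crude syntactic filter: runs of lowercase words, 1-3 tokens long
    tokens = re.findall(r"[a-z]+", text.lower())
    phrases = []
    for n in (1, 2, 3):
        phrases += [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return phrases

def score_concepts(text, top_k=5):
    counts = Counter(candidate_phrases(text))
    scored = {}
    for phrase, freq in counts.items():
        domain_bonus = 2.0 if DOMAIN_TERMS & set(phrase.split()) else 1.0
        scored[phrase] = freq * domain_bonus * len(phrase.split())
    return sorted(scored, key=scored.get, reverse=True)[:top_k]

doc = ("The reactor coolant system removes heat from the reactor core. "
       "Coolant flow and fuel temperature are monitored continuously.")
print(score_concepts(doc))
```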

Findings

The work reported pertains to extracting concepts from the set of selected documents and aids the search for documents relating to given concepts. The results of a case study indicate that the developed method demonstrates better metrics than Text2Onto and CFinder. The described method is capable of extracting valid key concepts from a set of candidates containing long phrases.

Research limitations/implications

The present study is restricted to literature published in the English language and is applied to documents from the nuclear domain. It has the potential to be extended to other domains.

Practical implications

The work carried out in the current study has the potential to lead to an updated International Nuclear Information System thesaurus for ontology in the nuclear domain, which can enable more efficient search methods.

Originality/value

This work is the first attempt to automatically extract key concepts from nuclear documents. The proposed approach addresses most of the problems that exist in current methods and thereby improves performance.

Details

The Electronic Library, vol. 37 no. 1
Type: Research Article
ISSN: 0264-0473


Book part
Publication date: 27 July 2018

Claire Laurier Decoteau

This chapter suggests that moving beyond positivism entails a recognition that the social world is made up of complex phenomena that are heterogeneous, and events are caused by…

Abstract

This chapter suggests that moving beyond positivism entails a recognition that the social world is made up of complex phenomena that are heterogeneous, and events are caused by contingent conjunctures of causal mechanisms. To theorize the social world as heterogeneous is to recognize that social causes, categories, and groups combine different kinds of phenomena and processes at various levels and scales across time. To speak of conjunctural causation implies not only that events are caused by concatenations of multiple, intersecting forces but also that these combinations are historically unique and nonrepeatable. Both the historical materialist conception of the “conjuncture” and the poststructuralist theory of “assemblages” take heterogeneity and multicausality seriously. I compare and contrast these formulations across three dimensions: the structure of the apparatus, causation, and temporality. I argue that these theories offer useful tools to social scientists seeking to engage in complex, multicausal explanations. I end the article with an example of how to use these concepts in analyzing a complex historical case.

Details

Critical Realism, History, and Philosophy in the Social Sciences
Type: Book
ISBN: 978-1-78756-604-0


Article
Publication date: 28 November 2022

Anuraj Mohan, Karthika P.V., Parvathi Sankar, K. Maya Manohar and Amala Peter

Money laundering is the process of concealing unlawfully obtained funds by presenting them as coming from a legitimate source. Criminals use crypto money laundering to hide the…

Abstract

Purpose

Money laundering is the process of concealing unlawfully obtained funds by presenting them as coming from a legitimate source. Criminals use crypto money laundering to hide the illicit origin of funds using a variety of methods. The most simplified form of bitcoin money laundering leans hard on the fact that transactions made in cryptocurrencies are pseudonymous, but open data gives more power to investigators and enables the crowdsourcing of forensic analysis. With the motive to curb these illegal activities, there exist various rules, policies and technologies collectively known as anti-money laundering (AML) tools. When properly implemented, AML restrictions reduce the negative effects of illegal economic activity while also promoting financial market integrity and stability, but these bear high costs for institutions. The purpose of this work is to motivate the opportunity to reconcile the cause of safety with that of financial inclusion, bearing in mind the limitations of the available data. The authors use the Elliptic dataset; to the best of the authors' knowledge, this is the largest labelled transaction dataset publicly available in any cryptocurrency.

Design/methodology/approach

AML in bitcoin can be modelled as a node classification task in dynamic networks. In this work, a graph convolutional decision forest is introduced, which combines the capabilities of an evolving graph convolutional network and a deep neural decision forest (DNDF). This model is used to classify the unknown transactions in the Elliptic dataset. Additionally, applying knowledge distillation (KD) to the proposed approach gives the best results among all the experimented techniques.
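A hedged sketch of the combination, assuming PyTorch and using a plain dense graph-convolution layer plus a small soft decision-tree ensemble as stand-ins (the evolving GCN, the Elliptic preprocessing and the KD stage are not reproduced), might look like this:

```python
# A sketch of the combination only: a dense graph-convolution layer
# (normalized adjacency x features x weights) produces node embeddings
# that feed a small soft decision forest for licit/illicit classification.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # symmetric normalization of A + I
        a = adj + torch.eye(adj.size(0))
        d_inv_sqrt = a.sum(dim=1).pow(-0.5)
        a_norm = d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)
        return F.relu(self.lin(a_norm @ x))

class SoftTree(nn.Module):
    """Depth-2 soft decision tree: 3 routing nodes, 4 leaf class distributions."""
    def __init__(self, in_dim, num_classes):
        super().__init__()
        self.decisions = nn.Linear(in_dim, 3)
        self.leaves = nn.Parameter(torch.zeros(4, num_classes))

    def forward(self, x):
        d = torch.sigmoid(self.decisions(x))            # (N, 3)
        routes = torch.stack([d[:, 0] * d[:, 1],
                              d[:, 0] * (1 - d[:, 1]),
                              (1 - d[:, 0]) * d[:, 2],
                              (1 - d[:, 0]) * (1 - d[:, 2])], dim=1)  # (N, 4)
        return routes @ F.softmax(self.leaves, dim=1)   # (N, num_classes)

class GCNForest(nn.Module):
    def __init__(self, in_dim, hidden, num_classes=2, num_trees=5):
        super().__init__()
        self.gcn = DenseGCNLayer(in_dim, hidden)
        self.trees = nn.ModuleList(SoftTree(hidden, num_classes)
                                   for _ in range(num_trees))

    def forward(self, x, adj):
        h = self.gcn(x, adj)
        return torch.stack([t(h) for t in self.trees]).mean(dim=0)

# Toy graph: 6 nodes, 10 features, licit/illicit classes
x, adj = torch.randn(6, 10), (torch.rand(6, 6) > 0.7).float()
adj = ((adj + adj.t()) > 0).float()              # symmetric adjacency
probs = GCNForest(in_dim=10, hidden=16)(x, adj)
print(probs.shape)  # torch.Size([6, 2])
```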

Findings

The importance of utilising a combination of dynamic graph learning and ensemble feature learning is demonstrated in this work. The results show the superiority of the proposed model in classifying the illicit transactions in the Elliptic dataset. Experiments also show that the results can be further improved when the system is fine-tuned using a KD framework.

Originality/value

Existing works used either ensemble learning or dynamic graph learning to tackle the problem of AML in bitcoin. The proposed model provides a novel way to combine the power of random forests with dynamic graph learning methods. Furthermore, the work also demonstrates the advantage of KD in improving the performance of the whole system.

Details

Data Technologies and Applications, vol. 57 no. 3
Type: Research Article
ISSN: 2514-9288


Article
Publication date: 1 March 1975

HANS W. GOTTINGER

Complexity is worth studying as a subject for its own sake. We intend to investigate complexity in the context of a fairly broad class of dynamical systems (known as sequential…

Abstract

Complexity is worth studying as a subject for its own sake. We intend to investigate complexity in the context of a fairly broad class of dynamical systems (known as sequential machines). A sequential machine is a perfectly good model of an organization (or organism) which strives for survival, acting under resource and time constraints. For a given complexity level of the system design we could find the level of control indicating, roughly, the level of understanding of the system transformation. We call this the control complexity. The “information technology” is generated by the lattice of partitions of the state space of the system realized by a serial‐parallel decomposition into component systems.
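As a small illustrative aside rather than anything from the paper, a sequential machine can be written as a state-transition table, and a partition of its states supports a serial-parallel decomposition when it has the substitution property; the checker below tests that condition on an invented four-state machine.

```python
# Illustrative sketch: a sequential machine as a transition table, plus a
# check that a state partition has the substitution property (all states
# of a block map into a single block under every input).
MACHINE = {  # delta[state][input] -> next state
    "s0": {"a": "s1", "b": "s2"},
    "s1": {"a": "s0", "b": "s3"},
    "s2": {"a": "s3", "b": "s2"},
    "s3": {"a": "s2", "b": "s3"},
}

def has_substitution_property(delta, partition):
    block_of = {s: i for i, block in enumerate(partition) for s in block}
    for block in partition:
        for inp in next(iter(delta.values())):
            if len({block_of[delta[s][inp]] for s in block}) > 1:
                return False
    return True

print(has_substitution_property(MACHINE, [{"s0", "s1"}, {"s2", "s3"}]))  # True
print(has_substitution_property(MACHINE, [{"s0", "s3"}, {"s1", "s2"}]))  # False
```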

Details

Kybernetes, vol. 4 no. 3
Type: Research Article
ISSN: 0368-492X
