Search results

1 – 10 of 14
Article
Publication date: 16 August 2023

Anish Khobragade, Shashikant Ghumbre and Vinod Pachghare

Abstract

Purpose

MITRE and the National Security Agency cooperatively develop and maintain the D3FEND knowledge graph (KG). It represents concepts from the cybersecurity countermeasure domain, such as dynamic, emulated and file analysis, as entities. These entities are linked by relationships such as analyze, may_contains and encrypt. A fundamental challenge for collaborative designers is to encode knowledge and efficiently interrelate the cyber-domain facts generated daily. At present, the designers manually update the graph contents with new or missing facts to enrich the knowledge. This paper aims to propose an automated approach that predicts the missing facts through a link prediction task, leveraging embeddings as representation learning.

Design/methodology/approach

D3FEND is available in the resource description framework (RDF) format. In the preprocessing step, the facts in RDF format are converted to subject–predicate–object triples, yielding 5,967 entities and 98 relationship types. Distance-based, bilinear and convolutional embedding models are progressively applied to learn embeddings of the entities and relations. This study then performs a link prediction task to infer missing facts using the learned embeddings.
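
As a rough illustration of this pipeline (not the authors' implementation), the sketch below scores subject–predicate–object triples with a TransE-style distance-based embedding model and ranks candidate tails for link prediction; the entity and relation names, embedding size and random initialization are all hypothetical.

```python
# Minimal sketch (not the authors' implementation): score subject-predicate-object
# triples with a TransE-style distance-based embedding model. Names and sizes are
# hypothetical; D3FEND itself would be parsed from its RDF distribution.
import numpy as np

triples = [
    ("DynamicAnalysis", "analyzes", "ExecutableFile"),   # hypothetical D3FEND-like facts
    ("FileAnalysis", "may_contains", "FileHash"),
]

entities = sorted({t[0] for t in triples} | {t[2] for t in triples})
relations = sorted({t[1] for t in triples})
dim = 50
rng = np.random.default_rng(0)
ent_emb = {e: rng.normal(size=dim) for e in entities}    # entity embeddings
rel_emb = {r: rng.normal(size=dim) for r in relations}   # relation embeddings

def transe_score(h, r, t):
    """Lower is better: distance between (head + relation) and tail."""
    return np.linalg.norm(ent_emb[h] + rel_emb[r] - ent_emb[t])

# Link prediction: rank all candidate tails for a (head, relation) query.
def rank_tails(h, r):
    return sorted(entities, key=lambda t: transe_score(h, r, t))

print(rank_tails("DynamicAnalysis", "analyzes")[:3])
```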

Findings

Experimental results show that the translational model performs well on high-rank results, whereas the bilinear model is superior at capturing the latent semantics of complex relationship types. The convolutional model, however, recovers 44% of the true facts and achieves a 3% improvement in results compared with the other models.

Research limitations/implications

Despite the success of embedding models in enriching D3FEND using link prediction under a supervised learning setup, the approach has some limitations, such as not capturing the diversity and hierarchies of relations. The average node degree of the D3FEND KG is 16.85, with 12% of entities having a node degree of less than 2; in particular, many entities and relations have few or no observed links. This results in sparsity and data imbalance, which affect model performance even after increasing the embedding vector size. Moreover, KG embedding models consider only the existing entities and relations and may not incorporate external or contextual information such as textual descriptions, temporal dynamics or domain knowledge, which can enhance link prediction performance.

Practical implications

Link prediction in the D3FEND KG can benefit cybersecurity countermeasure strategies in several ways: it can help identify gaps or weaknesses in existing defensive methods and suggest ways to improve or augment them; it can help compare and contrast different defensive methods and understand their trade-offs and synergies; it can help discover novel or emerging defensive methods by inferring new relations from existing data or external sources; and it can help generate recommendations or guidance for selecting or deploying appropriate defensive methods based on the characteristics and objectives of the system or network.

Originality/value

The representation learning approach helps reduce the incompleteness of D3FEND through link prediction, which infers possible missing facts from the existing entities and relations of the KG.

Details

International Journal of Web Information Systems, vol. 19 no. 3/4
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 13 May 2020

Zhijie Wen, Qikun Zhao and Lining Tong

Abstract

Purpose

The purpose of this paper is to present a novel method for detecting minor fabric defects.

Design/methodology/approach

This paper proposes a PETM-CNN algorithm. PETM-CNN is designed based on a self-similar estimation algorithm and a convolutional neural network (CNN). The PE (Patches Extractor) algorithm extracts patches that are likely to be defective in order to preprocess the fabric image. Then a TM-CNN (Triplet Metric CNN) is designed to predict the labels of the patches and the final label of the image. The TM-CNN can perform better than a standard CNN.
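
As an illustration of the triplet-metric idea (not the authors' PETM-CNN), the sketch below trains a small patch-embedding CNN with a triplet margin loss so that patches of the same class are pulled together and defective patches are pushed away; the architecture, patch size and margin are hypothetical.

```python
# Minimal sketch (not the authors' PETM-CNN): a small CNN embedding for fabric
# patches trained with a triplet metric loss. Architecture, sizes and margin are
# hypothetical choices for illustration only.
import torch
import torch.nn as nn

class PatchEmbedder(nn.Module):
    def __init__(self, embed_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, embed_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

model = PatchEmbedder()
triplet_loss = nn.TripletMarginLoss(margin=1.0)

# anchor/positive: patches of the same class; negative: a patch of the other class
anchor = torch.randn(8, 3, 32, 32)    # e.g. defect-free patches
positive = torch.randn(8, 3, 32, 32)  # more defect-free patches
negative = torch.randn(8, 3, 32, 32)  # defective patches

loss = triplet_loss(model(anchor), model(positive), model(negative))
loss.backward()
```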

Findings

The algorithm is superior to other algorithms on a data set of fabric images with minor defects. The proposed method accurately classifies fabric images according to whether or not they contain minor defects. The experimental results show that the approach is effective.

Originality/value

Traditional fabric defect detection is not as effective for minor defects, so this paper develops a method for classifying fabric images with minor defects based on self-similar estimation and a CNN. This paper offers the first investigation of minor fabric defects.

Details

International Journal of Clothing Science and Technology, vol. 33 no. 1
Type: Research Article
ISSN: 0955-6222

Article
Publication date: 7 November 2022

T. Sree Lakshmi, M. Govindarajan and Asadi Srinivasulu

Abstract

Purpose

A proper understanding of malware characteristics is necessary to protect the massive amounts of data generated by the advances in Internet of Things (IoT), big data and the cloud. Because of the encryption techniques used by attackers, network security experts struggle to develop efficient malware detection techniques. Though a few machine learning-based techniques have been used by researchers for malware detection, large amounts of data must be processed and detection accuracy needs to be improved for efficient malware detection. Deep learning-based methods have gained significant momentum in recent years for the accurate detection of malware. The purpose of this paper is to create an efficient malware detection system for the IoT using Siamese deep neural networks.

Design/methodology/approach

In this work, a novel Siamese deep neural network system with an embedding vector is proposed. Siamese systems have generated significant interest because of their capacity to pick up on a significant portion of the input. The proposed method is efficient for malware detection in the IoT because it learns from a few records to improve its forecasts. The goal is to determine the evolution of malware similarity in emerging domains of technology.

Findings

Experiments are performed on the Malimg data set using a cloud platform. ResNet50 was pretrained as a component of the subsystem that establishes the embedding. Each system reviews a set of input documents to determine whether they belong to the same family. The results of the experiments show that the proposed method outperforms existing techniques in terms of accuracy and efficiency.

Originality/value

The proposed work generates an embedding for each input. Each system examines a collection of data files to determine whether they belong to the same family. Cosine proximity is also used to estimate vector similarity in the high-dimensional embedding space.
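
As a hedged sketch of this comparison step (not the authors' system), the code below embeds two inputs with a shared pretrained ResNet50 backbone and compares them by cosine similarity, as in a Siamese setup; the inputs, preprocessing and decision threshold are hypothetical.

```python
# Minimal sketch (not the authors' system): embed two malware images with a shared
# pretrained ResNet50 backbone and compare them by cosine similarity, as in a
# Siamese setup. Threshold and inputs are hypothetical.
import torch
import torch.nn.functional as F
from torchvision import models

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # use the 2048-d pooled features as the embedding
backbone.eval()

def embed(image_batch):
    with torch.no_grad():
        return backbone(image_batch)

# Two (hypothetical) preprocessed malware grayscale images replicated to 3 channels.
img_a = torch.randn(1, 3, 224, 224)
img_b = torch.randn(1, 3, 224, 224)

similarity = F.cosine_similarity(embed(img_a), embed(img_b)).item()
same_family = similarity > 0.8      # hypothetical decision threshold
print(f"cosine similarity = {similarity:.3f}, same family = {same_family}")
```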

Details

International Journal of Pervasive Computing and Communications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 22 July 2021

Zirui Guo, Huimin Lu, Qinghua Yu, Ruibin Guo, Junhao Xiao and Hongshan Yu

Abstract

Purpose

This paper aims to design a novel feature descriptor to improve the performance of feature matching in challenging scenes, such as low-texture and wide-baseline scenes. Common descriptors are not suitable for low-texture and other challenging scenes, mainly because they encode only one kind of feature. The proposed feature descriptor considers multiple features and their locations, which makes it more expressive.

Design/methodology/approach

A graph neural network-based descriptor enhancement algorithm for feature matching is proposed. In this paper, point and line features are the primary concern. In the graph, commonly used descriptors for points and lines constitute the nodes, and the edges are determined by the geometric relationships between points and lines. After a graph convolution designed for the incomplete join graph, enhanced descriptors are obtained.
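
As a minimal sketch of this idea (not the authors' network), the code below runs one GCN-style message-passing step over a small graph whose nodes are point and line descriptors and whose edges encode geometric relations; the descriptors, adjacency and weights are hypothetical.

```python
# Minimal sketch (not the authors' network): one round of message passing that
# enhances point/line descriptors using a graph whose edges encode geometric
# proximity. Descriptors, adjacency and weights are all hypothetical.
import numpy as np

rng = np.random.default_rng(0)
num_nodes, desc_dim = 6, 128          # e.g. 4 point descriptors + 2 line descriptors
X = rng.normal(size=(num_nodes, desc_dim))   # initial descriptors (nodes)

# Adjacency from geometric relations (1 = point lies near / on the line, etc.)
A = np.zeros((num_nodes, num_nodes))
A[0, 4] = A[4, 0] = 1.0
A[1, 4] = A[4, 1] = 1.0
A[2, 5] = A[5, 2] = 1.0
A = A + np.eye(num_nodes)             # add self-loops

# Symmetric normalization, as in a standard GCN layer.
deg = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(deg, deg))

W = rng.normal(size=(desc_dim, desc_dim)) * 0.1   # learnable weight (random here)
enhanced = np.maximum(A_hat @ X @ W, 0.0)          # ReLU(A_hat X W)
print(enhanced.shape)                              # enhanced hybrid descriptors
```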

Findings

Experiments are carried out in indoor, outdoor and low-texture scenes. The experiments investigate the real-time performance, rotation invariance, scale invariance, viewpoint invariance and noise sensitivity of the descriptors in the three types of scenes. The results show that the enhanced descriptors are robust to scene changes and can be used in wide-baseline matching.

Originality/value

A graph structure is designed to represent multiple features in an image. In building the graph structure, the geometric relations between the features are used to establish the edges. Furthermore, a novel hybrid descriptor for points and lines is obtained using a graph convolutional neural network. This enhanced descriptor combines the advantages of point features and line features in feature matching.

Details

Industrial Robot: the international journal of robotics research and application, vol. 48 no. 5
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 6 January 2023

Weihao Luo and Yueqi Zhong

Abstract

Purpose

The paper aims to transfer the item image of a given clothing product to the corresponding area of a user image. Existing classical methods suffer from unconstrained deformation of clothing and occlusion caused by hair or poses, which leads to a loss of detail in the try-on results. In this paper, the authors present a details-oriented virtual try-on network (DO-VTON), which allows synthesizing high-fidelity try-on images with the characteristics of the target clothing preserved.

Design/methodology/approach

The proposed try-on network consists of three modules. The fashion parsing module (FPM) generates the parsing map of a reference person image. The geometric matching module (GMM) warps the input clothing and matches it with the torso area of the reference person, guided by the parsing map. The try-on module (TOM) generates the final try-on image. In both the FPM and the TOM, an attention mechanism is introduced to obtain sufficient features, which enhances the preservation of characteristics. In the GMM, a two-stage coarse-to-fine training strategy with a grid regularization loss (GR loss) is employed to optimize the clothing warping.
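
The exact form of the GR loss is not given in the abstract; as an assumption, the sketch below shows one common kind of grid regularization, a second-order smoothness penalty on the predicted warping grid that discourages abrupt clothing deformation. The grid shape is hypothetical.

```python
# Minimal sketch (an assumption, not necessarily the paper's exact GR loss): a grid
# regularization term that penalizes abrupt changes between neighbouring control
# points of a warping grid, encouraging smooth clothing deformation.
import torch

def grid_regularization_loss(grid):
    """grid: (B, H, W, 2) sampling grid predicted by a geometric matching module."""
    dx = grid[:, :, 1:, :] - grid[:, :, :-1, :]   # horizontal differences
    dy = grid[:, 1:, :, :] - grid[:, :-1, :, :]   # vertical differences
    # Penalize variation of the differences (second-order smoothness).
    ddx = (dx[:, :, 1:, :] - dx[:, :, :-1, :]).abs().mean()
    ddy = (dy[:, 1:, :, :] - dy[:, :-1, :, :]).abs().mean()
    return ddx + ddy

warp_grid = torch.rand(2, 16, 16, 2, requires_grad=True)   # hypothetical coarse grid
loss = grid_regularization_loss(warp_grid)
loss.backward()
```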

Findings

In this paper, the authors propose a three-stage image-based virtual try-on network, DO-VTON, which generates realistic try-on images with extensive characteristics preserved.

Research limitations/implications

The authors' proposed algorithm can provide a promising tool for image-based virtual try-on.

Practical implications

The authors' proposed method offers a technology for consumers to purchase favored clothes online and helps reduce the return rate in e-commerce.

Originality/value

The authors' proposed algorithm can therefore provide a promising tool for image-based virtual try-on.

Details

International Journal of Clothing Science and Technology, vol. 35 no. 4
Type: Research Article
ISSN: 0955-6222

Open Access
Article
Publication date: 21 April 2022

Warot Moungsouy, Thanawat Tawanbunjerd, Nutcha Liamsomboon and Worapan Kusakunniran

Abstract

Purpose

This paper proposes a solution for recognizing human faces under mask-wearing. The lower part of the human face is occluded and cannot be used in the learning process of face recognition. So, the proposed solution is developed to recognize human faces from whichever facial components are available, which may vary depending on whether or not a mask is worn.

Design/methodology/approach

The proposed solution is developed based on the FaceNet framework, aiming to modify the existing facial recognition model to improve performance in both the mask-wearing and non-mask-wearing scenarios. Simulated masked-face images are then computed on top of the original face images, to be used in the learning process of face recognition. In addition, feature heatmaps are drawn to visualize which parts of the facial images are most significant for recognizing faces under mask-wearing.
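
As a hedged sketch of the simulated-mask augmentation described above (not the authors' pipeline), the code below draws an opaque mask-like polygon over the lower part of an aligned face crop before it would be fed to a FaceNet-style embedding model; the polygon coordinates and colour are hypothetical.

```python
# Minimal sketch (not the authors' pipeline): simulate a face mask by covering the
# lower part of a face image before it is fed to a FaceNet-style embedding model.
# The face box, mask polygon and colour are hypothetical.
from PIL import Image, ImageDraw

def add_simulated_mask(face_img, color=(70, 130, 180)):
    """Draw an opaque polygon over the lower half of an aligned face crop."""
    img = face_img.copy()
    w, h = img.size
    draw = ImageDraw.Draw(img)
    polygon = [
        (int(0.1 * w), int(0.55 * h)),   # left cheek
        (int(0.9 * w), int(0.55 * h)),   # right cheek
        (int(0.8 * w), int(0.95 * h)),   # right jaw
        (int(0.2 * w), int(0.95 * h)),   # left jaw
    ]
    draw.polygon(polygon, fill=color)
    return img

face = Image.new("RGB", (160, 160), (210, 180, 160))  # placeholder aligned face crop
masked_face = add_simulated_mask(face)
masked_face.save("masked_face.png")
```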

Findings

The proposed method is validated in several experimental scenarios. The results show an outstanding accuracy of 99.2% in the mask-wearing scenario. The feature heatmaps also show that the non-occluded components, including the eyes and nose, become more significant for recognizing human faces than the lower part of the face, which may be occluded under a mask.

Originality/value

The convolutional neural network-based solution is tuned for recognizing human faces in a mask-wearing scenario. Simulated masks are augmented onto the original face images for training the face recognition model. Heatmaps are then computed to verify that the features generated from the top half of the face images are correctly chosen for face recognition.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Article
Publication date: 8 July 2022

Chuanming Yu, Zhengang Zhang, Lu An and Gang Li

Abstract

Purpose

In recent years, knowledge graph completion has gained increasing research attention and shown significant improvements. However, most existing models use only the structures of knowledge graph triples when obtaining the entity and relationship representations; the integration of entity descriptions with the knowledge graph network structure has been ignored. This paper aims to investigate how to leverage both the entity descriptions and the network structure to enhance knowledge graph completion with high generalization ability across different datasets.

Design/methodology/approach

The authors propose an entity-description augmented knowledge graph completion model (EDA-KGC), which incorporates the entity descriptions and the network structure. It consists of three modules, i.e. representation initialization, deep interaction and reasoning. The representation initialization module utilizes the entity descriptions to obtain pre-trained representations of the entities. The deep interaction module acquires the features of the deep interaction between entities and relationships. The reasoning module performs matrix manipulations with the deep interaction feature vector and the entity representation matrix, thus obtaining the probability distribution over target entities. The authors conduct intensive experiments on the FB15K, WN18, FB15K-237 and WN18RR data sets to validate the effectiveness of the proposed model.
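
As an illustrative sketch of the reasoning step (not the authors' EDA-KGC), the code below scores every candidate target entity by multiplying a deep-interaction feature vector with the entity representation matrix and applying a softmax; the dimensions and tensors are hypothetical placeholders.

```python
# Minimal sketch (not the authors' EDA-KGC): the reasoning step described above,
# scoring all candidate target entities by multiplying a deep-interaction feature
# vector with the entity representation matrix and applying a softmax. All
# dimensions and tensors are hypothetical.
import torch
import torch.nn.functional as F

num_entities, dim = 14_541, 256                   # e.g. an FB15K-237-sized entity set
entity_matrix = torch.randn(num_entities, dim)    # entity representations
                                                  # (initialized from descriptions)
interaction_vec = torch.randn(dim)                # deep interaction of (head, relation)

scores = entity_matrix @ interaction_vec          # one score per candidate tail entity
probs = F.softmax(scores, dim=0)                  # probability distribution over targets

top_prob, top_idx = probs.topk(5)
print(top_idx.tolist())                           # indices of the 5 most likely tails
```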

Findings

The experiments demonstrate that the proposed model outperforms both the traditional structure-based knowledge graph completion models and the entity-description-enhanced knowledge graph completion models. The experiments also suggest that the model is more feasible in scenarios such as sparse data, dynamic entities and limited training epochs. The study shows that integrating entity descriptions with the network structure can significantly increase the effectiveness of the knowledge graph completion task.

Originality/value

The research provides a significant reference for completing missing information in knowledge graphs and for improving the application of knowledge graphs in information retrieval, question answering and other fields.

Details

Aslib Journal of Information Management, vol. 75 no. 3
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 2 April 2024

R.S. Vignesh and M. Monica Subashini

Abstract

Purpose

An abundance of techniques has been presented so far for waste classification, but they deliver inefficient results with low accuracy. Their performance differs across repositories, and there is an insufficiency of large-scale databases for training. The purpose of the study is to provide high security.

Design/methodology/approach

In this research, optimization-assisted federated learning (FL) is introduced for thermoplastic waste segregation and classification. A deep learning (DL) network trained by Archimedes Henry gas solubility optimization (AHGSO) is used for the classification of plastic and resin types. A deep quantum neural network (DQNN) is used for first-level classification and a deep max-out network (DMN) is employed for second-level classification. AHGSO is obtained by blending the features of the Archimedes optimization algorithm (AOA) and Henry gas solubility optimization (HGSO). The entities included in this approach are the nodes and the server. Local training is carried out on local data, and updates are sent to the server, where the model is aggregated. Thereafter, each node downloads the global model, and update training is executed on the downloaded global model and the local model until the stopping condition is satisfied. Finally, the local update and the aggregation at the server are performed using the averaging method. The Data tag suite (DATS_2022) dataset is used for multilevel thermoplastic waste segregation and classification.
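
As a minimal sketch of the server-side aggregation step described above (not the authors' system), the code below performs federated averaging of locally trained parameter vectors weighted by local dataset size; the client parameters and sizes are hypothetical.

```python
# Minimal sketch (not the authors' system): federated averaging of local model
# updates at the server, the aggregation step described above. Client weights and
# model shapes are hypothetical.
import numpy as np

def federated_average(client_weights, client_sizes):
    """Average client parameter vectors, weighted by local dataset size."""
    total = sum(client_sizes)
    stacked = np.stack(client_weights)                       # (num_clients, num_params)
    coeffs = np.array(client_sizes, dtype=float) / total     # per-client weighting
    return (coeffs[:, None] * stacked).sum(axis=0)           # aggregated global model

# Three hypothetical nodes with locally trained parameters.
clients = [np.random.default_rng(i).normal(size=1000) for i in range(3)]
sizes = [1200, 800, 2000]                                    # local dataset sizes

global_model = federated_average(clients, sizes)
print(global_model.shape)   # each node would download this and continue training
```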

Findings

Using the DQNN for first-level classification, the designed optimization-assisted FL attains an accuracy of 0.930, a mean average precision (MAP) of 0.933, a false positive rate (FPR) of 0.213, a loss of 0.211, a mean square error (MSE) of 0.328 and a root mean square error (RMSE) of 0.572. In the second-level classification using the DMN, the accuracy, MAP, FPR, loss, MSE and RMSE are 0.932, 0.935, 0.093, 0.068, 0.303 and 0.551, respectively.

Originality/value

The multilevel thermoplastic waste segregation and classification using the proposed model is accurate and improves the effectiveness of the classification.

Article
Publication date: 30 August 2021

Jinchao Huang

Abstract

Purpose

The multi-domain convolutional neural network (MDCNN) model has been widely used for object recognition and tracking in the field of computer vision. However, if the objects to be tracked move rapidly or their appearances vary dramatically, the conventional MDCNN model suffers from the model drift problem. To solve this problem when tracking rapid objects in limiting environments, this paper proposes an auto-attentional mechanism-based MDCNN (AA-MDCNN) model for tracking rapidly moving and changing objects in such environments.

Design/methodology/approach

First, to distinguish the foreground object from the background and other similar objects, the auto-attentional mechanism selectively aggregates a weighted summation of all feature maps so that similar features are related to each other. Then, a bidirectional gated recurrent unit (Bi-GRU) architecture integrates all the feature maps to selectively emphasize the importance of the correlated feature maps. Finally, the final feature map is obtained by fusing the above two feature maps for object tracking. In addition, a composite loss function is constructed to handle the tracking of sequences that are similar but have different attributes, which is difficult for the conventional MDCNN model.
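
As a hedged sketch of the weighted feature-map aggregation described above (not the authors' AA-MDCNN), the code below computes self-attention weights between feature maps and fuses the attended maps back into the originals; the tensor shapes and scaling are hypothetical.

```python
# Minimal sketch (not the authors' AA-MDCNN): a self-attention style weighted
# summation over a stack of feature maps, the aggregation idea described above.
# Shapes and projections are hypothetical.
import torch
import torch.nn.functional as F

B, C, H, W = 1, 8, 16, 16
feature_maps = torch.randn(B, C, H, W)          # C feature maps from a backbone

flat = feature_maps.flatten(2)                  # (B, C, H*W): one vector per map
attn = F.softmax(flat @ flat.transpose(1, 2) / (H * W) ** 0.5, dim=-1)  # (B, C, C)

# Each output map is a weighted sum of all input maps, relating similar maps.
aggregated = (attn @ flat).reshape(B, C, H, W)
fused = feature_maps + aggregated               # residual fusion for tracking
print(fused.shape)
```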

Findings

To validate the effectiveness and feasibility of the proposed AA-MDCNN model, the ImageNet-Vid dataset is used to train the object tracking model, and the OTB-50 dataset is used to validate it. Experimental results show that adding the auto-attentional mechanism improves the accuracy rate by 2.75% and the success rate by 2.41%, respectively. The authors also selected six complex tracking scenarios in the OTB-50 dataset; of the eleven attributes validated, the proposed AA-MDCNN model outperformed the comparative models on nine. In addition, except for the scenario of multiple objects moving with each other, the proposed AA-MDCNN model handled the majority of rapid-moving-object tracking scenarios and outperformed the comparative models on such complex scenarios.

Originality/value

This paper introduces the auto-attentional mechanism into the MDCNN model and adopts the Bi-GRU architecture to extract key features. Using the proposed AA-MDCNN model, rapid object tracking under complex backgrounds, motion blur and occlusion achieves better results, and the model is expected to be further applied to rapid object tracking in the real world.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 15 no. 1
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 28 June 2019

Wendong Zheng, Huaping Liu, Bowen Wang and Fuchun Sun

Abstract

Purpose

For robots to interact more actively with the surrounding environment in object manipulation tasks or walking, they must understand the physical attributes of the objects and surface materials they encounter. Dynamic tactile sensing can effectively capture rich information about material properties. Hence, methods that convey and interpret this tactile information to the user can improve the quality of human–machine interaction. This paper aims to propose a visual-tactile cross-modal retrieval framework that conveys the tactile information of a surface material for perceptual estimation.

Design/methodology/approach

The tactile information of a new, unknown surface material is used to retrieve a perceptually similar surface from an available set of visual surface samples by associating the tactile information with the visual information of material surfaces. For the proposed framework, the authors propose an online low-rank similarity learning method, which can effectively and efficiently capture the cross-modal relative similarity between the visual and tactile modalities.
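
As an illustrative sketch of a low-rank cross-modal similarity (not the authors' exact method), the code below scores tactile-visual pairs with a bilinear similarity whose matrix is factorized into two low-rank factors, the kind of function an online low-rank similarity learner would update; the feature dimensions, rank and data are hypothetical.

```python
# Minimal sketch (not the authors' method): a low-rank bilinear similarity between
# tactile and visual feature vectors, the kind of cross-modal score an online
# low-rank similarity learner would optimize. Dimensions and factors are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
d_tactile, d_visual, rank = 64, 128, 8

# Low-rank factorization W = U @ V keeps the similarity matrix cheap to store/update.
U = rng.normal(size=(d_tactile, rank)) * 0.1
V = rng.normal(size=(rank, d_visual)) * 0.1

def cross_modal_similarity(tactile_feat, visual_feat):
    """Bilinear score s(t, v) = t^T (U V) v with a rank-`rank` similarity matrix."""
    return tactile_feat @ U @ V @ visual_feat

query_tactile = rng.normal(size=d_tactile)            # features of an unknown surface
visual_gallery = rng.normal(size=(20, d_visual))      # available visual sample set

scores = np.array([cross_modal_similarity(query_tactile, v) for v in visual_gallery])
print(scores.argsort()[::-1][:3])                     # top-3 perceptually similar surfaces
```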

Findings

Experimental results on the Technische Universität München Haptic Texture Database demonstrate the effectiveness of the proposed framework and method.

Originality/value

This paper provides a visual-tactile cross-modal perception method for recognizing material surfaces. With this method, a robot can communicate and interpret the conveyed information about surface material properties to the user, which will further improve the quality of robot interaction.

Details

Industrial Robot: the international journal of robotics research and application, vol. 46 no. 3
Type: Research Article
ISSN: 0143-991X
