Search results

1 – 10 of 434
Article
Publication date: 1 June 2012

Kumar S. Ray and Mandrita Mondal

Abstract

Purpose

The purpose of this study is to develop a Turing machine or a finite automaton, which scans the input data tape in the form of DNA sequences and inspires the basic design of a DNA computer.

Design/methodology/approach

This model, based on a splicing system, can perform fuzzy reasoning autonomously by using DNA sequences and human-assisted protocols. Its hardware consists of a class IIS restriction enzyme and T4 DNA ligase, while the software consists of double-stranded DNA sequences and transition molecules capable of encoding fuzzy rules. Upon mixing solutions containing these components, the automaton undergoes a cascade of cleaving and splicing cycles to produce the computational result in the form of a double-stranded DNA sequence representing the automaton's final state.
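
As a rough software illustration of the cleave-and-splice cascade described above, the sketch below simulates a two-state automaton in Python: the current state is assumed to be carried by the exposed sticky end after each restriction cut, and ligation with a matching transition molecule is modelled as a table lookup. The states, symbols and transition rules are hypothetical placeholders, not the fuzzy rules or sequences used by the authors.

```python
# Toy software model of a molecular finite automaton.
# Hypothetical encoding: each (state, symbol) pair stands for the sticky end
# that would be exposed after the restriction enzyme cut.

# Transition table: (current_state, input_symbol) -> next_state.
TRANSITIONS = {
    ("S0", "a"): "S0",
    ("S0", "b"): "S1",
    ("S1", "a"): "S1",
    ("S1", "b"): "S0",
}

ACCEPTING = {"S1"}


def run_automaton(tape, start_state="S0"):
    """Process the input tape one cleave/ligate cycle at a time and return
    the final state, or None if no transition molecule matches."""
    state = start_state
    for symbol in tape:
        key = (state, symbol)
        if key not in TRANSITIONS:
            return None  # computation stalls: no matching transition molecule
        state = TRANSITIONS[key]  # ligation selects the next-state molecule
    return state


if __name__ == "__main__":
    final = run_automaton("abba")
    print(final, "accepted" if final in ACCEPTING else "rejected")
```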

Findings

In this work, the authors have fused the idea of a splicing system with automata theory to develop a fuzzy molecular automaton in which 10^18 processors can work in parallel, requiring a trillion times less space for information storage; it is 10^5 times faster than the existing supercomputer, and 10^19 operations can be performed using one Joule of energy.

Originality/value

This paper presents a generalized model for biologically inspired computation at the nanoscale.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 5 no. 2
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 11 October 2018

Salman Arshad, Bo Kong, Alan Kerstein and Michael Oevermann

Abstract

Purpose

The purpose of this numerical work is to present and test a new approach for large-scale scalar advection (splicing) in large eddy simulations (LES) that use the linear eddy sub-grid mixing model (LEM) called the LES-LEM.

Design/methodology/approach

The new splicing strategy is based on an ordered flux of spliced LEM segments. The principle is that low-flux segments have less momentum than high-flux segments and, therefore, are displaced less than high-flux segments. This strategy affects the order of both inflowing and outflowing LEM segments of an LES cell. The new splicing approach is implemented in a pressure-based fluid solver and tested by simulation of passive scalar transport in a co-flowing turbulent rectangular jet, instead of combustion simulation, to perform an isolated investigation of splicing. Comparison of the new splicing with a previous splicing approach is also done.
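
A purely illustrative sketch of the ordering idea follows. It assumes the LEM line of an LES cell is a list of (segment_id, length) pairs, that each outgoing face receives a share of the line proportional to its flux, and that faces are served in order of increasing flux so that low-flux pieces are displaced least; none of these data structures or rules are taken from the paper's implementation.

```python
# Illustrative splicing of an LES cell's LEM line according to outgoing face
# fluxes. The data layout and the proportional splitting rule are assumptions
# made only for this sketch.

def splice_outflow(lem_line, face_fluxes):
    """Split 'lem_line' among outgoing faces; return face -> spliced pieces,
    with faces served in order of increasing flux magnitude."""
    total_flux = sum(face_fluxes.values())
    total_len = sum(length for _, length in lem_line)
    line = list(lem_line)
    out = {}
    # Low-flux faces are served first, so their pieces come from the end of
    # the line nearest the cell and are displaced the least.
    for face, flux in sorted(face_fluxes.items(), key=lambda kv: kv[1]):
        target = total_len * flux / total_flux  # length owed to this face
        piece, moved = [], 0.0
        while line and moved < target:
            seg_id, length = line.pop(0)
            take = min(length, target - moved)
            piece.append((seg_id, take))
            if take < length:                   # split the segment if needed
                line.insert(0, (seg_id, length - take))
            moved += take
        out[face] = piece
    return out


if __name__ == "__main__":
    line = [("A", 1.0), ("B", 2.0), ("C", 1.0)]
    print(splice_outflow(line, {"east": 3.0, "north": 1.0}))
```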

Findings

The simulation results show that the velocity statistics and passive scalar mixing are correctly predicted using the new splicing approach for the LES-LEM. It is argued that modeling of large-scale advection in the LES-LEM via splicing is reasonable, and the new splicing approach potentially captures the physics better than the old approach. The standard LES sub-grid mixing models do not represent turbulent mixing properly because they do not adequately represent molecular diffusion processes and counter gradient effects. Scalar mixing in turbulent flow consists of two different processes: turbulent mixing, which increases the interface between unmixed species, and molecular diffusion. It is crucial to model these two processes individually at their respective time scales. The LEM explicitly includes both of these processes and has been used successfully as a sub-grid scalar mixing model (McMurtry et al., 1992; Sone and Menon, 2003). Here, the turbulent mixing capabilities of the LES-LEM with a modified splicing treatment are examined.

Originality/value

The splicing strategy proposed for the LES-LEM is original and has not been investigated before. Also, it is the first LES-LEM implementation using unstructured grids.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 28 no. 10
Type: Research Article
ISSN: 0961-5539

Keywords

Article
Publication date: 14 February 2022

Arslan Akram, Saba Ramzan, Akhtar Rasool, Arfan Jaffar, Usama Furqan and Wahab Javed

Abstract

Purpose

This paper aims to propose a novel splicing detection method using a discriminative robust local binary pattern (DRLBP) with a support vector machine (SVM). Reliable detection of image splicing is of growing interest due to the extensive utilization of digital images as a communication medium and the availability of powerful image processing tools. Image splicing is a commonly used forgery technique in which a region of an image is copied and pasted to a different image to hide the original contents of the image.

Design/methodology/approach

The structural changes caused by splicing are robustly described by DRLBP. Because the changes caused by image forgery are localized, the input image is first divided into overlapping blocks. The DRLBP descriptor is calculated for each block, and the feature vector is created by concatenating the block descriptors. Finally, the features are passed to the SVM classifier to predict whether the image is genuine or forged.
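
The block-wise extract/concatenate/classify flow can be sketched in a few lines of Python. The sketch below uses scikit-image's standard local_binary_pattern as a simplified stand-in for DRLBP and an SVM from scikit-learn; the block size, LBP parameters and the random training data are placeholders rather than the settings or datasets from the paper.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

# Placeholder parameters; the paper's DRLBP settings are not reproduced here.
BLOCK, STEP, P, R, BINS = 32, 16, 8, 1.0, 10


def block_lbp_features(image):
    """Split a grayscale image into overlapping blocks, compute an LBP
    histogram per block (stand-in for DRLBP) and concatenate them."""
    h, w = image.shape
    feats = []
    for y in range(0, h - BLOCK + 1, STEP):
        for x in range(0, w - BLOCK + 1, STEP):
            block = image[y:y + BLOCK, x:x + BLOCK]
            codes = local_binary_pattern(block, P, R, method="uniform")
            hist, _ = np.histogram(codes, bins=BINS, range=(0, BINS))
            feats.append(hist / hist.sum())
    return np.concatenate(feats)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Dummy data: random "images" standing in for genuine/spliced examples.
    X = np.array([block_lbp_features(rng.random((128, 128))) for _ in range(20)])
    y = np.array([0] * 10 + [1] * 10)  # 0 = genuine, 1 = spliced
    clf = SVC(kernel="rbf").fit(X, y)
    print("predicted:", clf.predict(X[:2]))
```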

Findings

The performance and robustness of the method are evaluated on public domain benchmark data sets, achieving 98.95% prediction accuracy. The results are compared with state-of-the-art image splicing detection approaches, and the comparison shows that the proposed method performs better.

Originality/value

The proposed method uses DRLBP, an efficient texture descriptor that combines both corner and interior detail in a single representation. It produces discriminative and compact features, so no separate feature selection step is needed to drop redundant and insignificant features.

Details

World Journal of Engineering, vol. 19 no. 4
Type: Research Article
ISSN: 1708-5284

Keywords

Article
Publication date: 1 September 2004

Elisa Juholin

Abstract

The study suggests that the prominent driving force behind corporate social responsibility (CSR) is companies’ long‐term profitability, supported by company leadership and efficiency, competitiveness, and the ability to anticipate the future. The long evolution of Finnish companies since the 18th century has created fertile ground for responsibility. Despite the absence of significant moral or ethical guidance, the thinking of the participating companies was for the most part business‐oriented. The management and organization of CSR appeared to be professional and efficient. CSR was found to be optimal at the highest level of the organizations studied, and the commitment of the top management unquestionable. The present status of CSR seemed to exist more on the theoretical than the practical level. Implementation was seen as a major challenge for the future. The jungle of standards and measurement instruments is a serious problem. Communication was narrowly viewed and technical, and the prevailing paradigm was rather mechanistic.

Details

Corporate Governance: The international journal of business in society, vol. 4 no. 3
Type: Research Article
ISSN: 1472-0701

Keywords

Article
Publication date: 3 May 2023

Bin Wang, Fanghong Gao, Le Tong, Qian Zhang and Sulei Zhu

Abstract

Purpose

Traffic flow prediction has always been a top priority of intelligent transportation systems. There are many mature methods for short-term traffic flow prediction. However, the existing methods are often insufficient in capturing long-term spatial-temporal dependencies. To predict long-term dependencies more accurately, in this paper, a new and more effective traffic flow prediction model is proposed.

Design/methodology/approach

This paper proposes a new and more effective traffic flow prediction model, named channel attention-based spatial-temporal graph neural networks. A graph convolutional network is used to extract local spatial-temporal correlations, a channel attention mechanism is used to enhance the influence of nearby spatial-temporal dependencies on decision-making and a transformer mechanism is used to capture long-term dependencies.
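
A minimal PyTorch sketch of how these three ingredients could be chained is given below: a one-hop graph convolution over a normalized adjacency matrix, squeeze-and-excitation style channel attention, and a transformer encoder applied along the time axis. The layer sizes, tensor layout and adjacency handling are assumptions for illustration, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

# Minimal sketch: graph convolution + SE-style channel attention +
# transformer encoder over time. Dimensions are illustrative only.


class STBlock(nn.Module):
    def __init__(self, in_dim, hid_dim, heads=2):
        super().__init__()
        self.theta = nn.Linear(in_dim, hid_dim)       # graph conv weights
        self.fc1 = nn.Linear(hid_dim, hid_dim // 2)   # channel attention MLP
        self.fc2 = nn.Linear(hid_dim // 2, hid_dim)
        layer = nn.TransformerEncoderLayer(hid_dim, heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=1)

    def forward(self, x, adj):
        # x: (batch, time, nodes, features); adj: (nodes, nodes), row-normalized.
        h = torch.einsum("nm,btmf->btnf", adj, x)     # aggregate neighbour features
        h = torch.relu(self.theta(h))                 # (B, T, N, H)
        s = h.mean(dim=(1, 2))                        # squeeze over time and nodes
        w = torch.sigmoid(self.fc2(torch.relu(self.fc1(s))))
        h = h * w[:, None, None, :]                   # re-weight channels
        b, t, n, f = h.shape                          # transformer over time, per node
        h = self.temporal(h.permute(0, 2, 1, 3).reshape(b * n, t, f))
        return h.reshape(b, n, t, f).permute(0, 2, 1, 3)


if __name__ == "__main__":
    adj = torch.eye(5)                    # trivial 5-node graph (self-loops only)
    x = torch.randn(2, 12, 5, 3)          # 2 samples, 12 time steps, 3 features
    print(STBlock(in_dim=3, hid_dim=16)(x, adj).shape)   # (2, 12, 5, 16)
```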

Findings

The proposed model is applied to two common highway datasets: METR-LA, collected in Los Angeles, and PEMS-BAY, collected in the California Bay Area. On three performance metrics, the proposed model outperforms five other popular models.

Originality/value

(1) Based on the spatial-temporal synchronization graph convolution module, a spatial-temporal channel attention module is designed to increase the influence of proximity dependence on decision-making by enhancing or suppressing different channels. (2) To better capture long-term dependencies, the transformer module is introduced.

Details

Data Technologies and Applications, vol. 58 no. 1
Type: Research Article
ISSN: 2514-9288

Keywords

Article
Publication date: 30 April 2024

Anjali Bansal, C. Lakshman, Marco Romano, Shivinder Nijjer and Rekha Attri

Abstract

Purpose

Research on leaders’ knowledge management systems focuses exclusively on how leaders gather and disseminate knowledge in collaboration with external actors. Not much is known about how leaders address the psychological aspects of employees and strategize internal communication. In addition, while previous work has treated high uncertainty as a default feature of crisis, this study aims to propose that perceived uncertainty varies in experience/meaning and has a crucial bearing on the relative balance of cognitive/emotional load on the leader and behavioral/psychological responses.

Design/methodology/approach

The authors contribute by qualitatively examining the role of leader knowledge systems in designing communication strategies in the context of the COVID-19 crisis by investigating communication characteristics, style, modes and the relatively unaddressed role of compassion/persuasion. In this pursuit, the authors interviewed 21 C-suite leaders, including chief executive officers, chief marketing officers, chief financial officers, chief human resource officers and founders, and analyzed their data using open, axial and selective coding, which were later extracted for representative themes and overarching dimensions.

Findings

Drawing on grounded theory research, the authors present a framework of knowledge systems and the resulting communication with employees in high-uncertainty and low-uncertainty crises. The authors highlight interactions of a set of concepts – leaders’ preparedness, leaders’ support to employees and tailored communication adapted to perceived uncertainty, leading to enhanced trust – in the achievement of outcomes related to balancing operational and relational systems with employees. The findings suggest that a structured process of communication helps employees mitigate any concern related to uncertainty and feel confident in their leadership.

Originality/value

The research has implications for leaders in managing their knowledge systems, for human resources practitioners in designing effective internal communication programs, as well as for scholars in knowledge management, communication and leadership.

Details

Journal of Knowledge Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1367-3270

Keywords

Article
Publication date: 4 July 2024

Weijiang Wu, Heping Tan and Yifeng Zheng

Abstract

Purpose

Community detection is a key factor in analyzing the structural features of complex networks. However, traditional dynamic community detection methods often fail to effectively solve the problems of deep network information loss and computational complexity in hyperbolic space. To address this challenge, a hyperbolic space-based dynamic graph neural network community detection model (HSDCDM) is proposed.

Design/methodology/approach

HSDCDM first projects the node features into the hyperbolic space and then utilizes the hyperbolic graph convolution module on the Poincaré and Lorentz models to realize feature fusion and information transfer. In addition, the parallel optimized temporal memory module ensures fast and accurate capture of time domain information over extended periods. Finally, the community clustering module divides the community structure by combining the node characteristics of the space domain and the time domain. To evaluate the performance of HSDCDM, experiments are conducted on both artificial and real datasets.
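
For readers unfamiliar with the first step, one common way to project Euclidean node features into hyperbolic space is the exponential map at the origin of the Poincaré ball, sketched below together with the corresponding geodesic distance. The curvature value and feature vectors are placeholders, and the sketch covers only this projection step, not HSDCDM's convolution or temporal modules.

```python
import numpy as np

# Exponential map at the origin of the Poincaré ball with curvature -c,
# plus the geodesic distance on the ball. Inputs are illustrative only.


def expmap0(v, c=1.0, eps=1e-9):
    """Map Euclidean vectors v (shape: nodes x dim) onto the Poincaré ball."""
    sqrt_c = np.sqrt(c)
    norm = np.maximum(np.linalg.norm(v, axis=-1, keepdims=True), eps)
    return np.tanh(sqrt_c * norm) * v / (sqrt_c * norm)


def poincare_distance(x, y, c=1.0):
    """Geodesic distance between two points on the Poincaré ball."""
    sqrt_c = np.sqrt(c)
    diff = np.linalg.norm(x - y)
    denom = (1 - c * np.linalg.norm(x) ** 2) * (1 - c * np.linalg.norm(y) ** 2)
    return np.arccosh(1 + 2 * c * diff ** 2 / denom) / sqrt_c


if __name__ == "__main__":
    feats = np.random.default_rng(0).normal(size=(4, 3))   # 4 nodes, 3 dims
    ball = expmap0(feats)
    print(np.linalg.norm(ball, axis=1))                    # all norms < 1
    print(poincare_distance(ball[0], ball[1]))
```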

Findings

Experimental results on complex networks demonstrate that HSDCDM significantly enhances the quality of community detection in hierarchical networks. It shows an average improvement of 7.29% in NMI and a 9.07% increase in ARI across datasets compared to traditional methods. For complex networks with non-Euclidean geometric structures, the HSDCDM model incorporating hyperbolic geometry can better handle the discontinuity of the metric space, provides a more compact embedding that preserves the data structure and offers advantages over methods based on Euclidean geometry.

Originality/value

This model aggregates the potential information of nodes in space through manifold-preserving distribution mapping and hyperbolic graph topology modules. Moreover, it optimizes the Simple Recurrent Unit (SRU) on the hyperbolic space Lorentz model to effectively extract time series data in hyperbolic space, thereby enhancing computing efficiency by eliminating the reliance on tangent space.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 17 no. 3
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 1 November 1981

Abstract

There is little doubt that current bids now being pursued by Lucas Aerospace will provide the mainstay of work for the Company's factories in the 1990s.

Details

Aircraft Engineering and Aerospace Technology, vol. 53 no. 11
Type: Research Article
ISSN: 0002-2667

Article
Publication date: 1 September 2001

Elisa Juholin

Abstract

This study examined reasons why Elisabeth Rehn – labelled by the media as the queen of the polls – lost her lead position a month before the presidential election. Rehn’s campaign had two approaches – traditional and marketing oriented. On the one hand, it represented a voter-driven campaign with various non-political and political professionals at its disposal; on the other hand, the candidate was closer to the traditional party-driven or ideology-driven concept, holding to traditional political themes. The main internal weaknesses of Rehn’s campaign were four factors: the weakness of the (civic) organisation, the lack of resources, the candidate’s credibility problems and the wrong themes. The relevant external factors were the line-up of the candidates, with two strong right-wing female candidates, and the overwhelming resources of the competing organisations. The study provides evidence that most of the theoretical factors identified in previous research were also relevant in this campaign and emphasises the importance of the candidate’s credibility. The indicators of credibility were her competence and appearance (external credibility) and her personality and commitment (internal credibility), even though at the very beginning the candidate was evaluated by the campaign workers as the most competent and experienced candidate. The polls and the media were considered to be factors that strengthened the result more than created it.

Details

Corporate Communications: An International Journal, vol. 6 no. 3
Type: Research Article
ISSN: 1356-3289

Keywords

Article
Publication date: 25 September 2023

Jiaxin Li, Zhiyuan Zhu, Zhiwei Li, Yonggang Zhao, Yun Lei, Xuping Su, Changjun Wu and Haoping Peng

Abstract

Purpose

Gallic acid is a substance that is widely found in nature. Initially, it was only used as a corrosion inhibitor to retard the rate of corrosion of metals. In recent years, with intensive research by scholars, the modification of coatings containing gallic acid has become a hot topic in the field of metal protection. This study aims to summarize the various preparation methods of gallic acid and its research progress in corrosion inhibitors and coatings, as well as related studies using quantum chemical methods to assess the predicted corrosion inhibition effects and to systematically describe the prospects and current status of gallic acid applications in the field of metal corrosion inhibition and protection.

Design/methodology/approach

First, the various industrial methods for preparing gallic acid are reviewed. Second, the corrosion inhibition principles and research progress of gallic acid as a metal corrosion inhibitor are presented. Then, the corrosion inhibition principles and research progress of gallic acid in the synthesis and modification of various rust conversion coatings, nano-coatings and organic resin coatings are described. After that, studies on the evaluation and prediction of the corrosion inhibition of gallic acid on metals by quantum chemical methods are presented. Finally, new research ideas on gallic acid in the field of corrosion inhibition and protection of metals are summarized.
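
The quantum chemical assessment mentioned above usually reduces to a handful of global reactivity descriptors derived from frontier orbital energies. The sketch below computes the standard Koopmans-based quantities (ionization potential, electron affinity, electronegativity, hardness, softness and the fraction of electrons transferred to the metal); the HOMO/LUMO energies and the iron work function in the example are placeholder numbers, not results from the studies reviewed.

```python
# Global reactivity descriptors commonly used to rank corrosion inhibitors
# from frontier orbital energies (Koopmans' approximation). The numeric
# inputs below are placeholders, not values from the reviewed studies.


def reactivity_descriptors(e_homo, e_lumo, work_function=4.82):
    """Return common descriptors (all in eV except softness, in 1/eV).
    work_function defaults to a frequently used value for Fe(110)."""
    ionization = -e_homo               # I  = -E_HOMO
    affinity = -e_lumo                 # A  = -E_LUMO
    gap = e_lumo - e_homo              # energy gap E_LUMO - E_HOMO
    chi = (ionization + affinity) / 2  # electronegativity
    eta = (ionization - affinity) / 2  # global hardness
    sigma = 1 / eta                    # global softness
    dn = (work_function - chi) / (2 * eta)  # fraction of electrons transferred
    return {"I": ionization, "A": affinity, "gap": gap,
            "chi": chi, "eta": eta, "sigma": sigma, "dN": dn}


if __name__ == "__main__":
    # Hypothetical HOMO/LUMO energies (eV) for an inhibitor molecule.
    for name, value in reactivity_descriptors(-5.9, -1.1).items():
        print(f"{name}: {value:.3f}")
```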

Findings

Gallic acid can be used as a corrosion inhibitor or coating in metal protection.

Research limitations/implications

There is a lack of research on the synergistic improvement of gallic acid and other substances.

Practical implications

The specific applications of gallic acid in the field of metal protection are summarized, and directions for future research are put forward.

Originality/value

To the best of the authors’ knowledge, this paper systematically expounds on the research progress of gallic acid in the field of metal protection for the first time and provides new ideas and directions for future research.

Details

Anti-Corrosion Methods and Materials, vol. 70 no. 6
Type: Research Article
ISSN: 0003-5599

Keywords
