Search results

1 – 10 of 25
Article
Publication date: 8 May 2024

Charalampos Alexopoulos and Stuti Saxena

Abstract

Purpose

This paper aims to further the understanding of Open Government Data (OGD) adoption by the government by invoking two quantum physics theories – percolation theory and expander graph theory.

Design/methodology/approach

Extant research on the barriers to the adoption and rollout of OGD is reviewed to motivate the research question for the present study. Both theories are then summarized, and lessons are drawn from them to answer that question.

Findings

Percolation theory explains why OGD initiatives find it difficult to percolate across the hierarchical and geographical levels of an administrative division. Expander graph theory underlines the need for networking among and within the key government personnel to bolster the motivation and capacity building of the operational personnel linked with the OGD initiative. This theoretical understanding also aids the implementation and institutionalization of OGD in general.
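
The paper is conceptual, but the two mechanisms it borrows can be illustrated numerically. The sketch below is not from the paper: the hierarchy shape, link-retention probabilities and network size are arbitrary assumptions; it only shows how far an adoption signal reaches under bond percolation on a tree, and the spectral gap of a regular graph as a crude proxy for expansion.

    # Illustrative sketch (assumptions throughout, not the paper's model).
    import numpy as np
    import networkx as nx

    def percolated_reach(tree, root, p, rng):
        """Keep each edge with probability p and count nodes still reachable from the root."""
        kept = [e for e in tree.edges if rng.random() < p]
        sub = nx.Graph()
        sub.add_nodes_from(tree)
        sub.add_edges_from(kept)
        return len(nx.node_connected_component(sub, root))

    rng = np.random.default_rng(0)
    hierarchy = nx.balanced_tree(3, 5)              # ministry -> departments -> field offices
    for p in (0.3, 0.5, 0.7, 0.9):                  # strength of each hierarchical link
        reach = np.mean([percolated_reach(hierarchy, 0, p, rng) for _ in range(200)])
        print(f"link strength {p}: adoption reaches ~{reach:.0f} of {hierarchy.number_of_nodes()} units")

    # Spectral gap of a random regular "networking" graph: a larger gap means
    # information and motivation spread quickly between personnel (expander behaviour).
    net = nx.random_regular_graph(4, 200, seed=1)
    eigs = np.sort(np.linalg.eigvalsh(nx.normalized_laplacian_matrix(net).toarray()))
    print("spectral gap:", eigs[1])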

Originality/value

Cross-domain research is needed on any theme. Given the innumerable challenges surrounding the adoption of OGD by governments across the world, applying these two theories from quantum physics may resolve the quandary in a fitting way.

Details

foresight, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1463-6689

Article
Publication date: 9 May 2024

Yufeng Zhang and Lizhen Wang

Abstract

Purpose

The fractional Fokker-Planck equation (FFPE) and the time fractional coupled Boussinesq-Burger equations (TFCBBEs) play important roles in the fields of solute transport and fluid dynamics, respectively. Although there are many methods for constructing approximate solutions, simple and effective methods are preferred. This paper aims to utilize the Laplace Adomian decomposition method (LADM) to construct approximate solutions for these two types of equations and gives several numerical examples, which demonstrate the validity of LADM by comparing the error between the calculated results and the exact solutions.

Design/methodology/approach

This paper analyzes and investigates time-space fractional partial differential equations, with derivatives taken in the Caputo sense, using the LADM, which is a combination of the Laplace transform and the Adomian decomposition method. LADM was first proposed by Khuri in 2001 and has since been used extensively to construct approximate solutions of partial differential and fractional partial differential equations describing physical phenomena.
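
For readers unfamiliar with the technique, the core of the LADM in the Caputo sense can be sketched as follows; this is the standard textbook form, and the specific drift, diffusion and coupling terms of the FFPE and TFCBBEs follow the paper rather than this sketch. For $n-1 < \alpha \le n$, the Caputo derivative and its Laplace transform are

    ^{C}D_t^{\alpha}u(x,t) = \frac{1}{\Gamma(n-\alpha)} \int_0^{t} (t-\tau)^{\,n-\alpha-1}\, \frac{\partial^{n} u(x,\tau)}{\partial \tau^{n}}\, d\tau,
    \qquad
    \mathcal{L}\!\left\{^{C}D_t^{\alpha}u\right\} = s^{\alpha}U(x,s) - \sum_{k=0}^{n-1} s^{\alpha-1-k}\, \frac{\partial^{k} u}{\partial t^{k}}(x,0).

For an equation of the form $^{C}D_t^{\alpha}u = Ru + Nu + g(x,t)$, with $R$ linear and $N$ nonlinear, applying the Laplace transform, solving for $U(x,s)$ and inverting yields the LADM recursion, with $u=\sum_{m\ge 0}u_m$ and $Nu=\sum_{m\ge 0}A_m$ (Adomian polynomials):

    u_0 = \mathcal{L}^{-1}\!\left[\sum_{k=0}^{n-1} s^{-(k+1)}\, \frac{\partial^{k} u}{\partial t^{k}}(x,0) + s^{-\alpha}G(x,s)\right],
    \qquad
    u_{m+1} = \mathcal{L}^{-1}\!\left[ s^{-\alpha}\, \mathcal{L}\{R u_m + A_m\} \right], \quad m \ge 0.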

Findings

Approximate solutions to the FFPE and TFCBBEs are obtained using the LADM. A number of numerical examples and graphs are used to compare the errors between these results and the exact solutions. The results show that LADM is a simple and effective mathematical technique for constructing approximate solutions of the nonlinear time-space fractional equations considered in this work.

Originality/value

This paper verifies the effectiveness of the LADM by using it to solve the FFPE and TFCBBEs. Both equations are physically meaningful, and the results will be helpful in the study of atmospheric diffusion, shallow water waves and other areas. The paper also generalizes the drift and diffusion terms of the FFPE to a general form, which provides great convenience for future studies.

Details

Engineering Computations, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 1 April 2024

Xiaoxian Yang, Zhifeng Wang, Qi Wang, Ke Wei, Kaiqi Zhang and Jiangang Shi

Abstract

Purpose

This study aims to adopt a systematic review approach to examine the existing literature on law and large language models (LLMs). It involves analyzing and synthesizing relevant research papers, reports and scholarly articles that discuss the use of LLMs in the legal domain. The review encompasses various aspects, including an analysis of LLMs, legal natural language processing (NLP), model tuning techniques, data processing strategies and frameworks for addressing the challenges associated with legal question-and-answer (Q&A) systems. Additionally, the study explores potential applications and services that can benefit from the integration of LLMs in the field of intelligent justice.

Design/methodology/approach

This paper surveys the state-of-the-art research on law LLMs and their application in the field of intelligent justice. The study aims to identify the challenges associated with developing Q&A systems based on LLMs and explores potential directions for future research and development. The ultimate goal is to contribute to the advancement of intelligent justice by effectively leveraging LLMs.

Findings

To apply a law LLM effectively, systematic research on LLMs, legal NLP and model tuning techniques is required.

Originality/value

This study contributes to the field of intelligent justice by providing a comprehensive review of the current state of research on law LLMs.

Details

International Journal of Web Information Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 29 April 2024

Surath Ghosh

Abstract

Purpose

Financial mathematics is one of the most rapidly evolving fields in today’s banking and cooperative industries. In the current study, a new fractional differentiation operator with a nonsingular kernel based on the Robotnov fractional exponential function (RFEF) is considered for the Black–Scholes model, which is the most important model in finance. For simulations, homotopy perturbation and the Laplace transform are used and the obtained solutions are expressed in terms of the generalized Mittag-Leffler function (MLF).

Design/methodology/approach

The homotopy perturbation method (HPM) with the help of the Laplace transform is presented here to check the behaviours of the solutions of the Black–Scholes model. HPM is well known for its accuracy and simplicity.
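
As background (standard forms only; the paper's YAC kernel based on the Robotnov fractional exponential function is not reproduced here), the time-fractional Black–Scholes equation and the generalized Mittag-Leffler function in which the solutions are expressed are

    \frac{\partial^{\alpha} V}{\partial t^{\alpha}} + \frac{1}{2}\sigma^{2}S^{2}\frac{\partial^{2} V}{\partial S^{2}} + rS\frac{\partial V}{\partial S} - rV = 0, \qquad 0 < \alpha \le 1,

    E_{\alpha,\beta}(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k + \beta)}, \qquad E_{1,1}(z) = e^{z},

where $V(S,t)$ is the option value, $S$ the asset price, $r$ the risk-free rate and $\sigma$ the volatility. In the HPM, the solution is expanded as a power series $v=\sum_{m\ge 0} p^{m}v_{m}$ in an embedding parameter $p\in[0,1]$; collecting powers of $p$ gives a chain of linear subproblems, and setting $p=1$ recovers the approximate solution.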

Findings

In this attempt, the exact solutions to a famous financial market problem, namely the Black–Scholes option pricing model, are obtained using the homotopy perturbation and Laplace transform methods, where the fractional derivative is taken in the new YAC sense. We obtain solutions for each financial market problem in terms of the generalized Mittag-Leffler function.

Originality/value

The Black–Scholes model is presented using a new kind of operator, the Yang-Abdel-Aty-Cattani (YAC) operator, which is a new concept. The revised model is solved using a well-known semi-analytic technique, the homotopy perturbation method (HPM), with the help of the Laplace transform. The obtained solutions are compared with the exact solutions to prove the effectiveness of the proposed work, and the different characteristics of the solutions are investigated for different values of the fractional-order derivative.

Details

Engineering Computations, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 20 March 2024

Nisha, Neha Puri, Namita Rajput and Harjit Singh

Abstract

Purpose

The purpose of this study is to analyse and compile the literature on various option pricing models (OPMs) and methodologies. The report highlights the gaps in the existing literature and offers recommendations for potential scholars interested in the subject area.

Design/methodology/approach

In this study, the researchers used a systematic literature review procedure to collect data from Scopus. Bibliometric and structured network analyses were used to examine the bibliometric properties of 864 research documents.

Findings

As per the findings of the study, publications in the field have been increasing at an average rate of 6% per year. This study also includes a list of the most influential and productive researchers, frequently used keywords and primary publications in this subject area. In particular, a thematic map and a Sankey diagram were used for the conceptual structure, and co-citation analysis and bibliographic coupling for the intellectual structure.
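
How such an average rate is computed is not spelled out in the abstract; the usual bibliometric convention (an assumption here) is a compound annual growth rate over the covered period,

    \bar{g} = \left( \frac{P_{T}}{P_{1}} \right)^{1/(T-1)} - 1,

where $P_1$ and $P_T$ are the publication counts in the first and last covered years and $T$ is the number of years, so $\bar{g} \approx 0.06$ corresponds to roughly 6% more publications each year.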

Research limitations/implications

Based on the conclusion presented in this paper, there are several potential implications for research, practice and society.

Practical implications

This study provides useful insights for future research in the area of OPM in financial derivatives. Researchers can focus on impactful authors, significant work and productive countries and identify potential collaborators. The study also highlights the commonly used OPMs and emerging themes like machine learning and deep neural network models, which can inform practitioners about new developments in the field and guide the development of new models to address existing limitations.

Social implications

The accurate pricing of financial derivatives has significant implications for society, as it can impact the stability of financial markets and the wider economy. The findings of this study, which identify the most commonly used OPMs and emerging themes, can help improve the accuracy of pricing and risk management in the financial derivatives sector, which can ultimately benefit society as a whole.

Originality/value

This is possibly the first effort to consolidate the literature on option price calibration by evaluating and analysing the alternative OPMs applied by researchers, thereby guiding future research in the right direction.

Details

Qualitative Research in Financial Markets, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1755-4179

Article
Publication date: 30 April 2024

Jacqueline Humphries, Pepijn Van de Ven, Nehal Amer, Nitin Nandeshwar and Alan Ryan

Abstract

Purpose

Maintaining human safety is a major concern in factories where humans co-exist with robots and other physical tools. Typically, the area around the robots is monitored using lasers. However, lasers cannot distinguish between human and non-human objects in the robot’s path, and stopping or slowing down the robot when non-human objects approach is unproductive. This research contribution addresses that inefficiency by showing how computer-vision techniques can be used instead of lasers, improving the up-time of the robot.

Design/methodology/approach

A computer-vision safety system is presented. Image segmentation, 3D point clouds, face recognition, hand gesture recognition, speed and trajectory tracking and a digital twin are used. Using speed and separation monitoring, the robot’s speed is controlled based on the nearest location of humans, accurate to their body shape. The computer-vision safety system is compared to a traditional laser measure, and the system is evaluated both in a controlled test and in the field.
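
The speed-and-separation idea can be illustrated with a minimal sketch: the robot's commanded speed is scaled with the distance to the nearest detected human and set to zero inside a protective separation distance. This is not the authors' controller; the speed limits, distances and linear ramp below are assumed example values only.

    # Illustrative sketch (assumed thresholds, not the paper's controller).
    def allowed_speed(nearest_human_m: float,
                      v_max: float = 1.0,       # full speed (m/s) when the area is clear
                      stop_dist: float = 0.5,   # protective separation distance (m)
                      slow_dist: float = 2.0) -> float:
        """Return the speed limit given the distance to the nearest human (in metres)."""
        if nearest_human_m <= stop_dist:
            return 0.0                           # inside the protective zone: stop
        if nearest_human_m >= slow_dist:
            return v_max                         # far away: run at full speed
        # Linear ramp between the stop and slow-down distances.
        return v_max * (nearest_human_m - stop_dist) / (slow_dist - stop_dist)

    for d in (0.3, 0.8, 1.5, 2.5):
        print(f"human at {d} m -> speed limit {allowed_speed(d):.2f} m/s")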

Findings

Computer vision and lasers are shown to be equivalent by a measure of relationship and a measure of agreement. R2 is 0.999983. The two methods systematically produce similar results, with a bias close to zero, at 0.060 mm. Using Bland–Altman analysis, 95% of the differences lie within the limits of maximum acceptable differences.
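
For readers unfamiliar with the reported statistics, the sketch below shows how R2 (measure of relationship), bias and Bland–Altman limits of agreement are typically computed from paired laser and vision measurements; the numbers are placeholders, not the study's data.

    # Illustrative sketch with placeholder measurements (mm), not the paper's data.
    import numpy as np

    laser  = np.array([812.4, 640.1, 955.7, 730.2, 501.9])
    vision = np.array([812.5, 640.0, 955.8, 730.3, 501.8])

    # Measure of relationship: coefficient of determination R^2 of vision vs laser.
    r = np.corrcoef(laser, vision)[0, 1]
    print("R^2 =", r**2)

    # Measure of agreement (Bland-Altman): bias and 95% limits of agreement.
    diff = vision - laser
    bias = diff.mean()
    loa  = 1.96 * diff.std(ddof=1)
    print(f"bias = {bias:.3f} mm, limits of agreement = [{bias - loa:.3f}, {bias + loa:.3f}] mm")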

Originality/value

In this paper, an original model for future computer-vision safety systems is described that is equivalent to existing laser systems, identifies and adapts to particular humans, and reduces the need to slow and stop systems, thereby improving efficiency. The implication is that computer vision can be used to substitute for lasers and permit adaptive robotic control in human–robot collaboration systems.

Details

Technological Sustainability, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2754-1312

Article
Publication date: 29 March 2024

Pratheek Suresh and Balaji Chakravarthy

Abstract

Purpose

As data centres grow in size and complexity, traditional air-cooling methods are becoming less effective and more expensive. Immersion cooling, where servers are submerged in a dielectric fluid, has emerged as a promising alternative. Ensuring reliable operation in data centre applications requires the development of an effective control framework for immersion cooling systems, which necessitates the prediction of server temperature. While deep learning-based temperature prediction models have shown effectiveness, further enhancement is needed to improve their prediction accuracy. This study aims to develop a temperature prediction model using Long Short-Term Memory (LSTM) networks based on a recursive encoder-decoder architecture.

Design/methodology/approach

This paper explores the use of deep learning algorithms to predict the temperature of a heater in a two-phase immersion-cooled system using NOVEC 7100. The performance of the recursive long short-term memory encoder-decoder (R-LSTM-ED), recursive convolutional neural network-LSTM (R-CNN-LSTM) and R-LSTM approaches is compared using mean absolute error, root mean square error, mean absolute percentage error and coefficient of determination (R2) as performance metrics. The impact of window size, sampling period and noise within the training data on the performance of the model is investigated.
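
A minimal sketch of an LSTM encoder-decoder forecaster of this kind is shown below, assuming example hyperparameters; the window, horizon, feature count and layer sizes are not taken from the paper, and the data are random placeholders used only to show the expected shapes.

    # Illustrative sketch (assumed hyperparameters, not the authors' exact network).
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, models

    window, horizon, n_features = 30, 5, 4        # e.g. 60 s window at 2 s sampling; 10 s ahead

    model = models.Sequential([
        layers.LSTM(64, input_shape=(window, n_features)),  # encoder: compress the input window
        layers.RepeatVector(horizon),                        # repeat context for each forecast step
        layers.LSTM(64, return_sequences=True),              # decoder: unroll over the horizon
        layers.TimeDistributed(layers.Dense(1)),             # one temperature per forecast step
    ])
    model.compile(optimizer="adam", loss="mse",
                  metrics=[tf.keras.metrics.MeanAbsoluteError(),
                           tf.keras.metrics.RootMeanSquaredError()])

    # Toy data just to show the expected input/output shapes.
    X = np.random.rand(256, window, n_features).astype("float32")
    y = np.random.rand(256, horizon, 1).astype("float32")
    model.fit(X, y, epochs=2, batch_size=32, verbose=0)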

Findings

The R-LSTM-ED consistently outperforms the R-LSTM model by 6%, 15.8% and 12.5%, and R-CNN-LSTM model by 4%, 11% and 12.3% in all forecast ranges of 10, 30 and 60 s, respectively, averaged across all the workloads considered in the study. The optimum sampling period based on the study is found to be 2 s and the window size to be 60 s. The performance of the model deteriorates significantly as the noise level reaches 10%.

Research limitations/implications

The proposed models are currently trained on data collected from an experimental setup simulating data centre loads. Future research should seek to extend the applicability of the models by incorporating time series data from immersion-cooled servers.

Originality/value

The proposed multivariate-recursive-prediction models are trained and tested by using real Data Centre workload traces applied to the immersion-cooled system developed in the laboratory.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 2 May 2024

Ali Hashemi Baghi and Jasmin Mansour

Abstract

Purpose

Fused Filament Fabrication (FFF) is one of the growing technologies in additive manufacturing that can be used in a number of applications. In this method, process parameters can be customized, and their simultaneous variation has conflicting impacts on various properties of printed parts, such as dimensional accuracy (DA) and surface finish. These properties could be improved by optimizing the values of these parameters.

Design/methodology/approach

In this paper, four process parameters, namely print speed, build orientation, raster width and layer height, referred to as "input variables", were investigated. The conflicting influence of their simultaneous variations on the DA of printed parts was investigated and predicted. To achieve this goal, a hybrid Genetic Algorithm – Artificial Neural Network (GA-ANN) model was developed in C#.net, and three geometries, namely U-shape, cube and cylinder, were selected. To investigate the DA of printed parts, samples were printed with a central through hole. Design of Experiments (DoE), specifically the Rotational Central Composite Design method, was adopted to establish the number of parts to be printed (30 for each selected geometry) and the value of each input process parameter. The dimensions of the printed parts were accurately measured by a shadowgraph and were used as the input data set for the training phase of the developed ANN to predict the behavior of the process parameters. The predicted values were then used as input to the Desirability Function tool, which resulted in a mathematical model that optimizes the input process variables for the selected geometries. A mean square error of 0.0528 was achieved, which is indicative of the accuracy of the developed model.
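
The prediction-plus-desirability idea can be sketched as follows. The paper's GA-ANN was implemented in C#.net; this sketch uses scikit-learn instead, and every number (parameter ranges, errors, desirability bound) is a placeholder assumption, not a measurement from the study.

    # Illustrative sketch (placeholder data; the paper's GA-ANN was written in C#.net).
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    # Columns: print speed (mm/s), build orientation (deg), raster width (mm), layer height (mm)
    X = rng.uniform([20, 0, 0.3, 0.1], [80, 90, 0.7, 0.3], size=(30, 4))
    y = rng.normal(0.05, 0.02, size=30)          # dimensional error (mm), placeholder response

    ann = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000, random_state=0))
    ann.fit(X, y)

    def desirability(error, worst=0.15):
        """Smaller-is-better desirability: 1 at zero error, 0 at or beyond `worst`."""
        return float(np.clip(1.0 - error / worst, 0.0, 1.0))

    candidates = rng.uniform([20, 0, 0.3, 0.1], [80, 90, 0.7, 0.3], size=(500, 4))
    scores = [desirability(e) for e in ann.predict(candidates)]
    best = candidates[int(np.argmax(scores))]
    print("most desirable settings (speed, orientation, raster width, layer height):", best)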

Findings

The results showed that print speed is the most dominant input variable compared to the others, and increasing its value resulted in considerable variations in DA. The inaccuracy increased especially with parts of circular cross section. In addition, if there is no need to print parts in a vertical position, the build orientation should be set at 0° to achieve the highest DA. Finally, optimized values of raster width and layer height improved the DA, especially when the print speed was set at a high value.

Originality/value

By using ANN, it is possible to investigate the impact of simultaneous variations of FFF machines’ input process parameters on the DA of printed parts. By their optimization, parts of highly accurate dimensions could be printed. These findings will be of significant value to those industries that need to produce parts of high DA on FFF machines.

Details

Rapid Prototyping Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1355-2546

Article
Publication date: 21 December 2023

Majid Rahi, Ali Ebrahimnejad and Homayun Motameni

Abstract

Purpose

Taking into consideration the current human need for agricultural produce such as rice, which requires water for growth, the optimal consumption of this valuable liquid is important. Unfortunately, the traditional use of water by humans for agricultural purposes contradicts the concept of optimal consumption. Therefore, designing and implementing a mechanized irrigation system is of the highest importance. Such a system includes hardware equipment such as liquid altimeter sensors, valves and pumps, for which failure is an integral phenomenon that causes faults in the system. Naturally, these faults occur at random time intervals, and an exponentially distributed probability function is used to simulate these intervals. Thus, before the implementation of such a high-cost system, its evaluation during the design phase is essential.

Design/methodology/approach

The proposed approach included two main steps: offline and online. The offline phase included the simulation of the studied system (i.e. the irrigation system of paddy fields) and the acquisition of a data set for training machine learning algorithms such as decision trees to detect, locate (classification) and evaluate faults. In the online phase, C5.0 decision trees trained in the offline phase were used on a stream of data generated by the system.
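
A minimal sketch of the offline/online split is given below, assuming synthetic sensor features and fault labels. scikit-learn's CART classifier stands in for the C5.0 trees used in the paper (C5.0 itself is not available in scikit-learn), and the exponentially distributed inter-failure times mirror the distribution mentioned in the Purpose.

    # Illustrative sketch (assumptions throughout: features, labels and rates are made up).
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    components = ["none", "level_sensor", "valve", "pump"]

    def simulate(n):
        """Placeholder simulator: water level, valve position and pump current per sample."""
        X, y = [], []
        next_failure = rng.exponential(scale=50.0)      # mean time between faults (steps)
        fault, t = 0, 0.0
        for _ in range(n):
            t += 1.0
            if t >= next_failure:                       # a new fault arrives
                fault = rng.integers(1, len(components))
                next_failure = t + rng.exponential(scale=50.0)
            level, valve, pump = rng.normal([0.6, 0.5, 1.2], 0.05)
            if fault == 1: level += 0.4                 # stuck level sensor reads high
            if fault == 2: valve = 0.0                  # valve stuck closed
            if fault == 3: pump = 0.1                   # pump drawing almost no current
            X.append([level, valve, pump]); y.append(fault)
        return np.array(X), np.array(y)

    X_train, y_train = simulate(5000)                   # offline phase: train on simulated data
    tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_train, y_train)

    X_stream, _ = simulate(5)                           # online phase: incoming samples
    for sample in X_stream:
        print("predicted state:", components[tree.predict(sample.reshape(1, -1))[0]])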

Findings

The proposed approach is a comprehensive online component-oriented method, which is a combination of supervised machine learning methods to investigate system faults. Each of these methods is considered a component determined by the dimensions and complexity of the case study (to discover, classify and evaluate fault tolerance). These components are placed together in the form of a process framework so that the appropriate method for each component is obtained based on comparison with other machine learning methods. As a result, depending on the conditions under study, the most efficient method is selected in the components. Before the system implementation phase, its reliability is checked by evaluating the predicted faults (in the system design phase). Therefore, this approach avoids the construction of a high-risk system. Compared to existing methods, the proposed approach is more comprehensive and has greater flexibility.

Research limitations/implications

As the dimensions of the problem expand, the model verification space grows exponentially when automata are used.

Originality/value

Unlike the existing methods that only examine one or two aspects of fault analysis such as fault detection, classification and fault-tolerance evaluation, this paper proposes a comprehensive process-oriented approach that investigates all three aspects of fault analysis concurrently.

Details

International Journal of Intelligent Computing and Cybernetics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 18 December 2023

Ranjan Chaudhuri, Balakrishna Grandhi, Demetris Vrontis and Sheshadri Chatterjee

Abstract

Purpose

The purpose of this study is to assess the significance of employee work flexibility and the policy of the organization for survival during any crisis. This study also investigates the moderating role of leadership support (LS) during such turbulent conditions.

Design/methodology/approach

This study has used literature from the fields of organization performance, human resources and organization policy (OP), along with the theories of the resource-based view (RBV) and dynamic capability view (DCV), to develop a conceptual model. The conceptual model is then validated using the structural equation modeling technique. The study used a survey method with a sample of 311 participants, who are employed as human resource managers (HRMs) and other supporting staff at different levels in their organizations.
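
As a rough illustration of how such a moderated path model can be specified, the sketch below uses the semopy package with simulated placeholder data; the variable names, observed-composite simplification and coefficients are assumptions, not the authors' measurement model or survey data.

    # Illustrative sketch (assumed variables and simulated data, not the study's model).
    import numpy as np
    import pandas as pd
    import semopy

    rng = np.random.default_rng(0)
    n = 311
    df = pd.DataFrame({
        "EFL": rng.normal(size=n),    # employee flexibility
        "OP":  rng.normal(size=n),    # organization policy
        "LS":  rng.normal(size=n),    # leadership support
    })
    df["EFLxLS"] = df["EFL"] * df["LS"]                  # interaction (moderation) term
    df["Survival"] = (0.4*df["EFL"] + 0.3*df["OP"] + 0.2*df["LS"]
                      + 0.25*df["EFLxLS"] + rng.normal(scale=0.5, size=n))

    model = semopy.Model("Survival ~ EFL + OP + LS + EFLxLS")
    model.fit(df)
    print(model.inspect())                               # path estimates and p-values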

Findings

The study shows that innovativeness and employee flexibility (EFL) are critical to organizations’ survival during any crisis. The study also highlights the importance of OP and LS for the survival of organizations during and after any turbulent condition.

Research limitations/implications

This study provides valuable inputs to the leadership teams of organizations, especially HRM. This research also provides food for thought for policymakers and researchers in the field of organizational performance. This study also contributes to the overall body of literature on organization analysis and extends the literature on RBV and DCV.

Originality/value

The study adds value to the overall body of literature on organization performance and capabilities, along with human resource management. Few studies have examined EFL during turbulent conditions, and there are limited studies in areas of OP such as favorable and unfavorable policies toward employees. Thus, this study can be considered unique. Moreover, the study investigates the moderating role of LS, which adds value to the body of literature on organizational leadership capability.

Details

International Journal of Organizational Analysis, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1934-8835
