Search results

1 – 10 of 15
Open Access
Article
Publication date: 22 November 2022

Kedong Yin, Yun Cao, Shiwei Zhou and Xinman Lv

Abstract

Purpose

The purposes of this research are to study the theory and method of multi-attribute index system design and establish a set of systematic, standardized, scientific index systems for the design optimization and inspection process. The research may form the basis for a rational, comprehensive evaluation and provide the most effective way of improving the quality of management decision-making. It is of practical significance to improve the rationality and reliability of the index system and provide standardized, scientific reference standards and theoretical guidance for the design and construction of the index system.

Design/methodology/approach

Using modern methods such as complex networks and machine learning, a system for the quality diagnosis of index data and the classification and stratification of index systems is designed. This guarantees the quality of the index data, realizes the scientific classification and stratification of the index system, reduces the subjectivity and randomness of the design of the index system, enhances its objectivity and rationality and lays a solid foundation for the optimal design of the index system.

Findings

Based on ideas from statistics, system theory, machine learning and data mining, the present research focuses on “data quality diagnosis” and “index classification and stratification”. The classification standards and data-quality characteristics of index data are clarified, and a data-quality diagnosis system of “data review – data cleaning – data conversion – data inspection” is established. Using decision trees, explanatory structural modelling, cluster analysis, K-means clustering and other methods, a classification and stratification method system for indicators is designed to reduce the redundancy of indicator data and improve the quality of the data used. On this basis, a scientific and standardized classification and stratification design of the index system can be realized.
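
As a rough illustration of the unsupervised classification step described above, the following Python sketch groups indicators with K-means; the indicator features, the number of clusters and the library calls are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical example: each row is one indicator described by a few summary
# features (e.g. missing-data rate, variance, correlation with a reference
# series). A real index system would supply its own features.
rng = np.random.default_rng(0)
indicator_features = rng.normal(size=(30, 3))

# Standardise so no single feature dominates the distance metric.
X = StandardScaler().fit_transform(indicator_features)

# Group the indicators into an assumed number of classes (here 4).
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

for label in range(4):
    members = np.where(kmeans.labels_ == label)[0]
    print(f"class {label}: indicators {members.tolist()}")
```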

Originality/value

The innovative contributions and research value of the paper are reflected in three aspects. First, a method system for index data quality diagnosis is designed, and multi-source data fusion technology is adopted to ensure the quality of the multi-source, heterogeneous and mixed-frequency data of the index system. Second, a systematic quality-inspection process for missing data is designed, based on systematic thinking about the whole and the individual: to address the accuracy, reliability and feasibility of the patched data, a quality-inspection method for patched data based on the idea of inversion and a unified representation of data fusion based on a tensor model are proposed. Third, the modern method of unsupervised learning is used to classify and stratify the index system, which reduces the subjectivity and randomness of the design of the index system and enhances its objectivity and rationality.

Details

Marine Economics and Management, vol. 5 no. 2
Type: Research Article
ISSN: 2516-158X

Open Access
Article
Publication date: 18 March 2022

Shengtao Lin and Zhengcai Zhao

Abstract

Purpose

Complex and exquisite patterns are sculpted on the surface to beautify the parts. Because of their thin-walled nature, the blanks of such parts are often deformed by the forming and clamping processes, invalidating the nominal numerical control (NC) sculpting programs. To address this problem, a fast adaptive sculpting method for complex surfaces is proposed.

Design/methodology/approach

The geometry of the blank surface is measured using on-machine measurement (OMM). The real blank surface is reconstructed using the non-uniform rational basis spline (NURBS) method. The angle-based flattening (ABF) algorithm is used to flatten the reconstructed blank surface. The dense points are extracted from the pattern on the image using the OpenCV library. Then, the dense points are quickly located on the complex surfaces to generate the tool paths.
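
A minimal sketch of the pattern-to-points step, assuming the pattern is available as a grayscale image; the file name, threshold and contour settings are placeholders rather than values from the paper.

```python
import cv2
import numpy as np

# Hypothetical input: a grayscale image of the pattern to be sculpted.
image = cv2.imread("pattern.png", cv2.IMREAD_GRAYSCALE)

# Binarise the pattern so its outline can be traced.
_, binary = cv2.threshold(image, 127, 255, cv2.THRESH_BINARY)

# Extract the pattern contours; each contour is an (N, 1, 2) array of pixels.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

# Flatten the contours into one dense (x, y) point list in image coordinates;
# these points would then be located on the reconstructed surface.
dense_points = np.vstack([c.reshape(-1, 2) for c in contours])
print(f"{len(dense_points)} contour points extracted")
```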

Findings

By flattening the reconstructed surface and creating the mapping between the contour points and the planar mesh triangular patches, the tool paths can be regenerated to keep the contour of the pattern on the deformed thin-walled surface.

Originality/value

The proposed method can adjust the tool paths according to the deformation of the thin-walled part. The consistency of sculpting patterns is improved.

Details

Journal of Intelligent Manufacturing and Special Equipment, vol. 3 no. 1
Type: Research Article
ISSN: 2633-6596

Open Access
Article
Publication date: 4 August 2020

Mohamed Boudchiche and Azzeddine Mazroui

Abstract

In this paper we have developed a hybrid morphological disambiguation system for the Arabic language that identifies the stem, lemma and root of the words of a given sentence. Following an out-of-context analysis performed by the morphological analyser Alkhalil Morpho Sys, the system first identifies all the potential tags of each word of the sentence. Then, a disambiguation phase is carried out to choose for each word the right solution among those obtained during the first phase. This problem is solved by casting the disambiguation issue as a surface optimization problem over spline functions. Tests have shown the interest of this approach and the superiority of its performance over the state of the art.
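
The analyse-then-disambiguate pipeline can be pictured with the toy Python sketch below; the candidate analyses and the context score are invented placeholders and do not reproduce Alkhalil Morpho Sys output or the spline-based optimization actually used.

```python
# Toy illustration of out-of-context analysis followed by disambiguation.
# The candidate analyses are hard-coded stand-ins for analyser output, and
# score_in_context is a placeholder for the paper's optimisation step.

def score_in_context(candidate, previous_choice):
    """Hypothetical context score: prefer part-of-speech agreement."""
    if previous_choice is None:
        return candidate["prior"]
    bonus = 0.2 if candidate["pos"] == previous_choice["pos"] else 0.0
    return candidate["prior"] + bonus

sentence = [
    # each word: a list of candidate (lemma, root, pos, prior) analyses
    [{"lemma": "kataba", "root": "k-t-b", "pos": "verb", "prior": 0.6},
     {"lemma": "kutub", "root": "k-t-b", "pos": "noun", "prior": 0.4}],
    [{"lemma": "darasa", "root": "d-r-s", "pos": "verb", "prior": 0.5},
     {"lemma": "dars", "root": "d-r-s", "pos": "noun", "prior": 0.5}],
]

previous = None
for candidates in sentence:
    best = max(candidates, key=lambda c: score_in_context(c, previous))
    print(best["lemma"], best["root"], best["pos"])
    previous = best
```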

Details

Applied Computing and Informatics, vol. 20 no. 3/4
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 17 October 2022

Jesús Miguel Chacón, Javier Sánchez-Reyes, Javier Vallejo and Pedro José Núñez

Abstract

Purpose

Non-uniform rational B-splines (NURBSs) are the de facto standard for representing objects in computer-aided design (CAD). The purpose of this paper is to discuss how to stick to this standard in all phases of the additive manufacturing (AM) workflow, from the CAD object to the final G-code, bypassing unnecessary polygonal approximations.

Design/methodology/approach

The authors use a commercial CAD system (Rhino3D along with its programming environment Grasshopper) for direct slicing of the model, offset generation and trimming. Circular arcs are represented as quadratic NURBSs and free-form geometry as quadratic or cubic polynomial B-splines. Therefore, circular arcs are directly expressible as G2/G3 G-code commands, whereas free-form paths are rewritten as a succession of cubic Bézier curves, thereby admitting exact translation into G5 commands, available in firmware for AM controllers, such as Marlin.
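
As one illustration of the final translation step, the sketch below emits a Marlin-style G5 command for a single cubic Bézier segment, where I/J are offsets of the first control point from the current position and P/Q offsets of the second control point from the end point; the coordinates and feed rate are invented, and this is not the authors' code.

```python
def bezier_to_g5(p0, p1, p2, p3, feedrate=1200):
    """Translate one planar cubic Bezier segment into a Marlin G5 command.

    p0..p3 are (x, y) control points; p0 is assumed to be the current
    nozzle position, so only the offsets and the end point are emitted.
    """
    i, j = p1[0] - p0[0], p1[1] - p0[1]   # first control point, relative to start
    p, q = p2[0] - p3[0], p2[1] - p3[1]   # second control point, relative to end
    return (f"G5 I{i:.3f} J{j:.3f} P{p:.3f} Q{q:.3f} "
            f"X{p3[0]:.3f} Y{p3[1]:.3f} F{feedrate}")

# Example segment from a hypothetical free-form slice contour.
print(bezier_to_g5((10.0, 10.0), (12.5, 14.0), (17.5, 14.0), (20.0, 10.0)))
```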

Findings

Experimental results of this paper confirm a considerable improvement in quality over the standard AM workflow, which consists of an initial polygonization of the object (e.g. via the standard tessellation language, STL), slicing this polygonal approximation, offsetting the polygonal sections and, finally, generating G-code made up of polyline trajectories (G1 commands).

Originality/value

A streamlined AM workflow is obtained, with a seamless transfer from the initial CAD description to the final G-code. By adhering to the NURBS standard at all steps, the authors avoid multiple representations and associated errors resulting from approximations.

Details

Rapid Prototyping Journal, vol. 28 no. 11
Type: Research Article
ISSN: 1355-2546

Open Access
Article
Publication date: 8 June 2023

Tadej Dobravec, Boštjan Mavrič, Rizwan Zahoor and Božidar Šarler

Abstract

Purpose

This study aims to simulate the dendritic growth in Stokes flow by iteratively coupling a domain and boundary type meshless method.

Design/methodology/approach

A preconditioned phase-field model for dendritic solidification of a pure supercooled melt is solved by the strong-form space-time adaptive approach based on dynamic quadtree domain decomposition. The domain-type space discretisation relies on monomial-augmented polyharmonic spline interpolation. The forward Euler scheme is used for time evolution. The boundary-type meshless method solves the Stokes flow around the dendrite based on the collocation of the moving and fixed flow boundaries with the regularised Stokes flow fundamental solution. Both approaches are iteratively coupled at the moving solid–liquid interface. The solution procedure ensures computationally efficient and accurate calculations. The novel approach is numerically implemented for a 2D case.
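
A minimal sketch of the kind of monomial-augmented polyharmonic spline interpolation the domain method relies on, here using SciPy's general-purpose RBFInterpolator with a thin-plate-spline kernel and a degree-1 polynomial tail rather than the authors' own implementation; the scattered data are synthetic.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Synthetic stand-in for nodal values of the phase field on scattered nodes.
rng = np.random.default_rng(1)
nodes = rng.uniform(0.0, 1.0, size=(200, 2))
values = np.sin(2 * np.pi * nodes[:, 0]) * np.cos(2 * np.pi * nodes[:, 1])

# Polyharmonic (thin-plate spline) kernel augmented with degree-1 monomials,
# mirroring the monomial-augmented polyharmonic spline idea.
interp = RBFInterpolator(nodes, values, kernel="thin_plate_spline", degree=1)

# Evaluate the interpolant at a few query points (e.g. quadtree leaf centres).
queries = np.array([[0.25, 0.25], [0.5, 0.75]])
print(interp(queries))
```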

Findings

The solution procedure combines the advantages of both meshless methods: the domain-type method is insensitive to the dendrite orientation, and the boundary-type method reduces the dimensionality of the flow-field solution. The results of the procedure agree well with reference results obtained by classical numerical methods. Guidelines for selecting the free parameters that yield the highest accuracy and computational efficiency are presented.

Originality/value

A combination of boundary- and domain-type meshless methods is used to simulate dendritic solidification with the influence of fluid flow efficiently.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 33 no. 8
Type: Research Article
ISSN: 0961-5539

Open Access
Article
Publication date: 12 April 2018

Oliver Hutt, Kate Bowers, Shane Johnson and Toby Davies

Abstract

Purpose

The purpose of this paper is to use an evaluation of a micro-place-based hot-spot policing implementation to highlight the potential issues raised by data quality standards in the recording and measurement of crime data and police officer movements.

Design/methodology/approach

The study focuses on an area of London (UK) which used a predictive algorithm to designate micro-place patrol zones for each police shift over a two-month period. Police officer movements are measured using GPS data from officer-worn radios. Descriptive statistics regarding the crime data commonly used to evaluate this type of implementation are presented, along with simple analyses examining the effect of officer patrol duration (dosage) on crime in micro-place hot-spots.

Findings

The results suggest that patrols of 10-20 minutes in a given police shift have a significant impact on reducing crime; however, patrols of less than about 10 minutes and more than about 20 minutes are ineffective at deterring crime.

Research limitations/implications

Due to the sparseness of officer GPS data, their paths have to be interpolated which could introduce error to the estimated patrol dosages. Similarly, errors and uncertainty in recorded crime data could have substantial impact on the designation of micro-place interventions and evaluations of their effectiveness.
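
To make the interpolation caveat concrete, the sketch below linearly interpolates sparse GPS fixes in time and sums the interpolated time an officer spends inside a hot-spot; the fixes, sampling step and rectangular hot-spot bounds are invented, and a real evaluation would use the actual patrol-zone polygons.

```python
import numpy as np

# Hypothetical sparse GPS fixes: (seconds since shift start, x, y) in metres.
fixes = np.array([
    [0,    0.0,   0.0],
    [300,  400.0, 100.0],
    [900,  450.0, 120.0],
    [1500, 900.0, 300.0],
])

# Resample the path every 10 s by linear interpolation between fixes.
t = np.arange(fixes[0, 0], fixes[-1, 0] + 1, 10)
x = np.interp(t, fixes[:, 0], fixes[:, 1])
y = np.interp(t, fixes[:, 0], fixes[:, 2])

# Hot-spot approximated as an axis-aligned rectangle (placeholder bounds).
inside = (x >= 380) & (x <= 500) & (y >= 80) & (y <= 160)

# Patrol dosage = interpolated time spent inside the hot-spot, in minutes.
dosage_minutes = inside.sum() * 10 / 60
print(f"estimated dosage: {dosage_minutes:.1f} min")
```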

Originality/value

This study is one of the first to use officer GPS data to estimate patrol dosage and places particular emphasis on the issue of data quality when evaluating micro-place interventions.

Details

Policing: An International Journal, vol. 41 no. 3
Type: Research Article
ISSN: 1363-951X

Open Access
Article
Publication date: 3 October 2019

Lin Qi, Wenbo Zhang, Ronglai Sun and Fang Liu

Abstract

Purpose

The giant orthogonal grid barrel vault is generated by deleting members along the inessential force-transfer paths of a two-layer lattice barrel vault. Consisting only of members on the essential transfer paths, the giant orthogonal grid barrel vault is a new type of structure with clear mechanical behavior and efficient material utilization. The paper aims to discuss this issue.

Design/methodology/approach

The geometrical configuration of this structure is analyzed, and a geometrical modeling method is proposed. Once the necessary parameters are determined, such as the structural span, length, vault rise, the longitudinal and lateral giant grid numbers and the ratio of section height to top-chord length of the lattice members, the geometrical model of the structure can be generated.
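
A simplified sketch of such parametric generation for the top-chord node grid of a circular barrel vault; it uses only span, rise, length and grid numbers, with placeholder values, and omits the double-layer offset, lattice-member sections and member topology of the actual method.

```python
import numpy as np

def barrel_vault_nodes(span, rise, length, n_arc, n_long):
    """Top-chord node coordinates of a circular barrel vault grid.

    span   : chord width of the vault cross-section
    rise   : height of the crest above the supports
    length : longitudinal length of the vault
    n_arc  : number of grid divisions along the arc
    n_long : number of grid divisions along the length
    """
    # Radius of the circular cross-section from span and rise.
    radius = (span**2 / 4 + rise**2) / (2 * rise)
    half_angle = np.arcsin(span / (2 * radius))

    phi = np.linspace(-half_angle, half_angle, n_arc + 1)   # arc direction
    z = np.linspace(0.0, length, n_long + 1)                # longitudinal direction

    nodes = []
    for zi in z:
        for p in phi:
            x = radius * np.sin(p)
            y = (rise - radius) + radius * np.cos(p)  # y = 0 at the supports
            nodes.append((x, y, zi))
    return np.array(nodes)

# Example: 30 m span, 6 m rise, 60 m length, 8 x 10 giant grid (placeholder values).
print(barrel_vault_nodes(30.0, 6.0, 60.0, 8, 10).shape)
```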

Findings

Numerical models of giant orthogonal grid barrel vaults with different rise–span ratios are built using a member model that can simulate pre-buckling and post-buckling behavior, so that the possible buckle–straighten process of members and the formation and disappearance of plastic hinges in the structure under strong earthquakes can be simulated.

Originality/value

Seismic analysis results indicate that when the structure is damaged under a strong earthquake, it contains a large number of buckled members but few end-point plastic hinges. The dynamic damage of the giant orthogonal grid barrel vault under strong earthquakes is caused by buckled members that weaken the structural bearing capacity.

Details

International Journal of Structural Integrity, vol. 11 no. 1
Type: Research Article
ISSN: 1757-9864

Open Access
Article
Publication date: 23 March 2023

María Belén Prados-Peña, George Pavlidis and Ana García-López

Abstract

Purpose

This study aims to analyze the impact of Artificial Intelligence (AI) and Machine Learning (ML) on heritage conservation and preservation, and to identify relevant future research trends, by applying scientometrics.

Design/methodology/approach

A total of 1,646 articles, published between 1985 and 2021, concerning research on the application of ML and AI in cultural heritage were collected from the Scopus database and analyzed using bibliometric methodologies.
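
A minimal pandas sketch of one basic step in such a bibliometric analysis, counting publications per year in a Scopus CSV export; the file name and keyword filter are placeholders, and the study's actual toolchain is not specified in the abstract.

```python
import pandas as pd

# Placeholder file name; Scopus CSV exports include a "Year" column.
records = pd.read_csv("scopus_export.csv")

# Publications per year, the most basic scientometric indicator.
per_year = records["Year"].value_counts().sort_index()
print(per_year)

# Share of records whose title or abstract mentions conservation/preservation.
text = (records["Title"].fillna("") + " " + records["Abstract"].fillna("")).str.lower()
share = text.str.contains("conservat|preservat").mean()
print(f"share mentioning conservation/preservation: {share:.1%}")
```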

Findings

The findings of this study have shown that although there is a very important increase in academic literature in relation to AI and ML, publications that specifically deal with these issues in relation to cultural heritage and its conservation and preservation are significantly limited.

Originality/value

This study enriches the academic outline by highlighting the limited literature in this context and therefore the need to advance the study of AI and ML as key elements that support heritage researchers and practitioners in conservation and preservation work.

Details

Journal of Cultural Heritage Management and Sustainable Development, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2044-1266

Open Access
Article
Publication date: 20 March 2024

Guijian Xiao, Tangming Zhang, Yi He, Zihan Zheng and Jingzhe Wang

Abstract

Purpose

The purpose of this review is to comprehensively consider the material properties and processing of additive titanium alloy and provide a new perspective for the robotic grinding and polishing of additive titanium alloy blades to ensure the surface integrity and machining accuracy of the blades.

Design/methodology/approach

At present, robotic grinding and polishing are the mainstream methods for automated blade processing. This review systematically summarizes the processing characteristics and processing methods of additively manufactured (AM) titanium alloy blades. On the one hand, the unique manufacturing process and thermal effects of AM give additive titanium alloy blades distinctive processing characteristics. On the other hand, the robotic grinding and polishing process needs to incorporate a material removal model into the traditional processing flow according to the processing characteristics of the additive titanium alloy.
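
One common way to express such a material removal model is a Preston-type law, in which removal depth scales with contact pressure, belt speed and dwell time; the sketch below is a generic illustration with an assumed Preston coefficient, not the specific model advocated in the review.

```python
def preston_removal_depth(pressure, belt_speed, dwell_time, k_p=2.0e-13):
    """Preston-type material removal estimate.

    pressure   : contact pressure between belt and blade surface [Pa]
    belt_speed : relative speed of the abrasive belt [m/s]
    dwell_time : contact time at the point of interest [s]
    k_p        : Preston coefficient [m^2/N], an assumed placeholder value
    Returns the removed depth [m].
    """
    return k_p * pressure * belt_speed * dwell_time

# Example: 0.2 MPa contact pressure, 15 m/s belt speed, 0.5 s dwell time.
depth = preston_removal_depth(0.2e6, 15.0, 0.5)
print(f"removed depth ≈ {depth * 1e6:.2f} µm")
```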

Findings

Robotic belt grinding can solve the processing problems of additive titanium alloy blades. A robot grinding trajectory is generated for the complex blade surface through trajectory planning, and this trajectory planning profoundly affects the machining accuracy and surface quality of the blade. Further research is needed to address the demands for high machining accuracy of blade profiles, material removal models for complex surfaces and the uneven distribution of the blade machining allowance. Among the robot process parameters, the grinding parameters, trajectory planning and error compensation affect the surface quality of the blade through the material removal mechanism, grinding force and grinding temperature, while the machining accuracy of the blade surface is affected by robot vibration and stiffness.

Originality/value

This review systematically summarizes the processing characteristics and processing methods of aviation titanium alloy blades manufactured by AM. Combined with the material properties of additive titanium alloy, it provides a new idea for robot grinding and polishing of aviation titanium alloy blades manufactured by AM.

Details

Journal of Intelligent Manufacturing and Special Equipment, vol. 5 no. 1
Type: Research Article
ISSN: 2633-6596

Open Access
Article
Publication date: 12 July 2024

Stiven Agusta, Fuad Rakhman, Jogiyanto Hartono Mustakini and Singgih Wijayana

Abstract

Purpose

The study aims to explore how integrating recent fundamental values (RFVs) from conventional accounting studies enhances the accuracy of a machine learning (ML) model for predicting stock return movement in Indonesia.

Design/methodology/approach

The study uses multilayer perceptron (MLP) analysis, a deep learning model subset of the ML method. The model utilizes findings from conventional accounting studies from 2019 to 2021 and samples from 10 firms in the Indonesian stock market from September 2018 to August 2019.
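
A minimal scikit-learn sketch of an MLP classifier for up/down return movement on synthetic features; the architecture, feature construction and data are placeholders, not the study's configuration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in: rows are firm-period observations, columns mix
# recent fundamental values with price-based features (all invented).
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 8))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # up/down label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Standardise features, then fit a small multilayer perceptron.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0),
)
model.fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```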

Findings

Incorporating RFVs improves predictive accuracy in the MLP model, especially in long reporting data ranges. The accuracy of the RFVs is also higher than that of raw data and common accounting ratio inputs.

Research limitations/implications

The study uses Indonesian firms as its sample. We believe our findings apply to other emerging Asian markets and add to the existing ML literature on stock prediction. Nevertheless, expanding to different samples could strengthen the results of this study.

Practical implications

Governments can regulate RFV-based artificial intelligence (AI) applications for stock prediction to enhance decision-making about stock investment. Also, practitioners, analysts and investors can be inspired to develop RFV-based AI tools.

Originality/value

Studies in the ML-based stock prediction literature make limited use of fundamental values and mainly apply technical indicators. However, this study demonstrates that including RFVs in the ML model improves investors’ decision-making and minimizes unethical data use and artificial intelligence-based fraud.

Details

Asian Journal of Accounting Research, vol. 9 no. 4
Type: Research Article
ISSN: 2459-9700

1 – 10 of 15