Search results

1–10 of over 117,000
Article
Publication date: 29 October 2021

Yanchao Rao and Ken Huijin Guo


Abstract

Purpose

The US Securities and Exchange Commission (SEC) requires public companies to file structured data in eXtensible Business Reporting Language (XBRL). One of the key arguments behind the XBRL mandate is that the technical standard can help improve processing efficiency for data aggregators. This paper aims to empirically test the data processing efficiency hypothesis.

Design/methodology/approach

To test the data processing efficiency hypothesis, the authors adopt a two-sample research design by using data from Compustat: a pooled sample (N = 61,898) and a quasi-experimental sample (N = 564). The authors measure data processing efficiency as the time lag between the dates of 10-K filings on the SEC’s EDGAR system and the dates of related data finalized in the Compustat database.
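As a concrete illustration of this lag measure, the following Python sketch (with hypothetical column names and values; the authors' actual EDGAR/Compustat matching is more involved) computes the filing-to-finalization lag in days and compares it across XBRL and non-XBRL filings:

```python
# Hypothetical sketch of the paper's lag measure: days between a firm's 10-K
# filing date on EDGAR and the date the matching record is finalized in
# Compustat. Column names and values are illustrative, not the actual data.
import pandas as pd

filings = pd.DataFrame({
    "gvkey": [1001, 1002, 1003, 1004],
    "edgar_filing_date": pd.to_datetime(
        ["2008-02-20", "2008-03-01", "2012-02-18", "2012-03-02"]),
    "compustat_final_date": pd.to_datetime(
        ["2008-03-05", "2008-03-20", "2012-03-03", "2012-03-18"]),
    "xbrl": [0, 0, 1, 1],  # 1 if the 10-K was filed under the XBRL mandate
})

filings["lag_days"] = (
    filings["compustat_final_date"] - filings["edgar_filing_date"]
).dt.days

# Mean processing lag for XBRL vs. non-XBRL filings.
print(filings.groupby("xbrl")["lag_days"].mean())
```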

Findings

The statistical results show that, after controlling for the potential effects of firm size, age, fiscal year and industry, XBRL has a non-significant impact on data processing efficiency. This suggests that the data processing efficiency benefit may have been overestimated.

Originality/value

This study provides some timely empirical evidence to the debate as to whether XBRL can improve data processing efficiency. The non-significant results suggest that it may be necessary to revisit the mandate of XBRL reporting in the USA and many other countries.

Details

International Journal of Accounting & Information Management, vol. 30 no. 1
Type: Research Article
ISSN: 1834-7649


Article
Publication date: 23 March 2012

Rajiv Dandotiya and Jan Lundberg


Abstract

Purpose

Wear life of mill liners is an important parameter in maintenance decisions for mill liners. Variations in process parameters, such as differing ore properties due to the use of multiple ore types, influence the wear life of mill liners, whereas the random order of processing, the processing time and the monetary value of different ore types lead to variation in mill profitability. The purpose of the present paper is to develop an economic decision model that considers the variations in process and maintenance parameters in order to make more cost-effective maintenance decisions.

Design/methodology/approach

Correlation studies, experimental results and the experience of industry experts are used for wear life modeling, whereas simulation is used to maximize mill profit in developing the economic decision model. A weighting approach and simulation are employed to emphasize the contribution of parameters such as the ore value and processing time of a specific ore type to the final result.
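As a rough illustration of the kind of replacement-interval optimization involved, the following Python sketch evaluates a classic age-replacement cost-rate model with an assumed Weibull liner life; it is not the authors' profit-maximization simulation, and all numbers are hypothetical:

```python
# Classic age-replacement model: choose the interval T that minimizes expected
# cost per operating hour, assuming Weibull-distributed liner wear life.
# All parameter values below are hypothetical.
import numpy as np

beta, eta = 2.5, 1200.0                 # Weibull shape/scale (hours), assumed
c_planned, c_failure = 10_000, 60_000   # replacement costs, assumed

def cost_rate(T, n=10_000):
    """Expected cost per operating hour under age replacement at age T."""
    t = np.linspace(0.0, T, n)
    R = np.exp(-(t / eta) ** beta)      # liner survival function
    # Trapezoidal integral of R over [0, T] = E[min(life, T)].
    cycle_len = np.sum((R[:-1] + R[1:]) / 2) * (t[1] - t[0])
    RT = np.exp(-(T / eta) ** beta)
    expected_cost = c_planned * RT + c_failure * (1 - RT)
    return expected_cost / cycle_len

candidates = np.linspace(100, 2000, 96)
best = min(candidates, key=cost_rate)
print(f"optimum replacement interval ~ {best:.0f} h")
```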

Findings

A model for estimating the lifetime of mill liners has been developed based on ore properties. The lifetime model is combined with a replacement interval model to determine the optimum replacement interval for the mill liners, taking the process parameters of multiple ore types into account. The combined model's results lead to a significant improvement in mill profit. The proposed model also shows that an optimum maintenance policy can not only reduce downtime costs but also improve process performance, leading to significant savings for the ore dressing mill.

Practical implications

The proposed economic decision model is practically feasible and can be implemented in the ore dressing mill industry. Using the model, cost-effective maintenance decisions can significantly increase the profit of the organization.

Originality/value

The novelty is that the new combined model is applicable and useful for replacement decision-making for grinding mill liners in complex environments, e.g. processing multiple ore types, with different monetary values per ore type and a random order of ore processing.

Details

Journal of Quality in Maintenance Engineering, vol. 18 no. 1
Type: Research Article
ISSN: 1355-2511


Article
Publication date: 12 June 2017

Kehe Wu, Yayun Zhu, Quan Li and Ziwei Wu


Abstract

Purpose

The purpose of this paper is to propose a data prediction framework for scenarios with forecasting demands for large-scale data sources, e.g. sensor networks, securities exchanges, electric power secondary systems, etc. Concretely, the proposed framework should handle several difficult requirements, including the management of gigantic data sources, the need for a fast self-adaptive algorithm, the relatively accurate prediction of multiple time series and the real-time demand.

Design/methodology/approach

First, the autoregressive integrated moving average (ARIMA)-based prediction algorithm is introduced. Second, the processing framework is designed, which includes a time-series data storage model based on HBase and a real-time distributed prediction platform based on Storm. Then, the working principle of this platform is described. Finally, a proof-of-concept testbed is presented to verify the proposed framework.
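For a sense of the prediction step in isolation, here is a minimal single-series ARIMA sketch in Python using statsmodels; the distributed HBase/Storm plumbing described in the paper is omitted, and the series below is synthetic:

```python
# Minimal ARIMA forecasting sketch on a synthetic sensor-style series.
# The (p, d, q) order is chosen for illustration only.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0.5, 1.0, 200))  # synthetic trend-plus-noise series

model = ARIMA(y, order=(2, 1, 1))
fitted = model.fit()
forecast = fitted.forecast(steps=10)      # predict the next 10 time steps
print(forecast)
```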

Findings

Several tests based on power grid monitoring data are provided for the proposed framework. The experimental results indicate that the prediction data are basically consistent with the actual data, the processing efficiency is relatively high, and resource consumption is reasonable.

Originality/value

This paper provides a distributed real-time data prediction framework for large-scale time-series data that meets the requirements of effective management, prediction efficiency, accuracy and high concurrency for massive data sources.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 10 no. 2
Type: Research Article
ISSN: 1756-378X


Open Access
Article
Publication date: 5 June 2024

Anabela Costa Silva, José Machado and Paulo Sampaio


Abstract

Purpose

In the context of the journey toward digital transformation and the realization of a fully connected factory, concepts such as data science, artificial intelligence (AI), machine learning (ML) and predictive models emerge as indispensable pillars. Given the relevance of these topics, the present study focused on the analysis of customer complaint data, employing ML techniques to anticipate complaint accountability. The primary objective was to enhance data accessibility, harnessing the potential of ML models to optimize the complaint handling process and thereby contribute positively to data-driven decision-making. This approach aimed not only to reduce the number of units to be analyzed and the customer response time but also to underscore the pressing need for a paradigm shift in quality management, demonstrating how the integration of these innovative approaches can profoundly transform the way quality is conceived and managed within organizations.

Design/methodology/approach

To conduct this study, real customer complaint data from an automotive company was utilized. Our main objective was to highlight the importance of artificial intelligence (AI) techniques in the context of quality. To achieve this, we adopted a methodology consisting of 10 distinct phases: business analysis and understanding; project plan definition; sample definition; data exploration; data processing and pre-processing; feature selection; acquisition of predictive models; evaluation of the models; presentation of the results; and implementation. This methodology was adapted from data mining methodologies referenced in the literature, taking into account the specific reality of the company under study. This ensured that the obtained results were applicable and replicable across different fields, thereby strengthening the relevance and generalizability of our research findings.
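To make the modeling phases (feature selection, model acquisition, evaluation) concrete, here is a hedged Python sketch of a complaint-accountability classifier on synthetic records; all feature names and the model choice are assumptions for illustration, not the study's actual data or pipeline:

```python
# Hypothetical complaint-classification sketch on synthetic records.
# Feature names, label and model choice are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "product_line": rng.choice(["A", "B", "C"], n),    # assumed feature
    "defect_code": rng.choice(["D1", "D2", "D3"], n),  # assumed feature
    "units_affected": rng.integers(1, 100, n),         # assumed feature
    "accountable": rng.integers(0, 2, n),              # target label
})

X, y = df.drop(columns="accountable"), df["accountable"]
pre = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"),
      ["product_line", "defect_code"])],
    remainder="passthrough",
)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0)
clf.fit(pre.fit_transform(X_train), y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(pre.transform(X_test))))
```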

Findings

The achieved results not only demonstrated the ability of ML models to predict complaint accountability with an accuracy of 64%, but also underscored the significance of the adopted approach within the context of Quality 4.0 (Q4.0). This study served as a proof of concept in complaint analysis, enabling process automation and the development of a guide applicable across various areas of the company. The successful integration of AI techniques and Q4.0 principles highlighted the pressing need to apply concepts of digitization and artificial intelligence in quality management. Furthermore, it emphasized the critical importance of data, its organization, analysis and availability in driving digital transformation and enhancing operational efficiency across all company domains. In summary, this work not only showcased the advancements achieved through ML application but also emphasized the pivotal role of data and digitization in the ongoing evolution of Quality 4.0.

Originality/value

This study presents a significant contribution by exploring complaint data within the organization, an area lacking investigation in real-world contexts, with a particular focus on practical applications. The development of standardized processes for data handling and the application of classification models for prediction not only demonstrated the viability of this approach but also provided a valuable proof of concept for the company. Most importantly, this work was designed to be replicable in other areas of the factory, serving as a fundamental basis for the company's data scientists. Until now, limited data access and a lack of automation in data treatment and analysis represented significant challenges. In the context of Quality 4.0, this study highlights not only the immediate advantages for decision-making and predicting complaint outcomes but also the long-term benefits, including clearer and standardized processes, data-driven decision-making and improved analysis time. Thus, this study underscores the importance of data and the application of AI techniques in the era of quality and fills a knowledge gap by providing an innovative and replicable approach to complaint analysis within the organization. In terms of originality, this article stands out for addressing an underexplored area and providing a tangible, applicable solution for the company, highlighting the intrinsic value of aligning quality with AI and digitization.

Details

The TQM Journal, vol. 36 no. 9
Type: Research Article
ISSN: 1754-2731


Article
Publication date: 7 May 2024

Gangting Huang, Qichen Wu, Youbiao Su, Yunfei Li and Shilin Xie


Abstract

Purpose

In order to improve the computation efficiency of the four-point rainflow algorithm, a new fast four-point rainflow cycle counting algorithm (FFRA) using a novel loop iteration mode is proposed.

Design/methodology/approach

In this new algorithm, the loop iteration mode is simplified by reducing the number of iterations, tests and deletions. The high efficiency of the new algorithm makes it a preferable candidate for online fatigue life estimation in structural health monitoring systems.
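For reference, here is a minimal Python implementation of the standard four-point rainflow criterion itself (not the authors' optimized FFRA loop): whenever the inner range of four consecutive turning points is enclosed by both outer ranges, the inner pair is extracted as a full cycle.

```python
# Standard four-point rainflow cycle counting on a sequence of turning points.
# This is the baseline criterion, not the optimized FFRA iteration mode.
def four_point_rainflow(turning_points):
    stack, cycles = [], []
    for p in turning_points:
        stack.append(p)
        while len(stack) >= 4:
            s1, s2, s3, s4 = stack[-4:]
            inner = abs(s3 - s2)
            # Inner range enclosed by both outer ranges => full cycle.
            if inner <= abs(s2 - s1) and inner <= abs(s4 - s3):
                cycles.append((s2, s3))  # count one full cycle
                del stack[-3:-1]         # remove s2, s3; keep s1, s4
            else:
                break
    return cycles, stack                 # stack holds the residue

cycles, residue = four_point_rainflow([0, 5, 1, 4, -2, 3, 0])
print(cycles, residue)   # [(1, 4)] [0, 5, -2, 3, 0]
```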

Findings

The extensive simulation results show that the cycles extracted by the new FFRA are the same as those extracted by the four-point rainflow cycle counting algorithm (FRA) and the three-point rainflow cycle counting algorithm (TRA). In particular, the simulation results indicate that the computation efficiency of the FFRA is on average 12.4 times that of the FRA and 8.9 times that of the TRA. Moreover, the equivalence of the cycle extraction results of the FFRA and the FRA is proved mathematically using some fundamental properties of the rainflow algorithm. A theoretical proof of the efficiency improvement of the FFRA over the FRA is also given.

Originality/value

The high efficiency of the FFRA makes it preferable in online structural monitoring systems, where fatigue life estimation must be accomplished online from massive measured data. Notably, this high efficiency is attributable to the simple loop iteration mode, which provides beneficial guidance for improving the efficiency of existing algorithms.

Article
Publication date: 23 November 2021

Feifei Sun and Guohong Shi


Abstract

Purpose

This paper aims to effectively explore the application effect of big data techniques based on an α-support vector machine-stochastic gradient descent (SVMSGD) algorithm in third-party logistics, obtain the valuable information hidden in the logistics big data and promote the logistics enterprises to make more reasonable planning schemes.

Design/methodology/approach

In this paper, a forgetting factor is introduced without changing the algorithm's complexity, and an algorithm based on this forgetting factor, called the α-SVMSGD algorithm, is proposed. The algorithm selectively deletes or retains historical data, which improves the classifier's adaptability to new real-time logistics data. Simulation results verify the application effect of the algorithm.
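The sketch below illustrates the forgetting-factor idea only: a linear SVM trained by hinge-loss SGD in which older samples are exponentially down-weighted by a factor alpha and dropped once forgotten below a threshold. The actual α-SVMSGD update rules are defined in the paper; everything here is an illustrative assumption.

```python
# Illustrative SGD training of a linear SVM with a forgetting factor alpha
# that down-weights, and eventually drops, older samples. Not the paper's
# exact alpha-SVMSGD algorithm.
import numpy as np

def svm_sgd_forgetting(stream, alpha=0.95, lr=0.01, lam=1e-3, drop_below=0.05):
    w, b = None, 0.0
    window = []                       # retained (x, y, weight) history
    for x, y in stream:               # labels y in {-1, +1}
        if w is None:
            w = np.zeros_like(x, dtype=float)
        # Age the history and drop samples forgotten below the threshold.
        window = [(xi, yi, wt * alpha) for xi, yi, wt in window
                  if wt * alpha >= drop_below]
        window.append((x, y, 1.0))
        # One weighted hinge-loss SGD pass over the retained history.
        for xi, yi, wt in window:
            if yi * (w @ xi + b) < 1:           # margin violation
                w += lr * (wt * yi * xi - lam * w)
                b += lr * wt * yi
            else:                               # only regularize
                w -= lr * lam * w
    return w, b

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
w, b = svm_sgd_forgetting(zip(X, y))
print(w, b)
```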

Findings

With the increase in training times, the test error percentages of the gradient descent (GD) algorithm, the stochastic gradient descent (SGD) algorithm and the α-SVMSGD algorithm decrease gradually. In processing logistics big data, the α-SVMSGD algorithm attains the efficiency of the SGD algorithm while ensuring that the descent direction approaches the optimal GD direction, and it can use a small amount of data to obtain more accurate results and enhance convergence accuracy.

Research limitations/implications

The threshold setting of the forgetting factor still needs to be improved. Setting thresholds for different data types in self-learning has become a research direction. The number of forgotten data can be effectively controlled through big data processing technology to improve data support for the normal operation of third-party logistics.

Practical implications

The approach can effectively reduce the time consumed by data mining, achieve rapid and accurate convergence on sample data without increasing sample complexity, improve the efficiency of logistics big data mining and reduce the redundancy of historical data; it therefore has reference value for promoting the development of the logistics industry.

Originality/value

The classification algorithm proposed in this paper is feasible and converges quickly in third-party logistics big data mining. The α-SVMSGD algorithm has application value in real-time logistics data mining, but the design of the forgetting factor threshold needs to be improved. In future work, the authors will continue to study how to set thresholds for different data types in self-learning.

Details

Journal of Enterprise Information Management, vol. 35 no. 4/5
Type: Research Article
ISSN: 1741-0398


Article
Publication date: 25 May 2023

Lingling Huang, Chengqiang Zhao, Shijie Chen and Liujing Zeng


Abstract

Purpose

Technical advantages embraced by blockchain, such as distributed ledgers, P2P networks, consensus mechanisms and smart contracts, are highly compatible with addressing the security issues of transferring and storing judicial documents and with obtaining feedback and evaluation of judicial translation services in cases with foreign elements. On this basis, a consortium blockchain-based model for supervising the overall process of judicial translation services in cases with foreign elements is proposed.

Design/methodology/approach

Some judicial documents must be translated when language barriers arise in cases with foreign elements. This paper is expected to address the security issues, so far ignored, in the process of translating judicial documents.
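As a toy illustration of the tamper-evidence property such a ledger provides, the Python sketch below chains document hashes so that any later edit invalidates the chain. A real consortium blockchain (consensus, smart contracts, P2P networking, e.g. a platform like Hyperledger Fabric) is far more involved; this is not the authors' model.

```python
# Toy hash chain over translated-document records: each block commits to a
# document hash and to the previous block, so any edit breaks verification.
import hashlib
import json
import time

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def add_block(chain, document: bytes, meta: dict):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"doc_hash": sha256(document), "meta": meta,
             "timestamp": time.time(), "prev_hash": prev_hash}
    block["hash"] = sha256(json.dumps(block, sort_keys=True).encode())
    chain.append(block)

def verify(chain) -> bool:
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != sha256(json.dumps(body, sort_keys=True).encode()):
            return False
        if i and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_block(chain, b"translated judgment #1", {"case": "FX-2023-001"})
add_block(chain, b"evaluation: accurate", {"case": "FX-2023-001"})
print(verify(chain))  # True; altering any stored record makes this False
```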

Findings

The experimental results show that the model constructed in this paper can effectively guarantee the security and privacy of transferring and storing translated judicial documents in cases with foreign elements, and can realize the credibility and traceability of feedback and evaluations of judicial translation services. In addition, the underlying network communication is stable, and the data processing speed meets the requirements of practical application.

Originality/value

The research in this paper provides an innovative scheme for judicial translation services in cases with foreign elements. The model constructed is conducive to protecting the security of the transfer and storage of judicial documents and to improving the efficiency and modernization of hearing cases with foreign elements.

Details

Aslib Journal of Information Management, vol. 76 no. 5
Type: Research Article
ISSN: 2050-3806


Article
Publication date: 29 February 2024

Atefeh Hemmati, Mani Zarei and Amir Masoud Rahmani


Abstract

Purpose

Big data challenges and opportunities have emerged on the Internet of Vehicles (IoV), a transformative paradigm for intelligent transportation systems. With the growth of data-driven applications and advances in data analysis techniques, data-adaptive innovation is set to become an outstanding development in future IoV applications. Therefore, this paper aims to focus on big data in IoV and to provide an analysis of the current state of research.

Design/methodology/approach

This review paper uses a systematic literature review methodology, conducting a thorough search of academic databases to identify relevant scientific articles. By reviewing and analyzing the primary articles found in the domain of big data in IoV, 45 research articles published from 2019 to 2023 were selected for detailed analysis.

Findings

This paper identifies the main applications, use cases and primary contexts considered for big data in IoV. It then documents challenges, opportunities, future research directions and open issues.

Research limitations/implications

This paper is based on academic articles published from 2019 to 2023. Therefore, scientific outputs published before 2019 are omitted.

Originality/value

This paper provides a thorough analysis of big data in IoV and considers distinct research questions corresponding to big data challenges and opportunities in IoV. It also provides valuable insights for researchers and practitioners seeking to advance this field by examining existing work and future directions for big data in the IoV ecosystem.

Details

International Journal of Pervasive Computing and Communications, vol. 20 no. 2
Type: Research Article
ISSN: 1742-7371


Article
Publication date: 5 November 2020

Nan Zhang, Lichao Zhang, Senlin Wang, Shifeng Wen and Yusheng Shi


Abstract

Purpose

In the implementation of large-size additive manufacturing (AM), a large printing area can be established by using multiple tiled, fixed printing heads or a single printing head moving dynamically in the xy-plane; both schemes require a layer decomposition after mesh slicing to generate segmented infill areas. The data processing flow of these schemes is somewhat redundant and inefficient, especially for complex stereolithography (STL) models, so simplifying the redundant steps is of great importance for improving the overall efficiency of large-size AM software. This paper aims to address these issues.

Design/methodology/approach

In this paper, a method of directly generating segmented layered infill areas is proposed for AM. Initially, a vertices–mesh hybrid representation of STL models is constructed based on a divide-and-conquer strategy. Then, a trimming–mapping procedure is performed on sliced contours acquired from partial surfaces. Finally, to link trimmed open contours and inside-signal square corners into segmented infill areas, a region-based open-contour closing algorithm is carried out by virtue of the developed data structures.
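For orientation, here is a minimal Python sketch of the basic slicing step only (intersecting one mesh triangle with a z-plane); the paper's actual contributions, such as the hybrid representation, trimming–mapping and region-based contour closing, are omitted.

```python
# Minimal mesh-slicing primitive: the segment where one triangle crosses a
# z-plane. Edges lying exactly in the plane are ignored in this sketch.
import numpy as np

def slice_triangle(tri, z):
    """Return the 2-point segment where triangle `tri` (3x3 array of
    vertices) strictly crosses the plane of height z, else None."""
    pts = []
    for i in range(3):
        p, q = tri[i], tri[(i + 1) % 3]
        if (p[2] - z) * (q[2] - z) < 0:      # edge strictly crosses the plane
            t = (z - p[2]) / (q[2] - p[2])
            pts.append(p + t * (q - p))      # interpolated crossing point
    return np.array(pts) if len(pts) == 2 else None

tri = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 2.0]])
print(slice_triangle(tri, z=0.5))
```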

Findings

By virtue of the proposed approach, segmented layered infill areas can be generated directly from STL models. Experimental results indicate that the approach is highly efficient, especially for complex STL models.

Practical implications

The proposed approach can generate segmented layered infill areas efficiently in some cases.

Originality/value

The region-based layered infill area generation approach discussed here supplements current data processing technologies in large-size AM; it is well suited to parallel processing and enables improvement of the efficiency of large-size AM software.

Details

Rapid Prototyping Journal, vol. 27 no. 1
Type: Research Article
ISSN: 1355-2546


Article
Publication date: 17 February 2021

Yinying Wang



Abstract

Purpose

Artificial intelligence (AI) refers to a class of algorithms or computerized systems that resemble human mental processes of decision-making. This position paper looks beyond the sensational hyperbole of AI in teaching and learning. Instead, this paper aims to explore the role of AI in educational leadership.

Design/methodology/approach

To explore the role of AI in educational leadership, I synthesized the literature that intersects AI, decision-making, and educational leadership from multiple disciplines such as computer science, educational leadership, administrative science, judgment and decision-making and neuroscience. Grounded in the intellectual interrelationships between AI and educational leadership since the 1950s, this paper starts with conceptualizing decision-making, including both individual decision-making and organizational decision-making, as the foundation of educational leadership. Next, I elaborated on the symbiotic role of human-AI decision-making.

Findings

With its efficiency in collecting, processing and analyzing data and providing real-time or near-real-time results, AI can bring analytical efficiency to assist educational leaders in making data-driven, evidence-informed decisions. However, AI-assisted data-driven decision-making may run against value-based moral decision-making. Taken together, both leaders' individual decision-making and organizational decision-making are best handled by a blend of data-driven, evidence-informed decision-making and value-based moral decision-making. AI can function as an extended brain in making data-driven, evidence-informed decisions, while its shortcomings can be overcome by human judgment guided by moral values.

Practical implications

The paper concludes with two recommendations for educational leadership practitioners' decision-making and future scholarly inquiry: keeping a watchful eye on biases and minding ethically-compromised decisions.

Originality/value

This paper brings together two fields, educational leadership and AI, that grew up together from the 1950s and then mostly grew apart until the late 2010s. To explore the role of AI in educational leadership, this paper starts with the foundation of leadership: decision-making, both leaders' individual decisions and collective organizational decisions. The paper then synthesizes the literature that intersects AI, decision-making and educational leadership from multiple disciplines to delineate the role of AI in educational leadership.

Details

Journal of Educational Administration, vol. 59 no. 3
Type: Research Article
ISSN: 0957-8234

