Search results
1 – 10 of over 1000
Yanhui Song and Jiayi Cao
Abstract
Purpose
The purpose of this paper is to predict bibliometric indicators based on ARIMA models and to study the short-term trends of bibliometric indicators.
Design/methodology/approach
This paper establishes a non-stationary time series ARIMA (p, d, q) forecasting model based on the bibliometric index data of 13 library and information science journals selected from the Chinese Social Sciences Citation Index (CSSCI) database for the period 1998–2018. ACF and PACF methods are used for parameter estimation, and the model predicts the development trend of the bibliometric indicators over the next five years. The predictions were also subjected to error analysis.
Findings
ARIMA models are feasible for predicting bibliometric indicators. The model predicted the trends of the four bibliometric indicators over the next five years: the number of publications shows a decreasing trend, while the H-value, average citations and citations show increasing trends. Error analysis showed that the mean absolute percentage error for each of the four indicators was within 5%, indicating that the model predicts well.
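The mean absolute percentage error criterion used above can be computed as follows; the values are illustrative only.

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs((actual - predicted) / actual)) * 100)

# Illustrative values only; a MAPE under 5% indicates a good fit.
print(mape([100, 110, 120], [102, 108, 123]))
```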
Research limitations/implications
This study has some limitations. Thirteen Chinese journals in the field of Library and Information Science were selected as the research objects; this scope, based on the bibliometric indicators of Chinese journals, is relatively small and cannot represent the evolution trend of the entire discipline. In the future, the authors will therefore select different fields and different source databases for further research.
Originality/value
This study predicts how bibliometric indicators will change over the next five years, which is beneficial for further in-depth research. At the same time, it provides a new and effective method for predicting bibliometric indicators.
Jiandong Zhou, Xiang Li, Xiande Zhao and Liang Wang
Abstract
Purpose
The purpose of this paper is to deal with the practical challenge faced by modern logistics enterprises to accurately evaluate driving performance with high computational efficiency under the disturbance of road smoothness and to identify significantly associated performance influence factors.
Design/methodology/approach
The authors cooperate with a logistics server (G7) and establish a driving grading system by constructing real-time inertial navigation data-enabled indicators for both driving behaviour (times of aggressive speed change and times of lane change) and road smoothness (average speed and average vibration times of the vehicle body).
Findings
The developed driving grading system demonstrates highly accurate evaluations in practical use. Data analytics on the constructed indicators confirm the significance of both driving behaviour heterogeneity and the road smoothness effect on objective driving grading. The methodologies are validated with real-life tests on different types of vehicles and are confirmed to be effective in practical tests, with 95% accuracy against prior benchmarks. Data analytics based on the grading system validate the hypotheses of the driving fatigue effect, the impact of daily traffic periods and the transition effect. In addition, the authors empirically distinguish the impact strength of external factors (driving time, rainfall and humidity, wind speed, and air quality) on driving performance.
Practical implications
This study has good potential for providing objective driving grading as required by the modern logistics industry to improve transparent management efficiency with real-time vehicle data.
Originality/value
This study contributes to the existing research by comprehensively measuring both road smoothness and driving performance in the driving grading system in the modern logistics industry.
KyoungOk Kim, Sho Sonehara and Masayuki Takatera
Abstract
Purpose
The purpose of this paper is to quantitatively evaluate the effect of adhesive interlining on the appearance of tailored jackets with different rigidity.
Design/methodology/approach
Four tailored jackets having the same pattern and fabric and three different adhesive interlinings or no adhesive interlining were prepared as experimental samples. Criteria and characteristics for assessing jacket appearance were investigated in sensory tests. A paired comparison of the jacket appearance was conducted using a ranking method. Smoothness and constriction values were proposed and obtained using three-dimensional shape data. The smoothness value refers to the degree of wrinkling on the jacket surface and the constriction value refers to the degree of constriction of the waistline. A quantitative assessment model of jacket appearance was proposed using multiple regression analysis.
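The final step above, a multiple regression of sensory appearance scores on the proposed smoothness and constriction values, can be sketched with ordinary least squares. All numbers below are invented for illustration; the paper's actual measurements are not reproduced.

```python
import numpy as np

# Hypothetical (smoothness, constriction) values for four jackets.
X = np.array([[0.82, 0.31],
              [0.75, 0.42],
              [0.68, 0.55],
              [0.90, 0.25]])
# Hypothetical sensory appearance scores for the same jackets.
y = np.array([3.8, 3.1, 2.4, 4.2])

# Fit y = b0 + b1*smoothness + b2*constriction by least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
print(coef, pred)
```

With real data, the agreement between `pred` and the sensory scores would be assessed (e.g. by R-squared) to validate the appearance model, as the Findings below describe.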
Findings
The sensory test reveals that the number of wrinkles, acceptability of wrinkling and degree of constriction of the waist are important criteria in the assessment of jacket appearance. The smoothness value for the front body and the constriction value of the waist partially agreed with the normal scores of sensory test results. Sensory evaluation values for the entire jacket appearance were estimated employing multiple regression analysis with the constriction and smoothness values. The values of jacket appearance estimated using multiple regression analysis were in good agreement with the sensory test results.
Originality/value
Criteria and characteristics to be used in the assessment of the appearance of a jacket with adhesive interlining were clarified. Employing the proposed methodology, it is possible to predict jacket appearance for different adhesive interlinings, quantitatively.
Cheng Zhi Jiang, Yong Wei and Jun Ling
Abstract
Purpose
The purpose of this paper is to establish a necessary condition under which the relative error between the inverse-transformed sequence of a continuous function transformation and the original sequence is not larger than the relative error between the transformed sequence and its corresponding simulation sequence.
Design/methodology/approach
First, the paper explores the feature of function transformations whose relative error is not enlarged after inverse transformation. This feature is then combined with the previously obtained features of transformations that do not enlarge the class ratio dispersion and do not reduce smoothness, yielding a special class of transformations that do not enlarge the class ratio dispersion, do not reduce smoothness and keep the relative error from enlarging after inverse transformation. The concrete form of this special function type is given for monotonically increasing and monotonically decreasing continuous function transformations, respectively, and its properties are studied.
Findings
This paper first identifies a concise and important criterion determining whether the relative error of a monotonically increasing function transformation is enlarged after inverse transformation, and then the corresponding criterion for monotonically decreasing function transformations. It further finds that an ideal function transformation that both strictly reduces the class ratio dispersion and keeps the inverse-transformation error from enlarging does not exist, for either monotonically increasing or monotonically decreasing function transformations.
Practical implications
Using the necessary condition given in this paper, one can judge by simple calculation, before modeling, whether a function transformation keeps the relative error from enlarging after inverse transformation, and thereby choose the best transformation. These results show that function transformations cannot all be treated alike with respect to whether the relative error after inverse transformation is enlarged or reduced; rather, they must be divided into ranges in which the error is expanded and ranges in which it is reduced. This affects the future direction of research: rather than seeking a transformation that both reduces the class ratio dispersion and keeps the inverse-transformation relative error from enlarging, one should study which transformations narrow the class ratio dispersion in some range, improving the modeling accuracy even though the relative error enlarges after inverse transformation, while the simulation accuracy remains higher than that of modeling the original data directly; and which transformations expand the class ratio dispersion in some range, reducing the modeling accuracy while the relative error does not enlarge after inverse transformation, with the simulation accuracy again remaining higher than that of modeling the original data directly.
Originality/value
This work saves peers from spending further effort seeking a function transformation that both reduces the class ratio dispersion and keeps the relative error from enlarging after inverse transformation. At the same time, it reminds them that even if a transformation greatly reduces the class ratio dispersion and substantially improves the modeling accuracy, the inverse-transformation error may still be quite large. Conversely, even if a transformation increases the class ratio dispersion and worsens the modeling accuracy, an ideal outcome after inverse transformation may still occur; this possibility cannot be excluded.
Hongyu Zhao, Zhelong Wang, Qin Gao, Mohammad Mehedi Hassan and Abdulhameed Alelaiwi
Abstract
Purpose
The purpose of this paper is to develop an online smoothing zero-velocity-update (ZUPT) method that helps achieve smooth estimation of human foot motion for the ZUPT-aided inertial pedestrian navigation system.
Design/methodology/approach
The smoothing ZUPT is based on a Rauch–Tung–Striebel (RTS) smoother, using a six-state Kalman filter (KF) as the forward filter. The KF acts as an indirect filter, which allows the sensor measurement error and position error to be excluded from the error state vector, so as to reduce the modeling error and computational cost. A threshold-based strategy is exploited to verify the detected ZUPT periods, with the threshold parameter determined by a clustering algorithm. A quantitative index is proposed to give a smoothness estimate of the position data.
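A forward Kalman filter followed by a backward RTS pass, as described above, can be sketched on a toy model. This is a generic one-dimensional constant-velocity example, not the paper's six-state foot-motion filter; the noise parameters are assumed for illustration.

```python
import numpy as np

# Toy constant-velocity model: state = [position, velocity].
dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition
H = np.array([[1.0, 0.0]])               # observe position only
Q = np.diag([1e-5, 1e-4])                # process noise (assumed)
R = np.array([[1e-2]])                   # measurement noise (assumed)

rng = np.random.default_rng(1)
true_pos = np.cumsum(np.full(200, 0.5 * dt))      # velocity 0.5
z = true_pos + rng.normal(0.0, 0.1, 200)          # noisy measurements

# Forward Kalman filter: store predicted and filtered estimates.
x, P = np.zeros(2), np.eye(2)
xs_f, Ps_f, xs_p, Ps_p = [], [], [], []
for zk in z:
    xp, Pp = F @ x, F @ P @ F.T + Q                       # predict
    K = Pp @ H.T @ np.linalg.inv(H @ Pp @ H.T + R)        # Kalman gain
    x = xp + K @ (np.atleast_1d(zk) - H @ xp)             # update
    P = (np.eye(2) - K @ H) @ Pp
    xs_p.append(xp); Ps_p.append(Pp); xs_f.append(x); Ps_f.append(P)

# Backward Rauch-Tung-Striebel pass: refine using future information.
xs_s = [xs_f[-1]]
x_s, P_s = xs_f[-1], Ps_f[-1]
for k in range(len(z) - 2, -1, -1):
    C = Ps_f[k] @ F.T @ np.linalg.inv(Ps_p[k + 1])
    x_s = xs_f[k] + C @ (x_s - xs_p[k + 1])
    P_s = Ps_f[k] + C @ (P_s - Ps_p[k + 1]) @ C.T
    xs_s.append(x_s)
xs_s.reverse()
```

Because the backward pass needs future measurements, an online version such as the one in the paper must buffer data, which is why a delay of up to one gait cycle is reported below.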
Findings
Experimental results show that the proposed method can improve the smoothness, robustness, efficiency and accuracy of pedestrian navigation.
Research limitations/implications
Because of the chosen smoothing algorithm, a delay no longer than one gait cycle is introduced. Therefore, the proposed method is suitable for applications with soft real-time constraints.
Practical implications
The paper has implications for the smooth estimation of most types of legged locomotion, using a single foot-mounted commercial-grade inertial sensor.
Originality/value
This paper realizes smooth transitions between swing and stance phases, enables continuous correction of navigation errors throughout the gait cycle, achieves robust detection of gait phases and, more importantly, requires lower computational cost.
Masayuki Takatera, Ran Yoshida, Julie Peiffer, Moe Yamazaki, Kenya Yashima, KyoungOk Kim and Keiko Miyatake
Abstract
Purpose
The purpose of this paper is to create a fabric retrieval system for designers that is based on a database that includes designers’ criteria and Kansei (sense and feeling) information, designed for the selection of a fabric from a wide range in e-commerce.
Design/methodology/approach
The database included sensory expressions for each type of fabric taken from fashion journals and values of smoothness, softness, luster and thinness (referred to as Kansei values) for each fabric. The Kansei values were determined by a Japanese expert designer using standard fabric samples of a fabric type. The system uses two search methods to find the desired type of fabric: a category search method and a free word search method. After finding appropriate types of fabric, the user further narrows down the fabrics of the selected type to more suitable fabrics using the Kansei values. The validity of the Kansei values and the effectiveness of the system were verified by 11 professional designers from Japan and Sweden.
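The two-stage retrieval described above (find a fabric type, then narrow down within it by closeness to target Kansei values) can be sketched as below. The fabric records, type labels and Kansei tuples are all invented for illustration.

```python
# Toy fabric database: each record has a type and Kansei values for
# (smoothness, softness, luster, thinness). All data are invented.
fabrics = [
    {"name": "fabric-A", "type": "wool", "kansei": (4, 3, 2, 3)},
    {"name": "fabric-B", "type": "wool", "kansei": (2, 4, 3, 4)},
    {"name": "fabric-C", "type": "silk", "kansei": (5, 4, 5, 5)},
]

def retrieve(fabric_type, target, k=2):
    """Stage 1: filter by type. Stage 2: rank by squared distance
    between each fabric's Kansei values and the target values."""
    pool = [f for f in fabrics if f["type"] == fabric_type]
    dist = lambda f: sum((a - b) ** 2 for a, b in zip(f["kansei"], target))
    return [f["name"] for f in sorted(pool, key=dist)[:k]]

print(retrieve("wool", target=(4, 3, 2, 3)))  # fabric-A matches exactly
```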
Findings
The Japanese and Swedish designers were satisfied with the fabrics retrieved for specific items and found that the system was effective. The Kansei values were similar among fashion designers and shown to be effective for fabric retrieval.
Originality/value
The system will allow designers to find appropriate types of fabric and to narrow their search for fabrics among selected types to find candidate fabrics easily and quickly with their Kansei values and experience without technical knowledge of fabrics.
Abstract
Purpose
Given that no single function transformation can both reduce the class ratio dispersion and keep the relative error from enlarging after inverse transformation, this paper provides a separable binary function transformation.
Design/methodology/approach
First, to ensure that a sequence's class ratio dispersion is reduced after binary function transformation, the sufficient and necessary condition for a binary function transformation with reduced class ratio dispersion is obtained. Second, to ensure that the relative error is not enlarged after inverse transformation, the necessary condition for a separable binary function transformation is obtained for monotonically increasing and monotonically decreasing functions, respectively.
Findings
The paper obtains the sufficient and necessary condition for a binary function transformation to reduce the class ratio dispersion, and the necessary condition for a separable binary function transformation to keep the relative error from enlarging after inverse transformation.
Practical implications
Based on the properties of the separable binary function transformation provided in this paper, a grey prediction model is established, which can improve modeling accuracy.
Originality/value
This paper provides a binary function transformation and investigates the sufficient and necessary condition for a binary function transformation to reduce the class ratio dispersion, as well as the necessary condition for a separable binary function transformation to keep the relative error from enlarging after inverse transformation. This makes it easy for scholars to pretest candidate transformations before selecting a separable binary function transformation. The binary function transformation is a further extension of the single function transformation, broadening and enriching the choice of function transformations.
Abstract
Purpose
Constrained clustering is an important recent development in the clustering literature. The goal of a constrained clustering algorithm is to improve the quality of clustering by making use of background knowledge. The purpose of this paper is to suggest a new perspective for constrained clustering: finding an effective transformation of the data into a target space, guided by background knowledge given in the form of pairwise must-link and cannot-link constraints.
Design/methodology/approach
Most existing methods in constrained clustering are limited to learning a distance metric or kernel matrix from the background knowledge when looking for a transformation of the data into the target space. Unlike previous efforts, the author presents a non-linear method for constrained clustering, whose basic idea is to use a different non-linear function for each dimension in the target space.
Findings
The outcome of the paper is a novel non-linear method for constrained clustering which uses different non-linear functions for each dimension in target space. The proposed method for a particular case is formulated and explained for quadratic functions. To reduce the number of optimization parameters, the proposed method is modified to relax the quadratic function and approximate it by a factorized version that is easier to solve. Experimental results on synthetic and real-world data demonstrate the efficacy of the proposed method.
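The core idea above (apply a different quadratic function to each dimension, then judge the transform against pairwise constraints) can be sketched as follows. The coefficients and the hinge-style loss are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def transform(X, A, B):
    """Per-dimension quadratic map: x_d -> A[d]*x_d**2 + B[d]*x_d."""
    return A * X**2 + B * X

def constraint_loss(Xt, must, cannot, margin=1.0):
    """Pull must-link pairs together; push cannot-link pairs apart
    until their squared distance exceeds the margin."""
    pull = sum(np.sum((Xt[i] - Xt[j]) ** 2) for i, j in must)
    push = sum(max(0.0, margin - np.sum((Xt[i] - Xt[j]) ** 2))
               for i, j in cannot)
    return pull + push

# Four toy points forming two natural pairs, plus pairwise constraints.
X = np.array([[0.10, 0.20], [0.15, 0.25], [0.90, 0.80], [0.95, 0.85]])
must, cannot = [(0, 1), (2, 3)], [(0, 2), (1, 3)]

Xt = transform(X, A=np.array([0.5, 0.5]), B=np.array([1.0, 1.0]))
loss = constraint_loss(Xt, must, cannot)
print(loss)
```

In the paper's full method, the per-dimension coefficients would be optimized (or, as in the factorized version mentioned above, approximated) to minimize such an objective before clustering the transformed data.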
Originality/value
This study proposes a new direction to the problem of constrained clustering by learning a non-linear transformation of data into target space without using kernel functions. This work will assist researchers to start development of new methods based on the proposed framework which will potentially provide them with new research topics.
Abstract
Purpose
The purpose of this paper is to find automatic post‐processing scheme to give textures and motion data to three dimensional (3D) body scan data.
Design/methodology/approach
A semi-implicit particle-based method was applied to the post-processing of 3D body scan data. The template avatar mesh was draped onto the target scan data, and the texture/motion data were transferred to the regenerated body. Automatic body feature detection was used to correlate the template body with the target body.
Findings
The semi-implicit particle method offers advantages in both computational stability and accuracy. The calculation is completed in a few minutes, and even data with many holes can be used.
Originality/value
Several studies address body feature detection and scan body regeneration, but this paper aims at a fully automatic method that needs no human intervention. The semi-implicit particle method, which is popularly used for cloth simulation, is applied to body data regeneration. Conventional 3D body scan data, which have no colors or motions, can be given textures and motions with this approach, and even the face can be freely interchanged using external face generation software.
Elcio M. Tachizawa, María J. Alvarez-Gil and María J. Montes-Sancho
Abstract
Purpose
The purpose of this paper is to analyze the impact of smart city initiatives and big data on supply chain management (SCM). More specifically, the connections between smart cities, big data and supply network characteristics (supply network structure and governance mechanisms) are investigated.
Design/methodology/approach
An integrative framework is proposed, grounded on a literature review on smart cities, big data and supply networks. Then, the relationships between these constructs are analyzed, using the proposed integrative framework.
Findings
Smart cities have different implications for network structure (complexity, density and centralization) and governance mechanisms (formal vs informal). Moreover, this work highlights and discusses future research directions relating to smart cities and SCM.
Research limitations/implications
The relationships between smart cities, big data and supply networks cannot be described simply by using a linear, cause-and-effect framework. Accordingly, an integrative framework that can be used in future empirical studies to analyze smart cities and big data implications on SCM has been proposed.
Practical implications
Smart cities and big data alone have limited capacity to improve SCM processes, but combined they can support improvement initiatives. Nevertheless, smart cities and big data can also pose novel obstacles to effective SCM.
Originality/value
Several studies have analyzed information technology innovation adoption in supply chains, but, to the best of our knowledge, no study has focused on smart cities.