Search results
1 – 10 of over 6000
Bin Wang, Huifeng Li, Le Tong, Qian Zhang, Sulei Zhu and Tao Yang
Abstract
Purpose
This paper aims to address the following issues: (1) most existing methods are based on recurrent networks, which are time-consuming to train on long sequences because they do not allow full parallelism; (2) personalized preferences are generally not modeled adequately; (3) existing methods have rarely studied systematically how to efficiently utilize the various auxiliary information (e.g. user ID and timestamp) in trajectory data and the spatiotemporal relations among nonconsecutive locations.
Design/methodology/approach
The authors propose a novel self-attention network–based model named SanMove to predict the next location by capturing the long- and short-term mobility patterns of users. Specifically, SanMove uses a self-attention module to capture each user's long-term preference, which can represent her personalized location preference. Meanwhile, the authors use a spatial-temporal guided noninvasive self-attention (STNOVA) module to exploit auxiliary information in the trajectory data to learn the user's short-term preference.
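The core idea of processing a whole trajectory in parallel with self-attention can be sketched minimally as follows. This is an illustrative sketch only; the function, the embedding size and the toy trajectory are assumptions, not the SanMove or STNOVA implementation.

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over a trajectory.

    x: (seq_len, d) array of location embeddings.
    Returns a (seq_len, d) array in which every position attends to
    every other position at once (no recurrence, full parallelism).
    """
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                    # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ x                               # context-aware mixtures

# A toy trajectory of 4 visited locations, each a 3-d embedding.
trajectory = np.random.default_rng(0).normal(size=(4, 3))
out = self_attention(trajectory)
print(out.shape)  # (4, 3): one context vector per visit
```

Unlike an RNN, each output row here depends on the whole sequence through one matrix product, which is why attention-based models train faster on long trajectories.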
Findings
The authors evaluate SanMove on two real-world datasets. The experimental results demonstrate that SanMove is not only faster than state-of-the-art recurrent neural network (RNN)-based prediction models but also outperforms the baselines for next-location prediction.
Originality/value
The authors propose a self-attention-based sequential model named SanMove to predict the user's trajectory, which comprises long-term and short-term preference learning modules. SanMove allows fully parallel processing of trajectories to improve processing efficiency. They propose an STNOVA module to capture the sequential transitions of current trajectories. Moreover, the self-attention module is used to process historical trajectory sequences in order to capture the personalized location preference of each user. The authors conduct extensive experiments on two check-in datasets. The experimental results demonstrate that the model has a fast training speed and excellent performance compared with existing RNN-based methods for next-location prediction.
M. Khoshnevisan, F. Kaymarm, H.P. Singh, R. Singh and F. Smarandache
Abstract
This paper proposes a class of estimators for the population correlation coefficient when information about the population mean and population variance of one of the variables is not available but information about these parameters of another (auxiliary) variable is available, in two-phase sampling, and analyzes its properties. The optimum estimator in the class is identified along with its variance formula. The estimators of the class involve unknown constants whose optimum values depend on unknown population parameters. Earlier research has shown that when these population parameters are replaced by their consistent estimates, the resulting class of estimators has the same asymptotic variance as the optimum estimator. An empirical study is carried out to demonstrate the performance of the constructed estimators.
Jack Allen, Housila P. Singh and Florentin Smarandache
Abstract
This paper proposes a family of estimators of population mean using information on several auxiliary variables and analyzes its properties in the presence of measurement errors.
R.W. Baines and G.J. Colquhoun
System design methodologies can greatly help the understanding of complex manufacturing situations. The IDEF0 structured analysis method is one of the favoured tools for industry.
G.J. Colquhoun, J.D. Gamble and R.W. Baines
Abstract
International competition is driving manufacturing executives to place an ever‐growing importance on the formulation of computer integrated manufacturing (CIM) strategies as part of their corporate plans. Structured analysis and design techniques, in particular IDEF (Integrated Computer Aided Manufacturing definition method), are becoming a vital tool in the analysis and implementation of such CIM strategies. This article demonstrates the technique and its ability to model the link between design and manufacture in a CIM environment. The approach relates the interdependencies of planning for manufacture, design and process planning within a CIM strategy. In particular, it establishes the position of computer aided process planning (CAPP) in a CIM architecture and evaluates a CAPP package as a potential element of a CIM strategy. The application to which IDEF0, in particular, has been put clearly demonstrates its usefulness to manufacturers as a powerful aid to the development of detailed CIM strategies.
Ashok Ranchhod, Cãlin Gurãu and Ray Hackney
Abstract
Investigates the application of Internet marketing and information exchange strategies in the biotechnology sector. The Internet is particularly valuable in this context because not only does it offer instant information about products and services, but it also provides an interactive medium for value-added activities such as “virtual” molecular modeling. This type of activity can foster important joint research operations between companies on a worldwide basis.
Kai Zheng, Xianjun Yang, Yilei Wang, Yingjie Wu and Xianghan Zheng
Abstract
Purpose
The purpose of this paper is to alleviate the problem of poor robustness and over-fitting caused by large-scale data in collaborative filtering recommendation algorithms.
Design/methodology/approach
Interpreting user behavior from the probabilistic perspective of hidden variables is helpful to improve robustness and over-fitting problems. Constructing a recommendation network by variational inference can effectively solve the complex distribution calculation in the probabilistic recommendation model. Based on the aforementioned analysis, this paper uses variational auto-encoder to construct a generating network, which can restore user-rating data to solve the problem of poor robustness and over-fitting caused by large-scale data. Meanwhile, for the existing KL-vanishing problem in the variational inference deep learning model, this paper optimizes the model by the KL annealing and Free Bits methods.
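The two remedies for KL vanishing named above, KL annealing and Free Bits, can be sketched as a single loss function. This is a minimal illustration under assumed names and hyperparameters, not the authors' actual model.

```python
import numpy as np

def vae_loss(recon_nll, kl_per_dim, step, anneal_steps=10000, free_bits=0.5):
    """ELBO-style loss with two fixes for KL vanishing.

    recon_nll : reconstruction negative log-likelihood (scalar).
    kl_per_dim: KL divergence per latent dimension (array).
    KL annealing: ramp the KL weight beta from 0 to 1 over anneal_steps,
    so early training focuses on reconstruction.
    Free bits: each latent dimension contributes at least free_bits nats,
    so the optimizer cannot drive the latent code to zero information.
    """
    beta = min(1.0, step / anneal_steps)          # linear warm-up schedule
    kl = np.maximum(kl_per_dim, free_bits).sum()  # per-dimension floor
    return recon_nll + beta * kl

kl_dims = np.array([0.1, 0.8, 0.0, 2.0])
early = vae_loss(recon_nll=50.0, kl_per_dim=kl_dims, step=0)      # beta = 0
late  = vae_loss(recon_nll=50.0, kl_per_dim=kl_dims, step=10000)  # beta = 1
print(early, late)  # 50.0 53.8
```

At step 0 the KL term is fully annealed away; at full weight the floored KL (0.5 + 0.8 + 0.5 + 2.0 = 3.8) is added, so dimensions the encoder has collapsed still carry a penalty.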
Findings
The effect of the basic model is considerably improved after using the KL annealing or Free Bits method to solve KL vanishing. The proposed models evidently perform worse than competitors on small data sets, such as MovieLens 1M. By contrast, they have better effects on large data sets such as MovieLens 10M and MovieLens 20M.
Originality/value
This paper presents the usage of the variational inference model for collaborative filtering recommendation and introduces the KL annealing and Free Bits methods to improve the basic model effect. Because the variational inference training denotes the probability distribution of the hidden vector, the problem of poor robustness and overfitting is alleviated. When the amount of data is relatively large in the actual application scenario, the probability distribution of the fitted actual data can better represent the user and the item. Therefore, using variational inference for collaborative filtering recommendation is of practical value.
Hao Chen, Wenli Li, Tu Lyu and Xunan Zheng
Abstract
Purpose
The rapid development of the Internet in China has profoundly affected the country's charities, which many people support through online donations (e.g. providing financial help) and charity information forwarding (a new behavior of participating in online charities via social media). However, the development of online charities has been accompanied by many problems, such as donation fraud and fake charity information, which adversely affect social kindness. The purpose of this paper is to understand people's online donation and forwarding behaviors and to explore the mechanisms of such behaviors from the perspectives of cognitive-based trust and emotional-based empathic concern.
Design/methodology/approach
This study developed a research model based on the elaboration likelihood model (ELM) and stimulus–organism–response (SOR) model. The researchers obtained 287 valid samples via a scenario-based experimental survey and conducted partial least squares structural equation modeling (PLS-SEM) to test the model.
Findings
The results indicated that (1) online donation intention is motivated by rational-based trust and emotional-based empathic concern; (2) online charity information forwarding is triggered only when trust is built, and there is no significant correlation between empathic concern and forwarding intention; and (3) content quality, initiator credibility, and platform reputation are three critical paths to promote trust; in addition, an individual's empathic concern can be motivated by the emotional appeal.
Originality/value
This study highlights the different mechanisms of donation and forwarding behaviors and provides theoretical measures for motivating trust and empathic concern in the online context to promote people's participation in online charity.
Abstract
Purpose
The purpose of this paper is to construct a digital collection and database of traditional clothing that is convenient for the digital dissemination and application of traditional clothing and provide resources for research on clothing fashion, traditional clothing techniques, clothing culture, history and clothing teaching.
Design/methodology/approach
A real-object analysis method was used in this paper. Based on the 15 core elements of the internationally common DC (Dublin Core) metadata standard, and with consideration of the characteristics of clothing products and clothing industry application specifications, the core elements of DC are expanded to facilitate detailed recording of the characteristic information of clothing, especially the implicit clothing culture. A code symbol compilation method was developed to give each piece of clothing a unique number, facilitating identification, classification and recording. Finally, a metadata construction scheme for traditional clothing was developed. A traditional embroidered children's hat and a Mamianqun serve as examples to demonstrate the metadata elements.
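The pairing of the 15 Dublin Core elements with domain extensions and a unique code number can be sketched as follows. The extension element names and the code format are hypothetical illustrations, not the paper's actual 28-element scheme.

```python
# The 15 core elements of the Dublin Core metadata standard.
DC_CORE = ["title", "creator", "subject", "description", "publisher",
           "contributor", "date", "type", "format", "identifier",
           "source", "language", "relation", "coverage", "rights"]

# Hypothetical clothing-specific extensions (illustrative only).
EXTENSIONS = ["ethnic_group", "craft_technique", "pattern_motif"]

def make_code(category, region, year, serial):
    """Compose a unique clothing identifier, e.g. 'HAT-SD-1920-0001'.
    The category/region/year/serial segments are an assumed convention."""
    return f"{category}-{region}-{year}-{serial:04d}"

record = {element: None for element in DC_CORE + EXTENSIONS}
record.update(title="Embroidered children's hat",
              type="headwear",
              identifier=make_code("HAT", "SD", 1920, 1))
print(record["identifier"])  # HAT-SD-1920-0001
```

The point of the compiled code is that every garment receives exactly one identifier, so records can be classified and cross-referenced without ambiguity.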
Findings
The clothing meta-database provides a main body of traditional clothing while also paying attention to the collection of cultural elements. It is composed of five layers (classified data, source data, characteristic data, connotation data and management data) and 28 data elements, providing ease of sharing and interoperation.
Originality/value
This paper expands the subset of fashion metadata by describing traditional clothing metadata, especially the excavation of clothing cultural elements, and developing code compilation methods so that each clothing product can obtain a unique identification number, thereby building a traditional clothing metadata construction scheme consisting of five data layers and containing 28 data elements. This scheme records the information about each layer of traditional clothing in detail and provides shared data for discipline research and industry applications.
Ruan Wang, Jun Deng, Xinhui Guan and Yuming He
Abstract
Purpose
With the development of data mining technology, diverse and broader domain knowledge can be extracted automatically. However, the research on applying knowledge mapping and data visualization techniques to genealogical data is limited. This paper aims to fill this research gap by providing a systematic framework and process guidance for practitioners seeking to uncover hidden knowledge from genealogy.
Design/methodology/approach
Based on a literature review of genealogy's current knowledge reasoning research, the authors constructed an integrated framework for knowledge inference and visualization application using a knowledge graph. Additionally, the authors applied this framework in a case study using “Manchu Clan Genealogy” as the data source.
Findings
The case study shows that the proposed framework can effectively decompose and reconstruct genealogy. It demonstrates the reasoning, discovery, and web visualization application process of implicit information in genealogy. It enhances the effective utilization of Manchu genealogy resources by highlighting the intricate relationships among people, places, and time entities.
Originality/value
This study proposed a framework for genealogy knowledge reasoning and visual analysis utilizing a knowledge graph, including five dimensions: the target layer, the resource layer, the data layer, the inference layer, and the application layer. It helps to gather the scattered genealogy information and establish a data network with semantic correlations while establishing reasoning rules to enable inference discovery and visualization of hidden relationships.
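The reasoning-rule idea in the inference layer can be sketched as rule application over genealogy triples. The entity names and the single rule below are hypothetical illustrations, not drawn from the Manchu Clan Genealogy case study.

```python
# Toy genealogy facts as (subject, relation, object) triples.
triples = {("Arjin", "father_of", "Boro"),
           ("Boro", "father_of", "Canggiya")}

def infer_grandfathers(facts):
    """Apply the rule: father_of(x, y) & father_of(y, z)
    -> grandfather_of(x, z), yielding the newly inferred triples."""
    inferred = set()
    for (x, r1, y) in facts:
        for (y2, r2, z) in facts:
            if r1 == r2 == "father_of" and y == y2:
                inferred.add((x, "grandfather_of", z))
    return inferred

print(infer_grandfathers(triples))
# {('Arjin', 'grandfather_of', 'Canggiya')}
```

Chaining such rules over a semantically linked data network is what allows hidden relationships among people, places and time entities to surface for visualization.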