Search results
1 – 10 of 194
Yue‐Shi Lee, Show‐Jane Yen and Min‐Chi Hsieh
Abstract
Web mining applies data mining techniques to large amounts of web data in order to improve web services. Web traversal pattern mining discovers users' access patterns from web logs. This information can provide navigation suggestions for web users so that appropriate actions can be taken. However, web data grows rapidly, and some of it may become outdated; user behavior may change as new web data is inserted into, and old web data is deleted from, the web logs. Moreover, it is considerably difficult to select a suitable minimum support threshold during the mining process to find interesting rules; even experienced experts cannot determine the appropriate minimum support in advance, so the threshold must be adjusted repeatedly until satisfactory mining results are found. The essence of incremental or interactive data mining is that previous mining results can be reused to avoid unnecessary recomputation when the minimum support is changed or the web logs are updated. In this paper, we propose efficient incremental and interactive data mining algorithms that discover web traversal patterns and make the mining results satisfy the users' requirements. The experimental results show that our algorithms are more efficient than the other approaches.
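The interactive-mining idea described in the abstract can be pictured in a few lines: count all traversal sub-paths once, then answer any new minimum-support query from the stored counts without rescanning the logs. The sketch below is illustrative only, not the paper's algorithm; the session data and the `max_len` bound are invented.

```python
from collections import Counter

def count_subpaths(sessions, max_len=3):
    """Count every contiguous traversal sub-path of up to max_len pages."""
    counts = Counter()
    for session in sessions:
        for i in range(len(session)):
            for j in range(i + 1, min(i + max_len, len(session)) + 1):
                counts[tuple(session[i:j])] += 1
    return counts

def frequent_patterns(counts, min_support):
    """Answer a new threshold from the stored counts, without rescanning logs."""
    return {p: c for p, c in counts.items() if c >= min_support}

# invented example sessions (pages visited in order)
sessions = [["A", "B", "C"], ["A", "B"], ["B", "C"], ["A", "B", "C"]]
counts = count_subpaths(sessions)
```

Lowering the threshold is then a dictionary scan over `counts` rather than a new pass over the web logs, which is the cost the incremental approach avoids.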
Jiangnan Qiu, Zhiqiang Wang and ChuangLing Nian
Abstract
Purpose
The objective of this paper is to propose a practical and operable method to identify and fill organisational knowledge gaps during new product development.
Design/methodology/approach
From a microscopic view, this paper introduces a tree-shaped organisational knowledge structure to formalise knowledge gaps and their internal hierarchical relationships. Based on the organisational knowledge structure, organisational knowledge gaps are identified through a tree-matching algorithm. The tree-edit-distance method is introduced to calculate the similarity between two organisational knowledge structures for filling knowledge gaps.
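The tree-edit-distance step can be illustrated with a simplified variant: relabel cost 0/1 at each root, children aligned by a sequence DP with insert/delete costs equal to subtree size. The full Zhang–Shasha algorithm is more involved, and the knowledge trees below are invented examples, not the paper's data.

```python
def size(t):
    """Number of nodes in a (label, children) tree."""
    label, children = t
    return 1 + sum(size(c) for c in children)

def tree_dist(a, b):
    """Simplified ordered-tree edit distance (a constrained variant of
    the full algorithm): unit relabel cost at the root, children
    sequences aligned by dynamic programming."""
    (la, ca), (lb, cb) = a, b
    m, n = len(ca), len(cb)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = d[i - 1][0] + size(ca[i - 1])
    for j in range(1, n + 1):
        d[0][j] = d[0][j - 1] + size(cb[j - 1])
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(
                d[i - 1][j] + size(ca[i - 1]),                      # delete subtree
                d[i][j - 1] + size(cb[j - 1]),                      # insert subtree
                d[i - 1][j - 1] + tree_dist(ca[i - 1], cb[j - 1]),  # match subtrees
            )
    return (0 if la == lb else 1) + d[m][n]

def similarity(a, b):
    """Normalise the distance into a [0, 1] similarity score."""
    return 1 - tree_dist(a, b) / (size(a) + size(b))

# hypothetical knowledge structures: the organisation "has" vs "needs"
have = ("product", [("design", [("CAD", [])]), ("testing", [])])
need = ("product", [("design", [("CAD", []), ("simulation", [])]), ("testing", [])])
```

The unmatched `simulation` node is exactly the kind of gap the matching step surfaces, and the similarity score ranks candidate partner organisations for filling it.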
Findings
The proposed tree-shaped organisational knowledge structure can represent an organisation's knowledge and its hierarchical relationships in a structured format, which is useful for identifying and filling organisational knowledge gaps.
Originality/value
The proposed concept of organisational knowledge structure can quantify organisational knowledge. The approach is valuable for strategic decisions regarding new product development. The organisational knowledge gaps identified with this method can provide real-time and accurate guidance for the product development path. More importantly, this method can accelerate the organisational knowledge gap filling process and promote organisational innovation.
Abstract
Purpose
This paper considers schemaless XML data stored in a column-oriented storage, particularly in C-store. Axes of the XPath language are studied and a design and analysis of algorithms for processing the XPath fragment XP{*, //, /} are described in detail. The paper aims to discuss these issues.
Design/methodology/approach
A two-level model of C-store based on XML-enabled relational databases is assumed. The axes of the XPath language in this environment have been studied by Cástková and Pokorný. The associated algorithms have been used to implement the XPath fragment XP{*, //, /}.
Findings
The main advantage of this approach is that the algorithms implementing axis evaluations are mostly of logarithmic complexity in n, where n is the number of nodes of the XML tree associated with an XML document. A low-level memory system enables the estimation of the number of two abstract operations providing an interface to an external memory.
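A common way to obtain logarithmic-time descendant-axis evaluation over a sorted column, which may correspond to what the authors do in C-store, is DFS interval labeling plus binary search. The sketch below assumes properly nested (start, end) labels and an invented four-node document; it is illustrative, not the paper's implementation.

```python
from bisect import bisect_left, bisect_right

# Each node carries (start, end) labels from a DFS numbering, so that
# v is a descendant of u  iff  u.start < v.start and v.end < u.end,
# and the intervals of distinct nodes are properly nested.

def descendants(nodes, u):
    """nodes: list of (start, end, name) sorted by start.
    Two binary searches locate the descendant range of u,
    O(log n) plus output size."""
    starts = [n[0] for n in nodes]   # in a column store this is the stored column
    lo = bisect_right(starts, u[0])  # first node starting after u
    hi = bisect_left(starts, u[1])   # nodes starting past u.end lie outside u
    return nodes[lo:hi]              # nesting guarantees these are all descendants

# invented document: a(b(c), d)
nodes = [(1, 8, "a"), (2, 5, "b"), (3, 4, "c"), (6, 7, "d")]
```

The two `bisect` calls are the kind of abstract external-memory operations whose count the low-level memory model estimates.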
Originality/value
The paper extends the approach of querying XML data stored in a column-oriented storage to the XPath fragment using only child and descendant axes and estimates the complexity of evaluating its queries.
J.I.U. Rubrico, J. Ota, T. Higashi and H. Tamura
Abstract
Purpose
This paper aims to develop a scheduler for multiple picking agents in a warehouse that takes into account distance and loading queue delay minimization within the context of minimizing makespan (i.e. picking time).
Design/methodology/approach
The paper uses tabu search to solve the scheduling problem in a more global sense. Each search iteration is enhanced by a custom local search (LS) procedure that hastens convergence by driving a given schedule configuration quickly to a local minimum. In particular, basic operators transfer demand among agents to balance load and minimize makespan. The new load distribution is further improved by considering a vehicle‐routing problem on the picking assignments of the agents with relocated demands. Loading queue delays that may arise from the reassignments are systematically minimized using a fast scheduling heuristic.
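A minimal sketch of the tabu idea follows (not the paper's scheduler, which also handles vehicle routing and loading-queue delays): each move transfers one pick from the busiest agent to the least-loaded one, with recently moved items kept tabu to avoid cycling. All data and parameters here are invented.

```python
import random

def makespan(assign, times):
    """Makespan = load of the most heavily loaded agent."""
    loads = {}
    for item, agent in assign.items():
        loads[agent] = loads.get(agent, 0) + times[item]
    return max(loads.values())

def tabu_schedule(times, agents, iters=200, tabu_len=10, seed=0):
    """Tiny tabu-search loop: move one pick off the busiest agent
    each iteration, remembering the best schedule seen."""
    rng = random.Random(seed)
    assign = {item: rng.choice(agents) for item in times}
    best = dict(assign)
    tabu = []
    for _ in range(iters):
        loads = {a: 0 for a in agents}
        for item, a in assign.items():
            loads[a] += times[item]
        busiest = max(loads, key=loads.get)
        candidates = [i for i, a in assign.items()
                      if a == busiest and i not in tabu]
        if not candidates:
            continue
        item = rng.choice(candidates)
        assign[item] = min(loads, key=loads.get)  # move to least-loaded agent
        tabu.append(item)
        if len(tabu) > tabu_len:
            tabu.pop(0)
        if makespan(assign, times) < makespan(best, times):
            best = dict(assign)
    return best
```

The paper's local search plays the role of the greedy move here, driving each configuration quickly toward a local minimum before the tabu loop diversifies.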
Findings
The proposed tabu scheduler greatly improves over a widely practiced scheduling procedure for the given problem. Variants of the tabu scheduler produce solutions that are roughly of the same quality but exhibit considerable differences in computational time.
Research limitations/implications
The proposed methodology is applicable only to the static scheduling problem where all inputs are known beforehand. Furthermore, of the possible delays during picking, only loading queues are explicitly addressed (although this is justifiable, given that these delays are dominant in the problem).
Practical implications
The proposed approach can significantly increase throughput and productivity in picking systems that utilize multiple intelligent agents (human pickers included), e.g. in warehouses/distribution centers.
Originality/value
The paper addresses a practical scheduling problem with a high degree of complexity, i.e. the scheduler explicitly deals with delays while trying to minimize makespan (delays are generally ignored in the literature for simplicity). In the tabu implementation, an LS procedure is introduced in the metaheuristic loop that enhances the search process by minimizing non‐productive time of picking agents (travel time and delays).
Kumar S. Ray and Arpan Chakraborty
Abstract
Purpose
The importance of fuzzy logic (FL) in approximate reasoning, and that of default logic (DL) in reasoning with incomplete information, is well established. Also, the need for a commonsense reasoning framework that handles both these aspects has been widely anticipated. The purpose of this paper is to show that fuzzyfied default logic (FDL) is an attempt at creating such a framework.
Design/methodology/approach
The basic syntax, semantics, unique characteristics and examples of its complex reasoning abilities have been presented in this paper.
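For intuition only, here is a toy rendering of a fuzzy default rule p : j / c, assuming the consequent inherits the prerequisite's degree whenever the negation of the justification is not strongly believed. This is an invented simplification; the actual FDL semantics presented in the paper is richer.

```python
def apply_default(prereq_deg, neg_justification_deg, threshold=0.5):
    """Toy fuzzy default rule  p : j / c.
    Fires when the prerequisite holds to a positive degree and the
    negation of the justification is believed below the threshold;
    the consequent then inherits the prerequisite's degree."""
    if prereq_deg > 0 and neg_justification_deg < threshold:
        return prereq_deg  # degree attached to the consequent
    return 0.0

# "Birds typically fly": bird(tweety) to degree 0.9, ¬flies weakly believed
tweety_flies = apply_default(0.9, 0.1)
# Blocked default: ¬flies(penguin) strongly believed
penguin_flies = apply_default(0.9, 0.95)
```

The blocked case is where the non-monotonic character shows: adding the strong belief in ¬flies retracts a conclusion the rule would otherwise license.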
Findings
Interestingly, FDL turns out to be a generalization of traditional DL, with even better support for non‐monotonic reasoning.
Originality/value
The paper presents a generalized tool for commonsense reasoning which can be used for inference under incomplete information.
Chen Bao, Yongwei Miao, Bingfei Gu, Kaixuan Liu and Zhen Liu
Abstract
Purpose
The purpose of this paper is to propose an interactive 2D–3D garment parametric pattern-making and linkage editing scheme that integrates clothing design, simulation and interaction to design 3D garments and 2D patterns. The proposed scheme has the potential to satisfy the individual needs of the fashion industry, such as precise fit evaluation of the garment, interactive style editing with ease allowance and constrained contour lines in fashion design.
Design/methodology/approach
The authors first construct a parametric pattern-making model for flat pattern design corresponding to the body dimensions. Then, the designed 2D patterns are stitched on a virtual 3D mannequin by performing a virtual try-on. If the customer is unsatisfied after the virtual try-on, the adjustable parameters (appearance parameters and fit parameters) can be tuned using the 2D–3D linkage editing with hierarchical constrained contour lines, and the fit evaluation tool provides feedback interactively.
Findings
The proposed scheme improves the usability and efficiency of existing garment pattern-making methods and simplifies the pattern-making process. An interactive garment parametric flat pattern-making model generates an individualized flat pattern and effectively supports local editing of the pattern. The 2D–3D linkage editing is then employed, which alters the size and shape of the garment pattern so that the 3D garment precisely fits the human model using hierarchical constrained contour lines. Various instances have validated the effectiveness of the proposed scheme, which can increase the reusability of existing garment styles and improve the efficiency of fashion design.
Research limitations/implications
First, the authors do not consider the garment pattern-making design of sophisticated styles. Second, the authors do not directly consider complex garment shapes such as wrinkles, folds, multi-layer models and fabric physical properties.
Originality/value
The authors propose a pattern adjustment scheme that uses 3D virtual try-on technology to avoid repeated physical fit tests and garment sample making in the designing process of clothing products. The proposed scheme provides interactive selections of garment patterns and sizes and offers modification tools for 3D garment designing and 2D garment pattern-making. The authors present the 2D–3D interactive linkage editing scheme for a custom-fit garment pattern based on the hierarchical constrained contour lines. The spatial relationship among the human body, pattern pieces and 3D garment model is adequately expressed, and the final design result of the garment pattern is obtained by constraint solving. Meanwhile, the tightness tension of different parts of the 3D garment is analyzed, and the fit and comfort of the garment are quantitatively evaluated.
Rebeca Schroeder, Denio Duarte and Ronaldo dos Santos Mello
Abstract
Purpose
Designing efficient XML schemas is essential for XML applications which manage semi‐structured data. On generating XML schemas, there are two opposite goals: to avoid redundancy and to provide connected structures in order to achieve good performance on queries. In general, highly connected XML structures allow data redundancy, and redundancy‐free schemas generate disconnected XML structures. The purpose of this paper is to describe and evaluate by experiments an approach which balances such trade‐off through a workload analysis. Additionally, it aims to identify the most accessed data based on the workload and suggest indexes to improve access performance.
Design/methodology/approach
The paper applies and evaluates a workload‐aware methodology to provide indexing and highly connected structures for data which are intensively accessed through paths traversed by the workload.
Findings
The paper presents benchmarking results on a set of design approaches for XML schemas and demonstrates that the XML schemas generated by the approach provide high query performance and a low cost of data redundancy, balancing the trade‐off in XML schema design.
Research limitations/implications
Although an XML benchmark is applied in these experiments, further experiments are expected in a real‐world application.
Practical implications
The approach proposed may be applied in a real‐world process for designing new XML databases as well as in reverse engineering process to improve XML schemas from legacy databases.
Originality/value
Unlike related work, the reported approach integrates the two opposite goals of XML schema design and generates suitable schemas according to a workload. An experimental evaluation shows that the proposed methodology is promising.
Abstract
Benchmarks are vital tools in the performance measurement and evaluation of database management systems (DBMS), including relational database management systems (RDBMS) and object‐oriented/object‐relational database management systems (OODBMS/ORDBMS). Standard synthetic benchmarks have been used to assess the performance of RDBMS software, and other benchmarks have been utilized to appraise the performance of OODBMS/ORDBMS products. In this paper, an analytical framework of workload characterization is presented to examine extensively the rationale and design of the industry-standard and synthetic standard benchmarks. This framework of workload analysis is made up of four main components: schema analysis, operation analysis, control analysis, and system analysis. The analysis results are compiled, and new concepts and perspectives of benchmark design are collated. Each analysis aspect is described and each managerial implication is discussed in detail.
Fei Guo, Shoukun Wang, Junzheng Wang and Huan Yu
Abstract
Purpose
In this research, the authors established a hierarchical motion planner for quadruped locomotion, which enables a parallel wheel-quadruped robot, the “BIT-NAZA” robot, to traverse rough three-dimensional (3-D) terrain.
Design/methodology/approach
A novel wheel-quadruped mobile robot with parallel driving mechanisms, based on the Stewart six-degrees-of-freedom (6-DOF) platform, is presented. The task of traversing rough terrain is decomposed into two aspects: configuration selection based on a local foothold cost map, in which the kinematic feasibility of the parallel mechanism and the terrain features are satisfied during heuristic search planning, and a whole-body controller that produces smooth and continuous motion transitions.
Findings
A fan-shaped foot search region focuses the search on footholds that are likely foot placements, reducing computational complexity. A receding horizon avoids kinematic deadlock during the search process and improves the robot's adaptability.
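The fan-shaped search region can be pictured as sampling an annular sector ahead of the current foot position and scoring the samples against a cost map. The sketch below assumes a 2-D projection and an externally supplied cost function, both simplifications of the paper's 3-D setup; all names and parameters are invented.

```python
import math

def fan_candidates(x, y, heading, r_min, r_max, half_angle,
                   n_r=3, n_a=5):
    """Sample candidate footholds in a fan (annular sector) ahead of
    the current foot position, oriented along the motion heading."""
    pts = []
    for i in range(n_r):
        r = r_min + (r_max - r_min) * i / max(n_r - 1, 1)
        for j in range(n_a):
            a = heading - half_angle + 2 * half_angle * j / max(n_a - 1, 1)
            pts.append((x + r * math.cos(a), y + r * math.sin(a)))
    return pts

def best_foothold(candidates, cost):
    """Pick the candidate with the lowest terrain cost."""
    return min(candidates, key=cost)
```

Restricting sampling to the fan is what keeps the per-step search cheap; the receding horizon then re-runs this selection a few steps ahead so a locally good foothold does not trap the robot later.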
Research limitations/implications
Both simulation and experimental results validated that the proposed scheme is feasible and appropriate for quadruped locomotion over challenging 3-D terrains.
Originality/value
This paper analyzes the kinematic workspace of a parallel robot with a 6-DOF Stewart mechanism at both the body and foot levels. A fan-shaped foot search region enhances computational efficiency, and the receding horizon broadens the preview search to decrease the possibility of deadlock resulting from terrain variation.