Search results
1 – 10 of 342
Bin Wang, Nanyue Xu, Pengyuan Wu and Rongfei Yang
Abstract
Purpose
The purpose of this paper is to provide a new hydrostatic actuator controlled by a piezoelectric piston pump and to reveal its characteristics.
Design/methodology/approach
In this paper, a piezoelectric pump with passive poppet valves and hydraulic displacement amplifier is designed as a new control component in a hydrostatic actuator for high actuation capacity. A component-level mathematical model is established to describe the system characteristics. Simulation verification for cases under typical conditions is implemented to evaluate the delivery behavior of the pump and the carrying ability of the actuator.
Findings
By using the displacement amplifier and the passive distributing valves, simulation demonstrates that the pump can deliver a flow rate of up to 3 L/min, and that the actuator controlled by this pump can push an object weighing approximately 50 kg. In addition, it is particularly important to select a proper amplification ratio for the amplifier in the pump to obtain better actuation performance.
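As a rough illustration of how these quantities relate, the mean delivery flow of such a pump can be estimated from the amplified piston stroke, the piston area and the drive frequency. The sketch below is a first-order estimate only; all parameter values are illustrative assumptions, not figures from the paper.

```python
# Hedged sketch: first-order delivery-flow estimate for a piezo-driven piston
# pump with a hydraulic displacement amplifier. All values are invented.

def pump_flow_lpm(stack_stroke_m, amp_ratio, piston_area_m2, freq_hz, vol_eff=0.9):
    """Ideal swept volume per cycle times drive frequency, in L/min."""
    piston_stroke = amp_ratio * stack_stroke_m      # amplified stroke
    per_cycle_m3 = piston_area_m2 * piston_stroke   # swept volume per cycle
    q_m3_s = per_cycle_m3 * freq_hz * vol_eff       # mean delivery flow
    return q_m3_s * 1000 * 60                       # m^3/s -> L/min

# Example: 60 um stack stroke, 5x amplifier, 2 cm^2 piston, 300 Hz drive
flow = pump_flow_lpm(60e-6, 5.0, 2e-4, 300.0)
```

Raising the assumed amplification ratio increases the estimated flow linearly in this first-order model, which is why the choice of ratio matters for actuation performance.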
Originality/value
The piezoelectric pump presented in this paper has potential for light hydrostatic actuators. The model constructed in this paper is valid for characteristic analysis and performance evaluation of this pump and of actuators based on it.
Pingyu Jiang, Xiangtong Yan and Yang Liu
Abstract
Purpose
The purpose of this research is to develop a new service‐driven MEMS “top‐down” design method and to implement the corresponding web‐based software prototype as verification.
Design/methodology/approach
Bond graphs are used as the output of conceptual design. Feature modeling technology serves as a bridge for performing the feature mapping from the conceptual model to the structure and geometric models of micro‐components. To support design evaluations, a 3D whiteboard collaborative mechanism powered by networked VR service functions is put forward to visualize and operate small micro‐components in a macro world. In addition, the service‐driven MEMS “top‐down” design flow is modeled as a formalized “AND/OR” graph, over which design service scheduling is completed with a priority‐in‐depth searching algorithm.
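The scheduling step can be pictured as a depth-first traversal of the AND/OR graph that expands all branches of an AND node but only the highest-priority branch of an OR node. The sketch below is a loose illustration of that idea, not the paper's algorithm; the node structure, priorities and service names are invented.

```python
# Hedged sketch: priority-in-depth (depth-first, highest-priority-first)
# scheduling over an "AND/OR" graph of design services. Invented instance.

def schedule(node, graph):
    """Return an ordered list of leaf service nodes to execute."""
    kind, children = graph[node]              # kind: 'AND', 'OR' or 'LEAF'
    if kind == 'LEAF':
        return [node]
    ordered = sorted(children, key=lambda c: c[1], reverse=True)
    if kind == 'AND':                         # all branches are required
        plan = []
        for child, _prio in ordered:
            plan += schedule(child, graph)
        return plan
    best, _ = ordered[0]                      # OR: take top-priority branch
    return schedule(best, graph)

graph = {
    'design':    ('AND', [('concept', 2), ('layout', 1)]),
    'concept':   ('OR',  [('bondgraph', 3), ('sketch', 1)]),
    'layout':    ('LEAF', []),
    'bondgraph': ('LEAF', []),
    'sketch':    ('LEAF', []),
}
plan = schedule('design', graph)   # ['bondgraph', 'layout']
```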
Findings
Traditional MEMS CAD systems were developed on the basis of the “bottom‐up” design method, used mainly in laboratory practice. In contrast, this research presents a solution that outlines a new service‐driven MEMS CAD model and decreases designers' dependency on manufacturing knowledge.
Research limitations/implications
Further study is needed on how to create a more efficient feature modeling mechanism to enable design evaluation for micro-manufacturing.
Practical implications
The new design method and the corresponding software prototype could help industry promote the development of new MEMS CAD systems.
Originality/value
The value lies in identifying a new MEMS CAD implementation mode that is suitable for general design use over a network.
Tiedo Tinga, Flip Wubben, Wieger Tiddens, Hans Wortmann and Gerard Gaalman
Abstract
Purpose
For many decades, it has been recognized that maintenance activities should be adapted to the specific usage of a system. For that reason, many advanced policies have been developed, such as condition-based and load-based maintenance policies. However, these policies require advanced monitoring techniques and rather detailed understanding of the failure behavior, which requires the support of an OEM or expert, prohibiting application by an operator in many cases. The present work proposes a maintenance policy that relieves the high (technical) demands set by these existing policies and provides a more accurate specification of the required (dynamic) maintenance interval than traditional usage-based maintenance.
Design/methodology/approach
The methodology followed starts with a review and critical assessment of existing maintenance policies, which are classified according to six different aspects. Based on the need for a technically less demanding policy that appears from this comparison, a new policy is developed. The consecutive steps required for this functional usage profiles based maintenance policy are then critically discussed: usage profile definition, monitoring, profile severity quantification and the possible extension to the fleet level. After the description of the proposed policy, it is demonstrated in three case studies on real systems.
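The severity-quantification step can be illustrated with a small sketch: each registered usage profile receives a severity factor, and the nominal maintenance interval is scaled by the usage-weighted mean severity. The profile names, hours and factors below are illustrative assumptions, not values from the case studies.

```python
# Hedged sketch of the severity-weighting idea behind a functional-usage-
# profile-based maintenance policy. All profiles and factors are invented.

def dynamic_interval(base_hours, usage_hours, severity):
    """Scale a nominal interval by the usage-weighted mean severity."""
    total = sum(usage_hours.values())
    mean_sev = sum(usage_hours[p] * severity[p] for p in usage_hours) / total
    return base_hours / mean_sev

usage = {'transit': 600, 'heavy_duty': 200, 'idle': 200}   # hours registered
sev   = {'transit': 1.0, 'heavy_duty': 2.5, 'idle': 0.3}   # relative wear rate
interval = dynamic_interval(1000.0, usage, sev)
```

A fleet logging more heavy-duty hours would see its interval shortened, which is the dynamic behaviour the policy aims for without condition monitoring.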
Findings
The policy proposed here demonstrates that a maintenance policy based on a simple usage registration procedure is feasible and enables a significantly more efficient maintenance process than traditional usage-based policies.
Practical implications
The proposed maintenance policy based on functional usage profiles offers the operators of fleets of systems the opportunity to increase the efficiency and effectiveness of their maintenance process, without the need for a high investment in advanced monitoring systems and in experts interpreting the results.
Originality/value
The original contribution of this work is the explicit definition of a new maintenance policy, which combines the benefits of considering the effects of usage or environment severity with a limited investment in monitoring technology.
Abstract
Purpose
The purpose of this study is to address the concept and the step-by-step procedure of a high-precision optical alignment test for spacecraft using digital theodolites. The proposed scheme focuses on the non-contact alignment qualification of spacecraft components during the integration and test phases until the launch event.
Design/methodology/approach
The proposed approach is based on the exploitation of the auto-collimation feature of theodolites and several prisms attached to the requested component and satellite configuration. As soon as the misalignment measurement including the difference between the real and desired attitude or position aberration of an instrument is made, the results must be transformed from the component level to the system level for misalignment error identification in the spacecraft dynamic model.
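The component-to-system transformation described here amounts to expressing the measured misalignment rotation in the spacecraft body frame via the known mounting attitude. A minimal sketch of that similarity transform, with an assumed mounting matrix and an invented measured angle:

```python
# Hedged sketch: propagating a component-frame misalignment to the system
# level as R_sys = R_mount * R_meas * R_mount^T. Mounting attitude and
# measured angle are illustrative assumptions, not test data.
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

R_mount = rot_x(math.radians(90.0))   # component mounted 90 deg about body x
R_meas = rot_z(math.radians(0.05))    # small misalignment about component z
R_sys = matmul(matmul(R_mount, R_meas), transpose(R_mount))
```

Because this is a similarity transform, the misalignment magnitude (rotation angle) is preserved; only the axis is re-expressed in the body frame.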
Findings
The paper introduces the main instruments, the defined coordinate systems and the architecture of the optical spacecraft misalignment test. Moreover, the guideline of the test implementation and the resulting data process have been presented carefully.
Research limitations/implications
No significant limitation is associated with this method, as the procedure is applicable to typical high-precision missions.
Practical implications
This paper describes a fully implementable scheme to examine any possible inaccuracy in the mounting of spacecraft components, in both position and orientation. The test can be performed without the need for a huge budget or complicated hardware.
Originality/value
The contribution of this work revolves around illustrating the context and procedure of the spacecraft misalignment test, which has remained undocumented in the literature despite its frequent implementation in different satellite projects.
Abstract
Purpose
Three problematic issues, each followed by a paradigm change over the recent history of human intellectual endeavour, are identified: (1) from mysticism/superstition to conventional science (of physics); (2) from the predominant use of qualitative/quantitative properties for analysis and design to structural or systemic properties; and (3) from the current speculative, fragmented, multiple approaches to the “systemic view” to a firmer knowledge-based approach reflecting the empirical and universal nature of this view. This paper aims to consider these problematic issues, to conclude that conventional science is inadequate to cope with the second paradigm change, and to introduce a “new science of systems” that can integrate conventional science and alleviate the third problematic issue by suggesting three principles implemented through linguistic modelling as an operational model.
Design/methodology/approach
The highly successful methodology of conventional science is followed with systemic content by suggesting three general principles of systems, namely, the principle of existence (pervasiveness of structural description), the principle of complexity (aggregates for the emergence of outcomes) and the principle of change (change by purpose or chance), together with linguistic modelling of static and dynamic scenarios based on natural language as the operational model. This language is processed into “elementary constituents”, from which complex structures can be constructed. These constituents are converted into reasoning schemes consisting of “ordered pairs” and “predicate logic statements” in static and dynamic states.
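The ordered-pair representation can be pictured with a toy example: a scenario reduced to binary relations, queried with predicate-logic-style statements. The relations and names below are invented for illustration and are not from the paper.

```python
# Hedged sketch: "elementary constituents" as ordered pairs (binary
# relations), with a predicate-logic style existence query over them.

produces = {('factory', 'widget')}          # (producer, product) pairs
uses = {('customer', 'widget')}             # (user, product) pairs

def exists(rel, pred):
    """Existential quantifier over a relation of ordered pairs."""
    return any(pred(a, b) for (a, b) in rel)

# Static-state statement: "some product that is produced is also used".
linked = exists(produces, lambda p, x: exists(uses, lambda u, y: y == x))
```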
Findings
Stories of problematic scenarios are converted into the universal scheme of “management/producers” – “products” – “users/consumers” by constructing linguistic networks of products and semantic diagrams of organizations/users/consumers, for investigating the emergence of outcomes in analysis and for designing prototypes. Problematic issues of individual objects in a scenario are resolved by methods of conventional science, which is thus integrated with systems science to form the “scientific enterprise”.
Research limitations/implications
Once the new approach has been debated, further developments in the mathematics of ordered pairs, predicate logic and uncertainties will be needed. The linguistic basis is to be investigated further. Connections with AI and the “logical atomism” of Bertrand Russell are to be explored.
Practical implications
Further applications to large-scale scenarios by practitioners using the “universal scheme” and development of software are needed.
Social implications
The approach is rooted in accepted branches of knowledge, is highly teachable and should come to be used by professionals and others once debated and accepted.
Lin-Lin Xie, Yajiao Chen, Sisi Wu, Rui-Dong Chang and Yilong Han
Abstract
Purpose
Project scheduling plays an essential role in the implementation of a project due to the limitation of resources in practical projects. However, existing research tends to focus on finding suitable algorithms to solve various scheduling problems and fails to uncover the potential scheduling rules in these optimal or near-optimal solutions, that is, the possible intrinsic relationships between attributes related to the scheduling of activity sequences. Data mining (DM) is used to analyze and interpret data to obtain valuable information stored in large-scale data. The goal of this paper is to use DM to discover scheduling concepts and obtain a set of rules that approximate effective solutions to resource-constrained project scheduling problems. These rules require no search or simulation, giving them extremely low time complexity, and support real-time decision-making to improve planning and scheduling.
Design/methodology/approach
The resource-constrained project scheduling problem can be described as scheduling a group of interrelated activities to optimize the project completion time and other objectives while satisfying the activity priority relationship and resource constraints. This paper proposes a new approach to solve the resource-constrained project scheduling problem by combining DM technology and the genetic algorithm (GA). More specifically, the GA is used to generate various optimal project scheduling schemes, after that C4.5 decision tree (DT) is adopted to obtain valuable knowledge from these schemes for further predicting and solving new scheduling problems.
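The kind of scheduling decision the decision tree learns can be illustrated with a serial schedule-generation scheme driven by a pluggable priority rule. The sketch below stands in for the learned rule with a simple most-successors-first heuristic; the tiny project instance is invented and this is not the paper's implementation.

```python
# Hedged sketch: serial schedule-generation scheme (SGS) for a resource-
# constrained project with a single renewable resource. The priority rule is
# a stand-in for the rule a C4.5 decision tree would learn from GA solutions.

def serial_sgs(dur, preds, demand, capacity, priority):
    finish, t_busy = {}, {}                 # activity -> finish; period -> usage
    unscheduled = set(dur)
    while unscheduled:
        ready = [a for a in unscheduled if all(p in finish for p in preds[a])]
        a = max(ready, key=priority)        # pick by priority rule
        start = max([finish[p] for p in preds[a]], default=0)
        # shift right until resource usage fits capacity in every period
        while any(t_busy.get(t, 0) + demand[a] > capacity
                  for t in range(start, start + dur[a])):
            start += 1
        for t in range(start, start + dur[a]):
            t_busy[t] = t_busy.get(t, 0) + demand[a]
        finish[a] = start + dur[a]
        unscheduled.remove(a)
    return max(finish.values())             # project makespan

dur    = {'A': 3, 'B': 2, 'C': 2, 'D': 1}
preds  = {'A': [], 'B': ['A'], 'C': ['A'], 'D': ['B', 'C']}
demand = {'A': 2, 'B': 2, 'C': 2, 'D': 1}
makespan = serial_sgs(dur, preds, demand, capacity=3,
                      priority=lambda a: len([x for x in preds if a in preds[x]]))
```

Here the capacity of 3 forces B and C to run sequentially rather than in parallel; swapping in a different priority rule changes only the `priority` argument, which is what makes a learned rule easy to plug in.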
Findings
In this study, the authors use GA and DM technology to analyze and extract knowledge from a large number of scheduling schemes, and determine the scheduling rule set that minimizes the completion time. To verify the application effect of the proposed DT classification model, the J30, J60 and J120 datasets in PSPLIB are used to test the validity of the scheduling rules. The results show that the DT can readily duplicate the excellent performance of the GA for scheduling problems of different scales. In addition, the DT prediction model developed in this study is applied to a high-rise residential project consisting of 117 activities. The results show that, compared with the completion time obtained by the GA, the DT model can rapidly adjust the project schedule to deal with dynamic environmental interference. In short, the data-based approach is feasible, practical and effective. It not only captures the knowledge contained in the known optimal scheduling schemes but also helps to provide a flexible scheduling decision-making approach for project implementation.
Originality/value
This paper proposes a novel knowledge-based project scheduling approach. In previous studies, intelligent optimization algorithms were often used to solve the project scheduling problem. However, although these algorithms can generate a set of effective solutions for problem instances, they cannot explain the process of decision-making, nor can they identify the characteristics of good scheduling decisions generated by the optimization process. Moreover, their calculation is slow and complex, which is not suitable for planning and scheduling complex projects. In this study, the set of effective solutions of problem instances is taken as the training dataset of the DM algorithm, and the extracted scheduling rules can provide predictions and solutions for new scheduling problems. The proposed method focuses on identifying the key parameters of a specific dynamic scheduling environment; it not only reproduces the scheduling performance of the original algorithm well but also supports quick decisions under dynamic interference in construction scenarios. It helps project managers implement quick decisions in response to construction emergencies, which is of great practical significance for improving the flexibility and efficiency of construction projects.
Huiying (Cynthia) Hou, Joseph H.K. Lai, Hao Wu and Tong Wang
Abstract
Purpose
This paper aims to investigate the theoretical and practical links between digital twin (DT) application in heritage facilities management (HFM) from a life cycle management perspective and to signpost the future development directions of DT in HFM.
Design/methodology/approach
This state-of-the-art review was conducted using a systematic literature review method. Inclusive and exclusive criteria were identified and used to retrieve relevant literature from renowned literature databases. Shortlisted publications were analysed using the VOSviewer software and then critically reviewed to reveal the status quo of research in the subject area.
Findings
The review results show that DT has been mainly adopted to support decision-making on conservation approach and method selection, performance monitoring and prediction, maintenance strategy design and development, and energy evaluation and management. Although many researchers have attempted to develop DT models for part of a heritage building at the component or system level and to test the models using real-life cases, their work was constrained by the availability of empirical data. Furthermore, data capture approaches, data acquisition methods and modelling with multi-source data are found to be the existing challenges of DT application in HFM.
Originality/value
In a broader sense, this study contributes to the field of engineering, construction and architectural management by providing an overview of how DT has been applied to support management activities throughout the building life cycle. For the HFM practice, a DT-cum-heritage building information modelling (HBIM) framework was developed to illustrate how DT can be integrated with HBIM to facilitate future DT application in HFM. The overall implication of this study is that it reveals the potential of heritage DT in facilitating HFM in the urban development context.
Kiyoshi Kobayashi and Kiyoyuki Kaito
Abstract
Purpose
This study aims to focus on asset management of the large‐scale information systems supporting infrastructures and especially seeks to address a methodology for their statistical deterioration prediction based on historical inspection data. Information systems are composed of many devices. The deterioration process, i.e. the wear‐out failure generation process, of these devices is formulated by a Weibull hazard model. Furthermore, to account for the heterogeneity of the hazard rate of each device, a random proportional Weibull hazard model, which expresses the heterogeneity of the hazard rate as a random variable, is proposed.
Design/methodology/approach
Large‐scale information systems comprise many components, and different types of components might have different hazard rates. Therefore, when analyzing faults of information systems that comprise various types of devices and components, it is important to consider the heterogeneity of the hazard rates between the different types of components. With this in mind, the random proportional Weibull hazard model, whose heterogeneity of hazard rates follows a gamma distribution, is formulated, and a methodology is proposed that estimates the failure rate of the various components comprising an information system.
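The frailty construction can be sketched as a baseline Weibull hazard multiplied by a gamma-distributed heterogeneity term with mean 1 and shape phi; integrating that term out yields the closed-form marginal survival used below. Parameter values are illustrative, not the paper's estimates.

```python
# Hedged sketch of a gamma-frailty (random proportional) Weibull hazard:
# hazard h_i(t) = eps_i * (m/eta) * (t/eta)^(m-1), eps_i ~ Gamma(phi, 1/phi).
# Marginalizing over eps_i gives the survival function below.
import math

def marginal_survival(t, eta, m, phi):
    """S(t) for a Weibull(eta, m) baseline with mean-1 gamma frailty."""
    return (1.0 + (t / eta) ** m / phi) ** (-phi)

def median_life(eta, m, phi):
    """Closed-form t solving S(t) = 0.5."""
    return eta * (phi * (2.0 ** (1.0 / phi) - 1.0)) ** (1.0 / m)

# Illustrative parameters: scale 200 months, shape 1.5, frailty shape 2.0
t50 = median_life(eta=200.0, m=1.5, phi=2.0)
```

A smaller phi means stronger heterogeneity between components, which pulls the median life below the homogeneous Weibull value.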
Findings
Through a case study using a traffic control system for expressways, the validity of the proposed model is empirically verified. Concretely, for HDDs, the service life at which the survival probability is 50 per cent is estimated at 158 months. However, even for the same HDD, the use environment differs according to usage. Among the three usage types (PC, server, other), failures occur earliest for PCs, which have the highest heterogeneity parameter and reach a survival probability of 50 per cent after 135 months of usage. For the “other” category, the survival probability reaches 50 per cent at 303 months.
Originality/value
To operationally express the heterogeneity of failure rates, the Weibull hazard model is employed as a base, and a random proportional Weibull hazard model expressing the proportional heterogeneity of hazard rates with a standard gamma distribution is formulated. By estimating the parameters of the baseline Weibull hazard function and of the probability distribution that expresses the heterogeneity of the proportionality constant between types, the random proportional Weibull hazard model can easily express the heterogeneity of the hazard rates between types and components.
Abstract
Purpose
The purpose of this article is to present a system dynamics model for studying the interconnections between human weight and the health problems it causes throughout life.
Design/methodology/approach
The paper reviews key points about systems thinking, its theories and system dynamics. Models in the form of causal loops presenting the interconnections between the weight factor and health problems are developed and discussed. Thereafter, a flow model of the problem is constructed, and deaths caused by heart attack are studied under two situations: regular cases and cases in which people are taught about their health. The paper identifies key health problems related to weight by using causal loops that demonstrate the whole picture of the situation.
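A minimal stock-and-flow sketch in the spirit of this model: a population stock drained by a heart-attack death flow whose rate depends on a weight factor, with health education lowering the rate. The structure and all coefficients below are invented for illustration, not the paper's model.

```python
# Hedged sketch: Euler-integrated stock-and-flow model. A population stock is
# drained by a death flow proportional to a weight factor; "taught" cases get
# a reduced rate. All coefficients are illustrative assumptions.

def simulate(years, pop0=100_000.0, weight_factor=1.2, educated=False):
    """Cumulative heart-attack deaths over `years` one-year Euler steps."""
    death_rate = 0.002 * weight_factor * (0.6 if educated else 1.0)
    pop, deaths = pop0, 0.0
    for _ in range(years):
        flow = death_rate * pop      # deaths this year (the outflow)
        deaths += flow
        pop -= flow                  # stock is drained by the flow
    return deaths

baseline = simulate(20)              # regular case
taught = simulate(20, educated=True) # health-educated case
```

Comparing the two runs reproduces the qualitative finding that health education reduces cumulative heart-attack deaths.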
Findings
With the aid of systems thinking and dynamic modeling, researchers can study the impacts of weight on the generation of various health problems such as heart disease, high blood pressure, high blood sugar and knee problems. This study shows that teaching people about their health has a significant impact on the number of deaths related to heart attack.
Practical implications
With the model proposed here, various studies can be carried out that relate weight to health issues. A sample situation is presented in which deaths related to heart attack are simulated.
Originality/value
This article makes a significant contribution to health study issues because it shows how a factor such as weight can impact heart attacks, blood pressure and blood sugar, to mention a few. To the best of the author's knowledge, this is the first study to relate weight to health problems using systems thinking concepts and system dynamics, and it therefore makes a significant contribution to the health literature.
Zachary Ball, Jonathan Cagan and Kenneth Kotovsky
Abstract
Purpose
This study aims to gain a deeper understanding of the industry practice to guide the formation of support tools with a rigorous theoretical backing. Cross-functional teams are an essential component in new product development (NPD) of complex products to promote comprehensive coverage of product design, marketing, sales, support as well as many other activities of business. Efficient use of teams can allow for greater technical competency coverage, increased creativity, reduced development times and greater consideration of ideas from a variety of stakeholders. While academics continually aspire to propose methods for improved team composition, there exists a gap between research directions and applications found within industry practice.
Design/methodology/approach
Through interviewing product development managers working across a variety of industries, this paper investigates the common practices of team utilization in an organizational setting. Following these interviews, this paper proposes a conceptual two-dimensional management support model aggregating the primary drivers of team success and providing direction to systematically address features of team management and composition.
Findings
Based on this work, product managers are recommended to continually address the positioning of members throughout the entire NPD process. In the early stages, individuals should be assigned to project components with explicit consideration of the perceived complexity of tasks and individual competency. Throughout the development process, individuals' positions vary based on new information, while continued emphasis is placed on maintaining a shared understanding.
Originality/value
Bridging the gap between theory and application within product development teams is a necessary step toward improved product development. Industrial settings require practical solutions that can be applied economically and efficiently within the organization. Theoretical reflections postulated by academia support improved team design; however, to achieve true success, they must be applicable when considering product development.