Sifeng Liu, Jeffrey Forrest and Yingjie Yang
Abstract
Purpose
The purpose of this paper is to introduce the elementary concepts and fundamental principles of grey systems and the main components of grey systems theory, and to discuss the astonishing progress that grey systems theory has made in the world of learning and its wide-ranging applications across the entire spectrum of science.
Design/methodology/approach
The characteristics of unascertained systems, including incomplete information and inaccuracies in data, are analysed, and four theories of uncertainty (probability and statistics, fuzzy mathematics, grey systems theory and rough set theory) are compared. The scientific principle of simplicity is discussed, and it is shown how precise models suffer from inaccuracies.
Findings
The four theories of uncertainty (probability and statistics, fuzzy mathematics, grey systems theory and rough set theory) are examined and shown to differ in their research objects, basic sets, methods and procedures, data requirements, emphases, objectives and characteristics.
Practical implications
The scientific principle of simplicity is illustrated, as is the way precise models suffer from inaccuracies. Precise models are therefore not necessarily an effective means of dealing with complex matters, especially when the available information is incomplete and the collected data are inaccurate.
Originality/value
The elementary concepts and fundamental principles of grey systems and the main components of grey systems theory are introduced briefly. The reader is given a general picture of grey systems theory as a new method for studying problems in which partial information is known and partial information is unknown, especially uncertain systems with few data points and poor information.
Manfredi Bruccoleri, Salvatore Cannella and Giulia La Porta
Abstract
Purpose
The purpose of this paper is to explore the effect of inventory record inaccuracy due to behavioral aspects of workers on the order and inventory variance amplification.
Design/methodology/approach
The authors adopt a continuous-time analytical approach to describe the effect of inbound throughput on inventory and order variance amplification due to the workload pressure and arousal of workers. The model is solved numerically through simulation and the results are analyzed with a general linear model.
Findings
Inventory management policies that usually dampen variance amplification are not effective when inaccuracy is generated by workers' behavioral aspects. Specifically, the psychological sensitivity and stability of workers in dealing with a given range of operational conditions have a combined, multiplying effect on the amplification of order and inventory variance generated by their errors.
Research limitations/implications
The main limitation of the research is that the authors model workers' behavior by adopting a well-known theory from psychology that assumes a U-shaped relationship between stress and errors. The authors do not validate this relationship in the specific context of inventory operations.
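A minimal sketch of how such a U-shaped stress-error relationship could be encoded in a simulation is shown below. The quadratic form, the parameter names and the threshold values are illustrative assumptions, not the authors' model.

```python
import numpy as np

def error_probability(workload, optimum=0.5, sensitivity=4.0):
    """Hypothetical U-shaped stress-error curve: record-keeping errors are
    least likely at a moderate workload and rise toward both extremes.
    `optimum` and `sensitivity` are illustrative parameters, not values
    taken from the paper."""
    return float(np.clip(sensitivity * (workload - optimum) ** 2, 0.0, 1.0))

rng = np.random.default_rng(42)
for workload in rng.uniform(0.0, 1.0, size=5):
    p = error_probability(workload)
    made_error = rng.random() < p  # Bernoulli draw: does the worker miscount?
    print(f"workload={workload:.2f}  error_prob={p:.2f}  error={made_error}")
```

In a full simulation, each such miscount would perturb the recorded inventory position, which is what feeds the order and inventory variance amplification studied in the paper.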
Practical implications
The paper offers suggestions on how managers responsible for designing order and inventory policies can take workers' behavioral reactions to work pressure into account.
Originality/value
The logistics management literature does not lack research on behavioral decision-making causes of order and inventory variance amplification. This paper, however, investigates a new kind of behavioral issue, namely the impact of psycho-behavioral aspects of workers on variance amplification.
Sreenivas R. Sukumar, Ramachandran Natarajan and Regina K. Ferrell
Abstract
Purpose
The current trend in Big Data analytics, and in health information technology in particular, is toward building sophisticated models, methods and tools for business, operational and clinical intelligence. However, the critical issue of the data quality required for these models is not getting the attention it deserves. The purpose of this paper is to highlight the issues of data quality in the context of Big Data health care analytics.
Design/methodology/approach
The insights presented in this paper are the results of analytics work done in different organizations on a variety of health data sets. The data sets include Medicare and Medicaid claims, provider enrollment data sets from both public and private sources, and electronic health records from regional health centers, accessed through partnerships with health care claims processing entities under health privacy protection guidelines.
Findings
Assessment of data quality in health care has to consider: first, the entire lifecycle of health data; second, problems arising from errors and inaccuracies in the data itself; third, the source(s) and the pedigree of the data; and fourth, how the underlying purpose of data collection impacts the analytic processing and the knowledge expected to be derived. Automation in the form of data handling, storage, entry and processing technologies is to be viewed as a double-edged sword: at one level automation can be a good solution, while at another it can create a different set of data quality issues. Implementation of health care analytics with Big Data is enabled by a road map that addresses the organizational and technological aspects of data quality assurance.
Practical implications
The value derived from the use of analytics should be the primary determinant of data quality. Based on this premise, health care enterprises embracing Big Data should have a road map for a systematic approach to data quality. Health care data quality problems can be so specific that organizations might have to build their own custom software or data quality rule engines, as sketched below.
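As an illustration of what such a rule engine might look like, here is a minimal sketch in Python. The rule names, record fields and example values are hypothetical, not drawn from the authors' systems.

```python
from typing import Callable

# Each rule is a (name, predicate) pair; the predicate returns True when a
# record passes. The rules and fields below are hypothetical examples.
Rule = tuple[str, Callable[[dict], bool]]

RULES: list[Rule] = [
    ("claim_amount_nonnegative", lambda r: r.get("claim_amount", 0) >= 0),
    ("provider_id_present", lambda r: bool(r.get("provider_id"))),
    ("dates_ordered", lambda r: r.get("admit_date", "") <= r.get("discharge_date", "")),
]

def check_record(record: dict) -> list[str]:
    """Return the names of all rules the record violates."""
    return [name for name, predicate in RULES if not predicate(record)]

record = {"claim_amount": -120.0, "provider_id": "",
          "admit_date": "2016-03-04", "discharge_date": "2016-03-01"}
print(check_record(record))
# ['claim_amount_nonnegative', 'provider_id_present', 'dates_ordered']
```

The point of rolling such checks into an engine rather than scattering them through the analytics code is that the rules can then be versioned, audited and extended as new organization-specific quality problems are diagnosed.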
Originality/value
Today, data quality issues are diagnosed and addressed in a piecemeal fashion. The authors recommend a data lifecycle approach and provide a road map that is better suited to the dimensions of Big Data and fits the different stages of the analytical workflow.
Abstract
Purpose
The purpose of this paper is to fill an existing gap in the field. A transaction-based hotel price index for Europe is constructed to provide a true measure of hotel real estate performance. The index will enable investors to enhance investment decisions in several ways: to assess individual property performance; to make objective decisions about where to invest and in which property type; and to assess the performance of hotel assets relative to all other sectors and consequently reach optimal fund-allocation decisions. This will allow investors to time their acquisitions and disposals according to the hotel property cycle.
Design/methodology/approach
Data include 495 hotel property transactions in Europe between 2004 and 2013. Transaction prices and property characteristics were collected from a variety of sources: publications by hotel agents and consultants, property magazines, newspapers, tourist boards, individual property and hotel association registers, and websites. Data include property name, sale price, size, time of sale, location, and buyers and sellers. A hedonic pricing model is developed in which the transaction price is regressed on the different characteristics, and the index is calculated by taking the anti-logs of the regression coefficients on the year indicators, as sketched below.
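A minimal sketch of this hedonic approach, assuming a log-linear specification: log price is regressed on characteristics plus year dummies, and the anti-logged year coefficients form the index. The column names and the synthetic data below are illustrative, not the authors' data set.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical transactions: price plus a few characteristics and a sale year.
rng = np.random.default_rng(0)
n = 495
df = pd.DataFrame({
    "price": rng.lognormal(mean=16.0, sigma=0.5, size=n),
    "rooms": rng.integers(50, 400, size=n),
    "location": rng.choice(["city", "resort"], size=n),
    "year": rng.integers(2004, 2014, size=n),
})

# Hedonic regression: log price on log size, location and year dummies.
model = smf.ols("np.log(price) ~ np.log(rooms) + location + C(year)", data=df).fit()

# Anti-log the year coefficients to get the index (base year 2004 = 100).
index = {2004: 100.0}
for year in range(2005, 2014):
    index[year] = 100.0 * float(np.exp(model.params[f"C(year)[T.{year}]"]))
print(index)
```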
Findings
This paper claims that the hotel property price index (HPPI) portrays a more realistic picture of what happened to hotel property prices in 2008, showing single-digit negative growth, whereas the hotel valuation index reports a double-digit negative growth rate in European hotel prices for the same year. The real impact of the recession showed in hotel property prices in 2009, when HPPI records a crash of -23.7 per cent. The year 2011 was marked by more sales transacted through administrators and a looming double-dip recession. Unlike appraisal-based indices, HPPI does not suffer from sticky valuation issues and is not desensitised to distressed properties; it was therefore more sensitive to distressed situations throughout the period between 2011 and 2013.
Research limitations/implications
Results of this study should be considered with caution. There are limitations associated with transaction data, including incompleteness or inaccuracies regarding price data, financing information for each deal, property tenure and property characteristics.
Practical implications
This work has successfully developed an HPPI for hotel property in Europe and paves the way for transaction-based indices, which are more responsive to market movements than existing appraisal-based indices. This represents a significant development in tracking price movements of hotel properties in Europe. The index has the potential to support research on, and forecasting of, hotel property cycles.
Originality/value
This paper fulfils an identified need to track hotel property prices and to time the hotel property cycle.
Yee Ling Yap, Yong Sheng Edgar Tan, Heang Kuan Joel Tan, Zhen Kai Peh, Xue Yi Low, Wai Yee Yeong, Colin Siang Hui Tan and Augustinus Laude
Abstract
Purpose
The design process of a bio-model involves multiple factors including data acquisition technique, material requirement, resolution of the printing technique, cost-effectiveness of the printing process and end-use requirements. This paper aims to compare and highlight the effects of these design factors on the printing outcome of bio-models.
Design/methodology/approach
Different data sources, including an engineering drawing, computed tomography (CT) and optical coherence tomography (OCT), were converted to a printable data format. Three different bio-models, namely an ophthalmic model, a retina model and a distal tibia model, were printed using two different techniques, namely PolyJet and fused deposition modelling. The process flow and the 3D printed models were analysed.
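As an illustration of the data-conversion step, the sketch below turns a CT volume into a printable STL mesh via marching cubes. The file paths and the threshold are hypothetical, and the libraries used (scikit-image and numpy-stl) are assumptions, not the toolchain reported in the paper.

```python
import numpy as np
from skimage import measure
from stl import mesh  # numpy-stl

# Hypothetical CT volume: a 3D intensity array, e.g. DICOM slices loaded
# with pydicom and stacked along the axial direction.
volume = np.load("ct_volume.npy")  # placeholder path
threshold = 300                    # illustrative bone-level isovalue

# Extract an isosurface at the chosen threshold with marching cubes.
verts, faces, normals, values = measure.marching_cubes(volume, level=threshold)

# Pack the triangles into an STL mesh, the usual blueprint format for printing.
model = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
for i, face in enumerate(faces):
    model.vectors[i] = verts[face]
model.save("bio_model.stl")
```

Any noise in the acquired volume survives this conversion as spurious surface geometry, which is one way acquisition inaccuracies propagate into the printed bio-model.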
Findings
The data acquisition and 3D printing processes affect the overall printing resolution. Design process flows for the different data sources were established and the bio-models were printed successfully.
Research limitations/implications
The data acquisition techniques introduced inherent noise into the data, which resulted in inaccuracies during data conversion.
Originality/value
This work showed that the data acquisition and conversion technique had a significant effect on the quality of the bio-model blueprint and, subsequently, on the printing outcome. In addition, important design factors of bio-models were highlighted, such as material requirements and the cost-effectiveness of the printing technique. This paper provides a systematic discussion for the future development of an engineering design process for three-dimensional (3D) printed bio-models.
Issam Moussaoui, Brent D. Williams, Christian Hofer, John A. Aloysius and Matthew A. Waller
Abstract
Purpose
The purpose of this paper is to: first, provide a systematic review of the drivers of retail on-shelf availability (OSA) that have been scrutinized in the literature; second, identify areas where further scrutiny is needed; and third, critically reflect on current conceptualizations of OSA and suggest alternative perspectives that may help guide future investigations.
Design/methodology/approach
A systematic approach is adopted wherein nine leading journals in logistics, supply chain management, operations management, and retailing are systematically scanned for articles discussing OSA drivers. The respective journals’ websites are used as the primary platform for scanning, with Google Scholar serving as a secondary platform for completeness. Journal articles are carefully read and their respective relevance assessed. A final set of 73 articles is retained and thoroughly reviewed for the purpose of this research. The systematic nature of the review minimizes researcher bias, ensures reasonable completeness, maximizes reliability, and enables replicability.
Findings
Five categories of drivers of OSA are identified. The first four – i.e., operational, behavioral, managerial, and coordination drivers – stem from failures at the planning or execution stages of retail operations. The fifth category – systemic drivers – encompasses contingency factors that amplify the effect of supply chain failures on OSA. The review also indicates that most non-systemic out-of-stock situations (OOS) could be traced back to incentive misalignments within and across supply chain partners.
Originality/value
This research consolidates past findings on the drivers of OSA and provides valuable insights into areas where further research may be needed. It also offers forward-looking perspectives that could help advance research on the drivers of OSA. For example, the authors invite the research community to revisit the pervasive underlying assumption that OSA is an absolute imperative and to question the unidirectional view that higher OSA is necessarily better. The authors initiate an open dialogue to approach OSA as a service-level parameter, rather than a maximizable outcome, as indicated by inventory theory.
Henryk Ugowski and Andrzej Dyka
Abstract
In the first part of this paper, subtitled 'Theory and estimation of the truncation error', we examined the existence and uniqueness of the convolution inverse. In this part of the paper we discuss the application of convolution inverses to determining the solution of the Fredholm equation of the first kind. Particular attention is paid to the errors that arise both from the truncation of the infinite sequence that represents the inverse and from inaccuracies in the input data.
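For context, the Fredholm integral equation of the first kind and its discrete convolution form are shown below; the notation is the standard one and is assumed here rather than copied from the paper. When the kernel depends only on the difference of its arguments, the discretized equation becomes a convolution, and the solution is obtained by convolving the data with the inverse sequence, which must be truncated in practice.

```latex
% Fredholm equation of the first kind: recover f from the data g, given K.
g(x) = \int_a^b K(x, t)\, f(t)\, \mathrm{d}t

% Difference kernel K(x,t) = k(x - t), discretized: a convolution,
% inverted by the (truncated) convolution inverse k^{-1}.
g_n = \sum_{m} k_{n-m}\, f_m
\qquad\Longrightarrow\qquad
f_n \approx \sum_{|m| \le M} k^{-1}_{m}\, g_{n-m}

% The truncation length M and the input errors in g_n are the two
% error sources discussed in the paper.
```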
David H. Taylor and Andrew Fearne
Abstract
Purpose
The purpose of this paper is to highlight the problems with demand management in fresh food value chains and to propose a framework for demand analysis and improved demand management.
Design/methodology/approach
The paper draws on empirical evidence from multiple case studies undertaken in the UK food industry.
Findings
Evidence from the case studies indicates a consistent misalignment of demand and supply, due to demand amplification, poor production systems and inconsistencies in information and data handling procedures.
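To illustrate demand amplification (the bullwhip effect) in the sense discussed here, the following sketch simulates a single echelon running an order-up-to policy with an exponential-smoothing forecast and compares order variance with consumer demand variance. The policy, parameters and demand stream are illustrative assumptions, not the case-study data.

```python
import numpy as np

rng = np.random.default_rng(1)
demand = rng.normal(100.0, 10.0, size=500)  # hypothetical consumer demand

alpha, lead_time = 0.3, 2                   # smoothing constant, lead time
forecast = demand[0]
inventory_position = (lead_time + 1) * demand[0]
orders = []
for d in demand:
    forecast = alpha * d + (1 - alpha) * forecast
    target = (lead_time + 1) * forecast     # order-up-to level tracks forecast
    order = max(0.0, target - inventory_position + d)
    orders.append(order)
    inventory_position += order - d

print(f"demand variance: {np.var(demand):8.1f}")
print(f"order variance:  {np.var(orders):8.1f}")  # typically amplified upstream
```

Because the order-up-to level chases the forecast, small fluctuations in consumer demand are amplified in the orders placed upstream, which is the kind of misalignment the case studies repeatedly observed.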
Research limitations/implications
The case study evidence is limited to the UK context and is therefore unlikely to be representative of the global situation in fresh food value chains. The proposed framework is based on the case study evidence but has not been formally tested.
Practical implications
More collaboration, information sharing and joint planning from primary production through to retailing is critical if fresh food value chains are to function efficiently and effectively in retail environments where promotional activity creates significant uncertainty.
Originality/value
Demand management has received little attention to date outside the industry framework of efficient consumer response (ECR). This paper is the first to propose a framework for improvement based on greater collaboration and joint planning that extends beyond the retailer-manufacturer interface.
Abstract
Purpose
The purpose of this paper is to report findings from a major UK research project covering six agri‐food supply chains, each spanning from farm to consumer.
Design/methodology/approach
The paper gives an overview of relevant literature out of which three research questions are posed. The research methodology is then outlined, followed by an overview of the methods used to collect and analyse the case study data. A summary of the research findings is then presented followed by a discussion of the findings in the context of the research questions. The paper concludes with an assessment of the validity of the research followed by some tentative suggestions regarding the need for, and potential benefits of, improving demand management in agri‐food chains.
Findings
The research has found that demand management is an area that in practice is beset by difficulties and inefficiencies, which in turn affect the operational performance of the supply chains. Analysis of the characteristics of demand along the chains demonstrates a propensity for misalignment of demand and activity due to issues such as demand amplification and inappropriate production policies. The paper also identifies a number of operational inefficiencies and inconsistencies that typically occur in the information systems and data handling procedures within these chains.
Originality/value
Suggestions are given as to how demand management processes could be improved through cooperative efforts across the supply chain.