Search results

1–10 of over 259,000
Article
Publication date: 1 June 2001

Terrence Perera and Kapila Liyanage


Abstract

In recent years, computer simulation has become a mainstream decision support tool in manufacturing industry. In order to maximise the benefits of using simulation within businesses, simulation models should be designed, developed and deployed in a shorter time span. A number of factors, such as excessive model details, inefficient data collection, lengthy model documentation and poorly planned experiments, increase the overall lead time of simulation projects. Among these factors, input data modelling is seen as a major obstacle. Input data identification, collection, validation, and analysis typically take more than one‐third of project time. This paper presents an IDEF (Integrated computer‐aided manufacturing DEFinition) based approach to accelerate the identification and collection of input data. The use of the methodology is presented through its application in batch manufacturing environments. A functional module library and a reference data model, both developed using the IDEF family of constructs, are the core elements of the methodology. The paper also identifies the major causes behind the inefficient collection of data.

Details

Integrated Manufacturing Systems, vol. 12 no. 3
Type: Research Article
ISSN: 0957-6061

Keywords

Article
Publication date: 1 June 2004

R.H. Khatibi, R. Lincoln, D. Jackson, S. Surendran, C. Whitlow and J. Schellekens


Abstract

With the diversification of modelling activities encouraged by versatile modelling tools, handling their datasets has become a formidable problem. A further impetus stems from the emergence of the real‐time forecasting culture, transforming data embedded in computer programs of one‐off modelling activities of the 1970s‐1980s into dataset assets, an important feature of modelling since the 1990s, where modelling has emerged as a practice with a pivotal role to data transactions. The scope for data is now vast but in legacy data management practices datasets are fragmented, not transparent outside their native software systems, and normally “monolithic”. Emerging initiatives on published interfaces will make datasets transparent outside their native systems but will not solve the fragmentation and monolithic problems. These problems signify a lack of science base in data management and as such it is necessary to unravel inherent generic structures in data. This paper outlines root causes for these problems and presents a tentative solution referred to as “systemic data management”, which is capable of solving the above problems through the assemblage of packaged data. Categorisation is presented as a packaging methodology and the various sources contributing to the generic structure of data are outlined, e.g. modelling techniques, modelling problems, application areas and application problems. The opportunities offered by systemic data management include: promoting transparency among datasets of different software systems; exploiting inherent synergies within data; and treating data as assets with a long‐term view on reuse of these assets in an integrated capability.

Details

Management of Environmental Quality: An International Journal, vol. 15 no. 3
Type: Research Article
ISSN: 1477-7835

Keywords

Article
Publication date: 1 April 2003

David Cranage


Abstract

One of the most basic pieces of information useful to hospitality operations is gross sales, and the ability to forecast them is strategically important. These forecasts could provide powerful information to cut costs, increase efficient use of resources, and improve the ability to compete in a constantly changing environment. This study tests sophisticated, yet simple‐to‐use time series models to forecast sales. The results show that, with slight re‐arrangement of historical sales data, easy‐to‐use time series models can accurately forecast gross sales.
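The abstract does not name the specific time series models tested; as a hedged sketch of the kind of easy-to-use method it describes, simple exponential smoothing can be written in a few lines (the weekly sales figures below are hypothetical):

```python
def ses_forecast(sales, alpha=0.3):
    """Simple exponential smoothing: the forecast is a weighted average
    of the latest observation and the previous forecast. alpha controls
    how quickly older sales are discounted."""
    forecast = sales[0]  # initialise with the first observation
    for y in sales[1:]:
        forecast = alpha * y + (1 - alpha) * forecast
    return forecast  # one-step-ahead forecast of gross sales

# Hypothetical weekly gross sales; in practice the smoothing constant
# would be tuned against held-out history.
weekly_sales = [12000, 12500, 11800, 13200, 12900, 13500]
next_week = ses_forecast(weekly_sales)
```

In practice the smoothing constant, and any seasonal terms revealed by rearranging the historical sales data, would be chosen by minimising forecast error on past observations.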

Details

International Journal of Contemporary Hospitality Management, vol. 15 no. 2
Type: Research Article
ISSN: 0959-6119

Keywords

Book part
Publication date: 1 November 2007

Irina Farquhar and Alan Sorkin


Abstract

This study proposes targeted modernization of the Department of Defense's (DoD) Joint Forces Ammunition Logistics information system by implementing an optimized, innovative information technology open architecture design and integrating Radio Frequency Identification (RFID) data technologies and real-time optimization and control mechanisms as the critical technology components of the solution. The innovative information technology, which supports focused logistics, would be deployed in 36 months at an estimated cost of $568 million in constant dollars. We estimate that the Systems, Applications, Products (SAP)-based enterprise integration solution that the Army currently pursues will cost another $1.5 billion through the year 2014; however, it is unlikely to deliver the intended technical capabilities.

Details

The Value of Innovation: Impact on Health, Life Quality, Safety, and Regulatory Research
Type: Book
ISBN: 978-1-84950-551-2


Details

Handbook of Transport Modelling
Type: Book
ISBN: 978-0-08-045376-7

Book part
Publication date: 15 January 2010

Isobel Claire Gormley and Thomas Brendan Murphy


Abstract

Ranked preference data arise when a set of judges rank, in order of their preference, a set of objects. Such data arise in preferential voting systems and market research surveys. Covariate data associated with the judges are also often recorded. Such covariate data should be used in conjunction with preference data when drawing inferences about judges.

To cluster a population of judges, the population is modeled as a collection of homogeneous groups. The Plackett-Luce model for ranked data is employed to model a judge's ranked preferences within a group. A mixture of Plackett-Luce models is employed to model the population of judges, where each component in the mixture represents a group of judges.

Mixture of experts models provide a framework in which covariates are included in mixture models. Covariates are included through the mixing proportions and the component density parameters. A mixture of experts model for ranked preference data is developed by combining a mixture of experts model and a mixture of Plackett-Luce models. Particular attention is given to the manner in which covariates enter the model. The mixing proportions and group specific parameters are potentially dependent on covariates. Model selection procedures are employed to choose optimal models.

Model parameters are estimated via the ‘EMM algorithm’, a hybrid of the expectation–maximization and the minorization–maximization algorithms. Examples are provided through a menu survey and through Irish election data. Results indicate mixture modeling using covariates is insightful when examining a population of judges who express preferences.
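The chapter's full mixture-of-experts machinery is beyond a short example, but the Plackett-Luce ranking probability it builds on is compact: at each stage, the next-ranked object is chosen with probability proportional to its support parameter among the objects not yet ranked. A minimal sketch (object names and support values below are hypothetical):

```python
def plackett_luce_prob(ranking, support):
    """Probability of an observed ranking under the Plackett-Luce model:
    each successive choice is made with probability proportional to the
    chosen object's support among the remaining objects."""
    remaining = list(ranking)
    prob = 1.0
    for obj in ranking:
        total = sum(support[o] for o in remaining)
        prob *= support[obj] / total
        remaining.remove(obj)
    return prob

# Hypothetical support parameters for three objects
support = {"A": 0.5, "B": 0.3, "C": 0.2}
p = plackett_luce_prob(["A", "B", "C"], support)
```

An EM- or MM-style fit, as in the EMM algorithm described above, would alternate between soft-assigning each judge's ranking to a group and re-estimating that group's support parameters.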

Details

Choice Modelling: The State-of-the-art and The State-of-practice
Type: Book
ISBN: 978-1-84950-773-8


Details

Handbook of Transport Geography and Spatial Systems
Type: Book
ISBN: 978-1-615-83253-8

Book part
Publication date: 25 October 2023

Md Aminul Islam and Md Abu Sufian


Abstract

This research navigates the confluence of data analytics, machine learning, and artificial intelligence to revolutionize the management of urban services in smart cities. The study used advanced tools to scrutinize key performance indicators integral to the functioning of smart cities, thereby enhancing leadership and decision-making strategies. Our work involves applying various machine learning models, such as Logistic Regression, Support Vector Machine, Decision Tree, Naive Bayes, and Artificial Neural Networks (ANN), to the data. Notably, the Support Vector Machine and Bernoulli Naive Bayes models exhibit robust performance, with a precision score of 70%. In particular, the study underscores the employment of an ANN model on our existing dataset, optimized using the Adam optimizer. Although the model yields an overall accuracy of 61% and a precision score of 58%, implying correct predictions for the positive class 58% of the time, a comprehensive performance assessment using the Area Under the Receiver Operating Characteristic Curve (AUC-ROC) metric was necessary. This evaluation yields a score of 0.475 at a threshold of 0.5, indicating room for model enhancement. These models and their performance metrics serve as a key cog in our data analytics pipeline, providing decision-makers and city leaders with actionable insights that can steer urban service management decisions. Through real-time data availability and intuitive visualization dashboards, these leaders can promptly comprehend the current state of their services, pinpoint areas requiring improvement, and make informed decisions to bolster these services. This research illuminates the potential for data analytics, machine learning, and AI to significantly upgrade urban service management in smart cities, fostering sustainable and livable communities. Moreover, our findings contribute valuable knowledge to other cities aiming to adopt similar strategies, thus aiding the continued development of smart cities globally.
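The AUC-ROC metric cited above can be read as the probability that a randomly chosen positive example is scored above a randomly chosen negative one, so 0.5 is chance level, which is why a score of 0.475 signals room for improvement. A minimal sketch of the rank-based computation (labels and model scores below are hypothetical):

```python
def roc_auc(y_true, y_score):
    """AUC-ROC via the rank-sum formulation: the fraction of
    positive/negative pairs in which the positive example is scored
    higher (ties count one half)."""
    pos = [s for s, y in zip(y_score, y_true) if y == 1]
    neg = [s for s, y in zip(y_score, y_true) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical binary labels and classifier scores
y_true = [1, 0, 1, 1, 0, 0]
y_score = [0.9, 0.4, 0.6, 0.3, 0.7, 0.2]
auc = roc_auc(y_true, y_score)
```

Unlike accuracy or precision, this measure does not depend on any single decision threshold, which is why it complements the per-threshold metrics the chapter reports.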

Details

Technology and Talent Strategies for Sustainable Smart Cities
Type: Book
ISBN: 978-1-83753-023-6

Keywords

Book part
Publication date: 16 December 2009

Jeffrey S. Racine


Abstract

The R environment for statistical computing and graphics (R Development Core Team, 2008) offers practitioners a rich set of statistical methods, ranging from random number generation and optimization methods through regression, panel data, and time series methods. The standard R distribution (base R) comes preloaded with a rich variety of functionality useful for applied econometricians. This functionality is enhanced by user-supplied packages made available via R servers that are mirrored around the world. Of interest in this chapter are methods for estimating nonparametric and semiparametric models. We summarize many of the facilities in R and consider some tools that might be of interest to those wishing to work with nonparametric methods who want to avoid resorting to programming in C or Fortran but need the speed of compiled code, as opposed to interpreted code such as that of Gauss or Matlab. We encourage those working in the field to consider implementing their methods in the R environment, thereby making their work accessible to the widest possible audience via an open collaborative forum.
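The chapter surveys R's nonparametric facilities rather than any single estimator; as a hedged illustration of what such methods do, the classic Nadaraya-Watson kernel regression, a locally weighted average of the responses, can be sketched as follows (Python here for self-containedness; the function and data are illustrative, not the chapter's):

```python
import math

def nadaraya_watson(x_train, y_train, x, bandwidth=1.0):
    """Nadaraya-Watson kernel regression: predict at x by averaging the
    observed responses, weighted by a Gaussian kernel centred at x.
    The bandwidth controls how local the average is."""
    weights = [math.exp(-((x - xi) / bandwidth) ** 2 / 2) for xi in x_train]
    return sum(w * y for w, y in zip(weights, y_train)) / sum(weights)

# Hypothetical noisy observations around y = x^2
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 1.2, 3.9, 9.1]
y_hat = nadaraya_watson(xs, ys, 1.5, bandwidth=0.5)
```

Bandwidth selection is the computational bottleneck in such estimators, which is why the chapter emphasises compiled-speed implementations over naive interpreted loops.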

Details

Nonparametric Econometric Methods
Type: Book
ISBN: 978-1-84950-624-3

Article
Publication date: 31 October 2023

Yangze Liang and Zhao Xu


Abstract

Purpose

Monitoring of the quality of precast concrete (PC) components is crucial for the success of prefabricated construction projects. Currently, quality monitoring of PC components during the construction phase is predominantly done manually, resulting in low efficiency and hindering the progress of intelligent construction. This paper presents an intelligent inspection method for assessing the appearance quality of PC components, utilizing an enhanced you only look once (YOLO) model and multi-source data. The aim of this research is to achieve automated management of the appearance quality of precast components in the prefabricated construction process through digital means.

Design/methodology/approach

The paper begins by establishing an improved YOLO model and an image dataset for evaluating appearance quality. Through object detection in the images, a preliminary and efficient assessment of the precast components' appearance quality is achieved. Moreover, the detection results are mapped onto the point cloud for high-precision quality inspection. In the case of precast components with quality defects, precise quality inspection is conducted by combining the three-dimensional model data obtained from forward design conversion with the captured point cloud data through registration. Additionally, the paper proposes a framework for an automated inspection platform dedicated to assessing appearance quality in prefabricated buildings, encompassing the platform's hardware network.

Findings

The improved YOLO model achieved a best mean average precision of 85.02% on the VOC2007 dataset, surpassing the performance of most similar models. After targeted training, the model exhibits excellent recognition capabilities for the four common appearance quality defects. When mapped onto the point cloud, the accuracy of quality inspection based on point cloud data and forward design is within 0.1 mm. The appearance quality inspection platform enables feedback and optimization of quality issues.
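The paper's evaluation code is not given; as a hedged sketch of the geometry underlying detection metrics such as the mean average precision reported above, the intersection-over-union (IoU) score used to match a predicted box against ground truth can be computed as follows (box coordinates below are hypothetical):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x_min, y_min, x_max, y_max); the overlap criterion underlying
    detection metrics such as mean average precision."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # zero if boxes are disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Hypothetical predicted vs ground-truth defect boxes, in pixels
overlap = iou((10, 10, 50, 50), (30, 30, 70, 70))
```

A detection is typically counted as correct when its IoU with a ground-truth box exceeds a fixed threshold (0.5 is the VOC2007 convention), and average precision is then accumulated over the resulting precision-recall curve.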

Originality/value

The proposed method in this study enables high-precision, visualized and automated detection of the appearance quality of PC components. It effectively meets the demand for quality inspection of precast components on construction sites of prefabricated buildings, providing technological support for the development of intelligent construction. The design of the appearance quality inspection platform's logic and framework facilitates the integration of the method, laying the foundation for efficient quality management in the future.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Keywords
