Search results

1–10 of over 10,000
Article
Publication date: 8 March 2011

Jingbin Hao, Liang Fang and Robert E. Williams

Abstract

Purpose

Rapid prototyping (RP) of large-scale solid models requires the stereolithographic (STL) file to be precisely partitioned. In particular, the selection of cutting positions is critical for the fabrication and assembly of sub-models. The purpose of this paper is to present an efficient curvature-based partitioning method that selects the best-fit loop and decomposes a large, complex model into smaller, simpler sub-models with similar-shaped joints, which facilitate the final assembly.

Design/methodology/approach

The partitioning algorithm builds on curvature analysis of the model surface, including extraction of feature edges and construction of feature loops. Efficiency is enhanced by selecting the best-fit loop and constructing similar-shaped joints. The utility of the algorithm is demonstrated by the fabrication of large-scale rapid prototypes.
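
As a rough illustration of the feature-edge step (not the authors' code), the following Python sketch flags mesh edges whose adjacent faces meet at a sharp dihedral angle, a common proxy for surface-curvature analysis; the subsequent loop construction and best-fit selection are omitted, and the vertex/face mesh layout is an assumption.

```python
import numpy as np

def feature_edges(vertices, faces, angle_deg=30.0):
    """Return edges whose adjacent faces meet at a sharp dihedral angle.

    vertices: (n, 3) float array; faces: (m, 3) int array of vertex indices.
    """
    # Unit normal of each triangular face.
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    normals = np.cross(v1 - v0, v2 - v0)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-12

    # Map each undirected edge to the faces that share it.
    edge_faces = {}
    for f, (a, b, c) in enumerate(faces):
        for e in ((a, b), (b, c), (c, a)):
            edge_faces.setdefault(tuple(sorted(e)), []).append(f)

    cos_thresh = np.cos(np.radians(angle_deg))
    sharp = []
    for edge, fs in edge_faces.items():
        if len(fs) == 2 and np.dot(normals[fs[0]], normals[fs[1]]) < cos_thresh:
            sharp.append(edge)   # normals deviate by more than angle_deg
    return sharp
```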

Findings

The proposed curvature-based partition algorithm greatly improves the soundness and efficiency of STL model partitioning and reduces the complexity of the sub-models. Large-scale models are partitioned efficiently, and the resulting sub-models are assembled precisely.

Originality/value

The curvature-based partition algorithm is applied in the RP field for the first time, and the paper uses it to address the soundness and efficiency of large-scale RP.

Details

Rapid Prototyping Journal, vol. 17 no. 2
Type: Research Article
ISSN: 1355-2546

Article
Publication date: 16 January 2017

Xiaotong Jiang, Xiaosheng Cheng, Qingjin Peng, Luming Liang, Ning Dai, Mingqiang Wei and Cheng Cheng

Abstract

Purpose

It is a challenge to print a model larger than the working volume of a three-dimensional (3D) printer. The purpose of this paper is to present a feasible approach that divides a large model into smaller parts that fit the printer volume and then assembles these parts into the final model.

Design/methodology/approach

The proposed approach is based on skeletonization and the minima rule. The skeleton of the printing model is first extracted using mesh contraction and principal component analysis. The 3D model is then preliminarily partitioned into smaller parts using the space sweep method and the minima rule. The preliminary partition is finally optimized using a greedy algorithm.
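
The abstract does not spell out the greedy criterion, so the sketch below shows one plausible reading: adjacent preliminary parts are merged whenever the merged bounding box still fits the printer volume, reducing the number of parts to assemble. The data structures (`boxes`, `adjacency`, `printer_size`) are invented for illustration, not the authors' implementation.

```python
import numpy as np

def greedy_merge(boxes, adjacency, printer_size):
    """boxes: {part_id: (min_xyz, max_xyz)}; adjacency: set of (id, id) pairs."""
    printer = np.asarray(printer_size)
    merged = True
    while merged:
        merged = False
        for a, b in sorted(adjacency):
            if a not in boxes or b not in boxes:
                continue
            lo = np.minimum(boxes[a][0], boxes[b][0])
            hi = np.maximum(boxes[a][1], boxes[b][1])
            if np.all(hi - lo <= printer):        # merged part still printable
                boxes[a] = (lo, hi)               # absorb part b into part a
                del boxes[b]
                adjacency = {(x if x != b else a, y if y != b else a)
                             for x, y in adjacency if {x, y} != {a, b}}
                merged = True
                break
    return boxes
```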

Findings

The skeleton of a 3D model effectively represents a simplified version of its geometry, and using the skeleton to partition the model is efficient. As it is generally desirable to segment at concave creases and seams, the cutting position should be located in concave regions. The proposed approach partitions large models effectively while retaining the integrity of meaningful parts.

Originality/value

The proposed approach is new to the rapid prototyping field in its use of model skeletonization and the minima rule. To the authors' knowledge, no existing partitioning method considers the integrity of meaningful parts. The proposed method achieves satisfactory results in terms of part integrity and assemblability for most 3D models.

Details

Rapid Prototyping Journal, vol. 23 no. 1
Type: Research Article
ISSN: 1355-2546

Article
Publication date: 12 June 2017

Shabia Shabir Khan and S.M.K. Quadri

Abstract

Purpose

In treating the most complex design issues, approaches based on classical artificial intelligence are inferior to those based on computational intelligence, particularly when dealing with vagueness, multi-objectivity and a large space of possible solutions. In practical applications, computational techniques have given the best results, and research in this field is continuously growing. The purpose of this paper is to search for a general and effective intelligent tool for predicting patient survival after surgery. The study constructs such intelligent computational models with different configurations, including data-partitioning techniques, and evaluates them experimentally on a realistic medical data set for predicting survival in pancreatic cancer patients.

Design/methodology/approach

On the basis of experiments performed on data from various fields using different intelligent tools, the authors infer that integrating the qualitative aspects of a fuzzy inference system with the quantitative aspects of an artificial neural network can yield a more efficient prediction model. The authors constructed three soft-computing-based adaptive neuro-fuzzy inference system (ANFIS) models with different configurations and data-partitioning techniques, aiming to find capable predictive tools that can deal with nonlinear and complex data. After evaluating the models over three shuffles of data (training set, test set and full set), their performances were compared to find the best design for predicting patient survival after surgery. The models were constructed and implemented in MATLAB.
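
For readers unfamiliar with the FCM step behind the best-performing configuration, here is a minimal NumPy re-implementation of standard fuzzy c-means (the paper itself works in MATLAB); it illustrates the textbook algorithm, not the authors' code.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, tol=1e-5, seed=0):
    """Fuzzy c-means. X: (n, d) data; c: number of clusters; m: fuzzifier (> 1)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        dist = np.fmax(dist, 1e-12)            # avoid division by zero
        U_new = 1.0 / (dist ** (2 / (m - 1)))  # u_ij proportional to d_ij^(-2/(m-1))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U
```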

Findings

Applying the hybrid neuro-fuzzy models with different configurations revealed their advantage in predicting the survival of patients with pancreatic cancer. Experimental comparison of the constructed models shows that the ANFIS model with fuzzy c-means (FCM) partitioning provides the best accuracy, predicting the class with the lowest mean square error (MSE). Beyond MSE, the other evaluation measures for FCM partitioning also prove better than those of the remaining models. The results therefore suggest that the model can be applied to other biomedical and engineering fields dealing with complex issues of imprecision and uncertainty.

Originality/value

The originality of the paper includes a framework showing a two-way flow for fuzzy system construction, which the authors use to design the three simulation models with different configurations, including the partitioning methods, for predicting patient survival after surgery. Several experiments with different shuffles of data were carried out to validate the model parameters, and performances were compared using various evaluation measures such as MSE.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 10 no. 2
Type: Research Article
ISSN: 1756-378X

Book part
Publication date: 18 July 2016

Alan D. Olinsky, Kristin Kennedy and Michael Salzillo

Abstract

Forecasting the number of bed days (NBD) needed within a large hospital network is extremely challenging, but it is imperative that management find a predictive model that best estimates this quantity. The estimate is used by operational managers for logistical planning. Furthermore, the finance staff of a hospital require the expected NBD as input for estimating future expenses. Some hospital reimbursement contracts are on a per diem schedule, and the expected NBD is useful in forecasting future revenue.

This chapter examines two ways of estimating the NBD for a large hospital system, building on previous work that compared time regression and an autoregressive integrated moving average (ARIMA). The two approaches examine whether using the total, combined NBD for all the data is a better predictor than partitioning the data by type of service. The four partitions are medical, maternity, surgery, and psychology. The partitioned time series are used to forecast future NBD by type of service, but one could also sum the partitioned predictors for an alternative total forecaster, as sketched below. The question is whether either approach provides a better fit for forecasting the NBD. The approaches presented in this chapter can be applied to a variety of time series data for business forecasting whenever a large database can be partitioned into smaller segments.
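
A small sketch of the two strategies, using the statsmodels ARIMA implementation on synthetic data; the series, the (1, 1, 1) order and the 12-month horizon are placeholders, not the chapter's actual data or models.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
idx = pd.date_range("2015-01-01", periods=120, freq="MS")
services = ["medical", "maternity", "surgery", "psychology"]
# Synthetic monthly NBD per service line (a stand-in for real hospital data).
nbd = pd.DataFrame({s: 1000 + rng.normal(0, 50, 120).cumsum() for s in services},
                   index=idx)

horizon = 12
# Strategy 1: a single model on the combined series.
total_fc = ARIMA(nbd.sum(axis=1), order=(1, 1, 1)).fit().forecast(horizon)
# Strategy 2: one model per partition, with the forecasts summed afterwards.
part_fc = sum(ARIMA(nbd[s], order=(1, 1, 1)).fit().forecast(horizon)
              for s in services)
print(pd.DataFrame({"total_model": total_fc, "summed_partitions": part_fc}))
```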

Details

Advances in Business and Management Forecasting
Type: Book
ISBN: 978-1-78635-534-8

Article
Publication date: 5 October 2015

Sez Atamturktur and Ismail Farajpour

Abstract

Purpose

Physical phenomena interact in ways that make it impossible to analyze one without considering the others. To account for such interactions between multiple phenomena, partitioning has become a widely implemented computational approach. Partitioned analysis involves the exchange of inputs and outputs from constituent models (partitions) via iterative coupling operations, through which the individually developed constituent models are allowed to affect each other's inputs and outputs. Partitioning, whether multi-scale or multi-physics in nature, is a powerful technique that can yield coupled models that predict the behavior of a system more complex than the individual constituents themselves. The paper aims to discuss these issues.

Design/methodology/approach

Although partitioned analysis has been a key mechanism in developing more realistic predictive models over the last decade, its iterative coupling operations may lead to the propagation and accumulation of uncertainties and errors that, if unaccounted for, can severely degrade the coupled model predictions. This problem can be alleviated by reducing uncertainties and errors in individual constituent models through further code development. However, finite resources may limit code development efforts to just a portion of possible constituents, making it necessary to prioritize constituent model development for efficient use of resources. Thus, the authors propose here an approach along with its associated metric to rank constituents by tracing uncertainties and errors in coupled model predictions back to uncertainties and errors in constituent model predictions.

Findings

The proposed approach evaluates the deficiency (relative degree of imprecision and inaccuracy), importance (relative sensitivity) and cost of further code development for each constituent model, and combines these three factors in a quantitative prioritization metric. The benefits of the proposed metric are demonstrated on a structural portal frame using an optimization-based uncertainty inference and coupling approach.
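
The exact functional form of the prioritization metric is not reproduced here, so the snippet below uses an assumed stand-in (deficiency times importance divided by cost) purely to illustrate the ranking idea; the constituent names and numbers are invented.

```python
def priority(deficiency, importance, cost):
    """Higher score = better candidate for further code development.
    Assumed combination of the three factors named in the paper."""
    return deficiency * importance / cost

constituents = {
    "thermal":    priority(deficiency=0.8, importance=0.6, cost=2.0),
    "structural": priority(deficiency=0.4, importance=0.9, cost=1.0),
    "fluid":      priority(deficiency=0.7, importance=0.2, cost=3.0),
}
for name, score in sorted(constituents.items(), key=lambda kv: -kv[1]):
    print(f"{name:10s} {score:.3f}")   # rank constituents by priority
```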

Originality/value

This study proposes an approach and its corresponding metric to prioritize the improvement of constituents by quantifying the uncertainty and bias contributions, sensitivity and development cost of the constituent models.

Details

Engineering Computations, vol. 32 no. 7
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 13 June 2016

Garrison Stevens, Sez Atamturktur, Ricardo Lebensohn and George Kaschner

Abstract

Purpose

Highly anisotropic zirconium is a material used in the cladding of nuclear fuel rods, ensuring containment of the radioactive material within. The complex material structure of anisotropic zirconium requires model developers to replicate not only the macro-scale stresses but also the meso-scale material behavior as the crystal structure evolves, leading to strongly coupled multi-scale plasticity models. Such strongly coupled models can be achieved through partitioned analysis techniques, which couple independently developed constituent models through an iterative exchange of inputs and outputs. Throughout this iterative process, biases and uncertainties inherent in constituent model predictions are inevitably transferred between constituents, either compensating for each other or accumulating during iterations. The paper aims to discuss these issues.

Design/methodology/approach

A finite element model at the macro-scale is coupled in an iterative manner with a meso-scale viscoplastic self-consistent model, where the former supplies the stress input and the latter represents the changing material properties. The authors present a systematic framework for experiment-based validation that takes advantage of both separate-effect experiments, conducted within each constituent's domain to calibrate the constituents at their respective scales, and integral-effect experiments, executed within the coupled domain to test the validity of the coupled system.
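
The iterative exchange described above amounts to a fixed-point iteration between the two constituents. The Python sketch below illustrates the pattern with placeholder functions standing in for the finite element and viscoplastic self-consistent solvers; the toy equations and the under-relaxation factor are assumptions.

```python
import numpy as np

def macro_model(properties):
    """Stand-in FE solve: maps material properties to a stress state."""
    return 0.5 * properties + 1.0

def meso_model(stress):
    """Stand-in VPSC solve: maps stress to evolved material properties."""
    return 0.3 * stress + 0.2

def couple(tol=1e-8, relax=0.7, max_iter=200):
    props = np.array([1.0])                   # initial property guess
    for i in range(max_iter):
        stress = macro_model(props)           # macro supplies stress input
        new_props = meso_model(stress)        # meso returns updated properties
        if np.abs(new_props - props).max() < tol:
            return stress, new_props, i       # exchanged quantities converged
        props = (1 - relax) * props + relax * new_props   # under-relaxation
    raise RuntimeError("coupling iterations did not converge")

print(couple())
```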

Findings

The framework is shown to improve the predictive capability of a multi-scale plasticity model of highly anisotropic zirconium.

Originality/value

For multi-scale models to be implemented to support high-consequence decisions, such as the containment of radioactive material, this transfer of biases and uncertainties must be evaluated to ensure accuracy of the predictions of the coupled model. This framework takes advantage of the transparency of partitioned analysis to reduce the accumulation of errors and uncertainties.

Details

Multidiscipline Modeling in Materials and Structures, vol. 12 no. 1
Type: Research Article
ISSN: 1573-6105

Article
Publication date: 1 December 2004

Toshio Nakagawa, Kazumi Yasui and Hiroaki Sandoh

Abstract

Some reliability models perform better when they are partitioned into parts. A typical example is the basic inspection policy, in which an operating unit is checked at suitable times over a finite time span so that its failure is detected. This paper applies the concept of the basic inspection policy to five models: hard-disk backup, checkpointing for double modular redundancy, job partitioning, garbage collection and network partitioning. The performance of each model is evaluated analytically.
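
As a worked numeric example of the basic inspection policy (with illustrative cost figures, not the paper's), the sketch below evaluates the expected cost of N equally spaced checks over a finite span with exponentially distributed failures, trading checking cost against detection delay, and picks the best N.

```python
import numpy as np

def expected_cost(n_checks, span=100.0, rate=0.02, c_check=1.0, c_delay=0.5):
    """Expected cost of n_checks equally spaced inspections over [0, span]."""
    t = np.linspace(0.0, span, 20001)          # grid of possible failure times
    pdf = rate * np.exp(-rate * t)             # exponential failure density
    checks = span * np.arange(1, n_checks + 1) / n_checks
    nxt = np.searchsorted(checks, t).clip(max=n_checks - 1)
    delay = checks[nxt] - t                    # failure-to-detection delay
    exp_delay = np.sum(delay * pdf) * (t[1] - t[0])   # numeric integration
    return c_check * n_checks + c_delay * exp_delay

best = min(range(1, 51), key=expected_cost)    # grid search over N
print(best, round(expected_cost(best), 3))
```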

Details

Journal of Quality in Maintenance Engineering, vol. 10 no. 4
Type: Research Article
ISSN: 1355-2511

Article
Publication date: 20 March 2007

Hossam A. Gabbar

Abstract

Purpose

This paper aims to provide the design of an intelligent, model-based topology analyzer that can be used to improve plant operation.

Design/methodology/approach

The POOM process-modeling methodology is proposed to model and partition plant topology so that plant operation can be designed effectively, mapped to plant topology partitions. Plant topology is divided into areas based on the required operations, and material flow and isolation paths are identified automatically using an intelligent algorithm.
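
Purely as a hypothetical illustration of one step, the snippet below treats plant topology as a graph and finds the boundary elements whose closure isolates an operational area; POOM itself is a much richer methodology, and all node and valve names are invented.

```python
plant = {                                      # adjacency list: unit -> neighbors
    "tank1": ["valve_a"], "valve_a": ["tank1", "pump1"],
    "pump1": ["valve_a", "valve_b"], "valve_b": ["pump1", "reactor"],
    "reactor": ["valve_b", "valve_c"], "valve_c": ["reactor", "tank2"],
    "tank2": ["valve_c"],
}

def isolation_boundary(area):
    """Edges leaving `area`: closing these isolates the partition."""
    return {(u, v) for u in area for v in plant[u] if v not in area}

print(isolation_boundary({"tank1", "valve_a", "pump1"}))
# -> {('pump1', 'valve_b')}: closing valve_b isolates the feed section.
```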

Findings

It is possible to define hierarchical plant structure (topology) partitions that map to the plant operation levels described by ANSI/ISA-S88. In addition, design knowledge can be used to define conceptual partitions such as the Block, which is essential for linking design and operation knowledge. Plant operations (jobs) can be flexibly defined over plant structure partitions in terms of the required resources (materials, plant structural areas) and operation scheduling.

Research limitations/implications

Linking to production scheduling is important to ensure effective real-time use of topology partitions based on available resources. The proposed approach could be improved by integration with intelligent production scheduling.

Practical implications

Production and manufacturing plants can use the proposed approach to design and validate plant operation and to improve plant maintenance while reducing operational risk by identifying the plant structures required for each operation task. The technique can help engineers and R&D staff in designing and investigating process safety, risk management, and plant operation and management.

Originality/value

The idea of topology analysis is quite new; it is usually implemented using search algorithms without considering domain knowledge and operation structures. This paper proposes a valuable technique for linking plant structure with plant operation hierarchies, which is important for design and engineering practice and for R&D activities in plant operation.

Details

Industrial Management & Data Systems, vol. 107 no. 2
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 31 December 2020

Bing Liu, Hongyao Shen, Rongxin Deng, Zeyu Zhou, Jia’ao Jin and Jianzhong Fu

Abstract

Purpose

Additive manufacturing based on arc welding is a fast and effective way to fabricate complex and irregular metal workpieces. Thin-wall metal structures are widely used in industry; however, it is difficult to realize support-free freeform thin-wall structures. This paper aims to propose a new method of non-supporting thin-wall structure (NSTWS) manufacturing by gas metal arc welding (GMAW) with the help of a multi-degree-of-freedom robot arm.

Design/methodology/approach

This study uses geodesic distance on the triangular mesh to build a scalar field from which equidistant iso-polylines are obtained and used as welding paths for thin-wall structures. To address the potential problems of interference and abrupt variation of printing directions, the paper proposes two methods to partition the model mesh and generate new printable iso-polylines on the split meshes.
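
A sketch of the scalar-field step under simplifying assumptions: geodesic distance from a set of source vertices is approximated by Dijkstra's algorithm over mesh edges, and vertices are then binned into equidistant bands whose boundaries play the role of the iso-polylines; exact polyline tracing across triangles is omitted.

```python
import heapq
import numpy as np

def geodesic_field(vertices, edges, sources):
    """vertices: (n, 3) array; edges: iterable of (i, j); sources: vertex ids."""
    adj = {i: [] for i in range(len(vertices))}
    for i, j in edges:
        w = float(np.linalg.norm(vertices[i] - vertices[j]))
        adj[i].append((j, w))
        adj[j].append((i, w))
    dist = np.full(len(vertices), np.inf)
    heap = [(0.0, s) for s in sources]
    for s in sources:
        dist[s] = 0.0
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                            # stale heap entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def iso_bands(dist, layer_height):
    """Assign each vertex to an equidistant band of the scalar field.
    Assumes a connected mesh, so all distances are finite."""
    return np.floor(dist / layer_height).astype(int)
```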

Findings

It is found that, after applying the methods, irregular thin-wall models such as an elbow, a vase or a transition structure can be deposited without any support and with good surface quality.

Originality/value

The experiments producing irregular models illustrate the feasibility and effectiveness of the methods for fabricating NSTWSs, which could provide guidance for industrial applications.

Details

Rapid Prototyping Journal, vol. 27 no. 2
Type: Research Article
ISSN: 1355-2546

Article
Publication date: 3 July 2020

Azra Nazir, Roohie Naaz Mir and Shaima Qureshi

Abstract

Purpose

The trend of "Deep Learning for the Internet of Things (IoT)" has gained fresh momentum, with numerous upcoming applications employing these models as their processing engine and the Cloud as their resource giant. But this picture leads to underutilization of the ever-growing IoT device pool, which had already passed the 15 billion mark in 2015. It is therefore time to explore a different approach, keeping in view the characteristics and needs of the two fields. Processing at the Edge can boost applications with real-time deadlines while complementing security.

Design/methodology/approach

This review paper contributes to three cardinal directions of research in the field of DL for IoT. The first covers the categories of IoT devices and how the Fog can help overcome the underutilization of the millions of devices that form the "things" of IoT. The second addresses the immense computational requirements of DL models by surveying specific compression techniques; an appropriate combination of these, including regularization, quantization and pruning, can build an effective compression pipeline for deploying DL models in IoT use cases. The third incorporates both views and introduces a novel parallelization approach for a distributed-systems view of DL for IoT.
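
As a toy illustration of such a compression pipeline (not any specific framework surveyed in the review), the NumPy snippet below applies magnitude pruning followed by uniform 8-bit quantization to a weight matrix.

```python
import numpy as np

def prune_and_quantize(w, sparsity=0.9, bits=8):
    # Pruning: zero out the smallest-magnitude fraction of weights.
    thresh = np.quantile(np.abs(w), sparsity)
    w = np.where(np.abs(w) < thresh, 0.0, w)
    # Quantization: map surviving weights to 2**bits uniform levels.
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    q = np.round(w / scale).astype(np.int8)    # stored compactly as int8
    return q, scale                            # dequantize with q * scale

w = np.random.default_rng(1).normal(size=(256, 256))
q, scale = prune_and_quantize(w)
print(f"sparsity: {(q == 0).mean():.2%}, max |weight|: {np.abs(q * scale).max():.3f}")
```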

Findings

DL models grow deeper with every passing year, and well-coordinated distributed execution of such models over the Fog promises much for the IoT application realm. A vertically partitioned, compressed deep model can handle the trade-off between size, accuracy, communication overhead, bandwidth utilization and latency, but at the expense of a considerably larger memory footprint. To reduce the memory budget, the authors propose exploiting HashedNets as potentially favorable candidates for distributed frameworks; however, the trade-off point between accuracy and size for such models needs further investigation.
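
The HashedNets idea mentioned above can be sketched in a few lines: entries of a "virtual" weight matrix share slots of a small parameter vector selected by hashing the index, so the full matrix is never materialized. The hash function and dimensions below are illustrative (the original HashedNets also adds a sign hash).

```python
import numpy as np
import zlib

params = np.random.default_rng(0).normal(size=64)   # the real memory budget

def hashed_weight(i, j):
    """Look up virtual weight (i, j) in the shared parameter vector."""
    h = zlib.crc32(f"{i},{j}".encode())
    return params[h % len(params)]

def layer_forward(x, out_dim):
    """y = W x with W never materialized: each entry is hashed on the fly."""
    return np.array([sum(hashed_weight(i, j) * x[j] for j in range(len(x)))
                     for i in range(out_dim)])

print(layer_forward(np.ones(16), out_dim=4))
```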

Originality/value

To the best of the authors' knowledge, no study has explored the inherent parallelism in deep neural network architectures for their efficient distribution over the Edge-Fog continuum. Besides covering techniques and frameworks that have tried to bring inference to the Edge, the review uncovers significant issues and possible future directions for endorsing deep models as processing engines for real-time IoT. The study is directed at both researchers and practitioners seeking to move various applications to the Edge for a better user experience.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 13 no. 3
Type: Research Article
ISSN: 1756-378X
