Search results
11 – 20 of over 66,000

Vinayak Agrawal and Shashikala Tapaswi
Abstract
Purpose
The purpose of this paper is to conduct a forensic analysis of Google Allo messenger on an Android-based mobile phone. The focus was on the analysis of the data stored by this application in the internal memory of the mobile device, with minimal use of third-party applications. The findings were compared with the already existing works on this topic. Android is the most popular operating system for mobile devices, and these devices often contain a massive amount of personal information about the user such as photos and contact details. Analysis of these applications is required in case of a forensic investigation and makes the process easier for forensic analysts.
Design/methodology/approach
Logical acquisition of the data stored by these applications was performed. A locked Android device was used for this purpose. Some scripts are presented to help in data acquisition using Android Debug Bridge (ADB). Manual forensic analysis of the device image was performed to see whether the activities carried out on these applications are stored in the internal memory of the device. A comparative analysis of an existing mobile forensic tool was also performed to show the effectiveness of the methodology adopted.
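The ADB-based logical acquisition described above can be sketched as follows. This is a minimal illustration, not the paper's actual scripts: the Allo package name, the evidence paths and the helper names are assumptions, and the `su` step presumes a rooted device (or equivalent `run-as` access).

```python
# Minimal sketch of ADB logical acquisition; package name, paths and
# helper names are illustrative assumptions, not the paper's scripts.
import subprocess

ALLO_PACKAGE = "com.google.android.apps.fireball"  # assumed package name for Google Allo

def build_pull_commands(package, dest="evidence"):
    """Return the adb commands that copy an app's internal-memory data
    off the device (requires USB debugging and root/run-as access)."""
    staged = f"/sdcard/{package}"
    return [
        # Stage a copy of the app's private data directory on the SD card.
        ["adb", "shell", "su", "-c", f"cp -r /data/data/{package} {staged}"],
        # Pull the staged copy to the examiner's workstation.
        ["adb", "pull", staged, dest],
    ]

def run_acquisition(package=ALLO_PACKAGE, dest="evidence"):
    for cmd in build_pull_commands(package, dest):
        subprocess.run(cmd, check=True)
```

The staged image can then be examined manually (e.g. the application's SQLite databases) without further third-party tooling.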
Findings
Forensic artifacts were recovered from the Allo application. Multimedia content such as images was also retrieved from the internal memory.
Research limitations/implications
As this study was conducted for forensic analysis, it assumed that the mobile device used already has USB debugging enabled, although this might not be applicable in some cases. This work provides an optimal approach to acquiring artifacts with minimal use of third-party applications.
Practical implications
Most mobile devices have messaging applications such as Allo installed. A large amount of personal information can be obtained from the forensic analysis of these applications, which can be useful in any criminal investigation.
Originality/value
This is the first study to focus on the Google Allo application. The proposed methodology extracted almost as much data as earlier approaches, but with minimal third-party application usage.
Xiaoyu Yang, Philip R. Moore, Chi‐Biu Wong, Jun‐Sheng Pu and Seng Kwong Chong
Abstract
Purpose
This paper aims to capture and manage the product lifecycle data for consumer products, especially data that occur in distribution, usage, maintenance and end‐of‐life stages, and to use them to provide information and knowledge.
Design/methodology/approach
A lifecycle information acquisition and management model is proposed, and an information management system framework is formulated. The information management system developed is then used in actual field trials to manage lifecycle data for refrigeration products and game consoles.
Findings
It has been demonstrated that valuable services can be delivered through a lifecycle information management system.
Practical implications
Lifecycle information management systems can open new horizons for sustainable and environmentally sensitive product design. They also contribute to the wider exploration of eco‐design and the development of next-generation consumer products (e.g. smart home appliances).
Originality/value
Existing lifecycle information systems cannot support all phases of the product lifecycle; they mainly manage lifecycle data during the design and manufacture stages. Lifecycle data from the distribution, usage, maintenance and end‐of‐life stages are usually hard to acquire and in most cases are lost. The lifecycle information management system developed can capture these data and manage them in an integrated and systematic manner to provide information and knowledge.
Antonio Acernese, Carmen Del Vecchio, Massimo Tipaldi, Nicola Battilani and Luigi Glielmo
Abstract
Purpose
The purpose of this paper is to describe a model for the design and development of a condition-based maintenance (CBM) strategy for the cutting group of a labeling machine. The CBM aims to ensure the quality of labels' cut and overall machine performances.
Design/methodology/approach
In developing a complete CBM strategy, two main difficulties have to be overcome: (1) appropriately dealing with an incomplete and low-quality production database and (2) selecting the most promising predictive model. The first issue was addressed by applying data cleansing operations and creating an ad hoc methodology to enlarge the training data. The second issue was handled by developing and comparing an empirical model with a machine learning (ML)-based model; the comparison assessed their capabilities in predicting erroneous label cuts on data obtained from an operating plant located in Italy.
Findings
Research results showed that both empirical and ML-based approaches exhibit good performances in detecting the operating conditions of the cutting machine. The advantage of adopting an ML-based model is that it can be used not only as a condition indicator (i.e. a model able to continuously provide the health status of an asset) but also in predictive maintenance policies (i.e. a CBM carried out following a forecast of the degradation of the item).
Research limitations/implications
The study described in this manuscript is based on a labeling machine produced by an international company manufacturing bottling lines for the beverage industry. The proposed approach might need some customization if applied to other industries. Future research can validate the applicability of such models on different rotary machines in other companies and similar industries.
Originality/value
The main contribution of this paper lies in the empirical demonstration of the benefits of CBM and predictive maintenance in manufacturing, through the overcoming of a specific production issue. The large number of variables involved in thin label cutting lines (film thickness between 30 and 38 µm), the high throughput and the high costs due to production interruptions render the prediction of non-conforming labels an economically relevant, albeit challenging, goal. Moreover, despite the large scientific literature on CBM in rolling bearings and face cutting movements, papers dealing with rotary labeling machines are very rare.
Kenneth Lawani, Farhad Sadeghineko, Michael Tong and Mehmethan Bayraktar
Abstract
Purpose
The purpose of this study is to explore the suggestions that construction processes could be considerably improved by integrating building information modelling (BIM) with 3D laser scanning technologies. This case study integrated 3D laser point cloud scans with BIM to explore the effects of BIM adoption on an ongoing construction project, whilst evaluating the utility of 3D laser scanning technology for producing structural 3D models by converting point cloud data (PCD) into BIM.
Design/methodology/approach
The primary data acquisition used the Trimble X7 laser scanning process, which produces a set of data points representing the scanned structure. BIM was then implemented with the 3D PCD to explore the precision and effectiveness of the construction processes; the as-built condition of the structure was captured using 3D laser scanning to recreate accurate 3D models capable of being used to find and fix problems during construction.
Findings
The findings indicate that the integration of BIM and 3D laser scanning technology can mitigate issues such as building rework while improving project completion times, reducing project cost and enhancing interdisciplinary communication, cooperation and collaboration amongst the project duty holders, which ultimately enhances the overall efficiency of the construction project.
Research limitations/implications
The acquisition of data using a 3D laser scanner is usually conducted from the ground. Therefore, certain aspects of the building could potentially disturb data acquisition; for example, the gable and sections of the eaves (fascia and soffit) could be left in a blind spot. Data acquisition using 3D laser scanner technology takes time, and the processing of the vast amount of data acquired is laborious; if not carefully analysed, it could result in errors in the generated models. Furthermore, because this was an ongoing construction project, material stockpiling and planned construction works obstructed and delayed the seamless capture of scanned data points.
Originality/value
These findings highlight the significance of integrating BIM and 3D laser scanning technology in the construction process and emphasise the value of advanced data collection methods for effectively managing construction projects and streamlined workflows.
Ruzairi Abdul Rahim, Chiam Kok Thiam, Jaysuman Pusppanathan and Yvette Shaan‐Li Susiapan
Abstract
Purpose
The purpose of this paper is to view the flow concentration of the flowing material in a pipeline conveyor.
Design/methodology/approach
Optical tomography provides a method to view the cross-sectional image of flowing materials in a pipeline conveyor. Important flow information such as the flow concentration profile, flow velocity and mass flow rate can be obtained without the need to invade the process vessel. The use of a powerful computer together with an expensive data acquisition system (DAQ) as the processing device in optical tomography systems has always been the norm. However, advancements in silicon fabrication technology now allow the fabrication of powerful digital signal processors (DSP) at reasonable cost. This allows the technology to be applied in optical tomography systems to reduce or even eliminate the need for a personal computer and the DAQ. The DSP system was customized to control the data acquisition of 16 × 16 optical sensors (arranged in orthogonal projection) and 23 × 23 optical sensors (arranged in rectilinear projections). The data collected were used to reconstruct the cross-sectional image of flowing materials inside the pipeline. In the developed system, the accuracy of the image reconstruction was increased by 12.5 per cent by using a new hybrid image reconstruction algorithm.
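A common baseline for this kind of reconstruction is linear back projection, in which each cell of the cross-sectional image accumulates the readings of the beams passing through it. The sketch below is a generic illustration under that assumption, not the paper's hybrid algorithm:

```python
import numpy as np

def linear_back_projection(readings, sensitivity_maps):
    """Reconstruct a cross-sectional concentration image by summing each
    beam's sensitivity map weighted by its (normalised) sensor reading.

    readings:         shape (n_beams,) attenuation values
    sensitivity_maps: shape (n_beams, H, W) per-beam pixel weights
    """
    image = np.zeros(sensitivity_maps.shape[1:])
    for reading, smap in zip(readings, sensitivity_maps):
        image += reading * smap
    # Normalise by total beam coverage so unevenly covered pixels compare fairly.
    coverage = sensitivity_maps.sum(axis=0)
    return np.divide(image, coverage, out=np.zeros_like(image), where=coverage > 0)
```

For an orthogonal projection like the 16 × 16 arrangement above, the sensitivity maps would simply be the row and column indicator masks of the pixel grid.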
Findings
The results proved that the data acquisition system and image reconstruction algorithm are capable of acquiring accurate data and reconstructing cross-sectional images with only small error compared to the expected measurements.
Originality/value
The DSP system was customized to control the data acquisition of 16 × 16 optical sensors (arranged in orthogonal projection) and 23 × 23 optical sensors (arranged in rectilinear projections).
Mert Gülçür, Kevin Couling, Vannessa Goodship, Jérôme Charmet and Gregory J. Gibbons
Abstract
Purpose
The purpose of this study is to demonstrate and characterise a soft-tooled micro-injection moulding process through in-line measurements and surface metrology using a data-intensive approach.
Design/methodology/approach
A soft tool for a demonstrator product that mimics the main features of miniature components in medical devices and microsystem components was designed and fabricated using a material jetting technique. The soft tool was then integrated into a mould assembly on the micro-injection moulding machine, and mouldings were made. Sensor and data acquisition devices, including thermal imaging and injection pressure sensing, were set up to collect data for each of the prototypes. Off-line dimensional characterisation of the parts and the soft tool was also carried out to quantify prototype quality and dimensional changes on the soft tool after the manufacturing cycles.
Findings
The data collection and analysis methods presented here enable the evaluation of the quality of the moulded parts in real-time from in-line measurements. Importantly, it is demonstrated that soft-tool surface temperature difference values can be used as reliable indicators for moulding quality. Reduction in the total volume of the soft-tool moulding cavity was detected and quantified up to 100 cycles. Data collected from in-line monitoring was also used for filling assessment of the soft-tool moulding cavity, providing about 90% accuracy in filling prediction with relatively modest sensors and monitoring technologies.
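The reported use of surface temperature difference as a moulding-quality indicator suggests a very simple in-line classifier. The sketch below fits a one-dimensional threshold between the class means; it is an assumed stand-in (including the direction of the effect), since the paper's actual prediction model is not reproduced here:

```python
import numpy as np

def fit_fill_threshold(delta_t, filled):
    """Fit a threshold between the mean soft-tool surface temperature
    differences of filled and short-shot mouldings (hypothetical model;
    assumes filled parts show the higher temperature difference)."""
    return (delta_t[filled].mean() + delta_t[~filled].mean()) / 2.0

def predict_filled(delta_t, threshold):
    """Predict complete filling when the in-line temperature difference
    meets or exceeds the fitted threshold."""
    return delta_t >= threshold
```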
Originality/value
This work presents a data-intensive approach for the characterisation of soft-tooled micro-injection moulding processes for the first time. The overall results of this study show that the product-focussed data-rich approach presented here proved to be an essential and useful way of exploiting additive manufacturing technologies for soft-tooled rapid prototyping and new product introduction.
Christina Öberg, Christina Grundström and Petter Jönsson
Abstract
Purpose
The purpose of the paper is to discuss whether or not an acquisition changes the network identity of an acquired firm and, if so, how. This study aims to bring new insights to the corporate marketing field, as it examines corporate identity in the context of how a company is perceived because of its relationships with other firms. The focus of this research is acquired innovative firms.
Design/methodology/approach
This paper adopts a multiple case study approach. Data on four acquisitions of innovative firms were collected using 41 interviews, which were supplemented with secondary data.
Findings
Based on the case studies, it can be concluded that the network identity of the acquired firms does change following an acquisition. The acquired firms inherited the acquirers' identity, regardless of whether or not the companies were integrated. Previous, present and potential business partners regarded the innovative firms as being more solvent, but distanced themselves. In addition, some of them regarded the innovative firms as competitors.
Practical implications
Changes in the way a firm is perceived by its business partners, following an acquisition, will influence the future business operations of the firm. Expected changes to business relationships should ideally be considered part of due diligence. Acquirers need to consider how they can minimise the risks associated with business partners' changed perceptions of acquired firms.
Originality/value
This paper contributes to the research on identity, through discussion of the consequences of an acquisition for the identity and relationships of a firm. It also contributes to the existing corporate marketing literature, through consideration of perceptions at a network level. Furthermore, this paper contributes to merger and acquisition literature, by highlighting the influence of ownership on relationships with external parties.
Ken Harrison and David Summers
Abstract
As a consequence of both limited funding and a desire to remain independent of any single supplier, the University of Lancaster Library is developing an integrated library package with software based on the Pick operating system. The first stage in the library's automation programme, an acquisitions system, went live in April 1987. This article presents an account of its implementation, and shows how wide participation in its development has resulted in various refinements and in swift acceptance by all levels of staff. A full description of the system is given, showing the day‐to‐day procedures involved and the unlimited enquiry potential provided by the Pick access language. The system is judged a great success, both on its own merits and as the first stage in the library's continuing automation programme.
Qinghua Liu, Lu Sun, Alain Kornhauser, Jiahui Sun and Nick Sangwa
Abstract
Purpose
To realize classification of different pavements, a road roughness acquisition system and an improved restricted Boltzmann machine (RBM) deep neural network algorithm based on the AdaBoost Backward Propagation algorithm for road roughness detection are presented in this paper. The developed measurement system, including hardware design and software algorithms, constitutes an independent system which is low-cost, small and convenient to install.
Design/methodology/approach
The inputs of the restricted Boltzmann machine deep neural network are the vehicle vertical acceleration power spectrum and the pitch acceleration power spectrum, which are calculated using ADAMS finite element software. The AdaBoost Backward Propagation algorithm is used in each restricted Boltzmann machine deep neural network classification model for fine-tuning, given its global search performance. The algorithm is first applied to road spectrum detection, and experiments indicate that it is suitable for detecting pavement roughness.
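RBM pre-training, the unsupervised step that initialises the network before fine-tuning, is typically trained with one-step contrastive divergence (CD-1). The sketch below illustrates that standard update for a Bernoulli RBM; it is a generic textbook version, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_h, b_v, lr=0.1):
    """One contrastive-divergence (CD-1) update for a Bernoulli RBM.
    v0: batch of visible vectors, shape (batch, n_visible).
    Updates W (n_visible, n_hidden), b_h and b_v in place and returns them."""
    h_prob = sigmoid(v0 @ W + b_h)                              # positive-phase hidden probabilities
    h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)  # stochastic hidden states
    v_recon = sigmoid(h_sample @ W.T + b_v)                     # one-step reconstruction
    h_recon = sigmoid(v_recon @ W + b_h)                        # negative-phase hidden probabilities
    # Positive phase minus negative phase, averaged over the batch.
    W += lr * (v0.T @ h_prob - v_recon.T @ h_recon) / len(v0)
    b_h += lr * (h_prob - h_recon).mean(axis=0)
    b_v += lr * (v0 - v_recon).mean(axis=0)
    return W, b_h, b_v
```

Stacking such pre-trained layers and then fine-tuning the whole network with a supervised backpropagation variant is the usual deep-belief-network recipe the paper's approach builds on.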
Findings
The detection rate of the RBM deep neural network algorithm based on AdaBoost Backward Propagation is up to 96 per cent, and the false positive rate is below 3.34 per cent. Both indices are better than those of the other supervised algorithms; the proposed algorithm also performs better in extracting the intrinsic characteristics of the data, thereby improving classification accuracy and quality. The experimental results show that the algorithm can improve the performance of restricted Boltzmann machine deep neural networks. The system can be used for detecting pavement roughness.
Originality/value
This paper presents an improved restricted Boltzmann machine deep neural network algorithm based on AdaBoost Backward Propagation for identifying road roughness. The restricted Boltzmann machine completes pre-training and initialises the sample weights, and the entire neural network is then fine-tuned through the AdaBoost Backward Propagation algorithm; the validity of the algorithm was verified on the MNIST data set. A quarter-vehicle model was used as the foundation, and the vertical acceleration spectrum of the vehicle centre of mass and the pitch acceleration spectrum, obtained by simulation in ADAMS, served as the input samples. The experimental results show that the improved algorithm has better optimization ability, improves the detection rate and can detect road roughness more effectively.
Yee Ling Yap, Yong Sheng Edgar Tan, Heang Kuan Joel Tan, Zhen Kai Peh, Xue Yi Low, Wai Yee Yeong, Colin Siang Hui Tan and Augustinus Laude
Abstract
Purpose
The design process of a bio-model involves multiple factors including data acquisition technique, material requirement, resolution of the printing technique, cost-effectiveness of the printing process and end-use requirements. This paper aims to compare and highlight the effects of these design factors on the printing outcome of bio-models.
Design/methodology/approach
Different data sources, including an engineering drawing, computed tomography (CT) and optical coherence tomography (OCT), were converted to a printable data format. Three different bio-models, namely an ophthalmic model, a retina model and a distal tibia model, were printed using two different techniques, namely PolyJet and fused deposition modelling. The process flow and the 3D printed models were analysed.
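The conversion from acquired scan data to a printable blueprint amounts to segmenting a voxel volume and exporting a surface mesh (typically STL). The sketch below emits the exposed voxel faces as an ASCII STL; it is an illustrative assumption, not the authors' pipeline, and a real workflow would use marching cubes for smooth surfaces:

```python
import numpy as np

def volume_to_ascii_stl(volume, threshold, name="biomodel"):
    """Segment a scanned voxel volume (e.g. stacked CT slices) at the given
    threshold and export the exposed voxel faces as an ASCII STL string."""
    solid = volume > threshold
    # Two triangles (as 3-corner tuples) per unit-cube face, keyed by face normal.
    FACES = {
        (-1, 0, 0): [((0,0,0),(0,1,0),(0,1,1)), ((0,0,0),(0,1,1),(0,0,1))],
        ( 1, 0, 0): [((1,0,0),(1,1,1),(1,1,0)), ((1,0,0),(1,0,1),(1,1,1))],
        ( 0,-1, 0): [((0,0,0),(1,0,1),(1,0,0)), ((0,0,0),(0,0,1),(1,0,1))],
        ( 0, 1, 0): [((0,1,0),(1,1,0),(1,1,1)), ((0,1,0),(1,1,1),(0,1,1))],
        ( 0, 0,-1): [((0,0,0),(1,0,0),(1,1,0)), ((0,0,0),(1,1,0),(0,1,0))],
        ( 0, 0, 1): [((0,0,1),(1,1,1),(1,0,1)), ((0,0,1),(0,1,1),(1,1,1))],
    }
    nx, ny, nz = solid.shape
    lines = [f"solid {name}"]
    for x, y, z in zip(*np.nonzero(solid)):
        for (dx, dy, dz), triangles in FACES.items():
            a, b, c = x + dx, y + dy, z + dz
            if 0 <= a < nx and 0 <= b < ny and 0 <= c < nz and solid[a, b, c]:
                continue  # interior face shared with a neighbouring voxel
            for tri in triangles:
                lines.append(f"facet normal {dx} {dy} {dz}")
                lines.append("  outer loop")
                for cx, cy, cz in tri:
                    lines.append(f"    vertex {x+cx} {y+cy} {z+cz}")
                lines.append("  endloop")
                lines.append("endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)
```

The segmentation threshold is where scan noise enters the blueprint: a threshold set too aggressively on noisy CT or OCT data shifts the recovered surface, which is the inaccuracy discussed under the research limitations below.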
Findings
The data acquisition and 3D printing process affect the overall printing resolution. The design process flows using different data sources were established and the bio-models were printed successfully.
Research limitations/implications
The data acquisition techniques introduced inherent noise, which resulted in inaccuracies during data conversion.
Originality/value
This work showed that the data acquisition and conversion technique had a significant effect on the quality of the bio-model blueprint and subsequently the printing outcome. In addition, important design factors of bio-models were highlighted such as material requirement and the cost-effectiveness of the printing technique. This paper provides a systematic discussion for future development of an engineering design process in three-dimensional (3D) printed bio-models.