Search results
1 – 10 of over 2000
Aanand Davé, Michael Oates, Christopher Turner and Peter Ball
This paper reports on experimentation with an integrated manufacturing and building model to improve energy efficiency. Traditionally, manufacturing and building-facilities…
Abstract
Purpose
This paper reports on experimentation with an integrated manufacturing and building model to improve energy efficiency. Traditionally, manufacturing and building-facilities engineers work independently, with their own performance objectives, methods and software support. However, as progress in resource reduction has been made, further advances have become more challenging. Further opportunities for energy efficiency require an expansion of scope across the functional boundaries of facility, utility and manufacturing assets.
Design/methodology/approach
The design of methods that provide guidance on factory modelling is inductive. The literature review outlines techniques for the simulation of energy efficiency in manufacturing, utility and facility assets. It demonstrates that detailed guidance for modelling across these domains is sparse. Therefore, five experiments are undertaken in the integrated manufacturing, utility and facility simulation software IES<VE>. These evaluate the impact of time-step granularity on the modelling of a paint shop process.
Findings
Experimentation demonstrates that time-step granularity can have a significant impact on the quality of simulation model results. Linear deterioration in results can be assumed from time intervals of 10 minutes and beyond. Therefore, an appropriate logging interval and time-step granularity should be chosen during the data composition process. Time-step granularity is a vital factor in the modelling process, impacting the quality of the simulation results produced.
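To make the granularity effect concrete, the following minimal sketch (not taken from the paper; it uses a hypothetical one-minute load profile and pandas resampling rather than IES<VE>) shows how re-logging the same bursty process at coarser time steps preserves total energy but progressively flattens peak demand:

```python
import numpy as np
import pandas as pd

# Hypothetical 1-minute electrical load profile for a paint-shop-like process:
# short high-power bursts separated by idle periods.
rng = np.random.default_rng(0)
minutes = pd.date_range("2024-01-01", periods=8 * 60, freq="min")
load_kw = np.where(rng.random(len(minutes)) < 0.2, 120.0, 15.0)  # bursty demand
profile = pd.Series(load_kw, index=minutes, name="load_kW")

# Re-log the same profile at coarser time steps and compare: total energy is
# preserved, but peak demand is progressively averaged away.
for step in ["1min", "10min", "30min", "60min"]:
    resampled = profile.resample(step).mean()
    hours_per_interval = pd.Timedelta(step).seconds / 3600
    print(f"{step:>6}: peak={resampled.max():6.1f} kW, "
          f"energy={(resampled * hours_per_interval).sum():8.1f} kWh")
```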
Practical implications
This work supports progress towards sustainable factories by understanding the impact of time-step granularity on data composition, modelling and the quality of simulation results. A better understanding of this granularity factor will guide engineers to use an appropriate level of data and to understand the impact of the choices they are making.
Originality/value
This paper reports on the use of a simulation modelling tool that links the manufacturing, utilities and facilities domains, enabling their joint analysis to reduce factory resource consumption. Currently, there are few available tools to link these areas together; hence, there is little or no understanding of how such combined factory analysis should be conducted to assess and reduce factory resource consumption.
Lizhao Zhang, Jui-Long Hung, Xu Du, Hao Li and Zhuang Hu
Student engagement is a key factor that connects with student achievement and retention. This paper aims to identify individuals' engagement automatically in the classroom with…
Abstract
Purpose
Student engagement is a key factor that connects with student achievement and retention. This paper aims to identify individuals' engagement automatically in the classroom with multimodal data for supporting educational research.
Design/methodology/approach
The video and electroencephalogram data of 36 undergraduates were collected to represent observable and internal information. Since different modal data have different granularity, this study proposed the Fast–Slow Neural Network (FSNN) to detect engagement through both observable and internal information, with an asynchronous structure to preserve the sequence information of data with different granularity.
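As a rough illustration only, the generic two-branch sketch below (written in PyTorch with made-up dimensions; it is not the authors' FSNN architecture) shows the basic idea: a fast branch consumes the high-rate signal (e.g. EEG) and a slow branch the lower-rate signal (e.g. per-second video features), and fusion happens only after each branch has processed its own granularity:

```python
import torch
import torch.nn as nn

class FastSlowSketch(nn.Module):
    """Generic two-rate fusion sketch: a 'fast' branch for a high-rate signal
    and a 'slow' branch for a low-rate signal, fused for a single
    engaged/not-engaged score. All sizes are illustrative."""

    def __init__(self, fast_dim=32, slow_dim=128, hidden=64):
        super().__init__()
        self.fast_rnn = nn.GRU(fast_dim, hidden, batch_first=True)
        self.slow_rnn = nn.GRU(slow_dim, hidden, batch_first=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, fast_seq, slow_seq):
        # Each branch keeps its own temporal granularity; only the final
        # hidden states are aligned and concatenated for fusion.
        _, h_fast = self.fast_rnn(fast_seq)   # (1, batch, hidden)
        _, h_slow = self.slow_rnn(slow_seq)
        fused = torch.cat([h_fast[-1], h_slow[-1]], dim=-1)
        return torch.sigmoid(self.head(fused))

model = FastSlowSketch()
eeg = torch.randn(4, 256, 32)    # 4 clips, 256 high-rate steps, 32 channels (hypothetical)
video = torch.randn(4, 16, 128)  # 4 clips, 16 low-rate feature steps (hypothetical)
print(model(eeg, video).shape)   # torch.Size([4, 1])
```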
Findings
Experimental results show that the proposed algorithm can recognize engagement better than traditional data fusion methods. The results are also analyzed to identify the reasons for the better performance of the proposed FSNN.
Originality/value
This study combined multimodal data from observable and internal aspects to improve the accuracy of engagement detection in the classroom. The proposed FSNN used an asynchronous process to deal with the problem of retaining sequential information when fusing multimodal data with different granularity.
This paper aims to identify how non-financial firms manage their interest rate (IR) exposure. IR risk is complex, as it comprises the unequal cash flow and fair value risks. The…
Abstract
Purpose
This paper aims to identify how non-financial firms manage their interest rate (IR) exposure. IR risk is complex, as it comprises the unequal cash flow and fair value risks. The paper is able to separate both risk types and investigate empirically how the exposure is composed and managed, and whether firms increase or decrease their exposure with derivative transactions.
Design/methodology/approach
The paper examines an unexplored regulatory environment that contains publicly reported IR exposure data on the firms’ exposures before and after hedging. The data were complemented by indicative interviews with four treasury executives of major German corporations, including two DAX-30 firms, to include professional opinions to validate the results.
Findings
The paper provides new empirical insights into how non-financial firms manage their interest rate exposure. It suggests that firms use hedging instruments to swap from fixed- to floating-rate positions predominantly in the short- to medium-term, and that 63 [37] per cent of firms' IR exposure is managed using risk-decreasing [risk-increasing/-constant] strategies.
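For readers unfamiliar with the mechanics, the toy calculation below (hypothetical notional, rates and spread; not data from the study) shows how a receive-fixed/pay-floating swap converts a fixed-rate liability into a floating-rate one, i.e. trades fair value risk for cash flow risk:

```python
# Hypothetical example: a firm with a 100m fixed-rate loan at 4% enters a
# receive-fixed / pay-floating swap (fixed leg 4%, floating leg = reference rate + 0.5%),
# so its net interest cost now moves with the reference rate.
notional = 100_000_000
fixed_loan_rate = 0.04
swap_fixed_leg = 0.04
floating_spread = 0.005

for reference_rate in (0.01, 0.02, 0.03):
    loan_interest = notional * fixed_loan_rate
    swap_receive = notional * swap_fixed_leg
    swap_pay = notional * (reference_rate + floating_spread)
    net_cost = loan_interest - swap_receive + swap_pay
    print(f"reference rate {reference_rate:.0%}: "
          f"net annual interest cost = {net_cost:,.0f} ({net_cost / notional:.2%})")
```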
Practical implications
Interviewed treasury executives suggest that the advanced disclosures benefit various stakeholders, ranging from financial analysts and shareholders to potential investors, through more meaningful analyses of firms' risk management activities. Further, the treasury executives indicate that the new data granularity would enable firms to carry out unprecedented competitive analyses and thereby benchmark and improve their own risk management.
Originality/value
The paper is the first empirical study to analyze the interest rate activities of non-financial firms based on actually reported exposure data before and after hedging, rather than using proxy variables. In addition, the new data granularity enables a separate analysis of the cash flow and fair value risk to focus on the non-financial firms’ requirements.
This paper aims to present the collection and analysis of heterogeneous urban traffic data and their integration through a kernel-based approach for assessing the performance of urban…
Abstract
Purpose
This paper aims to present the collection and analysis of heterogeneous urban traffic data and their integration through a kernel-based approach for assessing the performance of urban transport network facilities. The recent development in sensing and information technology opens up opportunities for researching the use of this vast amount of new urban traffic data. This paper contributes to the analysis and management of urban transport facilities.
Design/methodology/approach
In this paper, the data fusion algorithm is developed by using a kernel-based interpolation approach. Our objective is to reconstruct the underlying urban traffic pattern with fine spatial and temporal granularity by processing and integrating data from different sources. The fusion algorithm can work with data collected at different space-time resolutions, with different levels of accuracy and from different kinds of sensors. The properties and performance of the fusion algorithm are evaluated by using a virtual test bed produced by VISSIM microscopic simulation. The methodology is demonstrated through a real-world application in Central London.
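A minimal sketch of kernel-weighted space-time interpolation of this general kind (the Gaussian kernel form, length scales, sensor weights and data below are illustrative assumptions, not the paper's formulation) might look as follows:

```python
import numpy as np

def kernel_fusion(obs_xyt, obs_values, obs_weights, query_xyt,
                  space_scale=200.0, time_scale=120.0):
    """Generic kernel-weighted interpolation sketch: fuse point observations of
    traffic flow from heterogeneous sensors into estimates at arbitrary
    space-time query points.

    obs_xyt, query_xyt: arrays of (x [m], y [m], t [s]) rows.
    obs_weights: per-observation reliability (e.g. higher for a loop detector
    than for a sparse GPS probe) - purely illustrative values.
    """
    dx = query_xyt[:, None, :2] - obs_xyt[None, :, :2]
    dt = query_xyt[:, None, 2] - obs_xyt[None, :, 2]
    space_k = np.exp(-np.sum(dx ** 2, axis=-1) / (2 * space_scale ** 2))
    time_k = np.exp(-dt ** 2 / (2 * time_scale ** 2))
    w = space_k * time_k * obs_weights[None, :]
    return (w @ obs_values) / w.sum(axis=1)

# Three observations (loop detector, camera, GPS probe) and one query point.
obs = np.array([[0.0, 0.0, 0.0], [300.0, 0.0, 60.0], [150.0, 50.0, 30.0]])
flows = np.array([1800.0, 1500.0, 1650.0])   # veh/h, hypothetical
weights = np.array([1.0, 0.8, 0.3])          # sensor reliability, hypothetical
print(kernel_fusion(obs, flows, weights, np.array([[100.0, 0.0, 20.0]])))
```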
Findings
The results show that the proposed algorithm is able to accurately reconstruct the underlying traffic flow pattern on transport network facilities with ordinary data sources on both the virtual and real-world test beds. The data sources considered herein include loop detectors, cameras and GPS devices. The proposed data fusion algorithm does not require the assumption and calibration of any underlying model. It is easy to implement and compute through advanced techniques such as parallel computing.
Originality/value
The presented study is among the first to utilize and integrate heterogeneous urban traffic data from a major city such as London. Unlike many other existing studies, the proposed method is data driven and does not require any assumption of an underlying model. The formulation of the data fusion algorithm also allows it to be parallelized for large-scale applications. The study contributes to the application of Big Data analytics to infrastructure management.
Aya Khaled Youssef Sayed Mohamed, Dagmar Auer, Daniel Hofer and Josef Küng
Data protection requirements have increased heavily due to the rising awareness of data security, legal requirements and technological developments. Today, NoSQL databases are…
Abstract
Purpose
Data protection requirements have increased heavily due to the rising awareness of data security, legal requirements and technological developments. Today, NoSQL databases are increasingly used in security-critical domains. Current survey works on databases and data security only consider authorization and access control in a very general way and do not address most of today's sophisticated requirements. Accordingly, the purpose of this paper is to discuss authorization and access control for relational and NoSQL database models in detail with respect to requirements and the current state of the art.
Design/methodology/approach
This paper follows a systematic literature review approach to study authorization and access control for different database models. Starting with research on survey works on authorization and access control in databases, the study continues with the identification and definition of advanced authorization and access control requirements, which are generally applicable to any database model. This paper then discusses and compares current database models based on these requirements.
Findings
As no survey works have so far considered requirements for authorization and access control across different database models, the authors define their own requirements. Furthermore, the authors discuss the current state of the art for the relational, key-value, column-oriented, document-based and graph database models in comparison to the defined requirements.
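As a toy illustration of the kind of fine-grained, document-level authorization requirement such comparisons deal with (a generic role-based sketch, not tied to any particular database product or to the paper's requirement catalogue):

```python
from dataclasses import dataclass, field

@dataclass
class Role:
    name: str
    # action -> set of collections the role may perform it on, e.g. {"read": {"patients"}}
    permissions: dict = field(default_factory=dict)

@dataclass
class User:
    name: str
    roles: list
    department: str = ""

def is_authorized(user, action, collection, document=None):
    """Toy role-based check with an optional document-level (fine-grained)
    condition: a user may only access documents of their own department."""
    role_ok = any(collection in role.permissions.get(action, set())
                  for role in user.roles)
    if not role_ok:
        return False
    if document is not None and "department" in document:
        return document["department"] == user.department
    return True

nurse = Role("nurse", {"read": {"patients"}})
alice = User("alice", [nurse], department="cardiology")
print(is_authorized(alice, "read", "patients", {"id": 1, "department": "cardiology"}))  # True
print(is_authorized(alice, "read", "patients", {"id": 2, "department": "oncology"}))    # False
print(is_authorized(alice, "write", "patients"))                                        # False
```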
Originality/value
This paper focuses on authorization and access control for various database models, not concrete products. This paper identifies today’s sophisticated – yet general – requirements from the literature and compares them with research results and access control features of current products for the relational and NoSQL database models.
Senan Kiryakos and Shigeo Sugimoto
Multiple studies have illustrated that the needs of various users seeking descriptive bibliographic data for pop culture resources (e.g. manga, anime, video games) have not been…
Abstract
Purpose
Multiple studies have illustrated that the needs of various users seeking descriptive bibliographic data for pop culture resources (e.g. manga, anime, video games) have not been properly met by cultural heritage institutions and traditional models. With a focus on manga as the central resource, the purpose of this paper is to address these issues to better meet user needs.
Design/methodology/approach
Based on an analysis of existing bibliographic metadata, this paper proposes a unique bibliographic hierarchy for manga that is also extendable to other pop culture resources. To better meet user requirements for descriptive data, an aggregation-based approach relying on the Open Archives Initiative Object Reuse and Exchange (OAI-ORE) model utilizes existing, fan-created data on the web.
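A minimal sketch of such an aggregation in RDF, built with rdflib and entirely hypothetical identifiers (the paper's actual hierarchy and data providers are not reproduced here), could look like this:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

ORE = Namespace("http://www.openarchives.org/ore/terms/")

g = Graph()
g.bind("ore", ORE)
g.bind("dcterms", DCTERMS)

# Hypothetical identifiers: one series-level aggregation pulls together the
# descriptions of the same manga held by different (fan-created) providers.
aggregation = URIRef("http://example.org/aggregations/example-manga")
provider_records = [
    URIRef("http://example.org/provider-a/example-manga"),
    URIRef("http://example.org/provider-b/example-manga"),
]

g.add((aggregation, RDF.type, ORE.Aggregation))
g.add((aggregation, DCTERMS.title, Literal("Example manga (series-level aggregation)")))
for record in provider_records:
    g.add((aggregation, ORE.aggregates, record))

print(g.serialize(format="turtle"))
```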
Findings
Compared to existing models, the proposed hierarchy is better able to portray the multiple entities of manga as they exist across data providers, while the use of OAI-ORE-based aggregation to build and provide bibliographic metadata for this hierarchy resulted in levels of description that more adequately meet user demands.
Originality/value
Though studies have proposed alternative models for resources such as games or comics, manga has remained unexamined. As manga is a major component of many popular multimedia franchises, focusing on it here, while building the model with the intention of supporting other resource types, provides a foundation for future work seeking to incorporate these resources.
Timo Smura, Antero Kivi and Juuso Töyli
Collecting and analysing data on mobile service usage is increasingly complex as usage diverges between different types of devices and networks. The purpose of this paper is to…
Abstract
Purpose
Collecting and analysing data on mobile service usage is increasingly complex as usage diverges between different types of devices and networks. The purpose of this paper is to suggest and apply a holistic framework that helps in designing mobile service usage research as well as in communicating, positioning, and comparing research results.
Design/methodology/approach
The framework was constructed based on longitudinal and cross‐sectional mobile service usage measurements carried out in Finland annually in 2005‐2008, covering 80‐90 percent of all mobile users and service usage. Broad use of multiple data collection methods and measurement points enabled data and method triangulation, as well as analysis and comparison of their scopes and limitations.
Findings
The paper suggests a holistic framework for analysing mobile services, relying on a service science approach. For measurements and analysis, mobile services are decomposed into four technical components: devices, applications, networks, and content. The paper further presents classifications for each component and discusses their relationships with possible measurement points. The framework is applied to mobile browsing usage studies.
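Purely as an illustration of coding a single usage observation along those four components plus a measurement point (the category values below are invented, not the paper's classifications):

```python
from dataclasses import dataclass

@dataclass
class UsageObservation:
    """Toy coding of one mobile-browsing usage record along the framework's
    four technical components plus the measurement point that captured it."""
    device: str             # e.g. "smartphone", "feature phone", "laptop with dongle"
    application: str        # e.g. "browser", "native app"
    network: str            # e.g. "cellular", "WLAN"
    content: str            # e.g. "news", "social media", "operator portal"
    measurement_point: str  # e.g. "handset panel", "operator traffic trace"

obs = UsageObservation("smartphone", "browser", "WLAN", "news", "handset panel")
print(obs)
```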
Research limitations/implications
Future work includes adding an actor dimension to the framework in order to analyse actors' roles in the value networks providing mobile services. Extending the framework to Internet services more generally is also possible.
Originality/value
The paper presents an original, broadly applicable framework for designing mobile service usage research, and communicating, positioning, and comparing research results. The framework helps academics and practitioners to design and to recognise the limitations of mobile service usage studies, and to avoid misinterpretations based on insufficient data.
This paper discusses the factors that will influence a publisher's choice of cost‐effective electronic and media delivery techniques while allowing growth potential that serves…
Abstract
This paper discusses the factors that will influence a publisher's choice of cost‐effective electronic and media delivery techniques while allowing growth potential that serves the needs of end‐users. The paper describes the use of diskettes, Bernoulli removable cartridge disks, CD‐ROMs, modems using dial‐up phone lines, leased phone lines, satellites, and FM subcarriers. Sample data sizings evaluated include spreadsheet and word processing application programs, catalog distribution, CAD and accounting data files, online databases, news and financial information, and database updates. The economics of each delivery technique are given as well as the advantages and disadvantages.
Elizabeth Shepherd, Anna Sexton, Oliver Duke-Williams and Alexandra Eveleigh
Government administrative data have enormous potential for public and individual benefit through improved educational and health services to citizens, medical research…
Abstract
Purpose
Government administrative data have enormous potential for public and individual benefit through improved educational and health services to citizens, medical research, environmental and climate interventions and better use of scarce energy resources. The purpose of this study (part of the Administrative Data Research Centre in England, ADRC-E) was to examine perspectives about the sharing, linking and re-use (secondary use) of government administrative data. This study seeks to establish an analytical understanding of risk with regard to administrative data.
Design/methodology/approach
This qualitative study focused on the secondary use of government administrative data by academic researchers. Data collection was through 44 semi-structured interviews plus one focus group, and was supported by documentary analysis and a literature review. The study draws on the views of expert data researchers, data providers, regulatory bodies, research funders, lobby groups, information practitioners and data subjects.
Findings
This study discusses the identification and management of risk in the use of government administrative data and presents a risk framework.
Practical implications
This study will have resonance with records managers, risk managers, data specialists, information policy and compliance managers, and citizen groups that engage with data, as well as all those responsible for the creation and management of government administrative data.
Originality/value
First, this study identifies and categorizes the risks arising from the research use of government administrative data, based on policy, practice and experience of those involved. Second, it identifies mitigating risk management activities, linked to five key stakeholder communities, and it discusses the locus of responsibility for risk management actions. The conclusion presents the elements of a new risk framework to inform future actions by the government data community and enable researchers to exploit the power of administrative data for public good.
Gennaro Maione, Daniela Sorrentino and Alba Demneri Kruja
At exceptional times, governments are entrusted with greater authority. This creates significant concerns over governments’ transparency and accountability. This paper aims to…
Abstract
Purpose
At exceptional times, governments are entrusted with greater authority. This creates significant concerns over governments' transparency and accountability. This paper aims to pursue a twofold objective: assessing the patterns of open government data during the extraordinary time initiated by the COVID-19 pandemic, and drawing relevant policy and managerial implications regarding the future development of open data as a mechanism of accountability at times of exception.
Design/methodology/approach
The study follows an exploratory research approach, relying on web content analysis. The empirical setting is provided by 20 Italian regional governments during the COVID-19 pandemic, a shock that triggered an exceptional time for governments.
Findings
Results on the desirable (extrinsic and intrinsic) characteristics of the data analyzed show that, in the empirical setting investigated, open data does not make it possible to properly address the accountability concerns of a demanding forum at times of exception.
Research limitations/implications
The paper enriches the state of the art on accountability and provides both scholars and practitioners (e.g. policymakers and managers) with a current reading of data-driven orientation as a stimulus to the accountability of public administrations during exceptional times.
Originality/value
The paper investigates open data as a condition of public accountability, assessing whether and how Italian regional governments have concretely opened their data to enable their forums to form an informed opinion about their conduct during the ongoing pandemic. This fosters the understanding of how accountability is deployed in times of exception in light of the possibilities offered by the availability of online platforms.