Search results

11 – 20 of over 163,000
Article
Publication date: 11 July 2016

Wonil Lee and Giovanni Ciro Migliaccio

Abstract

Purpose

The purpose of this paper was to investigate the physiological cost of concrete construction activities.

Design/methodology/approach

Five concrete construction workers were recruited. The workers’ three-week heart rate (HR) data were collected in summer and autumn. In this paper, several HR indexes were used to investigate the physiological cost of work in concrete construction trades, including average working HR, relative HR and ratio of working HR to resting HR.
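
As a rough illustration of the indexes named above, the sketch below computes average working HR, the working-to-resting HR ratio and relative HR from a series of HR samples. It assumes the commonly used %HRR definition of relative HR and an age-based HRmax estimate; the paper's exact definitions and data handling are not reproduced, and all function names and values are illustrative.

```python
# Illustrative sketch (not the paper's code): common heart-rate (HR) indexes used
# to gauge physiological workload, assuming the standard %HRR definition.

from statistics import mean

def hr_indexes(working_hr, resting_hr, age):
    """Compute average working HR, working/resting HR ratio and relative HR (%HRR).

    working_hr : list of HR samples (beats per minute) recorded during work
    resting_hr : resting HR (bpm)
    age        : worker age in years, used for the common 220 - age HRmax estimate
    """
    avg_working_hr = mean(working_hr)
    hr_ratio = avg_working_hr / resting_hr
    hr_max = 220 - age  # simple age-based estimate; other formulas exist
    relative_hr = (avg_working_hr - resting_hr) / (hr_max - resting_hr) * 100
    return {"avg_working_hr": avg_working_hr,
            "working_to_resting_ratio": hr_ratio,
            "relative_hr_pct": relative_hr}

# Example: a short sequence of working HR samples for a 40-year-old worker
print(hr_indexes([105, 112, 118, 110, 120], resting_hr=68, age=40))
```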

Findings

This paper measures how absolute and relative HRs vary throughout a workday and how working HR compares to resting HR for individual workers.

Research limitations/implications

Field observations are usually extremely difficult as researchers need to overcome a number of barriers, including employers’ resistance to perceived additional liabilities, employees’ fear that their level of activity will be reported to managers and many other practical and technical difficulties. As these challenges increase exponentially with the number of employers, subjects and sites, this study was limited to a small number of subjects all working for the same employer on the same jobsite. Still, challenges are often unpredictable and lessons learned from this study are expected to guide both our and other researchers’ continuation of this work.

Originality/value

The time effect on the physiological cost of work has not been considered in previous studies. Thus, this study is noteworthy owing to the depth of the data collected rather than the breadth of the data.

Details

Construction Innovation, vol. 16 no. 3
Type: Research Article
ISSN: 1471-4175

Article
Publication date: 1 April 1980

Linda C. Smith

Abstract

Over the past decade machine‐readable data bases have grown both in number and variety. In addition to the familiar bibliographic data bases such as MEDLINE and ERIC, one now finds data bases containing such things as properties (e.g., RTECS ‐ Registry of Toxic Effects of Chemical Substances) and full text (e.g., LEXIS, a family of files that contains the full text of court decisions, statutes, regulations, and other legal materials). As data bases increase in importance as information resources, there is a growing need for printed tools which can assist librarians in their identification and use. Available tools fall into three categories: (1) guides issued by data base producers which describe the contents of a given data base and methods of searching (e.g., INSPEC Database Users' Guide); (2) guides produced by online vendors which indicate how data bases can be searched on a particular system (e.g., Lockheed's Guide to DIALOG ‐ Databases); and (3) data base directories which include coverage of data bases produced by many different organizations and processed by a variety of online vendors. The third category is the subject of this comparative review. Readers interested in the first two categories should consult Online Reference Aids: A Directory of Manuals, Guides, and Thesauri published by the California Library Authority for Systems and Services (CLASS). This publication contains information on manuals, guides, and other search aids for over 100 online data bases, including those available through the New York Times Information Bank, National Library of Medicine (NLM), Bibliographic Retrieval Services (BRS), Lockheed DIALOG, and System Development Corporation (SDC) ORBIT. This directory is arranged by data base name, giving ordering and price information for aids available from both data base producers and online vendors. Subject and vendor indexes are also provided.

Details

Reference Services Review, vol. 8 no. 4
Type: Research Article
ISSN: 0090-7324

Article
Publication date: 1 February 1994

Indira Mahalingam Carr and Katherine S. Williams

Abstract

Banking and insurance sectors have witnessed an increased use of computers for transferring all types of information. There is no uniformity in the extent to which different Member States of the European Community (EC) protect a data subject's right to privacy. The EC has sought to harmonise the disparate laws while trying to maintain a balance between the free flow of information and the individual's right to privacy. This paper considers in brief some of the provisions put forward in the revised draft directive of the Council that are likely to affect the insurance and business industries.

Details

Journal of Financial Crime, vol. 2 no. 1
Type: Research Article
ISSN: 1359-0790

Book part
Publication date: 3 June 2008

Nathaniel T. Wilcox

Abstract

Choice under risk has a large stochastic (unpredictable) component. This chapter examines five stochastic models for binary discrete choice under risk and how they combine with “structural” theories of choice under risk. Stochastic models are substantive theoretical hypotheses that are frequently testable in and of themselves, and also identifying restrictions for hypothesis tests, estimation and prediction. Econometric comparisons suggest that for the purpose of prediction (as opposed to explanation), choices of stochastic models may be far more consequential than choices of structures such as expected utility or rank-dependent utility.
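
To make the pairing of a stochastic model with a structural theory concrete, the sketch below implements one common combination: a logistic (Fechner-style "strong utility") choice rule layered on CRRA expected utility for a binary lottery choice. This is an illustrative assumption, not a reproduction of the five models compared in the chapter, and all parameter values are made up.

```python
# Illustrative sketch (not the chapter's code): a logit ("Fechner"-style) stochastic
# choice rule layered on a CRRA expected-utility structure for binary lottery choice.

import math

def crra_utility(x, r):
    """CRRA utility of outcome x; r is the coefficient of relative risk aversion."""
    return math.log(x) if abs(r - 1.0) < 1e-9 else x ** (1 - r) / (1 - r)

def expected_utility(lottery, r):
    """lottery: list of (probability, outcome) pairs."""
    return sum(p * crra_utility(x, r) for p, x in lottery)

def prob_choose_a(lottery_a, lottery_b, r, noise):
    """Probability of choosing A over B under a logistic choice function.

    noise > 0 scales the stochastic component: large noise pushes choices toward
    50/50, small noise makes them close to deterministic EU maximization.
    """
    diff = expected_utility(lottery_a, r) - expected_utility(lottery_b, r)
    return 1.0 / (1.0 + math.exp(-diff / noise))

# Example: a safe lottery vs. a risky one, for a moderately risk-averse agent
safe = [(1.0, 30.0)]
risky = [(0.5, 60.0), (0.5, 5.0)]
print(prob_choose_a(safe, risky, r=0.5, noise=0.1))
```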

Details

Risk Aversion in Experiments
Type: Book
ISBN: 978-1-84950-547-5

Book part
Publication date: 19 July 2022

Claire Farrugia, Simon Grima and Kiran Sood

Abstract

Purpose: This chapter sets out to lay out and analyse the effectiveness of the General Data Protection Regulation (GDPR), a recently established European Union (EU) regulation, in the local insurance industry.

Methodology: This was done through a systematic literature review to determine what has already been done and then a survey as a primary research tool to gather information. The survey was aimed at clients and employees of insurance entities.

Findings: The general results are that effectiveness can be segmented into different factors and varies with the respondents’ confidence. Other findings are that the GDPR has increased costs and that its expectations are unclear. These findings suggest that although the GDPR was influential in the insurance market, some issues with this regulation still exist.

Conclusions: The GDPR fulfils its purposes; however, implementation would be easier if clearer guidelines were issued for entities to follow, helping them understand its expectations, comply with the law and fulfil its purposes more efficiently.

Practical implications: These conclusions imply that the GDPR can be improved in the future. Overall, as a regulation, it is suitable for the different member states of the EU, including small states like Malta.

Details

Big Data: A Game Changer for Insurance Industry
Type: Book
ISBN: 978-1-80262-606-3

Article
Publication date: 11 July 2016

Matthew D Dean, Dinah M Payne and Brett J.L. Landry

Abstract

Purpose

The purpose of this paper is to advocate for and provide guidance for the development of a code of ethical conduct surrounding online privacy policies, including those concerning data mining. The hope is that this research generates thoughtful discussion on the issue of how to make data mining more effective for the business stakeholder while at the same time making it a process done in an ethical way that remains effective for the consumer. The recognition of the privacy rights of data mining subjects is paramount within this discussion.

Design/methodology/approach

The authors derive foundational principles for ethical data mining. First, philosophical literature on moral principles is used as the theoretical foundation. Then, using existing frameworks, including legislation and regulations from a range of jurisdictions, a compilation of foundational principles was derived. This compilation was then evaluated and honed through the integration of stakeholder perspective and the assimilation of moral and philosophical precepts. Evaluating a sample of privacy policies hints that current practice does not meet the proposed principles, indicating a need for changes in the way data mining is performed.

Findings

A comprehensive framework for the development of a contemporary code of conduct and proposed ethical practices for online data mining was constructed.

Research limitations/implications

This paper provides a configuration upon which a code of ethical conduct for performing data mining, tailored to meet the particular needs of any organization, can be designed.

Practical implications

The implications of data mining, and of a code of ethical conduct regulating it, are far-reaching. Implementation of such principles serves to improve consumer and stakeholder confidence, ensure the enduring compliance of data providers and the integrity of data collectors, and foster confidence in the security of data mining.

Originality/value

Existing legal mandates alone are insufficient to properly regulate data mining; supplemental reference to ethical considerations and stakeholder interests is therefore required. The adoption of a functional code of general application is essential to address the growing apprehension regarding online privacy.

Details

Journal of Enterprise Information Management, vol. 29 no. 4
Type: Research Article
ISSN: 1741-0398

Article
Publication date: 13 February 2019

Darra Hofman, Victoria Louise Lemieux, Alysha Joo and Danielle Alves Batista

Abstract

Purpose

This paper aims to explore a paradoxical situation, asking whether it is possible to reconcile the immutable ledger known as blockchain with the requirements of the General Data Protection Regulations (GDPR), and more broadly privacy and data protection.

Design/methodology/approach

This paper combines doctrinal legal research examining the GDPR’s application and scope with case studies examining blockchain solutions from an archival theoretic perspective to answer several questions, including: What risks are blockchain solutions said to impose (or mitigate) for organizations dealing with data that is subject to the GDPR? What are the relationships between the GDPR principles and the principles of archival theory? How can these two sets of principles be aligned within a particular blockchain solution? How can archival principles be applied to blockchain solutions so that they support GDPR compliance?

Findings

This work will offer an initial exploration of the strengths and weaknesses of blockchain solutions for GDPR compliant information governance. It will present the disjunctures between GDPR requirements and some current blockchain solution designs and implementations, as well as discussing how solutions may be designed and implemented to support compliance. Immutability of information recorded on a blockchain is a differentiating positive feature of blockchain technology from the perspective of trusted exchanges of value (e.g. cryptocurrencies) but potentially places organizations at risk of non-compliance with GDPR if personally identifiable information cannot be removed. This work will aid understanding of how blockchain solutions should be designed to ensure compliance with GDPR, which could have significant practical implications for organizations looking to leverage the strengths of blockchain technology to meet their needs and strategic goals.
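
One commonly discussed tactic for reconciling immutability with the GDPR’s erasure requirement (not necessarily the design this paper evaluates) is to keep personal data encrypted off-chain and anchor only a hash of it on-chain, so that discarding the decryption key renders the data inaccessible. The sketch below is a hypothetical illustration of that general pattern; the class, its methods and the toy cipher are assumptions made purely for illustration.

```python
# Hypothetical sketch of one common blockchain/GDPR tactic: store personal data
# encrypted off-chain, anchor only a hash on-chain, and "erase" by destroying the key.
# This illustrates the general pattern, not the paper's framework.

import hashlib
import secrets

class OffChainStore:
    """Off-chain repository holding encrypted personal data keyed by record id."""
    def __init__(self):
        self._records = {}

    def put(self, record_id, plaintext: bytes, key: bytes) -> str:
        # Toy XOR cipher for brevity; a real system would use an authenticated cipher.
        ciphertext = bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))
        self._records[record_id] = ciphertext
        return hashlib.sha256(ciphertext).hexdigest()  # digest to anchor on-chain

    def get(self, record_id, key: bytes) -> bytes:
        ciphertext = self._records[record_id]
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(ciphertext))

# The customer holds the key; the (immutable) chain holds only the digest.
store = OffChainStore()
customer_key = secrets.token_bytes(32)
onchain_digest = store.put("cust-001", b"Jane Doe, DOB 1990-01-01", customer_key)
print("anchored on-chain:", onchain_digest)
print("decrypted:", store.get("cust-001", customer_key))
# "Erasure": discard customer_key; the ciphertext and on-chain digest reveal nothing usable.
```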

Research limitations/implications

Some aspects of the social layer of blockchain solutions, such as law and business procedures, are also well understood. Much less well understood is the data layer, and how it serves as an interface between the social and the technical in a sociotechnical system like blockchain. In addition to a need for more research about the data/records layer of blockchains and compliance, there is a need for more information governance professionals who can provide input on this layer, both to their organizations and other stakeholders.

Practical implications

Managing personal data will continue to be one of the most challenging, fraught issues for information governance moving forward; given the fairly broad scope of the GDPR, many organizations, including those outside of the EU, will have to manage personal data in compliance with the GDPR. Blockchain technology could play an important role in ensuring organizations have easily auditable, tamper-resistant, tamper-evident records to meet broader organizational needs and to comply with the GDPR.

Social implications

Because the GDPR professes to be technology-neutral, understanding its application to novel technologies such as blockchain provides an important window into the broader context of compliance in evolving information governance spaces.

Originality/value

The specific question of how GDPR will apply to blockchain information governance solutions is almost entirely novel. It has significance to the design and implementation of blockchain solutions for recordkeeping. It also provides insight into how well “technology-neutral” laws and regulations actually work when confronted with novel technologies and applications. This research will build upon significant bodies of work in both law and archival science to further understand information governance and compliance as we are shifting into the new GDPR world.

Details

Records Management Journal, vol. 29 no. 1/2
Type: Research Article
ISSN: 0956-5698

Article
Publication date: 26 November 2020

Muhammad Al-Abdullah, Izzat Alsmadi, Ruwaida AlAbdullah and Bernie Farkas

Abstract

Purpose

The paper posits that a solution for businesses seeking privacy-friendly repositories for their customers’ data is to change from the traditional centralized repository to a trusted, decentralized one. Blockchain is a technology that provides such a repository. However, the European Union’s General Data Protection Regulation (GDPR) assumed a centralized data repository, and it is commonly argued that blockchain technology therefore cannot be used in compliance with it. This paper aims to posit a framework for adopting a blockchain that follows the GDPR.

Design/methodology/approach

The paper uses Levy and Ellis’s narrative literature review methodology, which is based on the constructivist theory posited by Lincoln and Guba. Searching five information systems and computer science databases, the researchers looked for studies using the keywords GDPR and blockchain, applying a forward and backward search technique. The search identified a corpus of 416 candidate studies, from which the researchers applied pre-established criteria to select 39 studies. The researchers mined this corpus for concepts, which they clustered into themes. Using the accepted computer science practice of privacy by design, the researchers combined the clustered themes into the paper’s posited framework.

Findings

The paper posits a framework that provides architectural tactics for designing a blockchain that follows GDPR to enhance privacy. The framework explicitly addresses the challenges of GDPR compliance using the unimagined decentralized storage of personal data. The framework addresses the blockchain–GDPR tension by establishing trust between a business and its customers vis-à-vis storing customers’ data. The trust is established through blockchain’s capability of providing the customer with private keys and control over their data, e.g. processing and access.

Research limitations/implications

The paper provides a framework that demonstrates that blockchain technology can be designed for use in GDPR compliant solutions. In using the framework, a blockchain-based solution provides the ability to audit and monitor privacy measures, demonstrates a legal justification for processing activities, incorporates a data privacy policy, provides a map for data processing and ensures security and privacy awareness among all actors. The research is limited to a focus on blockchain–GDPR compliance; however, future research is needed to investigate the use of the framework in specific domains.

Practical implications

The paper posits a framework that identifies the strategies and tactics necessary for GDPR compliance. Practitioners need to complement the framework with rigorous privacy risk management, i.e. conducting a privacy risk analysis, identifying strategies and tactics to address such risks and preparing a privacy impact assessment that enhances the accountability and transparency of a blockchain.

Originality/value

With the increasingly strategic use of data by businesses and the countervailing growth of data privacy regulation, alternative technologies could provide businesses with a means to nurture trust with their customers regarding collected data. However, it is commonly assumed that the decentralized approach of blockchain technology cannot be applied to this business need. This paper posits a framework that enables a blockchain to be designed that follows the GDPR, thereby providing an alternative for businesses to collect customers’ data while ensuring the customers’ trust.

Details

Digital Policy, Regulation and Governance, vol. 22 no. 5/6
Type: Research Article
ISSN: 2398-5038

Article
Publication date: 13 June 2008

Araby Greene

Abstract

Purpose

The purpose of this paper is to report on the content management solution for 50 subject guides maintained by librarian subject specialists at the University of Nevada, Reno Libraries.

Design/methodology/approach

The Web Development Librarian designed an SQL Server database to store subject guide content and wrote ASP.Net scripts to generate dynamic web pages. Subject specialists provided input throughout the process. Hands‐on workshops were held the summer before the new guides were launched.
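
The original solution used SQL Server and ASP.Net; as a hypothetical, simplified illustration of the same idea (content stored centrally in a database, presentation generated from a template), the sketch below uses Python’s sqlite3 and string templates. Table names, fields and sample rows are invented for illustration and do not reflect the Libraries’ actual schema.

```python
# Hypothetical sketch of the centralize-content / separate-presentation idea behind
# database-driven subject guides. The original used SQL Server + ASP.Net; this uses
# sqlite3 and a string template purely for illustration.

import sqlite3
from string import Template

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE guide_resources (
                    guide   TEXT,   -- subject guide name
                    section TEXT,   -- e.g. 'Databases', 'Journals'
                    title   TEXT,
                    url     TEXT)""")
conn.executemany(
    "INSERT INTO guide_resources VALUES (?, ?, ?, ?)",
    [("Chemistry", "Databases", "SciFinder", "https://example.org/scifinder"),
     ("Chemistry", "Databases", "Web of Science", "https://example.org/wos")])

page = Template("<h1>$guide subject guide</h1>\n$items")
item = Template('<li><a href="$url">$title</a> ($section)</li>')

rows = conn.execute(
    "SELECT section, title, url FROM guide_resources WHERE guide = ?", ("Chemistry",))
items = "\n".join(item.substitute(section=s, title=t, url=u) for s, t, u in rows)
print(page.substitute(guide="Chemistry", items=items))
```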

Findings

The new method has successfully produced consistent but individually customized subject guides while greatly reducing maintenance time. Simple reports reveal the association between guides and licensed resources. Using the system to create course‐specific guides would be a useful follow‐up project. Skills learned in training workshops should be refreshed at regular intervals to boost confidence and introduce changes in the system.

Practical implications

The advantages of centralizing content and separating it from presentation cannot be overstated. More consistency and less maintenance is just the beginning. Once accomplished, a library can incorporate Web 2.0 features into the application by repurposing the data or modifying the ASP.Net template. The now‐organized data is clean and ready to migrate to web services or next‐generation research guides when the time is right.

Originality/value

This paper uniquely reports on an SQL Server, ASP.Net solution for managing subject guides. SQL Server includes data management features that increase application security and ASP.Net offers built‐in functionality for manipulating and presenting data. Utmost attention was given to creating simple user interfaces that enable subject specialists to create complex web pages without coding HTML.

Details

Library Hi Tech, vol. 26 no. 2
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 23 May 2019

Jane Cho

Abstract

Purpose

Research data repositories (RDRs) have become an essential academic infrastructure in an atmosphere that promotes the openness of research output supported by public research funds. This study aims to understand the operational status of 152 Asian data repositories listed on re3data and to cluster them into four groups according to their operational status. In addition, it aims to identify the main subject areas of RDRs in Asian countries and to understand what topic correlations exist among the data archived in those countries.

Design/methodology/approach

This study extracts metadata from re3data and analyzes it in various ways to grasp the current status of research data repositories in Asian countries. The author clusters the repositories into four groups using hierarchical cluster analysis according to their level of operation. In addition, to identify the main subject areas of RDRs in Asian countries, the author extracted the keywords of the subject field assigned to each repository and performed a Pathfinder Network (PFNET) analysis.
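
As a minimal sketch of the clustering step described above (not the study’s code), the example below applies agglomerative hierarchical clustering to a toy matrix of binary operational indicators and cuts the result into four groups. The indicator columns are assumptions suggested by the findings, and the PFNET keyword analysis is not reproduced.

```python
# Illustrative sketch (not the study's code): hierarchical clustering of repositories
# by operational indicators, cut into four groups as in the study. Feature values
# below are made up.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Rows = repositories; columns = binary operational indicators:
# [license/policy declared, persistent identifiers granted, certification obtained]
features = np.array([
    [1, 0, 0],
    [1, 0, 0],
    [1, 1, 0],
    [1, 1, 1],
    [0, 0, 0],
    [1, 0, 1],
])

Z = linkage(features, method="ward")             # agglomerative clustering
labels = fcluster(Z, t=4, criterion="maxclust")  # cut the dendrogram into 4 groups
for repo, group in enumerate(labels, start=1):
    print(f"repository {repo} -> cluster {group}")
```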

Findings

About 70 per cent of the Asian repositories declare licenses or policies but have not been granted permanent identifiers or international-level certification. The subject domain analysis yields eight clusters, centering on the life sciences and natural sciences.

Originality/value

Research output in developing countries, especially non-English-speaking countries, tends not to circulate smoothly in the international community owing to the immaturity of the open-access culture, as well as linguistic and technical problems. This study has value in that it investigates the status of Asian countries’ research data management and distribution infrastructure within global open-science trends.

Details

The Electronic Library, vol. 37 no. 2
Type: Research Article
ISSN: 0264-0473
