Search results
1 – 10 of over 8000
Călin Mihail Rangu, Leonardo Badea, Mircea Constantin Scheau, Larisa Găbudeanu, Iulian Panait and Valentin Radu
Abstract
Purpose
In recent years, the frequency and severity of cybersecurity incidents have prompted customers to seek out specialized insurance products. However, this has also presented insurers with operational challenges and increased costs. The assessment of risks for health systems and cyber–physical systems (CPS) requires a heightened degree of attention. The significant value of potential damages and claims calls for a solid insurance system as part of cyber-resilience. This research paper focuses on the emerging cyber insurance market, which is currently in the process of standardizing and improving its risk analysis of the potential insured entity.
Design/methodology/approach
The authors' approach involves a quantitative analysis utilizing a Likert-style questionnaire designed to survey cyber insurance professionals. The authors' aim is to identify the current methods used in gathering information from potential clients, as well as the manner in which this information is analyzed by the insurers. Additionally, the authors gather insights on potential improvements that could be made to this process.
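The Likert-style survey analysis described above can be illustrated with a minimal sketch (hypothetical data, not the authors' instrument or results): each item is summarized by its mean score and the share of respondents who agree.

```python
# Illustrative sketch of Likert-item aggregation (hypothetical responses,
# not the authors' survey data): mean score plus agreement rate per item.
from statistics import mean

def summarize_item(responses):
    """Return mean score and share of respondents agreeing (4 or 5)."""
    return {
        "mean": mean(responses),
        "agree_share": sum(1 for r in responses if r >= 4) / len(responses),
    }

# Hypothetical responses from insurance professionals to one item, e.g.
# "Pre-contract information gathering is sufficiently standardized."
item_scores = [2, 3, 2, 4, 1, 2, 3, 2]
summary = summarize_item(item_scores)
```

Composite scores across items, and comparisons across respondent groups, follow the same pattern.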
Findings
The study addresses cyber and risk components of particular importance for the insurance area, because it targets a niche not yet properly addressed in the specialized literature: cyber insurance. Cyber risk management approaches are not uniform at the international level, nor at the insurer level. Moreover, not all insurers can perform solid assessments, especially since their companies should first prove that they are fully compliant with international cyber security standards.
Research limitations/implications
This research has concentrated on analyzing current practices in gathering information about the insured entity before issuing the cyber insurance policy, the level of detail concerning the cyber security posture of the insured entity and the way such information should be analyzed in a standardized and useful manner. The novelty of this research resides in the analysis performed as detailed above and in the proposals made in terms of the information gathered, the depth of analysis and the standardization of approach. Future work on the topic can focus on the standardization process for analyzing the cyber risk of insurance clients, refining the proposal based on historical elements and trends in the market. Thus, future research can further refine the standardization process and analyze in more depth how it can be implemented and included in relevant legislation at the EU level.
Practical implications
Proposed improvements concern the level of detail and the usefulness of an independent, centralized approach to information gathering and analysis, especially given re-insurance and brokerage activities. The authors also propose a common practical procedural approach to risk management, involving insurance companies and the institutions that certify cyber security auditors.
Originality/value
The study investigates the information gathered by insurers from potential clients of cyber insurance and the way this is analyzed and updated for issuance of the insurance policy.
Abdelkader Daghfous, Noha Tarek Amer, Omar Belkhodja, Linda C. Angell and Taisier Zoubi
Abstract
Purpose
Job market shifts, such as workforce mobility and aging societies, cause the exit of knowledgeable personnel from organizations. The ensuing knowledge loss (K-loss) has broad negative effects. This study analyzes the knowledge management literature on K-loss published from 2000 to 2021 and identifies fruitful directions for future research.
Design/methodology/approach
The authors conduct a systematic literature review of 74 peer-reviewed articles published between 2000 and 2021. These articles were retrieved from ProQuest Central, Science Direct, EBSCOhost and Emerald databases. The analysis utilizes Jesson et al.’s (2011) six principles: field mapping, comprehensive search, quality assessment, data extraction, synthesis and write-up.
Findings
Three sub-topics emerge from the systematic literature review: K-loss drivers, positive and negative impacts of K-loss and mitigation strategies. Over half of the literature addresses mitigation strategies and provides solutions for K-loss already in progress, rather than proposing preventive measures.
Research limitations/implications
This study has limitations related to the time span covered. Moreover, it focuses on articles published in refereed journals. Therefore, important contributions from conference papers, books and professional reports were excluded.
Originality/value
This research comprehensively synthesizes the K-loss literature and proposes future avenues of research to address under-investigated areas and potentially lead to theoretical and empirical advancements in the field. This study also provides suggestions for improving managerial practices.
Charlotte Haugland Sundkvist and Tonny Stenheim
Abstract
Purpose
The purpose of this paper is to examine the roles that family identity and reputational concerns play when private family firms engage in earnings management.
Design/methodology/approach
The paper is conducted as an archival study using data from private limited liability firms in Norway over the period from 2002 to 2015. The dataset includes financial accounting data and data on family relationships between shareholders, board members and CEOs, where family relationships are determined through bloodlines, adoption and marriage, tracing back four generations and extending out to third cousins. To investigate the incidence of earnings management, the authors employ a measure of accrual-based earnings management (AEM) (Dechow and Dichev, 2002; McNichols, 2002) and a measure of real earnings management (REM) (Roychowdhury, 2006). They use whether or not the family name is included in the firm name (i.e. family name congruence) as a proxy for family members' identification with the family firm and their sensitivity to reputational concerns.
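The accrual-based measure cited above (Dechow and Dichev, 2002; McNichols, 2002) regresses working-capital accruals on lagged, current and future operating cash flows and treats the residual as the AEM proxy. The sketch below illustrates that idea in pure Python on invented toy data; it is not the paper's estimation.

```python
# Hedged sketch of the Dechow and Dichev (2002) accrual-quality idea:
# regress working-capital accruals on lagged, current and future operating
# cash flows; the residual proxies accrual-based earnings management (AEM).
# Pure-Python OLS via the normal equations (toy data, not the paper's sample).

def ols(X, y):
    """Solve (X'X) b = X'y by Gaussian elimination; X includes a constant."""
    k = len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(len(X))) for q in range(k)]
         for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(len(X))) for p in range(k)]
    for col in range(k):                          # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * k
    for r in range(k - 1, -1, -1):                # back substitution
        s = sum(A[r][c] * coef[c] for c in range(r + 1, k))
        coef[r] = (b[r] - s) / A[r][r]
    return coef

# Toy firm-year rows: [1, CFO_{t-1}, CFO_t, CFO_{t+1}], accruals_t
X = [[1, 0.10, 0.12, 0.11], [1, 0.08, 0.09, 0.10],
     [1, 0.15, 0.14, 0.16], [1, 0.05, 0.06, 0.07],
     [1, 0.12, 0.13, 0.12]]
y = [0.02, 0.01, 0.03, 0.00, 0.02]
beta = ols(X, y)
residuals = [y[i] - sum(beta[j] * X[i][j] for j in range(4))
             for i in range(len(y))]
aem_proxy = sum(abs(e) for e in residuals) / len(residuals)  # mean |residual|
```

Larger unexplained accruals (larger residuals) indicate more discretionary accrual activity under this measure.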
Findings
The authors’ results show that AEM is lower for family-named family firms. Moreover, their findings also indicate that family-named family firms are more likely to select REM over AEM, compared to nonfamily-named family firms. This is even more pronounced when detection risk is high (a high-quality audit, proxied by a Big 4 auditor).
Research limitations/implications
The quality of the authors’ findings is limited to the validity of their proxy for family firm identification and reputational concerns (the family name included in the firm name). Even though findings from prior research suggest that family name congruence is a valid proxy for identity and reputational concerns (e.g. Kashmiri and Mahajan, 2010, 2014; Rousseau et al., 2018; Zellweger et al., 2013), future research should investigate the validity of these results using alternative proxies for family firm identification. Future research should also investigate whether the authors’ findings are generalizable to public family firms.
Practical implications
The authors’ results suggest that the risk of AEM is lower for family-named family firms, whereas the risk of REM is somewhat higher, compared to nonfamily named family firms. These results might be relevant for financial accounting users, auditors and supervisory and monitoring bodies when assessing the risk of earnings management.
Originality/value
The paper is, as far as the authors are aware, the first to investigate the role of family name congruence and detection risk when private family firms select between AEM and REM.
Kunpeng Shi, Guodong Jin, Weichao Yan and Huilin Xing
Abstract
Purpose
Accurately evaluating fluid flow behaviors and determining permeability for deforming porous media is time-consuming and remains challenging. This paper aims to propose a novel machine-learning method for the rapid estimation of permeability of porous media at different deformation stages constrained by hydro-mechanical coupling analysis.
Design/methodology/approach
A convolutional neural network (CNN) is proposed in this paper, guided by the results of finite element coupling analysis of the equilibrium equation for mechanical deformation and the Boltzmann equation for fluid dynamics during the hydro-mechanical coupling process [denoted as the finite element lattice Boltzmann model (FELBM) in this paper]. The FELBM ensures the lattice Boltzmann analysis of coupled fluid flow with an unstructured mesh, which varies with the corresponding nodal displacement resulting from mechanical deformation. It provides reliable label data for permeability estimation at different stages using the CNN.
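As a purely illustrative aside (the paper's CNN, its training and the FELBM labels are far richer), the core operation of any CNN applied to pore-geometry images is the 2D convolution, which can be sketched as:

```python
# Toy illustration of the convolution at the heart of a CNN such as the
# one described above (the actual model, training and FELBM label data
# are far richer). Input: a binary pore geometry; output: one feature map.

def conv2d(image, kernel):
    """Valid-mode 2D convolution of a 2D list by a 2D kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

# 1 = pore (flow path), 0 = solid grain; deformation would shift these.
pore_image = [
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
]
edge_kernel = [[1, -1], [1, -1]]   # crude vertical-edge detector
feature_map = conv2d(pore_image, edge_kernel)
```

Stacking many such learned kernels with nonlinearities and pooling, then regressing the final features onto FELBM-derived permeability labels, is the general shape of the approach.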
Findings
The proposed CNN can rapidly and accurately estimate the permeability of deformable porous media, significantly reducing processing time. The application studies demonstrate high accuracy in predicting the permeability of deformable porous media for both the test and validation sets. The correlation coefficient (R2) is 0.93 for the validation set, and 0.93 and 0.94 for test sets A and B, respectively.
Originality/value
This study proposes an innovative approach with the CNN to rapidly estimate permeability in porous media under dynamic deformations, guided by FELBM coupling analysis. The fast and accurate performance of CNN underscores its promising potential for future applications.
Nadeeshani Wanigarathna, Keith Jones, Federica Pascale, Mariantonietta Morga and Abdelghani Meslem
Abstract
Purpose
Recent earthquake-induced liquefaction events and associated losses have increased researchers’ interest into liquefaction risk reduction interventions. To the best of the authors’ knowledge, there was no scholarly literature related to an economic appraisal of these risk reduction interventions. The purpose of this paper is to investigate the issues in applying cost–benefit analysis (CBA) principles to the evaluation of technical mitigations to reduce earthquake-induced liquefaction risk.
Design/methodology/approach
CBA has been substantially used for risk mitigation option appraisal for a number of hazard threats. Previous literature in the form of systematic reviews, individual research and case studies, together with liquefaction risk and loss modelling literature, was used to develop a theoretical model of CBA for earthquake-induced liquefaction mitigation interventions. The model was tested using a scenario in a two-day workshop.
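The core CBA logic being tested can be sketched as follows (all figures hypothetical, not from the workshop scenario): an intervention is cost-effective when the present value of expected avoided losses exceeds its cost.

```python
# Minimal sketch of the probabilistic CBA logic discussed above (all
# numbers hypothetical): benefit-cost ratio of a mitigation intervention.

def benefit_cost_ratio(annual_event_prob, loss_if_event, risk_reduction,
                       intervention_cost, horizon_years, discount_rate):
    """PV of expected avoided losses divided by the intervention cost."""
    expected_avoided = annual_event_prob * loss_if_event * risk_reduction
    pv_benefits = sum(expected_avoided / (1 + discount_rate) ** t
                      for t in range(1, horizon_years + 1))
    return pv_benefits / intervention_cost

# Hypothetical liquefaction mitigation: 2% annual event probability,
# 1,000,000 loss if it occurs, 60% damage reduction, 150,000 cost,
# 30-year horizon, 4% discount rate.
bcr = benefit_cost_ratio(0.02, 1_000_000, 0.6, 150_000, 30, 0.04)
```

The sensitivity of this ratio to the assumed event probability is exactly why the findings below question probabilistic inputs for localised, data-sparse events.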
Findings
Because liquefaction risk reduction techniques are relatively new, limited damage modelling and cost data are available for use within CBAs. As such, end users need to make significant assumptions when linking the results of technical investigations of damage to built-asset performance and probabilistic loss modelling, with the result that many potential interventions are not cost-effective for low-impact disasters. This study questions whether a probabilistic approach should really be applied to localised rapid-onset events like liquefaction, arguing that a deterministic approach based on localised knowledge and context would be a better basis for evaluating the cost-effectiveness of mitigation interventions.
Originality/value
This paper makes an original contribution to the literature through a critical review of CBA approaches applied to disaster mitigation interventions. Further, this paper identifies the challenges and limitations of applying probabilistic CBA models to localised rapid-onset disaster events where human losses are minimal and historical data is sparse; challenging researchers to develop new deterministic approaches that use localised knowledge and context to evaluate the cost-effectiveness of mitigation interventions.
Marcello Braglia, Francesco Di Paco, Roberto Gabbrielli and Leonardo Marrazzini
Abstract
Purpose
This paper presents a new, well-structured framework that assesses current environmental impact from a Greenhouse Gas (GHG) emissions perspective. This tool includes a new set of Lean Key Performance Indicators (KPIs), which translates the well-known logic of Overall Equipment Effectiveness into the field of GHG emissions, and which can progressively detect the industrial losses that cause GHG emissions and support decision-making for implementing improvements.
Design/methodology/approach
The new metrics are presented with reference to two different perspectives: (1) to highlight the deviation of the current value of emissions from the target; (2) to adopt a diagnostic orientation not only to provide an assessment of current performance but also to search for the main causes of inefficiencies and to direct improvement implementations.
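A minimal sketch of the OEE-style logic described above (values hypothetical, not the paper's KPI set): effectiveness is the ratio of the theoretical-minimum emissions to actual emissions, with the gap attributed to loss classes so that improvement actions can be targeted.

```python
# Hedged sketch (hypothetical values): an OEE-style emission indicator,
# the ratio of the theoretical-minimum GHG emissions to the emissions
# actually generated, with the gap decomposed by loss class.

def emission_effectiveness(ideal_emissions, loss_emissions):
    """ideal_emissions: kgCO2e strictly needed for the good output.
    loss_emissions: dict mapping each loss class to its kgCO2e."""
    actual = ideal_emissions + sum(loss_emissions.values())
    return ideal_emissions / actual

losses = {"startup_waste": 120.0, "idling": 80.0, "rework": 50.0}
eff = emission_effectiveness(750.0, losses)   # overall indicator in (0, 1]
```

Ranking the loss classes by their share of the gap then directs improvement implementations, which is the diagnostic orientation the second perspective describes.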
Findings
The proposed framework was applied to a major company operating in the plywood production sector. It identified emission-related losses at each stage of the production process, providing an overall performance evaluation of 53.1%. The industrial application shows how the indicators, and the framework as a whole, work in practice to assess GHG emissions related to industrial losses and to properly direct improvement actions.
Originality/value
This paper scrutinizes a new set of Lean KPIs for assessing the industrial losses that cause GHG emissions and identifies some significant drawbacks. It then proposes a new structure of losses and KPIs that not only quantifies efficiency but also enables the identification of viable countermeasures.
Abstract
Purpose
The study assesses the impact of individual cultural values on the investment choices (aggressive or conservative) of 450 investors, with behavioural biases and risk propensity as serial mediators in the relationship.
Design/methodology/approach
The study used serial mediation analysis (Hayes Model 6), creating six models.
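For readers unfamiliar with Hayes Model 6 (two mediators in series, X → M1 → M2 → Y), the effect decomposition can be sketched as below; the path coefficients are hypothetical, not the study's estimates, and in practice each is obtained from its own regression.

```python
# Sketch of the serial-mediation decomposition in Hayes Model 6
# (X -> M1 -> M2 -> Y). Coefficients below are hypothetical placeholders;
# a real analysis estimates them from regressions of M1, M2 and Y.

def model6_effects(a1, a2, d21, b1, b2, c_prime):
    """Decompose the total effect of X on Y with two serial mediators."""
    ind_m1 = a1 * b1            # X -> M1 -> Y
    ind_m2 = a2 * b2            # X -> M2 -> Y
    ind_serial = a1 * d21 * b2  # X -> M1 -> M2 -> Y
    total = c_prime + ind_m1 + ind_m2 + ind_serial
    return {"serial_indirect": ind_serial, "total": total}

# e.g. individualism -> overconfidence bias -> risk propensity -> choice
effects = model6_effects(a1=0.40, a2=0.10, d21=0.50, b1=0.30,
                         b2=0.60, c_prime=0.20)
```

Significance of the serial indirect path is conventionally assessed with bootstrap confidence intervals, which this arithmetic sketch omits.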
Findings
Findings of the study indicated that investors with individualism traits are inclined to aggressive investment choices due to the presence of overconfidence bias. Uncertainty avoidance and long-termism traits of investors resulted in aggressive investment choices due to the presence of herd mentality bias. The moderating impact of past investing experiences was found to be significant.
Originality/value
The study highlights the importance of investors’ cultural values and past investing experiences, which may give rise to biases, in assessing the investment choices and decisions of investors.
Laura Lucantoni, Sara Antomarioni, Filippo Emanuele Ciarapica and Maurizio Bevilacqua
Abstract
Purpose
The Overall Equipment Effectiveness (OEE) is considered a standard for measuring equipment productivity in terms of efficiency. Still, Artificial Intelligence solutions are rarely used for analyzing OEE results and identifying corrective actions. Therefore, the approach proposed in this paper aims to provide a new rule-based Machine Learning (ML) framework for OEE enhancement and the selection of improvement actions.
Design/methodology/approach
Association Rules (ARs) are used as a rule-based ML method for extracting knowledge from large volumes of data. First, the dominant loss class is identified, and traditional methodologies are combined with ARs for anomaly classification and prioritization. Once priority anomalies have been selected, a detailed analysis is conducted to investigate their influence on the OEE loss factors using ARs and Network Analysis (NA). A Deming Cycle is then used as a roadmap for applying the proposed methodology, testing and implementing proactive actions while monitoring the OEE variation.
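The AR metrics underlying such a framework (support, confidence and lift) can be sketched on hypothetical shift records; this illustrates the technique only, not the paper's data or implementation.

```python
# Sketch of association-rule metrics on hypothetical OEE records:
# each transaction is the set of loss classes / anomalies co-occurring
# in one shift (invented data, not the paper's).

def rule_metrics(transactions, antecedent, consequent):
    """Support, confidence and lift for the rule antecedent -> consequent."""
    n = len(transactions)
    n_a = sum(1 for t in transactions if antecedent <= t)
    n_c = sum(1 for t in transactions if consequent <= t)
    n_both = sum(1 for t in transactions if (antecedent | consequent) <= t)
    support = n_both / n
    confidence = n_both / n_a
    lift = confidence / (n_c / n)
    return support, confidence, lift

shifts = [{"speed_loss", "jam"}, {"speed_loss", "jam", "rework"},
          {"rework"}, {"speed_loss"}, {"jam", "rework"}]
s, c, l = rule_metrics(shifts, {"speed_loss"}, {"jam"})
```

Rules with lift above 1 flag anomalies that co-occur more often than chance, which is what makes them candidates for prioritization and network analysis.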
Findings
The proposed method was tested in an automotive company to validate the framework and measure its impact. In particular, results highlighted that the rule-based ML methodology for OEE improvement addressed seven anomalies within a year through appropriate proactive actions: on average, each action ensured an OEE gain of 5.4%.
Originality/value
The originality is related to the dual application of association rules in two different ways for extracting knowledge from the overall OEE. In particular, the co-occurrences of priority anomalies and their impact on asset Availability, Performance and Quality are investigated.
John M. Violanti and Michael E. Andrew
Abstract
Purpose
Policing requires atypical work hours. The present study examined associations between shiftwork and pregnancy loss among female police officers.
Design/methodology/approach
Participants were 91 female officers with a prior history of at least one pregnancy. Shiftwork information was assessed using daily electronic payroll work records. Any prior pregnancy loss (due to miscarriage) was self-reported. Logistic regression estimated odds ratios (OR) and 95% confidence intervals (CI) for main associations.
Findings
On average, the officers were 42 years old with 14 years of service; 56% reported a prior pregnancy loss. Officers who worked dominantly on the afternoon or night shift during their career had 96% greater odds of pregnancy loss compared to those on the day shift (OR = 1.96, 95% CI: 0.71–5.42), but the result was not statistically significant. A 25% increase in the percentage of hours worked on the night shift was associated with 87% increased odds of pregnancy loss (OR = 1.87, 95% CI: 1.01–3.47). Associations were adjusted for demographic and lifestyle factors. Objective assessment of shiftwork via electronic records strengthened the study. Limitations include the small sample size, the cross-sectional design and the lack of details on pregnancy loss or its timing with regard to shiftwork.
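As an arithmetic illustration of how the reported figures relate to logistic-regression output: OR = exp(β) and the 95% CI is exp(β ± 1.96·SE). The snippet below reverse-engineers the implied standard error from the published interval for the shift-assignment estimate.

```python
# How the reported odds ratios map back to logistic-regression output:
# OR = exp(beta), 95% CI = exp(beta +/- 1.96 * SE). The numbers below
# reverse-engineer the night/afternoon-shift estimate (OR = 1.96,
# CI 0.71-5.42) purely as an arithmetic illustration.
from math import exp, log

beta = log(1.96)                              # coefficient on shift indicator
se = (log(5.42) - log(0.71)) / (2 * 1.96)     # implied standard error
ci_low = exp(beta - 1.96 * se)
ci_high = exp(beta + 1.96 * se)
# The interval spans 1.0, consistent with the non-significant association.
```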
Research limitations/implications
The present study is preliminary and cross-sectional.
Practical implications
With considerable further inquiry into this topic, results may have an impact on police policy affecting shift work and pregnant police officers.
Social implications
Implications for the health and welfare of police officers.
Originality/value
To our knowledge, there are no empirical studies which associate shiftwork and pregnancy loss among police officers. This preliminary study suggested an association between shiftwork and increased odds of pregnancy loss and points out the need for further study.
Jayalakshmy Ramachandran, Yezen H. Kannan and Samuel Jebaraj Benjamin
Abstract
Purpose
This paper aims to investigate auditors’ pricing of excess cash holdings and the variation in their pricing decisions in light of the precautionary motives of cash holdings and certain firm-specific conditions and during periods of crisis.
Design/methodology/approach
The authors conduct the two-stage-least-squares multivariate analysis using a sample of publicly listed non-financial US firms for the period 2003 to 2021 (42,413 firm-year observations).
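As background on the estimator (toy data; the paper's actual specification, instruments and controls are much richer): with one endogenous regressor and one instrument, two-stage least squares reduces to the Wald/IV estimator cov(z, y)/cov(z, x).

```python
# Sketch of the two-stage-least-squares idea (invented data, not the
# paper's sample). With one endogenous regressor x and one instrument z,
# 2SLS collapses to the Wald/IV estimator cov(z, y) / cov(z, x).

def cov(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / n

def iv_estimate(z, x, y):
    """IV slope of y on x using instrument z."""
    return cov(z, y) / cov(z, x)

# Hypothetical firm data: z instrument, x excess cash, y audit fee.
z = [0, 1, 0, 1, 1, 0]
x = [1.0, 2.0, 1.5, 2.5, 2.2, 1.2]
y = [3.0, 5.1, 3.8, 5.9, 5.5, 3.2]
beta_iv = iv_estimate(z, x, y)
```

The first stage (projecting x on z) and the second stage (regressing y on fitted x) generalize this to multiple instruments and controls.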
Findings
The findings show a significant positive relationship between excess cash and audit fees. Next, the authors find that audit pricing of excess cash is significantly higher for firms with lower financial constraints. However, the authors do not find evidence to suggest that auditors price excess cash significantly higher for firms with lower hedging needs. In additional analysis, the authors find evidence to suggest that auditors charge significantly less for excess cash in firms that report a financial loss and in firms operating in industries with high litigation risk. The additional analysis also reveals that excess cash was not positively and significantly priced by auditors during the global financial crisis and the Covid-19 pandemic.
Originality/value
Most researchers have analyzed excess cash holdings from the perspective of managers, i.e. agency conflict or managerial prudence, while somewhat neglecting auditors’ perception of the embedded risk of excess cash holdings. The authors provide new insights into auditors’ perspective on excess cash holdings and identify certain factors, situations and conditions that cause variation in the audit fee premium. The findings offer useful insights for managers and shareholders interested in assessing the effects of excess cash holding policies on the audit fee premium.