Search results
1 – 10 of 847
Abstract
Purpose
When the probability of each model is known, a natural idea is to select the most probable model. However, in many practical situations, the exact values of these probabilities are not known; only the intervals that contain these values are known. In such situations, a natural idea is to select some probabilities from these intervals and to select the model with the largest selected probability. The purpose of this study is to decide how to select these probabilities most adequately.
Design/methodology/approach
It is desirable to have a probability-selection method that preserves independence: if, according to the probability intervals, two events could be independent, then the probabilities selected from these intervals should preserve this independence.
Findings
The paper describes all techniques for decision-making under interval uncertainty about probabilities that are consistent with independence. It is proved that these techniques form a one-parametric family – a family that has already been used successfully in such decision problems.
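The abstract does not reproduce the family itself. One selection rule with the stated independence-preserving property – offered here only as an illustrative sketch, not necessarily the authors' exact result – is the multiplicative (geometric) analogue of Hurwicz's combination, with a single parameter $\alpha \in [0,1]$:

$$p_\alpha([\underline{p}, \overline{p}]) = \underline{p}^{\,1-\alpha}\,\overline{p}^{\,\alpha}.$$

If two events are independent, the interval for their joint probability is the product of the two intervals, and this selection respects that:

$$p_\alpha([\underline{p}_1\underline{p}_2,\ \overline{p}_1\overline{p}_2]) = (\underline{p}_1\underline{p}_2)^{1-\alpha}(\overline{p}_1\overline{p}_2)^{\alpha} = p_\alpha([\underline{p}_1,\overline{p}_1])\,p_\alpha([\underline{p}_2,\overline{p}_2]),$$

so events that could be independent under the intervals remain independent after selection.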
Originality/value
This study provides a theoretical explanation of an empirically successful technique for decision-making under interval uncertainty about probabilities. This explanation is based on the natural idea that the method for selecting probabilities from the corresponding intervals should preserve independence.
Warattaya Chinnakum, Laura Berrout Ramos, Olugbenga Iyiola and Vladik Kreinovich
Abstract
Purpose
In real life, we only know the consequences of each possible action with some uncertainty. A typical example is interval uncertainty, when we only know the lower and upper bounds on the expected gain. A usual way to compare such interval-valued alternatives is to use the optimism–pessimism criterion developed by Nobelist Leo Hurwicz. In this approach, a weighted combination of the worst-case and the best-case gains is maximized. There exist several justifications for this criterion; however, some of the assumptions behind these justifications are not 100% convincing. The purpose of this paper is to find a more convincing explanation.
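For readers who want the criterion concretely: Hurwicz's rule scores an interval of possible gains $[\underline{u}, \overline{u}]$ as $\alpha\,\overline{u} + (1-\alpha)\,\underline{u}$, where the optimism parameter $\alpha \in [0,1]$ is chosen by the decision maker. A minimal sketch follows; the alternatives and α values below are hypothetical, for illustration only.

```python
def hurwicz_score(u_lo: float, u_hi: float, alpha: float) -> float:
    """Hurwicz optimism-pessimism score of an interval-valued gain.

    alpha = 1 is pure optimism (best case only);
    alpha = 0 is pure pessimism (worst case only).
    """
    return alpha * u_hi + (1 - alpha) * u_lo

# Hypothetical alternatives: (name, worst-case gain, best-case gain).
alternatives = [("A", 0.0, 10.0), ("B", 4.0, 5.0)]

for alpha in (0.3, 0.8):
    best = max(alternatives, key=lambda t: hurwicz_score(t[1], t[2], alpha))
    print(f"alpha={alpha}: choose {best[0]}")
```

A cautious decision maker (α = 0.3) picks the safer, narrower interval B; an optimistic one (α = 0.8) picks the wider interval A.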
Design/methodology/approach
The authors used the utility approach to decision-making.
Findings
The authors proposed new, hopefully more convincing, justifications for Hurwicz’s approach.
Originality/value
This is a new, more intuitive explanation of Hurwicz’s approach to decision-making under interval uncertainty.
Olga Kosheleva, Vladik Kreinovich and Uyen Pham
Abstract
Purpose
In many real-life situations, we do not know the exact values of the expected gain corresponding to different possible actions; we only have lower and upper bounds on these gains – i.e., in effect, intervals of possible gain values. The purpose of this study is to describe all possible ways to make decisions under such interval uncertainty.
Design/methodology/approach
The authors used both natural invariance and additivity requirements.
Findings
The authors demonstrated that natural requirements – invariance or additivity – led to a two-parametric family of possible decision-making strategies.
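The abstract does not spell the family out. To see why additivity alone already suggests a two-parameter form – a plausible reading of the result, though the authors' exact statement may differ – note that intervals of gains add endpoint-wise, $[\underline{u}_1,\overline{u}_1] + [\underline{u}_2,\overline{u}_2] = [\underline{u}_1+\underline{u}_2,\ \overline{u}_1+\overline{u}_2]$, so any value function of the form

$$v([\underline{u},\overline{u}]) = \alpha\,\underline{u} + \beta\,\overline{u}$$

is additive: $v(I_1 + I_2) = v(I_1) + v(I_2)$. This linear-in-endpoints family contains Hurwicz's criterion as the special case $\alpha + \beta = 1$.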
Originality/value
This is the first description of all reasonable strategies for decision-making under interval uncertainty – strategies that satisfy natural requirements of invariance or additivity.
Warisa Thangjai and Sa-Aat Niwitpong
Abstract
Purpose
Confidence intervals play a crucial role in economics and finance, providing a credible range of values for an unknown parameter along with a corresponding level of certainty. Their applications encompass economic forecasting, market research, financial forecasting, econometric analysis, policy analysis, financial reporting, investment decision-making, credit risk assessment and consumer confidence surveys. Signal-to-noise ratio (SNR) finds applications in economics and finance across various domains such as economic forecasting, financial modeling, market analysis and risk assessment. A high SNR indicates a robust and dependable signal, simplifying the process of making well-informed decisions. On the other hand, a low SNR indicates a weak signal that could be obscured by noise, so decision-making procedures need to take this into serious consideration. This research focuses on the development of confidence intervals for functions derived from the SNR and explores their application in the fields of economics and finance.
Design/methodology/approach
The construction of the confidence intervals involved the application of various methodologies. For the SNR, confidence intervals were formed using the generalized confidence interval (GCI), large sample and Bayesian approaches. The difference between SNRs was estimated through the GCI, large sample, method of variance estimates recovery (MOVER), parametric bootstrap and Bayesian approaches. Additionally, confidence intervals for the common SNR were constructed using the GCI, adjusted MOVER, computational and Bayesian approaches. The performance of these confidence intervals was assessed using coverage probability and average length, evaluated through Monte Carlo simulation.
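As a concrete illustration of just one of these ingredients: the SNR of a sample is its mean divided by its standard deviation, and a simple nonparametric percentile bootstrap – a simpler stand-in for the parametric bootstrap and GCI constructions used in the paper – can be sketched as follows, with hypothetical data.

```python
import numpy as np

rng = np.random.default_rng(0)

def snr(x: np.ndarray) -> float:
    """Signal-to-noise ratio: sample mean over sample standard deviation."""
    return x.mean() / x.std(ddof=1)

def bootstrap_ci(x: np.ndarray, level: float = 0.95, n_boot: int = 2000):
    """Percentile-bootstrap confidence interval for the SNR."""
    stats = np.array([snr(rng.choice(x, size=x.size, replace=True))
                      for _ in range(n_boot)])
    return tuple(np.quantile(stats, [(1 - level) / 2, (1 + level) / 2]))

# Hypothetical returns with true SNR = 1.0 / 0.5 = 2.0.
x = rng.normal(loc=1.0, scale=0.5, size=50)
print(snr(x), bootstrap_ci(x))
```

Coverage probability and average length – the paper's evaluation criteria – are then estimated by repeating such a construction over many simulated datasets and recording how often the interval contains the true SNR and how wide it is.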
Findings
The GCI approach demonstrated superior performance over other approaches in terms of both coverage probability and average length for the SNR and the difference between SNRs. Hence, employing the GCI approach is advised for constructing confidence intervals for these parameters. As for the common SNR, the Bayesian approach exhibited the shortest average length. Consequently, the Bayesian approach is recommended for constructing confidence intervals for the common SNR.
Originality/value
This research presents confidence intervals for functions of the SNR to assess SNR estimation in the fields of economics and finance.
Elisa Verna, Gianfranco Genta and Maurizio Galetto
Abstract
Purpose
The purpose of this paper is to investigate and quantify the impact of product complexity, including architectural complexity, on operator learning, productivity and quality performance in both assembly and disassembly operations. This topic has not been extensively investigated in previous research.
Design/methodology/approach
An extensive experimental campaign involving 84 operators was conducted to repeatedly assemble and disassemble six different products of varying complexity to construct productivity and quality learning curves. Data from the experiment were analysed using statistical methods.
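The abstract does not state which learning-curve model was fitted; a common choice for such campaigns (an assumption here, not a confirmed detail of the paper) is Wright's power-law curve, $t_n = t_1 n^{-b}$, which is linear in log-log space and can be fitted by ordinary least squares:

```python
import numpy as np

# Hypothetical task times (minutes) over successive repetitions of one product.
reps = np.arange(1, 11)
times = np.array([12.0, 9.8, 8.9, 8.1, 7.7, 7.2, 7.0, 6.7, 6.5, 6.4])

# Wright's curve t_n = t_1 * n**(-b)  =>  log t_n = log t_1 - b * log n.
slope, intercept = np.polyfit(np.log(reps), np.log(times), 1)
t1, b = np.exp(intercept), -slope
print(f"first-unit time t_1 = {t1:.2f} min, learning factor b = {b:.3f}")
```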
Findings
The human learning factor of productivity increases superlinearly with the increasing architectural complexity of products, i.e. from centralised to distributed architectures, both in assembly and disassembly, regardless of the level of overall product complexity. On the other hand, the human learning factor of quality performance decreases superlinearly as the architectural complexity of products increases. The intrinsic characteristics of product architecture are the reasons for this difference in learning factor.
Practical implications
The results of the study suggest that considering product complexity, particularly architectural complexity, in the design and planning of manufacturing processes can optimise operator learning, productivity and quality performance, and inform decisions about improving manufacturing operations.
Originality/value
While previous research has focussed on the effects of complexity on process time and defect generation, this study is amongst the first to investigate and quantify the effects of product complexity, including architectural complexity, on operator learning using an extensive experimental campaign.
Abstract
Purpose
This study analyses agricultural land price dynamics in order to better understand price development and to improve forecast accuracy. Understanding the evolution of agricultural land prices is important when considering sound investment decisions.
Design/methodology/approach
This study applies threshold autoregression to model agricultural land prices. The data includes quarterly observations on Finnish agricultural land prices.
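A minimal sketch of such a model is given below – a two-regime threshold AR(1) on price changes, with the threshold found by grid search. The authors' exact lag structure and estimation details may differ, and the data here are simulated.

```python
import numpy as np

def fit_tar(y: np.ndarray, thresholds: np.ndarray) -> float:
    """Two-regime threshold AR(1) on price changes:
    dy_t = a_r + b_r * dy_{t-1} + e_t, regime r set by dy_{t-1} <= c.
    Returns the threshold c minimising the pooled residual sum of squares."""
    dy = np.diff(y)
    x, z = dy[:-1], dy[1:]                  # lagged change, current change
    best_c, best_sse = None, np.inf
    for c in thresholds:
        sse = 0.0
        for mask in (x <= c, x > c):
            if mask.sum() < 3:              # regime too small: discard this c
                sse = np.inf
                break
            X = np.column_stack([np.ones(mask.sum()), x[mask]])
            beta, *_ = np.linalg.lstsq(X, z[mask], rcond=None)
            sse += float(((z[mask] - X @ beta) ** 2).sum())
        if sse < best_sse:
            best_sse, best_c = sse, c
    return best_c

# Simulated stand-in for a quarterly land-price series.
rng = np.random.default_rng(1)
y = 100 + np.cumsum(rng.normal(0.5, 1.0, size=120))
grid = np.quantile(np.diff(y), np.linspace(0.15, 0.85, 25))
print("estimated threshold:", fit_tar(y, grid))
```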
Findings
The study shows that Finnish agricultural land prices exhibit regime-switching behaviour when using past changes in prices as a threshold variable. The threshold autoregressive model not only fits the data better but also improves the accuracy of price forecasts compared to the linear autoregressive model.
Originality/value
The results show that a sharp fall in agricultural land prices temporarily changes the regular development of prices. This information significantly improves the accuracy of price predictions.
Bo Nordlund, Johan Lorentzon and Hans Lind
Abstract
Purpose
The purpose of this article is to study how fair values in financial reports are audited.
Design/methodology/approach
The study is a qualitative case study based on in-depth interviews.
Findings
One important finding is that auditors anchor on the figure presented by the company and, despite the auditing efforts, there is a substantial risk of management bias in the reported fair values. There is also a risk of confirmation bias.
Research limitations/implications
Relatively few respondents were employed in this study, but their background and competence led to the assessment that the study provides a representative picture of what is being investigated.
Practical implications
Auditors may need to develop ways of performing auditing of fair values to reduce the risks identified in this study.
Social implications
This study presents a perspective of the auditing process enabling an evaluation of the quality of fair value estimates regarding investment properties in financial reports. It also provides users of financial reports, such as investors, bankers and other institutions, with an enhanced understanding of reported estimates of fair (market) values.
Originality/value
Very few studies have investigated how auditors evaluate fair values of investment properties. This study contributes by giving users of financial reports an enhanced understanding of the quality of reported estimates of fair (market) values.
Stefan Kleinke and David Cross
Abstract
Purpose
The purpose of this two-part research was to investigate the effect of remote learning on student progress in elementary education. Part 2, presented in this paper, is a follow-up study to examine how student progression in the two pandemic-induced environments compared to the pre-pandemic conditions.
Design/methodology/approach
The authors expanded the quantitative, quasi-experimental factorial design of the authors' initial study with additional ex-post-facto standardized test score data from before the pandemic to enhance the group comparison with a control: the conventional pre-pandemic classroom environment. Thus, the authors were able to examine in which ways the two pandemic-induced learning environments (remote and hybrid) may have affected learner progress in the two subject areas: English Language Arts (ELA) and Math. Additionally, the authors provided a grade-by-grade breakdown of analysis results.
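The group comparison described above can be illustrated, in heavily simplified form with hypothetical scores and only a few cells, as a two-way factorial ANOVA of score on learning environment and grade level:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical standardized test scores, two students per cell.
df = pd.DataFrame({
    "environment": ["classroom", "remote", "hybrid"] * 6,
    "grade":       [3] * 6 + [5] * 6 + [6] * 6,
    "score": [310, 295, 280, 305, 298, 276,
              312, 299, 281, 308, 301, 279,
              315, 300, 285, 311, 302, 283],
})

# Two-way factorial model with interaction, then an ANOVA table.
model = ols("score ~ C(environment) * C(grade)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```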
Findings
Findings revealed significant group differences in grade levels at or below 6th grade. In the majority of analyzed comparisons, learner achievement in the hybrid group was significantly lower than that in either the remote or the classroom group, or both.
Research limitations/implications
The additional findings further supported the authors' initial hypotheses: Differences in the consistency and continuity of educational approaches, as well as potential differences in learner predispositions and the availability of home support systems may have influenced observed results. Thus, this research also contributes to the general knowledge about learner needs in elementary education.
Originality/value
During the pandemic, remote learning became ubiquitous. However, in contrast to e-learning in postsecondary education, for which an abundance of research has been conducted, relatively little is known about the efficacy of such approaches in elementary education.
Ben M. Roberts, David Allinson and Kevin J. Lomas
Abstract
Purpose
Accurate values for infiltration rate are important to reliably estimate heat losses from buildings. Infiltration rate is rarely measured directly, and instead is usually estimated using algorithms or data from fan pressurisation tests. However, there is growing evidence that the commonly used methods for estimating infiltration rate are inaccurate in UK dwellings. Furthermore, most prior research was conducted during the winter season or relied on single measurements in each dwelling. Infiltration rates also affect the likelihood and severity of summertime overheating. The purpose of this work is to measure infiltration rates in summer, to compare these measurements to different infiltration estimation methods and to quantify the differences.
Design/methodology/approach
Fifteen whole house tracer gas tests were undertaken in the same test house during spring and summer to measure the whole building infiltration rate. Eleven infiltration estimation methods were used to predict infiltration rate, and these were compared to the measured values. Most, but not all, infiltration estimation methods relied on data from fan pressurisation (blower door) tests. A further four tracer gas tests were also done with trickle vents open to allow for comment on indoor air quality, but not compared to infiltration estimation methods.
Findings
The eleven estimation methods predicted infiltration rates between 64% and 208% higher than measured. The ASHRAE Enhanced derived infiltration rate (0.41 ach) was closest to the measured value of 0.25 ach, but still significantly different. The infiltration rate predicted by the “divide-by-20” rule of thumb, which is commonly used in the UK, was second furthest from the measured value at 0.73 ach. Indoor air quality is likely to be unsatisfactory in summer when windows are closed, even if trickle vents are open.
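To make the two headline numbers concrete: in a tracer gas decay test the air change rate n (ach) is minus the slope of log concentration against time, since C(t) = C0·e^(−nt), while the “divide-by-20” rule simply divides the blower-door air change rate at 50 Pa by 20. A sketch with hypothetical readings, chosen here to echo the values above (0.25 ach measured, 0.73 ach from divide-by-20):

```python
import numpy as np

# Hypothetical tracer gas concentrations (ppm), logged hourly after release.
t_hours = np.arange(6)
conc = np.array([500.0, 389.0, 303.0, 236.0, 184.0, 143.0])

# Decay method: C(t) = C0 * exp(-n * t), so n is minus the slope of log C.
slope, _ = np.polyfit(t_hours, np.log(conc), 1)
print(f"measured infiltration: {-slope:.2f} ach")

# "Divide-by-20" rule of thumb applied to a blower-door test result.
n50 = 14.6                       # hypothetical air change rate at 50 Pa
print(f"divide-by-20 estimate: {n50 / 20:.2f} ach")
```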
Practical implications
The findings have implications for those using dynamic thermal modelling to predict summertime overheating who, in the absence of a directly measured value for infiltration rate (i.e. by tracer gas), currently commonly use infiltration estimation methods such as the “divide-by-20” rule. Therefore, infiltration may be overestimated resulting in overheating risk and indoor air quality being incorrectly predicted.
Originality/value
Direct measurement of air infiltration rate is rare, especially multiple tests in a single home. Past measurements have invariably focused on the winter heating season. This work is original in that the tracer gas technique was used to measure the infiltration rate many times in a single dwelling during the summer. It is also original in that it quantifies both the infiltration rate and its variability, and compares these to values produced by eleven infiltration estimation methods.
Jaime A. Teixeira da Silva and Panagiotis Tsigaris
Abstract
Purpose
The purpose of this paper is to provide an estimate of the costs of premature mortality caused by the COVID-19 pandemic.
Design/methodology/approach
Using COVID-19 pandemic-derived mortality data for November 9, 2020 (globally 1,303,215 deaths) and applying a country-based value of statistical life (VSL), the worldwide cost of premature mortality was assessed. The cost was assessed based on income groups until November 9, 2020 and projected into the future until March 1, 2021 using three scenarios from the Institute for Health Metrics and Evaluation (IHME).
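The core arithmetic is straightforward: deaths in each income group multiplied by that group's VSL, summed over groups. A sketch with hypothetical inputs (the paper's actual VSL figures and group death counts are not reproduced in the abstract):

```python
# Hypothetical inputs per income group: (deaths, VSL in Int$ millions).
groups = {
    "high income":         (350_000, 5.0),
    "upper-middle income": (450_000, 1.8),
    "lower-middle income": (400_000, 0.9),
    "low income":          (100_000, 0.4),
}

total = sum(deaths * vsl_m * 1e6 for deaths, vsl_m in groups.values())
print(f"premature-mortality cost: Int${total / 1e12:.2f} trillion")
```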
Findings
The global cost of premature mortality is currently estimated at Int$5.9 trillion. For the high-income group, the current estimated cost is Int$4.4 trillion, or Int$3,700 per person. Using IHME projections until March 1, 2021, global premature mortality costs will increase to Int$13.7 trillion and reach Int$22.1 trillion if policies are relaxed, while the cost with 95% mask use is Int$10.9 trillion. The richest nations will bear the largest burden of these costs, reaching Int$15,500 per person by March 1, 2021 if policies are relaxed.
Originality/value
The cost of human lives lost due to the pandemic is unprecedented. Future preparedness is the best policy for combating pandemics and avoiding many premature deaths and severe recessions.