Search results
1 – 10 of over 2,000
Abstract
Owing to some uncontrollable factors, only a portion of an experiment can be completed. Such incomplete data are generally referred to as censored data. Conventional approaches for analysis of censored data are computationally complicated. In this work an effective means of applying neural networks to analyze an experiment with singly-censored data is presented. Two procedures are developed, which are simpler than conventional ones such as maximum likelihood estimation and Taguchi's minute accumulating analysis. In addition, three numerical examples are presented to compare the proposed procedures with the conventional ones. These comparisons reveal that the proposed procedures are effective and feasible.
Esteban and D. Morales
Abstract
Proposes a partially parametric estimation of a survival function when data may be both left and right censored. Assuming that the chance of censoring is not related to the individual's survival time, the proposed estimator treats the uncensored observations nonparametrically and uses parametric models for the censored observations. In this way, the results extend Klein et al.'s work (1990) to the doubly censored data case. Shows some of the properties of the estimator when the correct theoretical parametric model is selected.
L.L. Ho and A.F. Silva
Abstract
Purpose
To present the bootstrap procedure to correct biases in maximum likelihood estimator of mean time to failure (MTTF) and percentiles in a Weibull regression model.
Design/methodology/approach
A reliability model is described by a Weibull regression model whose parameters are estimated by the maximum likelihood method and then used to estimate other quantities of interest, such as the MTTF or percentiles. When a small sample is employed, it is known that the estimates of these quantities are biased. A simulation study varying sample size, censoring mechanism, allocation mechanism and level of censoring is designed to quantify these biases.
Findings
The bootstrap procedure corrects the biased maximum likelihood estimates of MTTF and percentiles.
Practical implications
A smaller sample may be sufficient when the bootstrap procedure is used to produce estimators of quantities such as the MTTF and percentiles.
Originality/value
The bootstrap procedure is employed to quantify the biases, since analytical expressions for the biases are very difficult to calculate, and smaller samples are needed to obtain unbiased estimates from the bootstrap-corrected estimator.
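The bias-correction idea above can be sketched in a few lines. The following is a minimal illustration only, not the authors' full Weibull regression model: it bootstrap-corrects the plug-in maximum likelihood estimate of a Weibull 10th percentile, with an invented sample size, shape, scale and seed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def p10_mle(sample):
    # ML fit of a two-parameter Weibull (location fixed at 0),
    # then plug in the fitted 10th percentile.
    shape, _, scale = stats.weibull_min.fit(sample, floc=0)
    return stats.weibull_min.ppf(0.10, shape, scale=scale)

def bootstrap_corrected_p10(sample, n_boot=200):
    theta_hat = p10_mle(sample)
    # Re-estimate on resamples drawn with replacement.
    boot = np.array([
        p10_mle(rng.choice(sample, size=sample.size, replace=True))
        for _ in range(n_boot)
    ])
    bias = boot.mean() - theta_hat   # estimated small-sample bias
    return theta_hat - bias          # bias-corrected estimate

# Small sample from a known Weibull, where the plug-in estimate is biased.
data = stats.weibull_min.rvs(1.5, scale=100.0, size=12, random_state=rng)
p10_corrected = bootstrap_corrected_p10(data)
```

The same resampling logic applies to the MTTF or to estimates from a full regression model; only `p10_mle` would change.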
Priyanka Chaurasia, Sally McClean, Chris D. Nugent and Bryan Scotney
Abstract
Purpose
The purpose of this paper is to discuss an online sensor-based support system which the authors believe can be useful in such scenarios: persons with a cognitive impairment, such as those with Alzheimer's disease, suffer from deficiencies in cognitive skills which reduce their independence; such patients can benefit from the provision of further assistance, such as reminders for carrying out instrumental activities of daily living (IADLs).
Design/methodology/approach
The system proposed processes data from a network of sensors that have the capability of sensing user interactions and on-going IADLs in the living environment itself. A probabilistic learning model is built that computes joint probability distributions over different activities representing users' behavioural patterns in performing activities. This probability model can underpin an intervention framework that prompts the user with the next step in the IADL when inactivity is being observed. This prompt for the next step is inferred from the conditional probability, taking into consideration the IADL steps that have already been completed, in addition to contextual information relating to the time of day and the amount of time already spent on the activity. The originality of the work lies in combining partially observed sensor sequences and duration data associated with the IADLs. The prediction of the next step is then adjusted as further steps are completed and more time is spent towards the completion of the activity, thus updating the confidence that the prediction is correct. A reminder is only issued when there has been sufficient inactivity on the part of the patient and the confidence is high that the prediction is correct.
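The next-step inference can be illustrated with a toy example. The step names and training sequences below are hypothetical, and the contextual features the paper also conditions on (time of day, elapsed duration) are omitted for brevity: only the conditional probability of the next step given the observed prefix is shown.

```python
from collections import Counter

# Hypothetical training data: observed step sequences for one IADL
# (e.g. "make tea"), as recorded by the sensor network.
sequences = [
    ("kettle", "cup", "teabag", "milk"),
    ("kettle", "cup", "teabag", "sugar"),
    ("kettle", "teabag", "cup", "milk"),
    ("kettle", "cup", "teabag", "milk"),
]

def next_step_distribution(observed):
    """P(next step | partially observed prefix), estimated from counts."""
    n = len(observed)
    counts = Counter(
        seq[n] for seq in sequences
        if seq[:n] == observed and len(seq) > n
    )
    total = sum(counts.values())
    return {step: c / total for step, c in counts.items()}

probs = next_step_distribution(("kettle", "cup"))
best = max(probs, key=probs.get)   # candidate prompt for the user
confidence = probs[best]           # only remind the user if this is high
```

As further steps are observed, the prefix grows and the distribution is re-computed, which is how the prediction (and its confidence) gets adjusted over time.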
Findings
The results of this study verify that by including duration information, the prediction accuracy of the model is increased and the confidence level for the next step in the IADL is also increased. As such, there is approximately a 10 per cent rise in prediction performance in the case of single-sensor activation in comparison to an alternative approach which did not consider activity durations.
Practical implications
Duration information has largely been ignored by activity recognition researchers and has seen only limited application within smart environments.
Originality/value
This study concludes that incorporating progressive duration information into partially observed sensor sequences of IADLs has the potential to increase performance of a reminder system for patients with a cognitive impairment, such as Alzheimer’s disease.
Priyanka Chaurasia, Sally McClean, Chris D. Nugent and Bryan Scotney
Abstract
Purpose
This paper aims to discuss an online sensor-based support system which is believed to be useful for persons with a cognitive impairment, such as those with Alzheimer’s disease, suffering from deficiencies in cognitive skills which reduce their independence. Such patients can benefit from the provision of further assistance such as reminders for carrying out instrumental activities of daily living (iADLs).
Design/methodology/approach
The system proposed processes data from a network of sensors that have the capability of sensing user interactions and ongoing iADLs in the living environment itself. A probabilistic learning model is built that computes joint probability distributions over different activities representing users' behavioural patterns in performing activities. This probability model can underpin an intervention framework that prompts the user with the next step in the iADL when inactivity is being observed. This prompt for the next step is inferred from the conditional probability, taking into consideration the iADL steps that have already been completed, in addition to contextual information relating to the time of day and the amount of time already spent on the activity. The originality of the work lies in combining partially observed sensor sequences and duration data associated with the iADLs. The prediction of the next step is then adjusted as further steps are completed and more time is spent towards the completion of the activity, thus updating the confidence that the prediction is correct. A reminder is only issued when there has been sufficient inactivity on the part of the patient and the confidence is high that the prediction is correct.
Findings
The results verify that by including duration information, the prediction accuracy of the model is increased, and the confidence level for the next step in the iADL is also increased. As such, there is approximately a 10 per cent rise in the prediction performance in the case of single-sensor activation in comparison to an alternative approach which did not consider activity durations. Thus, it is concluded that incorporating progressive duration information into partially observed sensor sequences of iADLs has the potential to increase performance of a reminder system for patients with a cognitive impairment, such as Alzheimer’s disease.
Originality/value
Activity duration information can be a potential feature in measuring the performance of a user and distinguishing different activities. The results verify that by including duration information, the prediction accuracy of the model is increased, and the confidence level for the next step in the activity is also increased. The use of duration information in online prediction of activities can also be associated with monitoring the deterioration in cognitive abilities and with making a decision about the level of assistance required. Such improvements have significance in building more accurate reminder systems that precisely predict activities and assist their users, thus improving the overall support provided for living independently.
Abstract
Purpose
Planning an accelerated life test (ALT) for a product is an important task for reliability practitioners. Traditional methods to create an optimal design of an ALT are often computationally burdensome and numerically difficult. In this paper, the authors introduce a practical method to find an optimal design of experiments for ALTs by using simulation and empirical model building.
Design/methodology/approach
Instead of developing a Fisher information matrix-based objective function and analytic optimization, the authors suggest an "experiments for experiments" approach to create optimal planning. The authors generate simulated data to evaluate the quantity of interest, e.g. the 10th percentile of failure time, and apply response surface methodology (RSM) to find an optimal solution with respect to the design parameters, e.g. test conditions and test unit allocations. The authors illustrate their approach applied to a thermal ALT with right censoring and a lognormal failure time distribution.
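The "experiments for experiments" idea can be sketched as follows. This is a deliberately simplified toy, not the authors' procedure: the lognormal life model, stress levels, sample sizes and seed are invented, and both the high-stress arm and the censoring/extrapolation steps are omitted, so only the core loop survives: simulate the standard error of the quantity of interest for each candidate design, then fit a quadratic response surface and minimize it.

```python
import numpy as np

rng = np.random.default_rng(7)

def se_of_p10(frac_low, n_total=40, n_rep=200):
    """Simulated standard error of the estimated 10th percentile of life
    at the low stress level, for a design allocating `frac_low` of the
    test units to low stress (high-stress arm omitted in this toy)."""
    n_low = int(round(frac_low * n_total))
    estimates = []
    for _ in range(n_rep):
        # Hypothetical lognormal life model at the low stress level.
        low = rng.lognormal(mean=5.0, sigma=0.5, size=n_low)
        mu, sig = np.log(low).mean(), np.log(low).std(ddof=1)
        estimates.append(np.exp(mu - 1.2816 * sig))  # plug-in 10th percentile
    return np.std(estimates, ddof=1)

# Evaluate a handful of candidate designs by simulation...
fracs = np.array([0.3, 0.45, 0.6, 0.75, 0.9])
ses = np.array([se_of_p10(f) for f in fracs])

# ...then fit a quadratic response surface to the simulated standard
# errors and take its minimum over the region actually explored.
coef = np.polyfit(fracs, ses, 2)
grid = np.linspace(fracs.min(), fracs.max(), 121)
best_frac = grid[np.polyval(coef, grid).argmin()]
```

The same skeleton works for any quantity computable from simulated datasets; only `se_of_p10` changes, which is the flexibility the paper emphasizes.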
Findings
The design found by the proposed approach shows substantially improved statistical performance in terms of the standard error of estimates of the 10th percentile of failure time. In addition, the approach provides useful insights about the sensitivity of the objective function to each decision variable.
Research limitations/implications
More comprehensive experiments might be needed to test the scalability of the method.
Practical implications
This method is practically useful to find a reasonably efficient optimal ALT design. It can be applied to any quantities of interest and objective functions as long as those quantities can be computed from a set of simulated datasets.
Originality/value
This is a novel approach to create an optimal ALT design by using RSM and simulated data.
Abstract
Purpose
The purpose of this paper is to examine the joint dynamics of volatility–volume relation in the high-yield (junk) corporate bond market during the 2007–2008 financial crisis.
Design/methodology/approach
The author proposes a new three-stage empirical model to better estimate the volume–volatility relation, which helps alleviate three econometric problems. In Stage 1, the author estimates the fitted values of trading volume using a censored regression model, to alleviate the truncation problems of using Trade Reporting and Compliance Engine data. In Stage 2, the author calculates the fitted values of bond return volatility using an asymmetric Sign-GARCH model, to control for the asymmetric volatility in the return series. In Stage 3, the author uses the fitted values of trading volume from the censored regression model (Stage 1) and the fitted values of return volatility from the GARCH model (Stage 2), to better alleviate the endogeneity problems between the two variables.
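Stage 1's censored regression can be illustrated with a simple Tobit-style maximum likelihood fit. The data below are simulated stand-ins, not the paper's TRACE sample: a latent volume depends on one covariate, reported volume is capped at an invented ceiling `c`, and the likelihood mixes density terms for uncensored observations with tail-probability terms for capped ones.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)

# Hypothetical setup: latent trading volume depends on one covariate, but
# reported volume is capped at c (a stand-in for dissemination caps).
n, c = 500, 8.0
x = rng.normal(size=n)
y_latent = 5.0 + 2.0 * x + rng.normal(scale=1.0, size=n)
y = np.minimum(y_latent, c)
censored = y_latent >= c

def neg_loglik(params):
    b0, b1, log_s = params
    s = np.exp(log_s)        # parameterize sigma on the log scale
    mu = b0 + b1 * x
    # Uncensored observations contribute the normal density...
    ll_unc = norm.logpdf((y - mu)[~censored] / s) - np.log(s)
    # ...censored ones contribute P(latent volume >= cap).
    ll_cen = norm.logsf((c - mu[censored]) / s)
    return -(ll_unc.sum() + ll_cen.sum())

fit = minimize(neg_loglik, x0=[y.mean(), 0.0, 0.0], method="BFGS")
b0_hat, b1_hat = fit.x[:2]
```

An ordinary least squares fit on the capped `y` would understate the slope; the censored likelihood recovers the latent relationship, which is the point of Stage 1.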
Findings
The central finding is that conclusions about the statistical significance and the direction of the volume–volatility relationship in the junk bond market are dependent on the econometric methodology used.
Originality/value
From a practitioner perspective, it is important for professional traders holding positions in fixed income securities in their trading accounts to be aware of their asymmetric time-varying volume–volatility shifting trends. Such knowledge helps traders diversify their positions and manage their portfolios more appropriately.
Stephan Lindner and Austin Nichols
Abstract
Workers in the United States who lose their job may benefit from temporary assistance programs and may apply for Disability Insurance (DI) and Supplemental Security Income (SSI). We measure whether participation in four temporary assistance programs (Temporary Assistance for Needy Families (TANF), the Supplemental Nutrition Assistance Program (SNAP), Unemployment Insurance (UI), and Temporary Disability Insurance (TDI) programs) influences application for DI, SSI, and re-employment. We instrument temporary assistance participation using variation in policies across states and over time. Results from our instrumental variables models suggest that increased access to UI benefits reduces applications for DI. This result is robust to different sensitivity checks. We also find less robust evidence that UI participation increases the probability of return to work and reduces the probability of claiming SSI benefits. In contrast, some of our results suggest a positive effect of SNAP participation on claiming SSI.
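The instrumental variables logic can be sketched with a minimal two-stage least squares example. The data-generating process below is entirely invented for illustration (a "policy generosity" instrument, an unobserved hardship confounder); it is not the authors' specification, but it shows why instrumenting matters when participation is endogenous.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 5000

# Hypothetical DGP: z = state policy generosity (instrument), u = unobserved
# hardship driving both UI participation and DI application.
z = rng.normal(size=n)
u = rng.normal(size=n)
ui = (1.5 * z + u + rng.normal(size=n) > 0).astype(float)  # endogenous
di = -0.5 * ui + u + rng.normal(size=n)                    # true effect: -0.5

def two_sls(y, d, z):
    Z = np.column_stack([np.ones(len(z)), z])
    d_hat = Z @ np.linalg.lstsq(Z, d, rcond=None)[0]   # first stage
    X = np.column_stack([np.ones(len(z)), d_hat])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]     # second stage

naive = np.polyfit(ui, di, 1)[0]   # OLS: biased upward by the confounder u
beta_iv = two_sls(di, ui, z)
```

Because `u` pushes both participation and DI application up, the naive OLS slope is badly biased (here it even has the wrong sign), while 2SLS recovers a negative effect close to the truth.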
Ruonan Liu, Yuhui Yue, Dongling Miao and Baodong Cheng
Abstract
Purpose
This article selects 25 years of subdivided data to perform Kaplan–Meier survival analysis on the export trade relations of Chinese wooden flooring, uses discrete-time cloglog models to analyze influencing factors, uses logit and probit models to test robustness, and attempts to systematically reveal the duration of China's wood flooring export trade and its influencing factors.
Design/methodology/approach
This study uses the Kaplan–Meier survival function estimation method. In survival analysis, the survival function and the hazard rate function are often used to characterize the distribution of survival time.
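The Kaplan–Meier estimator is simple to state: at each distinct failure time, the survival estimate is multiplied by (1 − d/n), where d is the number of failures at that time and n the number still at risk. A minimal sketch, with invented durations rather than the paper's trade data (a trade relationship "fails" when exports stop; still-active relationships are right-censored):

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier estimate of S(t) at each distinct event time.
    events[i] is 1 if relation i ended (failure), 0 if right-censored."""
    times, events = np.asarray(times, float), np.asarray(events, int)
    out = []
    s = 1.0
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)                 # n: alive just before t
        died = np.sum((times == t) & (events == 1))  # d: failures at t
        s *= 1.0 - died / at_risk
        out.append((t, s))
    return out

# Hypothetical durations (years) of export trade relationships;
# 0 marks relationships still active at the end of observation.
durations = [3, 5, 5, 8, 12, 14, 14, 20, 25, 25]
observed  = [1, 1, 0, 1,  1,  0,  1,  1,  0,  0]
curve = kaplan_meier(durations, observed)
```

Censored relationships still count in the risk set up to their censoring time, which is what distinguishes this estimator from a naive empirical distribution of completed spells.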
Findings
The average continuous export duration of China's wooden flooring is relatively long, about 14 years. China's wooden flooring exports show negative time dependency: after an export trade relationship exceeds the threshold value of 15 years, the failure rate of trade greatly decreases, a "threshold effect." Gravity model variables have a significant impact on the duration of China's wooden flooring exports.
Originality/value
Studying the duration of forest products trade is of great significance for clarifying deep-level trade relations and promoting sustainable development of forest products trade.
Marcelo Cajias and Anna Freudenreich
Abstract
Purpose
This is the first article to apply a machine learning approach to the analysis of time on market in real estate markets.
Design/methodology/approach
The random survival forest approach is introduced to the real estate market. The most important predictors of time on market are revealed and it is analyzed how the survival probability of residential rental apartments responds to these major characteristics.
Findings
Results show that price, living area, construction year, year of listing and the distances to the nearest hairdresser, bakery and city center have the greatest impact on the marketing time of residential apartments. The time on market for an apartment in Munich is lowest at a price of €750 per month, an area of 60 m², a construction year of 1985 and a location within 200–400 meters of the important amenities.
Practical implications
The findings might be interesting for private and institutional investors to derive real estate investment decisions and implications for portfolio management strategies and ultimately to minimize cash-flow failure.
Originality/value
Although machine learning algorithms have been applied frequently in the real estate market for the analysis of prices, their application to examining time on market is completely novel. This is the first paper to apply a machine learning approach to survival analysis in the real estate market.