Search results

1 – 10 of over 6000
Article
Publication date: 22 September 2017

Ryan McKeon

Abstract

Purpose

The purpose of this paper is to conduct an empirical analysis of the pattern of time value decay in listed equity options, considering both call and put options and different moneyness and maturity levels.

Design/methodology/approach

The research design is empirical, with great attention paid to creating a standardized measure of time value that can be both tracked over time for an individual option contract and meaningfully compared across two or more different option contracts.
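
As an illustration of what such a standardized measure could look like (an assumption for this listing, not necessarily the author's exact definition), time value can be taken as the option price minus intrinsic value, scaled by the underlying price so that different contracts are comparable:

    # Minimal sketch of one plausible standardization of option time value.
    def standardized_time_value(option_price, spot, strike, is_call=True):
        # intrinsic value is what the option would pay if exercised immediately
        intrinsic = max(spot - strike, 0.0) if is_call else max(strike - spot, 0.0)
        # remaining premium, scaled by the underlying price for comparability
        return (option_price - intrinsic) / spot

    # 30-day call quoted at 2.40 on a 100 stock with a 98 strike:
    # intrinsic value 2.00, time value 0.40, standardized value 0.004
    print(standardized_time_value(2.40, 100.0, 98.0, is_call=True))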

Findings

The author finds that moneyness classification at the beginning of the holding period is the key determinant of the pattern of subsequent time decay. The type of option, call or put, and the maturity of the contract have surprisingly little relevance to the pattern of time decay: out-of-the-money contracts, for example, have similar patterns on average regardless of whether they are calls or puts, 30-day or 60-day contracts. More detailed analysis reveals that in-the-money and out-of-the-money contracts have slow time decay for most of the contract life, with a significant percentage of the time decay concentrated on the final day of the option. At-the-money contracts experience strong decay early in the life of the option.

Research limitations/implications

The study is limited by not having intra-day data included to analyze more frequent price movements.

Practical implications

The results reported in the paper provide insight into issues of active management facing options traders, specifically choices such as the initial maturity of the option contract and rollover frequency.

Originality/value

Very few studies examine the important issue of how option time value behaves. Time value is the subjective part of the option contract value, and therefore very difficult to predict and understand. This paper provides insight into typical empirical patterns of time value behavior.

Details

China Finance Review International, vol. 7 no. 4
Type: Research Article
ISSN: 2044-1398

Keywords

Article
Publication date: 19 June 2019

Kasif Teker, Yassir A. Ali and Ali Uzun

Abstract

Purpose

This study aims to investigate the photosensing characteristics of SiC and GaN nanowire-based devices through exposure to UV light. The photocurrent transients have been modeled to determine the rise and decay process time constants. One-dimensional semiconductor nanowires can exhibit higher light sensitivity than bulk materials because of their large surface-area-to-volume ratio and quantum size effects.

Design/methodology/approach

Nanowire devices have been fabricated through dielectrophoresis, integrating nanowires onto pre-patterned electrodes (10 nm Ti/90 nm Au) with a spacing of about 3 µm on a SiO2/Si (doped) substrate. The photocurrent measurements were carried out at room temperature under UV light of 254 nm wavelength.

Findings

SiCNWs yield very short rise and decay time constants of 1.3 and 2.35 s, respectively. This fast response indicates an enhanced surface recombination of photoexcited electron-hole pairs. Conversely, GaNNWs yield longer rise and decay time constants of 10.3 and 15.4 s, respectively. This persistent photocurrent suggests a reduced surface recombination process for the GaNNWs.
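
For illustration only (the paper's exact transient model is not reproduced here), rise and decay time constants of this kind are typically extracted by fitting an exponential model to the measured photocurrent:

    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical fit of a photocurrent rise transient to a single-exponential model:
    # on UV turn-on the current rises as I(t) = I_dark + dI * (1 - exp(-t/tau_rise)).
    def rise(t, i_dark, d_i, tau):
        return i_dark + d_i * (1 - np.exp(-t / tau))

    t = np.linspace(0, 10, 200)                              # seconds
    data = rise(t, 1.0, 4.0, 1.3) + np.random.default_rng(0).normal(0, 0.05, t.size)
    (_, _, tau_rise), _ = curve_fit(rise, t, data, p0=(1.0, 1.0, 1.0))
    print(f"rise time constant: {tau_rise:.2f} s")           # ~1.3 s, as reported for the SiC nanowires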

Originality/value

High selective UV light sensitivity, small size, very short response time, low power consumption and high efficiency are the most important features of nanowire-based devices for new and superior applications in photodetectors, photovoltaics, optical switches, image sensors and biological and chemical sensing.

Article
Publication date: 18 April 2016

Arnt O. Hopland and Sturla F. Kvamsdal

Abstract

Purpose

The purpose of this paper is to set up and analyze a formal model for maintenance scheduling for local government purpose buildings.

Design/methodology/approach

The authors formulate the maintenance scheduling decision as a dynamic optimization problem, subject to an accelerating decay. This approach offers a formal, yet intuitive, weighting of an important trade-off when deciding a maintenance schedule.

Findings

The optimal maintenance schedule reflects a trade-off between the interest rate and the rate at which the decay accelerates. The former reflects the alternative cost, since the money spent on maintenance could instead be saved and earn interest, while the latter reflects the cost of postponing maintenance. Importantly, it turns out to be sub-optimal to have a cyclical maintenance schedule in which the building is allowed to decay and is then intensively maintained before decaying again. Rather, local governments should either concentrate maintenance early in the building's life-span and eventually let it decay toward replacement/abandonment, or first let it decay to a target level and then keep it there until replacement/abandonment. Which of the two is optimal depends on the trade-off between the alternative cost and the cost of postponing maintenance.
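
A minimal numerical sketch of this kind of trade-off, under assumed cost and decay functions rather than the authors' exact model, can be written as a discrete-time discounted optimization:

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical discrete-time sketch: building condition decays at an accelerating
    # rate, maintenance m[t] restores it, and future costs are discounted at rate r.
    T, r = 40, 0.04          # planning horizon (years), interest rate
    alpha, beta = 0.02, 1.5  # assumed decay scale and acceleration exponent

    def total_cost(m):
        c, cost = 1.0, 0.0                              # c = condition index, 1.0 = as new
        for t in range(T):
            decay = alpha * (1.0 - c + 0.1) ** beta     # decay accelerates as condition worsens
            c = min(1.0, c - decay + m[t])
            cost += (m[t] + 5.0 * (1.0 - c)) / (1.0 + r) ** t   # maintenance + deterioration cost
        return cost

    res = minimize(total_cost, x0=np.full(T, 0.02),
                   bounds=[(0.0, 0.2)] * T, method="L-BFGS-B")
    print(res.x.round(3))    # optimized maintenance path over the building's life-span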

Originality/value

The paper provides a first formal inquiry into trade-offs that are important for the maintenance scheduling of local public purpose buildings.

Details

Property Management, vol. 34 no. 2
Type: Research Article
ISSN: 0263-7472

Keywords

Article
Publication date: 10 July 2017

Walid Ben Omrane, Chao He, Zhongzhi Lawrence He and Samir Trabelsi

Abstract

Purpose

Forecasting the future movement of yield curves contains valuable information for both academic and practical issues such as bond pricing, portfolio management and government policies. The purpose of this paper is to develop a dynamic factor approach that can provide more precise and consistent forecasting results under various yield curve dynamics.

Design/methodology/approach

The paper develops a unified dynamic factor model based on the Diebold and Li (2006) and Nelson and Siegel (1987) three-factor models to forecast the future movement of yield curves. The authors apply the state-space model and the Kalman filter to estimate parameters and extract factors from US yield curve data.
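
For reference, a minimal cross-sectional sketch of the Nelson and Siegel (1987) loadings used by Diebold and Li (2006) is shown below; the decay rate lam is held fixed here, whereas the paper's contribution is to estimate it as a free parameter inside a state-space model with the Kalman filter:

    import numpy as np

    # Nelson-Siegel loadings for level, slope and curvature at the given maturities.
    def ns_loadings(maturities, lam):
        m = np.asarray(maturities, dtype=float)
        slope = (1 - np.exp(-lam * m)) / (lam * m)
        return np.column_stack([np.ones_like(m), slope, slope - np.exp(-lam * m)])

    maturities = np.array([3, 6, 12, 24, 60, 120])        # months (illustrative)
    yields = np.array([4.9, 5.0, 5.1, 5.3, 5.6, 5.8])     # per cent (illustrative)
    X = ns_loadings(maturities, lam=0.0609)               # Diebold-Li's fixed decay rate
    level, slope, curvature = np.linalg.lstsq(X, yields, rcond=None)[0]
    print(level, slope, curvature)                        # fitted factors for this cross-section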

Findings

The authors compare both in-sample and out-of-sample performance of the dynamic approach with various existing models in the literature, and find that the dynamic factor model produces the best in-sample fit, and it dominates existing models in medium- and long-horizon yield curve forecasting performance.

Research limitations/implications

The authors find that the dynamic factor model and the Kalman filter technique should be used with caution when forecasting short-maturity yields over a short time horizon, where the Kalman filter is prone to trading off out-of-sample robustness to maintain its in-sample efficiency.

Practical implications

Bond analysts and portfolio managers can use the dynamic approach to produce more accurate forecasts of yield curve movements.

Social implications

The enhanced forecasting approach also equips the government with a valuable tool in setting macroeconomic policies.

Originality/value

The dynamic factor approach is original in capturing the level, slope and curvature of yield curves in that the decay rate is set as a free parameter to be estimated from yield curve data, instead of being fixed as in the existing literature. The resulting range of estimated decay rates provides richer yield curve dynamics and is the key to the stronger forecasting performance.

Details

Managerial Finance, vol. 43 no. 7
Type: Research Article
ISSN: 0307-4358

Keywords

Article
Publication date: 11 March 2019

Oguchi Nkwocha

Abstract

Purpose

Measures are important to healthcare outcomes. Outcome changes result from deliberately introducing selected interventions on a measure. If measures can be characterized and categorized, then the resulting schema may be generalized and used as a framework for uniquely identifying, packaging and comparing different interventions and for probing target systems, facilitating selection of the most appropriate intervention for the maximum desired outcomes. Measure characterization was accomplished with multi-axial statistical analysis and measure categorization by logical tabulation. The measure of interest is a key provider productivity index, “patient visits per hour,” while the specific intervention is “patient schedule manipulation by overbooking.” The paper aims to discuss these issues.

Design/methodology/approach

For statistical analysis, interrupted time series (ITS), robust-ITS and outlier detection models were applied to an 18-month data set that included patient visits per hour and intervention introduction time. A statistically significant change-point was determined, resulting in pre-intervention, transitional and post-effect segmentation. Linear regression modeling was used to analyze pre-intervention and post-effect mean change while a triangle was used to analyze the transitional state. For categorization, an “intervention moments” table was constructed from the analysis results with: time-to-effect, pre- and post-mean change magnitude and velocity; pre- and post-correlation and variance; and effect decay/doubling time. The table included transitional parameters such as transition velocity and transition footprint visualization represented as a triangle.
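
A simplified sketch of the pre/post comparison described above (illustrative data and an assumed change-point, not the author's robust-ITS code) might look like this:

    import numpy as np
    from scipy import stats

    # Split an 18-month series at an assumed change-point, then compare segment means
    # (t-test), variances (F-test) and fit a linear trend to the post-effect segment.
    visits = np.array([2.1, 2.0, 2.2, 2.1, 2.0, 2.2, 2.1,       # pre-intervention
                       2.2, 2.3, 2.4, 2.5,                      # transition (assumed)
                       2.5, 2.6, 2.5, 2.6, 2.5, 2.6, 2.5])      # post-effect
    pre, post = visits[:7], visits[11:]

    t_stat, p_mean = stats.ttest_ind(pre, post, equal_var=False)    # mean difference test
    f_stat = post.var(ddof=1) / pre.var(ddof=1)                     # variance ratio
    p_var = 1 - stats.f.cdf(f_stat, len(post) - 1, len(pre) - 1)
    slope_post = stats.linregress(np.arange(len(post)), post).slope # visits/hour per month
    print(round(post.mean() - pre.mean(), 2), round(p_mean, 4),
          round(f_stat, 2), round(p_var, 3), round(slope_post, 3))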

Findings

The intervention produced a significant change. The pre-intervention and post-effect means for patient visits per hour were statistically different (0.38, p=0.0001). The pre- and post-variance change (0.23, p=0.01) was statistically significant (variance was higher post-intervention, which was undesirable). Post-intervention correlation was higher (desirable). Decay time for the effect was calculated as 11 months post-effect. Time-to-effect was four months; mean change velocity was +0.094 visits per hour per month. A transition triangular footprint was produced, yielding a transition velocity of 0.35 visits per hour per month. Using these results, the intervention was fully profiled and thereby categorized in an intervention moments table.

Research limitations/implications

One limitation is the sample size of the time series: 18 monthly cycles. However, interventions on measures in healthcare demand short time cycles (hence necessarily yielding fewer data points) for practicality, meaningfulness and usefulness. Despite this shortcoming, the statistical procedures applied, such as outlier detection, the t-test for mean difference, the F-test for variances and the modeling, all account for small sample sizes. Seasonality, which usually affects time series, was not detected and, even if present, would also have been accounted for by the modeling.

Practical implications

Obtaining an intervention profile, made possible by multidimensional analysis, allows interventions to be uniquely classified and categorized, enabling informed, comparative and appropriate selective deployment against health measures, thus potentially contributing to outcomes optimization.

Social implications

The inevitable direction for healthcare is heavy investment in measures outcomes optimization to improve patient experience and population health and to reduce costs. Interventions are the tools that change outcomes. Creative modeling and the application of novel methods for intervention analysis are necessary if healthcare is to achieve this goal. Analytical methods should categorize and rank interventions; probe the measures to improve future selection and adoption; and reveal the strengths and shortcomings of the systems implementing the interventions so that they can be fine-tuned for better performance.

Originality/value

An “intervention moments table” is proposed, created from a multi-axial statistical intervention analysis for organizing, classifying and categorizing interventions. The analysis set was expanded with additional parameters such as time-to-effect, mean change velocity and effect decay/doubling time, including a transition zone analysis, which produced a unique transitional footprint and a transition velocity. The “intervention moments” should facilitate intervention cross-comparisons, intervention selection and optimal intervention deployment for the best outcomes.

Details

International Journal of Health Care Quality Assurance, vol. 32 no. 2
Type: Research Article
ISSN: 0952-6862

Keywords

Article
Publication date: 27 February 2023

Wenfeng Zhang, Ming K. Lim, Mei Yang, Xingzhi Li and Du Ni

Abstract

Purpose

As the supply chain is a highly integrated infrastructure in modern business, supply chain risks are becoming highly contagious for the target company. This motivates researchers to continuously add new features to the datasets used for credit risk prediction (CRP). However, adding new features can easily lead to missing data.

Design/methodology/approach

Based on the gaps summarized from the CRP literature, this study first introduces the approaches to building the datasets and framing the algorithmic models. Then, it tests the interpolation effects of the algorithmic model on three artificial datasets with different missing rates and compares its predictability before and after interpolation on a real dataset with missing data in irregular time series.

Findings

The time-decayed long short-term memory (TD-LSTM) model proposed in this study can handle missing data in irregular time series by capturing more and better time-series information and interpolating the missing data efficiently. Moreover, a deep neural network model can be used for CRP on datasets with missing data in irregular time series after interpolation by the TD-LSTM.
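
As a rough illustration of the time-decay idea (an assumed mechanism following common TD-LSTM formulations, not necessarily the authors' exact cell), the memory carried between irregularly spaced observations can be discounted by the elapsed time gap:

    import numpy as np

    # Exponentially decay the previous hidden state over an irregular gap delta_t (days),
    # so that stale information contributes less to the next update.
    def decay(h_prev, delta_t, tau=30.0):
        return h_prev * np.exp(-delta_t / tau)

    h = np.zeros(8)                                        # hidden state of a toy 8-unit cell
    for delta_t, x in [(1, 0.2), (7, 0.5), (45, 0.1)]:     # irregularly spaced observations
        h = decay(h, delta_t)                              # discount memory by the elapsed gap
        h = np.tanh(0.5 * x + 0.5 * h)                     # stand-in for the full LSTM update
    print(h)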

Originality/value

This study fully validates the TD-LSTM interpolation effects and demonstrates that the predictability of the dataset after interpolation is improved. Accurate and timely CRP can undoubtedly assist a target company in avoiding losses. Identifying credit risks and taking preventive measures ahead of time, especially in the case of public emergencies, can help the company minimize losses.

Details

Industrial Management & Data Systems, vol. 123 no. 5
Type: Research Article
ISSN: 0263-5577

Keywords

Article
Publication date: 15 March 2018

Vaibhav Chaudhary, Rakhee Kulshrestha and Srikanta Routroy

Abstract

Purpose

The purpose of this paper is to review and analyze the perishable inventory models along various dimensions such as its evolution, scope, demand, shelf life, replenishment policy, modeling techniques and research gaps.

Design/methodology/approach

In total, 418 relevant scholarly articles by various researchers and practitioners from 1990 to 2016 were reviewed. They were critically analyzed along dimensions such as author profile, nature of perishability, research contributions of different countries, publications over time and research methodologies adopted, in order to draw fruitful conclusions. Future research directions for perishable inventory modeling were also discussed and suggested.

Findings

There is a plethora of perishable inventory studies with divergent objectives and scope. Besides demand and the perishability rate, perishable inventory models were found to combine other factors such as price discounts, whether shortages are allowed, inflation and the time value of money to make them more realistic. The modeling of inventory systems with two or more perishable items is limited. Multi-echelon inventory models with centralized decision-making and information sharing are acquiring a lot of importance because of supply chain integration in the competitive market.
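
For context, many of the reviewed models build on a constant-deterioration inventory equation of the form dI/dt = -theta*I(t) - D; a minimal sketch with illustrative parameters (not taken from the review) is:

    import numpy as np

    # Inventory level under exponential deterioration (rate theta) and constant demand D.
    theta, D, I0, T = 0.05, 10.0, 200.0, 12.0               # per month, units/month, units, months
    t = np.linspace(0.0, T, 121)
    I = (I0 + D / theta) * np.exp(-theta * t) - D / theta   # closed-form inventory level
    depletion_time = np.log(1 + theta * I0 / D) / theta     # time at which I(t) reaches zero
    print(round(depletion_time, 2), "months until stock-out")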

Research limitations/implications

Only peer-reviewed journals and conference papers were analyzed, whereas the manuals, reports, white papers and blood-related articles were excluded. Clustering of literature revealed that future studies should focus on stochastic modeling.

Practical implications

Stress has been laid on identifying future research gaps that will help in developing realistic models. The present work forms a guideline for choosing the appropriate methodology(s) and mathematical technique(s) in different situations involving perishable inventory.

Originality/value

The current review analyzed 419 research papers available in the literature on perishable inventory modeling to summarize its current status and identify potential future directions. The remaining research gaps were also uncovered. This systematic review is strongly felt to fill the gap in the perishable inventory literature and to help in formulating effective strategies for the design of an effective and efficient inventory management system for perishable items.

Details

Journal of Advances in Management Research, vol. 15 no. 3
Type: Research Article
ISSN: 0972-7981

Keywords

Article
Publication date: 12 September 2008

S. El Ferik, C.B. Ahmed, L. Ben Amor and S.A. Hussain

Abstract

Purpose

The purpose of this paper is to reduce the inrush current and dip in voltage for energy-saving purposes in relation to residential air-conditioning systems.

Design/methodology/approach

The paper focuses on an experimental harmonic investigation of the line current of a window-type residential AC unit under a time-based soft-starting control strategy. The control strategy assumes that only source voltage and current measurements are available. The soft-starter is based on power electronic devices controlled through a firing signal generated by a programmed microcontroller during the first 500 ms.

Findings

The harmonic content shows the effect of the soft-starter in exciting high-frequency components of the line current. Harmonic investigations show that high-frequency components, both even and odd multiples of the fundamental line frequency, are all excited by the soft-starter approach. Some of these frequencies may harm the life cycle of the air-conditioner.
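
An illustrative way to inspect such harmonic content (not the authors' measurement setup) is to sample the line current over the soft-start window and examine the FFT magnitudes at multiples of the fundamental:

    import numpy as np

    # Synthetic distorted line current sampled over the 500 ms starting window;
    # the fundamental frequency of 60 Hz is an assumption for this example.
    fs, f0, T = 10_000, 60.0, 0.5                      # sample rate (Hz), fundamental (Hz), window (s)
    t = np.arange(0, T, 1 / fs)
    i_line = (np.sin(2 * np.pi * f0 * t)               # fundamental
              + 0.20 * np.sin(2 * np.pi * 2 * f0 * t)  # 2nd harmonic (even multiple)
              + 0.15 * np.sin(2 * np.pi * 3 * f0 * t)) # 3rd harmonic (odd multiple)
    spectrum = np.abs(np.fft.rfft(i_line)) / len(t) * 2
    freqs = np.fft.rfftfreq(len(t), 1 / fs)
    for k in (1, 2, 3, 5):
        idx = np.argmin(np.abs(freqs - k * f0))
        print(f"{k * f0:5.0f} Hz: {spectrum[idx]:.2f} A")   # amplitude at each harmonic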

Research limitations/implications

The harmonic analysis of the real data shows that the adopted approach excites the entire frequency spectrum of the signal. Better monitoring of the harmonics is required. A closed-loop adaptive soft-starting control may perform much better than a time-based soft-starting strategy.

Originality/value

The paper assesses the power quality related to a time-based soft-starting strategy for a residential air-conditioning system, intended to reduce the inrush current and the dip in voltage, both of which have a serious effect on energy savings, especially when the AC load is around 65 per cent of the total power demand.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 27 no. 5
Type: Research Article
ISSN: 0332-1649

Keywords

Article
Publication date: 1 April 2000

Fumiyo N. Kondo and Genshiro Kitagawa

Abstract

Access to daily store-level scanner data has become increasingly easy in recent years in Japan, and time series analysis based on a sales response model is becoming realistic. This paper introduces a new method of combining time series analysis and regression analysis of the price promotion effect, which enables simultaneous decomposition of store-level scanner sales into trend (including seasonality), a day-of-the-week effect and an explanatory variable effect due to price promotion. The method was applied to daily store-level scanner sales of milk, showing evidence of the existence of a day-of-the-week effect. Further, a method of incorporating several kinds of price-cut variables in the regression analysis was presented, along with the analyzed results.
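
A simplified regression analogue of this decomposition (synthetic data; the paper itself works in a seasonal-adjustment/state-space framework) is sketched below:

    import numpy as np

    # Daily log sales explained by a linear trend, day-of-the-week dummies and a price-cut variable.
    rng = np.random.default_rng(0)
    n = 8 * 7                                                    # eight weeks of daily data
    trend = 0.002 * np.arange(n)
    dow = np.tile([0.0, -0.05, -0.05, 0.0, 0.1, 0.3, 0.2], 8)    # true day-of-week effect
    price_cut = rng.choice([0.0, 0.1, 0.2], size=n)              # fractional price cut
    log_sales = 3.0 + trend + dow + 1.5 * price_cut + rng.normal(0, 0.03, n)

    X = np.column_stack([np.ones(n), np.arange(n),
                         *(np.tile(np.eye(7), (8, 1)).T[1:]),    # six dummies, first day as baseline
                         price_cut])
    coef = np.linalg.lstsq(X, log_sales, rcond=None)[0]
    print("price-promotion effect estimate:", round(coef[-1], 2))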

Details

Marketing Intelligence & Planning, vol. 18 no. 2
Type: Research Article
ISSN: 0263-4503

Keywords

Article
Publication date: 1 March 2005

M. McSherry, C. Fitzpatrick and E. Lewis

Abstract

Purpose

There are various temperature measuring systems presented in the literature and on the market today. Over the past number of years a range of luminescent-based optical fibre sensors have been reported and developed, which include fluorescence and optical scattering. These temperature sensors incorporate materials that emit wavelength-shifted light when excited by an optical source. The majority of commercially available systems are based on fluorescent properties.

Design/methodology/approach

Many published journal articles and conference papers were investigated and existing temperature sensors on the market were examined.

Findings

In optical thermometry, light is used to carry temperature information. In many cases optical fibres are used to transmit and receive this light. Optical fibres are immune to electromagnetic interference and are small in size, which allows them to make very localized measurements. A temperature-sensitive material forms a sensor and the subsequent optical data are transmitted via optical fibres to electronic detection systems. Two key areas were investigated, namely fluorescence-based temperature sensors and temperature sensors involving optical scattering.

Originality/value

An overview of optical fibre temperature sensors based on luminescence is presented. This review provides a summary of optical temperature sensors, old and new, which exist in today's world of sensing.
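
As an illustration of the fluorescence decay-time principle surveyed here (assumed single-exponential model, not taken from the paper), the decay time is estimated from the emission transient and then mapped to temperature through a calibration curve:

    import numpy as np
    from scipy.optimize import curve_fit

    # The phosphor's emission after an excitation pulse decays as I(t) = I0*exp(-t/tau);
    # tau varies with temperature and is read off a calibration curve.
    def decay(t, I0, tau):
        return I0 * np.exp(-t / tau)

    t = np.linspace(0, 5e-3, 200)                        # seconds
    true_tau = 1.2e-3                                    # decay time at some temperature
    signal = decay(t, 1.0, true_tau) + np.random.default_rng(1).normal(0, 0.01, t.size)
    (I0_fit, tau_fit), _ = curve_fit(decay, t, signal, p0=(1.0, 1e-3))
    print(f"fitted decay time: {tau_fit * 1e3:.2f} ms")  # look up temperature from tau via calibration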

Details

Sensor Review, vol. 25 no. 1
Type: Research Article
ISSN: 0260-2288

Keywords
