Search results
1–10 of over 6,000
Abstract
Deregulation shifts the responsibility for mitigating agency problems from regulators to firms' shareholders. We investigate whether and how governance structure changes in response to the dynamics of the new business environment after the Regulatory Reform Act of 1994 for the US trucking industry. We show that deregulation increases market competition in the trucking industry. The deregulated trucking firms not only adjust their internal governance structures but also alter antitakeover provisions to adapt to the more competitive business environment after deregulation.
Mihir Kumar Pal and Pinki Bera
Abstract
This study attempts to analyze energy intensity, capacity utilization (CU), output, and productivity growth of the aggregate manufacturing sector in India during the period 1980–1981 to 2016–2017. A decadal analysis, as well as a comparison of productivity growth between the pre- and post-liberalization periods, is also made. Total factor productivity growth (TFPG) is also adjusted for CU to obtain adjusted TFPG. The trend in energy intensity is analyzed to address the question of sustainability. Results show that TFPG declined in the post-reform period, highlighting the fact that the liberalization process had an adverse impact on productivity growth. Adjusted TFPG also shows a declining trend in the post-reform period, and its rate of decline is higher. Energy intensity and CU of the Indian manufacturing industries are found to be increasing over the study period. Rising energy intensity would quite significantly increase the level of pollution generated by the manufacturing industries, which leads to the conclusion that the growth of the manufacturing industries is not in line with the basic essence of sustainable development.
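The arithmetic behind a CU-adjusted Solow residual can be sketched as follows. This is an illustrative reconstruction, not the authors' exact adjustment method: it assumes a Cobb–Douglas technology, treats the part of output growth due to rising utilization as non-TFP, and invents the capital share and growth rates.

```python
# Hedged sketch: TFP growth as a Solow residual, with a simple
# capacity-utilization (CU) adjustment. All numbers are illustrative.
ALPHA = 0.35  # assumed capital share

def tfp_growth(dy, dk, dl, alpha=ALPHA):
    """Solow residual: output growth minus factor-share-weighted input growth."""
    return dy - alpha * dk - (1 - alpha) * dl

def cu_adjusted_tfp_growth(dy, dk, dl, dcu, alpha=ALPHA):
    """Net CU growth out of output growth before taking the residual."""
    return tfp_growth(dy - dcu, dk, dl, alpha)

# Illustrative year: 6% output, 7% capital, 2% labour, 1% CU growth
g = tfp_growth(0.06, 0.07, 0.02)                        # unadjusted TFPG
g_adj = cu_adjusted_tfp_growth(0.06, 0.07, 0.02, 0.01)  # CU-adjusted TFPG
```

Under this convention, any year in which utilization rises shows lower adjusted than unadjusted TFPG, which is the direction of the gap the study describes.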
Stephen M. Stohs and Jeffrey T. LaFrance
Abstract
A common feature of certain kinds of data is a high level of statistical dependence across space and time. This spatial and temporal dependence contains useful information that can be exploited to significantly reduce the uncertainty surrounding local distributions. This chapter develops a methodology for inferring local distributions that incorporates these dependencies. The approach accommodates active learning over space and time, and from aggregate data and distributions to disaggregate individual data and distributions. We combine two data sets on Kansas winter wheat yields: annual county-level yields over the period from 1947 through 2000 for all 105 counties in the state of Kansas, and 20,720 individual farm-level sample moments based on ten years of reported actual production histories for the winter wheat yields of farmers participating in the United States Department of Agriculture Federal Crop Insurance Corporation Multiple Peril Crop Insurance Program in each of the years 1991–2000. We derive a learning rule that combines statewide, county, and local farm-level data using Bayes’ rule to estimate the moments of individual farm-level crop yield distributions. Information theory and the maximum entropy criterion are used to estimate farm-level crop yield densities from these moments. These posterior densities are found to substantially reduce the bias and volatility of crop insurance premium rates.
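The precision-weighted updating that Bayes' rule implies can be sketched in a simple conjugate setting: a statewide prior on a farm's mean yield, updated first on county-level and then on farm-level evidence. This is a hypothetical normal-normal illustration, not the chapter's actual learning rule, and it omits the maximum-entropy density step; all yields and variances are invented.

```python
# Hedged sketch: conjugate normal updating of a yield mean, combining
# statewide, county, and farm-level information. Numbers are illustrative.
def posterior_mean_var(prior_mean, prior_var, obs_mean, obs_var):
    """Normal-normal update: precision-weighted average of prior and data."""
    prec = 1.0 / prior_var + 1.0 / obs_var
    mean = (prior_mean / prior_var + obs_mean / obs_var) / prec
    return mean, 1.0 / prec

m, v = 40.0, 25.0                            # statewide prior (bu/acre)
m, v = posterior_mean_var(m, v, 35.0, 9.0)   # county-level evidence
m, v = posterior_mean_var(m, v, 30.0, 16.0)  # farm-level evidence
```

Each update shrinks the posterior variance, which is the mechanism by which pooling information across levels reduces the uncertainty surrounding local distributions.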
Kimberly J. Vannest and Heather S. Davis
Abstract
This chapter covers the conceptual framework and presents practical guidelines for using single-case research (SCR) methods to determine evidence-based treatments. We posit that SCR designs contribute compelling evidence to the knowledge base that is distinct from group design research. When effect sizes are calculated, SCR can serve as a reliable indicator of how much behavior change occurs with an intervention in applied settings. Strong SCR designs can determine functional relationships, and effect sizes with confidence intervals can represent the size and the certainty of the results in a standardized manner. Thus, SCR is unique in retaining data about the individual and individual effects, while also providing data that can be aggregated to identify evidence-based treatments and examine moderator variables.
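One widely used SCR effect size, Nonoverlap of All Pairs (NAP), can be computed as below. The AB-design data are hypothetical, and the metric is offered as an illustration of the kind of standardized effect size the chapter discusses, not as its specific recommendation.

```python
from itertools import product

# Hedged sketch: NAP for a hypothetical AB design where higher scores
# indicate improvement. 0.5 = chance-level overlap, 1.0 = complete nonoverlap.
def nap(baseline, treatment):
    """Share of (baseline, treatment) pairs where treatment exceeds baseline;
    ties count half."""
    pairs = list(product(baseline, treatment))
    wins = sum(1.0 for a, b in pairs if b > a)
    ties = sum(0.5 for a, b in pairs if b == a)
    return (wins + ties) / len(pairs)

A = [2, 3, 3, 4]      # baseline phase (hypothetical)
B = [5, 6, 4, 7, 6]   # intervention phase (hypothetical)
```

Because NAP is a simple proportion over all data pairs, it retains the individual-level data while yielding a number that can be aggregated across studies.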
Thomas T. H. Wan, Yi-Ling Lin and Judith Ortiz
This study is to examine factors contributing to the variability in chronic obstructive pulmonary disorder (COPD) and asthma hospitalization rates when the influence of patient…
Abstract
Purpose
This study examines factors contributing to the variability in chronic obstructive pulmonary disorder (COPD) and asthma hospitalization rates when the influence of patient characteristics is simultaneously considered by applying a risk adjustment method.
Methodology/approach
A longitudinal analysis of COPD and asthma hospitalization of rural Medicare beneficiaries in 427 rural health clinics (RHCs) was conducted utilizing administrative data and inpatient and outpatient claims from Region 4. The repeated measures of risk-adjusted COPD and asthma admission rate were analyzed by growth curve modeling. A generalized estimating equation (GEE) method was used to identify the relevance of selected predictors in accounting for the variability in risk-adjusted admission rates for COPD and asthma.
Findings
Both adjusted and unadjusted rates of COPD admission showed a slight decline from 2010 to 2013. The growth curve modeling showed the annual rates of change were gradually accentuated through time. GEE revealed that a moderate amount of variance (marginal R² = 0.66) in the risk-adjusted hospital admission rates for COPD and asthma was accounted for by contextual, ecological, and organizational variables.
Research limitations/implications
The contextual, ecological, and organizational factors are those associated with RHCs, not hospitals. We cannot infer how the variability in hospital practices in RHC service areas may have contributed to the disparities in admissions. Identification of RHCs with substantially higher rates than an average rate can portray the need for further enhancement of needed ambulatory or primary care services for the specific groups of RHCs. Because the risk-adjusted rates of hospitalization do not vary by classification of rural area, future research should address the variation in a specific COPD and asthma condition of RHC patients.
Originality/value
Risk-adjusted admission rates for COPD and asthma are influenced by the synergism of multiple contextual, ecological, and organizational factors instead of a single factor.
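One standard device for building a risk-adjusted admission rate is indirect standardization: an observed/expected ratio scaled by a reference rate. The sketch below uses hypothetical patient strata and rates and is not the study's actual risk-adjustment model.

```python
# Hedged sketch: indirectly standardized admission rate for one clinic.
# All strata, rates, and counts are hypothetical.
overall_rate = 0.08                       # reference COPD/asthma admission rate

# per-clinic patient mix: (n_patients, expected rate for that risk stratum)
strata = [(120, 0.05), (60, 0.12), (20, 0.25)]
observed_admissions = 18

expected = sum(n * r for n, r in strata)  # admissions expected given case mix
risk_adjusted_rate = (observed_admissions / expected) * overall_rate
```

A ratio above 1 (rate above the reference) flags a clinic admitting more than its case mix predicts, which is how clinics with "substantially higher rates than an average rate" would be identified.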
Catherine Doz and Anna Petronevich
Abstract
Several official institutions (NBER, OECD, CEPR, and others) provide business cycle chronologies with lags ranging from three months to several years. In this paper, we propose a Markov-switching dynamic factor model that allows for a more timely estimation of turning points. We apply one-step and two-step estimation approaches to French data and compare their performance. One-step maximum likelihood estimation is confined to relatively small data sets, whereas the two-step approach, which uses principal components, can accommodate much larger information sets. We find that both methods give qualitatively similar results and agree with the OECD dating of recessions on a sample of monthly data covering the period 1993–2014. The two-step method is more precise in determining the beginnings and ends of recessions as dated by the OECD. Both methods indicate additional downturns in the French economy that were too short to enter the OECD chronology.
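The filtering step at the core of such a model can be sketched with a two-state Hamilton filter on a single series; in the two-step approach, that series would itself be a principal-component factor. All parameter values and data below are illustrative, and a full Markov-switching dynamic factor model adds factor dynamics and parameter estimation on top of this recursion.

```python
import numpy as np

# Hedged sketch: two-state Hamilton filter for a switching-mean series.
def hamilton_filter(y, mu, sigma, P):
    """Filtered regime probabilities. mu: state means, sigma: common std dev,
    P[i, j] = Pr(s_t = j | s_{t-1} = i)."""
    probs = np.full(2, 0.5)               # flat initial state distribution
    out = np.empty((len(y), 2))
    for t, yt in enumerate(y):
        pred = probs @ P                  # one-step-ahead state probabilities
        lik = np.exp(-0.5 * ((yt - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        probs = pred * lik
        probs /= probs.sum()              # Bayes update on today's observation
        out[t] = probs
    return out

P = np.array([[0.95, 0.05], [0.20, 0.80]])   # persistent regimes
mu = np.array([0.5, -1.0])                    # expansion vs recession mean growth
y = np.array([0.6, 0.4, -0.9, -1.2, -0.8, 0.5, 0.7])
filt = hamilton_filter(y, mu, 0.4, P)
recession_prob = filt[:, 1]
```

Because the filter updates in real time as each observation arrives, it can signal a turning point well before a committee-based chronology does.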
Abstract
This paper presents a decomposition forecast of stock prices using time series of weekly stock price data as implemented in Excel. The following decomposition components are presented, analyzed, and interpreted: a moving average, a trend, a periodic function, and two shock variables, namely a triangular shock variable and a level change. The results of the individual components are compared and a discussion of each component’s efficiency is provided. The trend component is statistically significant over the forecast time. The moving average component displays a bi-modal error distribution over varying spans of the moving average and forecast periods. The first mode coincides with random walk behavior, with an optimal span and forecast period of one. The second mode is more interesting and applicable for investing beyond the short term, with optimal spans and forecast periods beyond 75 weeks. The periodic sine function captures the typical U.S. business cycle of 4–5 years well and significantly improves model performance. Finally, the significant outliers remaining from the decomposition are diagnosed and modeled with a triangular shock variable for the bust and recovery associated with the 2008 financial crisis. The model presented does a good job of decomposing the analytical components in forecasting stock prices and provides a useful illustration of Excel methods.
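The trend-plus-periodic part of such a decomposition can be reproduced outside Excel with ordinary least squares. The sketch below uses simulated weekly data, fixes the cycle length near the 4–5-year business cycle mentioned above (about 234 weeks), and omits the paper's shock variables.

```python
import numpy as np

# Hedged sketch: fit intercept + linear trend + sine/cosine pair at a fixed
# business-cycle period to a simulated weekly price series.
rng = np.random.default_rng(0)
t = np.arange(520)                          # ten years of weekly observations
period = 234.0                              # ~4.5-year cycle, in weeks
true = 100 + 0.05 * t + 4 * np.sin(2 * np.pi * t / period)
y = true + rng.normal(0, 1, t.size)         # add observation noise

# design matrix: intercept, trend, and the periodic component
X = np.column_stack([np.ones_like(t, dtype=float), t,
                     np.sin(2 * np.pi * t / period),
                     np.cos(2 * np.pi * t / period)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
```

Using a sine and cosine pair at the same period lets least squares estimate both the amplitude and the phase of the cycle, which is equivalent to fitting a single shifted sine.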
Dmitrij Celov and Mariarosaria Comunale
Abstract
Recently, star variables and the post-crisis nature of cyclical fluctuations have attracted a great deal of interest. In this chapter, the authors investigate different methods of assessing business cycles (BCs) for the European Union in general and the euro area in particular. First, the authors conduct a Monte Carlo (MC) experiment using a broad spectrum of univariate trend-cycle decomposition methods. The simulation aims to examine the ability of the analysed methods to find the observed simulated cycle with structural properties similar to actual macroeconomic data. For the simulation, the authors used the structural model’s parameters calibrated to the euro area’s real gross domestic product (GDP) and unemployment rate. The simulation outcomes indicate a sufficient composition of the suite of models (SoM), consisting of the popular Hodrick–Prescott, Christiano–Fitzgerald and structural trend-cycle-seasonal filters, which is then used for the real application. The authors find that: (i) there is a high level of model uncertainty in comparing the estimates; (ii) growth rate (acceleration) cycles often have the worst performance, but they could be useful as early-warning predictors of turning points in growth and BCs; and (iii) the best-performing MC approaches provide a reasonable combination as the SoM. When swings are shorter and/or smaller, it is easier to pick a good alternative method to the suite to capture the BC for real GDP. Second, the authors estimate the BCs for real GDP and unemployment data varying from 1995Q1 to 2020Q4 (GDP) or 2020Q3 (unemployment), ending up with 28 cycles per country. This analysis also confirms that the BCs of euro area members are quite synchronized with the aggregate euro area. Some major differences can be found, however, especially in the case of periphery and new member states, with the latter improving in terms of coherency after the global financial crisis. The German cycles are among the cyclical movements least synchronized with the aggregate euro area.
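The Hodrick–Prescott filter in the suite of models has a closed-form implementation as a penalized least-squares smoother. The sketch below uses simulated quarterly data and the conventional λ = 1600; a dense linear solve is fine at this length, though production code would use sparse matrices.

```python
import numpy as np

# Hedged sketch: Hodrick-Prescott trend-cycle decomposition.
def hp_filter(y, lam=1600.0):
    """Split y into trend and cycle, where trend minimizes
    ||y - tau||^2 + lam * ||second differences of tau||^2."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)      # (n-2) x n second-difference matrix
    trend = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
    return trend, y - trend

t = np.arange(80, dtype=float)               # 20 years of quarterly data
y = 0.5 * t + np.sin(2 * np.pi * t / 32)     # linear trend plus an 8-year cycle
trend, cycle = hp_filter(y)
```

The penalty term is what makes the trend smooth: larger λ forces smaller second differences, pushing more of the variation into the cycle component.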
Mukesh Bajaj, Andrew H. Chen and Sumon C. Mazumdar
Abstract
Chen and Ritter (2000) documented that underwriter spreads for recent US initial public offerings (IPOs) in the $20 million range, as well as much larger IPOs in the $80 million range, are clustered at 7%. This observation has led to a Department of Justice (DOJ) inquiry into potential price fixing by underwriters. We demonstrate through a time series analysis that IPOs have tripled in size and become much riskier over time. A pooled data analysis can therefore mask evidence of competition in the market. We find that spread clustering is not a recent phenomenon. Over time, clustering at 7% has increased as clustering above 7% has declined. IPO spreads have declined significantly over time because firms going public more recently are riskier, underwriting efforts have increased, and recent IPOs are much larger than IPOs in the past. Controlling for time trends, larger IPOs have lower average spreads. The market for underwriting IPOs appears to be competitive, with the entry of new firms during hot markets.