Search results

1 – 10 of over 3000
Article
Publication date: 26 June 2019

P.K. Kapur, Saurabh Panwar and Ompal Singh

Abstract

Purpose

This paper aims to develop a parsimonious and innovative model that captures the dynamics of new product diffusion in recent high-technology markets and thus assists both academics and practitioners who are eager to understand the diffusion phenomenon. Accordingly, this study develops a novel diffusion model to forecast demand by centering on the dynamic state of the product's adoption rate. The proposed study also integrates the consumer's psychological view of price change and the goodwill of the innovation into the diffusion process.

Design/methodology/approach

In this study, a two-dimensional distribution function is derived using the Cobb–Douglas production function to combine the effects of price change and continuation time (goodwill) of the technology in the market. Focused on the realistic scenario of sales growth, the model also assimilates time-to-time variation in the adoption rate (hazard rate) of the innovation owing to companies' changing marketing and pricing strategies. The time instant at which the adoption rate alters is termed the change-point.
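
A minimal sketch of the idea described above, with hypothetical parameter values (the paper's exact functional form is not reproduced here): a Cobb–Douglas composite of a time-in-market (goodwill) effect and a price effect drives cumulative adoption, with the adoption-rate parameter switching at an assumed, pre-detected change-point.

```python
import numpy as np
from scipy.optimize import curve_fit

TAU = 15.0  # assumed change-point (in practice detected from the data)

def cumulative_adoption(X, m, b1, b2, alpha, beta):
    """m: market potential; b1/b2: adoption rates before/after TAU;
    alpha/beta: Cobb-Douglas elasticities of goodwill and price."""
    t, price = X
    b = np.where(t < TAU, b1, b2)                    # change-point in hazard
    z = (t ** alpha) * ((price[0] / price) ** beta)  # Cobb-Douglas composite
    return m * (1.0 - np.exp(-b * z))                # two-dimensional CDF

# Illustrative synthetic data (placeholders, not the DRAM/LCD series):
t = np.arange(1, 41, dtype=float)
price = 100.0 * 0.97 ** t                            # declining price path
true = cumulative_adoption((t, price), 5000, 0.02, 0.05, 1.2, 0.8)
sales = true + np.random.default_rng(0).normal(0, 30, t.size)

popt, _ = curve_fit(cumulative_adoption, (t, price), sales,
                    p0=[4000, 0.01, 0.04, 1.0, 1.0], maxfev=20000)
print(popt)
```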

Findings

For validation purposes, the developed model is fitted to actual sales and price data sets for dynamic random access memory (DRAM) semiconductors, liquid crystal display (LCD) monitors and room air-conditioners using a non-linear least-squares estimation procedure. The results indicate that the proposed model has better forecasting efficiency than conventional diffusion models.

Research limitations/implications

The developed model is intrinsically restricted to a single-generation diffusion process, whereas technological innovations appear in generations. Therefore, this study also yields plausible directions for future analysis by extending the diffusion process to a multi-generational environment.

Practical implications

This study aims to assist marketing managers in determining the long-term performance of a technological innovation and in examining the influence of fluctuating prices on product demand. It also incorporates the dynamic tendency of the adoption rate in modeling the diffusion process of technological innovations, which will support managers in understanding the practical implications of different marketing and promotional strategies for the adoption rate.

Originality/value

This is the first attempt to study a value-based diffusion model that includes the key interactions among the goodwill of the innovation, price dynamics and the change-point for anticipating the sales behavior of technological products.

Details

Journal of Modelling in Management, vol. 14 no. 3
Type: Research Article
ISSN: 1746-5664

Article
Publication date: 24 September 2018

Elaine Schornobay-Lui, Eduardo Carlos Alexandrina, Mônica Lopes Aguiar, Werner Siegfried Hanisch, Edinalda Moreira Corrêa and Nivaldo Aparecido Corrêa

Abstract

Purpose

There has been growing concern about air quality because, in recent years, industrial and vehicle emissions have resulted in unsatisfactory human health conditions. There is an urgent need for measurements and estimations of particulate pollutant levels, especially in urban areas. As a contribution to this issue, the purpose of this paper is to use measured concentrations of particulate matter and meteorological conditions to predict PM10.

Design/methodology/approach

The procedure included daily collection of current PM10 concentrations for the city of São Carlos-SP, Brazil. These data series enabled the use of an estimator based on artificial neural networks. Data sets were collected using high-volume sampler equipment (VFA-MP10) over the periods 1997 to 2006 and 2014 to 2015. The predictive models were created using statistics from meteorological data and were developed with two neural network architectures: the multilayer perceptron (MLP) and the non-linear autoregressive network with exogenous inputs (NARX).
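
A minimal sketch of the NARX idea under stated assumptions (placeholder meteorological columns and synthetic data; sklearn's MLPRegressor stands in for the study's dedicated architectures): the current day's exogenous variables are combined with the previous day's PM10 as a lagged input.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n = 500
met = rng.normal(size=(n, 3))   # placeholder: temperature, humidity, wind
pm10 = 40 + met @ np.array([5.0, -3.0, -2.0]) + rng.normal(0, 4, n)

# NARX-style inputs: exogenous variables today + yesterday's PM10
X = np.column_stack([met[1:], pm10[:-1]])
y = pm10[1:]

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,),
                                   max_iter=2000, random_state=0))
model.fit(X[:-50], y[:-50])                       # hold out last 50 days
print("MAE:", np.abs(model.predict(X[-50:]) - y[-50:]).mean())
```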

Findings

It was observed that PM10 concentrations decreased over time, owing to the implementation of stricter environmental laws and the development of less polluting technologies. The NARX model, which used the climatic variables and the previous day's PM10 as its input layer, presented the highest mean absolute error; however, it converged faster than the MLP network.

Originality/value

Presenting the previous day's PM10 concentration as an input improved the performance of the predictive models. This paper contributes applications of the NARX model.

Details

Management of Environmental Quality: An International Journal, vol. 30 no. 2
Type: Research Article
ISSN: 1477-7835

Article
Publication date: 1 February 1992

D. Lefebvre, J. Peraire and K. Morgan

Abstract

We investigate the application of a least squares finite element method to the solution of fluid flow problems. The least squares finite element method is based on the minimization of the L2 norm of the equation residuals. Upon discretization, the formulation results in a symmetric, positive-definite matrix system, which enables efficient iterative solvers to be used. Further motivations behind the development of least squares finite element methods are the applicability of higher-order elements and the possibility of using the norm associated with the least squares functional for error estimation. For steady incompressible flows, we develop a method employing linear and quadratic triangular elements and compare their respective accuracy. For steady compressible flows, an implicit conservative least squares scheme which can capture shocks without the addition of artificial viscosity is proposed. A refinement strategy based upon the least squares residuals is developed, and several numerical examples illustrate the capabilities of the method when implemented on unstructured triangular meshes.
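
A toy sketch of the core least-squares mechanism, not the paper's finite element formulation: minimizing the L2 norm of a discrete residual yields symmetric positive-definite normal equations that a conjugate-gradient iterative solver handles directly.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# Toy problem: u'(x) = f(x) on (0, 1], u(0) = 0, f = cos(pi x),
# exact solution u = sin(pi x) / pi.
n = 200
h = 1.0 / n
x = np.linspace(h, 1.0, n)
f = np.cos(np.pi * x)

# Backward-difference residual operator D (u(0) = 0 eliminated)
D = diags([1.0 / h, -1.0 / h], [0, -1], shape=(n, n))

# Normal equations D^T D u = D^T f: symmetric positive definite
A = (D.T @ D).tocsr()
b = D.T @ f
u, info = cg(A, b)
assert info == 0                                   # CG converged
print(np.max(np.abs(u - np.sin(np.pi * x) / np.pi)))  # O(h) error
```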

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 2 no. 2
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 1 March 1991

David Blake

Abstract

The different types of estimators of rational expectations models are surveyed. A key feature is that the model's solution has to be taken into account when it is estimated. The two ways of doing this, the substitution and errors-in-variables methods, give rise to different estimators. In the former case, a generalised least-squares or maximum-likelihood type estimator generally gives consistent and efficient estimates. In the latter case, a generalised instrumental variable (GIV) type estimator is needed. Because the substitution method involves more complicated restrictions, and because it resolves the solution indeterminacy arising with forward-looking expectations in a more arbitrary fashion, the errors-in-variables solution with the GIV estimator is the recommended combination.
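
A minimal sketch of the errors-in-variables route under simplified assumptions: the realized regressor replaces its expectation, so it is correlated with the equation error and OLS is inconsistent, while instrumenting with a variable in the agents' information set restores consistency.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
z = rng.normal(size=n)          # instrument: in the information set
e_exp = rng.normal(size=n)      # expectation error
u = rng.normal(size=n)          # structural error

x = 2.0 * z + e_exp             # realized value = expectation + error
y = 1.5 * (2.0 * z) + u         # y depends on the *expected* value

# OLS on the realized x is attenuated (errors in variables):
beta_ols = (x @ y) / (x @ x)

# IV: project x on the instrument z, regress y on the fitted values
x_hat = z * ((z @ x) / (z @ z))
beta_iv = (x_hat @ y) / (x_hat @ x_hat)
print(beta_ols, beta_iv)        # beta_iv is close to the true 1.5
```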

Details

Journal of Economic Studies, vol. 18 no. 3
Type: Research Article
ISSN: 0144-3585

Article
Publication date: 29 April 2014

Nebojsa S. Davcik

Abstract

Purpose

Research practice in management research is dominated by structural equation modeling (SEM), but almost exclusively, and often misguidedly, by covariance-based SEM. The purpose of this paper is to question this research myopia in management research. The paper adumbrates theoretical foundations and guidance for the two SEM streams, covariance-based and variance-based SEM, and improves conceptual knowledge by comparing the most important procedures and elements of an SEM study using different theoretical criteria.

Design/methodology/approach

The study thoroughly analyzes, reviews and presents the two streams using a common methodological background. The conceptual framework discusses the two streams through an analysis of theory, measurement model specification, sample and goodness-of-fit.

Findings

The paper identifies and discusses the use and misuse of covariance-based and variance-based SEM across common topics: first, theory (theoretical background, relation to theory and research orientation); second, measurement model specification (type of latent construct, type of study, reliability measures, etc.); third, sample (sample size and data distribution assumptions); and fourth, goodness-of-fit (measurement of model fit and residual co/variance).

Originality/value

The paper questions the usefulness of the Cronbach's α research paradigm and discusses alternatives that are well established in social science but not well known in the management research community. The author presents a short research illustration that analyzes four recently published papers using a common methodological background. The paper concludes with a discussion of some open questions in management research practice that remain under-investigated and underutilized.
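
For reference, a minimal sketch computing Cronbach's α for a k-item scale, the reliability measure whose paradigm the paper questions (standard variance-ratio formula, synthetic indicator data):

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of scale scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of scale totals
    return (k / (k - 1.0)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(3)
latent = rng.normal(size=(200, 1))
scale = latent + rng.normal(0, 0.8, size=(200, 4))  # 4 noisy indicators
print(cronbach_alpha(scale))
```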

Details

Journal of Advances in Management Research, vol. 11 no. 1
Type: Research Article
ISSN: 0972-7981

Article
Publication date: 1 March 2000

Jeffrey R. Bohn

Abstract

In this second installment, the author addresses some of the problems associated with empirically validating contingent-claim models for valuing risky debt. The article uses a simple contingent-claims risky debt valuation model to fit term structures of credit spreads derived from data for U.S. corporate bonds. An essential component of fitting this model is the use of the expected default frequency, the estimate of a firm's expected default probability over a specific time horizon. The author discusses the statistical and econometric procedures used in fitting the term structure of credit spreads and estimating model parameters: iteratively reweighted non-linear least squares is used to dampen the impact of outliers and to ensure convergence in each cross-sectional estimation from 1992 to 1999.
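
A minimal sketch of iteratively reweighted non-linear least squares with Huber-type weights to dampen outliers; the spread curve and data below are illustrative placeholders, not the author's contingent-claims model.

```python
import numpy as np
from scipy.optimize import curve_fit

def spread(tau, a, b, c):
    # simple 3-parameter credit-spread curve over maturity tau (years)
    return a + b * np.exp(-c * tau)

tau = np.linspace(0.5, 10, 40)
obs = spread(tau, 0.01, 0.02, 0.5)
obs += np.random.default_rng(4).normal(0, 1e-3, tau.size)
obs[5] += 0.02                                     # inject one outlier

w = np.ones_like(obs)
for _ in range(10):                                # IRLS loop
    p, _ = curve_fit(spread, tau, obs, p0=[0.01, 0.01, 0.3], sigma=1.0 / w)
    r = np.maximum(np.abs(obs - spread(tau, *p)), 1e-12)
    s = 1.4826 * np.median(r)                      # robust scale (MAD)
    w = np.minimum(1.0, 1.345 * s / r)             # Huber weights
print(p)
```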

Details

The Journal of Risk Finance, vol. 1 no. 4
Type: Research Article
ISSN: 1526-5943

Article
Publication date: 30 October 2018

Mohammed Shuker Mahmood and D. Lesnic

Abstract

Purpose

The purpose of this paper is to solve numerically the identification of the thermal conductivity of an inhomogeneous and possibly anisotropic medium from interior temperature measurements.

Design/methodology/approach

The formulated coefficient identification problem is inverse and ill-posed; therefore, to obtain a stable solution, a non-linear regularized least-squares approach is used. For the numerical discretization of the orthotropic heat equation, the finite-difference method is applied, while the non-linear minimization is performed using the MATLAB toolbox routine lsqnonlin.
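
A minimal sketch of the regularized least-squares idea using scipy.optimize.least_squares, a Python analogue of MATLAB's lsqnonlin; the forward solver below is a stand-in placeholder, not a finite-difference heat-equation code.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate_temperature(k):
    """Placeholder forward solver: maps conductivities at 8 cells to
    sensor temperatures (a real solver would step the heat equation)."""
    return np.sqrt(k)                    # stand-in non-linear map

k_true = np.linspace(1.0, 2.0, 8)
data = simulate_temperature(k_true)
data = data + np.random.default_rng(5).normal(0, 0.01, data.size)  # noise

lam = 1e-2                               # regularization parameter

def residuals(k):
    misfit = simulate_temperature(k) - data
    penalty = lam * np.diff(k)           # Tikhonov-type smoothness penalty
    return np.concatenate([misfit, penalty])

sol = least_squares(residuals, x0=np.ones(8), bounds=(0.1, 10.0))
print(sol.x)                             # recovered conductivity profile
```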

Findings

Numerical results show the accuracy and stability of the solution, even in the presence of noise (modelling inexact measurements) in the input temperature data.

Research limitations/implications

The mathematical formulation uses temporal temperature measurements taken at many points inside the sample, which may be more information than is needed to identify a conductivity tensor that depends only on space.

Practical implications

As noisy data are inverted, the paper models real situations in which practical temperature measurements recorded using thermocouples are inherently contaminated with random noise.

Social implications

The identification of the conductivity of inhomogeneous and orthotropic media will be of great interest to the inverse problems community with applications in geophysics, groundwater flow and heat transfer.

Originality/value

The current investigation advances the field of coefficient identification problems by generalizing the conductivity to be anisotropic in addition to being heterogeneous. The originality lies in performing, for the first time, numerical simulations of inversion to find an orthotropic and inhomogeneous thermal conductivity from noisy temperature measurements. Further value and physical significance are added by determining the degree of cure in a resin transfer molding process, in addition to obtaining the inhomogeneous thermal conductivity of the tested material.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 29 no. 1
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 22 February 2022

U. Siva Rama Krishna and Naga Satish Kumar Ch

Abstract

Purpose

Ultra-thin white topping (UTW) is a cement concrete overlay, 50–100 mm thick, placed on bituminous concrete pavements with surface failures. It is a long-lasting solution without short-term failures. This paper aims to design an ultra-thin cement concrete overlay using a developed critical stress model with sustainable concrete materials for low-volume roads.

Design/methodology/approach

In this research paper, a parametric study was conducted using an ultra-thin concrete overlay finite element model developed with ANSYS software, considering the significant parameters affecting its performance. A non-linear regression equation was formed using a damped least-squares method to predict the critical stress due to a corner load of 51 kN.
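
A minimal sketch of a damped least-squares (Levenberg–Marquardt) fit of a non-linear stress equation; the functional form and sample values are hypothetical, not the paper's regression equation.

```python
import numpy as np
from scipy.optimize import curve_fit

def corner_stress(X, c0, c1, c2):
    e_bc, h = X               # bituminous modulus (MPa), overlay thickness (mm)
    return c0 * e_bc ** (-c1) * h ** (-c2)

# Hypothetical parametric-study grid and responses:
e_bc = np.repeat([1000.0, 2000.0, 3000.0], 3)
h = np.tile([50.0, 75.0, 100.0], 3)
stress = corner_stress((e_bc, h), 900.0, 0.3, 0.6)
stress = stress + np.random.default_rng(6).normal(0, 0.02, stress.size)

# method="lm" is SciPy's damped least-squares (Levenberg-Marquardt) solver
popt, _ = curve_fit(corner_stress, (e_bc, h), stress,
                    p0=[500.0, 0.2, 0.5], method="lm")
print(popt)
```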

Findings

The parametric study results indicate that greater elastic moduli of the bituminous concrete and granular layers, together with a 100 mm thick concrete layer, significantly reduce the critical corner stress, the interface shear stress responsible for debonding of the concrete overlay, and the elastic strains in the pavement; consequently, the concrete overlay can bear infinite load repetitions. Validation shows that the developed non-linear regression equation agrees with similar published research.

Originality/value

The semi-scale experimental study shows that a quaternary blended sustainable concrete overlay with a high modulus of rupture of 6.34 MPa is comparable to a conventional cement concrete overlay in terms of failure load. A concrete overlay of sustainable materials, 100 mm thick and with higher elastic moduli of the layers, can therefore perform sustainably, meeting environmental and long-term performance requirements.

Article
Publication date: 1 February 1996

Alejandro B. Engel, Eduardo Massad and Petronio Pulino

Abstract

Proposes a modified Hill model for the oxyhaemoglobin dissociation curve. The model fits normal oxyhaemoglobin dissociation experimental data quite accurately and can easily be adapted to experimental data from subjects with haemoglobinopathies, when such data are available. Discusses the Adair equation, as well as correcting factors for varying temperature and pH.
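
A minimal sketch fitting the classic (unmodified) Hill equation S(p) = p^n / (p50^n + p^n), the starting point for the paper's modified model; the data points approximate a normal adult dissociation curve and are for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(p, p50, n):
    """Hill equation: oxygen saturation as a function of pO2."""
    return p ** n / (p50 ** n + p ** n)

pO2 = np.array([10, 20, 27, 40, 60, 80, 100], dtype=float)   # mmHg
sat = np.array([0.10, 0.32, 0.50, 0.75, 0.90, 0.95, 0.97])   # fraction

(p50, n), _ = curve_fit(hill, pO2, sat, p0=[27.0, 2.7])
print(p50, n)   # normal blood: p50 near 27 mmHg, n near 2.7
```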

Details

Kybernetes, vol. 25 no. 1
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 4 July 2016

Adil Baykasoglu and Cengiz Baykasoglu

Abstract

Purpose

The purpose of this paper is to develop a new multi-objective optimization procedure for crashworthiness optimization of thin-walled structures especially circular tubes with functionally graded thickness.

Design/methodology/approach

The proposed optimization approach is based on finite element analyses for constructing the sample design space and for verification; gene-expression programming (GEP) for generating algebraic equations (meta-models) that compute objective function values (peak crash force and specific energy absorption) from the design parameters; and multi-objective genetic algorithms for generating design parameter alternatives and determining their optimal combination. The authors also utilized linear and non-linear least-squares regression meta-models as benchmarks for GEP.
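
A minimal sketch of the benchmark step only, under stated assumptions (hypothetical design variables and synthetic responses standing in for finite element results): quadratic least-squares meta-models map the design parameters to the two objectives, which a multi-objective GA could then query cheaply.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
# Hypothetical design variables: min/max wall thickness (mm)
X = rng.uniform([1.0, 0.5], [3.0, 2.0], size=(30, 2))
pcf = 20 + 5 * X[:, 0] + 3 * X[:, 1] ** 2 + rng.normal(0, 0.5, 30)
sea = 10 + 2 * X[:, 0] * X[:, 1] + rng.normal(0, 0.3, 30)

# Quadratic least-squares meta-models for the two objectives
meta_pcf = make_pipeline(PolynomialFeatures(2), LinearRegression()).fit(X, pcf)
meta_sea = make_pipeline(PolynomialFeatures(2), LinearRegression()).fit(X, sea)

# A multi-objective GA would evaluate candidate designs against these
# cheap surrogates instead of rerunning the finite element solver.
print(meta_pcf.predict([[2.0, 1.0]]), meta_sea.predict([[2.0, 1.0]]))
```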

Findings

It is shown that the proposed approach is able to generate Pareto-optimal designs that are in very good agreement with the actual results.

Originality/value

The paper presents the application of a genetic programming-based method, namely GEP, for the first time in the literature. The proposed approach can be applied to all kinds of related crashworthiness problems.
