Search results

1 – 10 of over 1000
Article
Publication date: 10 July 2019

Meihua Zuo, Hongwei Liu, Hui Zhu and Hongming Gao

The purpose of this paper is to identify potential competitive relationships among brands by analyzing the dynamic clicking behavior of consumers.

Abstract

Purpose

The purpose of this paper is to identify potential competitive relationships among brands by analyzing the dynamic clicking behavior of consumers.

Design/methodology/approach

Consumer sequential online click data, collected from JD.com, are used to analyze the dynamic competitive relationships between brands. It is found that competition intensity can differ considerably across product categories. Consumers exhibit large differences in the purchasing time of durable-like goods; that is, the purchasing probability of such products changes considerably over time. The local polynomial regression model (LPRM) is used to analyze the relationship between brand competition for durable-like goods and the purchasing probability of a particular brand.
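
As a reading aid, the following is a minimal sketch of the kind of local polynomial regression the abstract names, applied to synthetic data; the variable names (click_share, purchase_prob) and the Gaussian kernel are illustrative assumptions, not details from the paper.

    import numpy as np

    def local_linear_fit(x, y, x0, bandwidth):
        """Evaluate a local degree-1 polynomial fit at x0 with Gaussian kernel weights."""
        w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)
        X = np.column_stack([np.ones_like(x), x - x0])
        # weighted least squares normal equations
        beta = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * y))
        return beta[0]  # the fitted value at x0 is the intercept

    rng = np.random.default_rng(0)
    click_share = np.sort(rng.uniform(0, 1, 200))      # a brand's share of clicks (assumed)
    purchase_prob = 0.4 * np.sin(np.pi * click_share) + 0.1 + rng.normal(0, 0.03, 200)

    grid = np.linspace(0.05, 0.95, 50)
    smoothed = [local_linear_fit(click_share, purchase_prob, g, bandwidth=0.08) for g in grid]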

Findings

The statistical results on collective behavior show a 90/10 rule for the durable-like goods category, implying that 10 percent of the brands account for 90 percent of the market share in terms of both clicking and purchasing behavior. The dynamic brand cognitive process of impulsive consumers displays an inverted V shape, while that of cautious consumers displays a double-V shape. The dynamics of consumer cognition show that once a brand captures half of the click volume, its competitiveness reaches its peak, and its purchasing probability is not significantly different from that of brands accounting for 100 percent of the click volume.

Research limitations/implications

There are some limitations to the research, chiefly those imposed by the data set. The most serious is that the collected click-stream is heavily desensitized, which restricts the richness of the conclusions of this study. Second, the data set contains many other kinds of consumer behavioral data, but only clicking behavior is analyzed here. In future research, the brands a consumer browses and the time spent browsing each brand should be added as indicators of brand competitive intensity.

Practical implications

The authors study brand competitiveness by analyzing the relationship between the click rate and the purchase likelihood of individual brands for durable-like products. When brand competitiveness is below 50 percent, consumers tend to seek a variety of new brands, and their purchase likelihood is positively correlated with brand competitiveness. Once consumers learn excessively about a particular brand relative to all others over a period of time, the purchase likelihood of its products decreases because consumers' short-term loyalty to the brand thins. Only when brand competitiveness approaches 100 percent are consumers again most likely to purchase the brand and its products. This indicates that maintaining about 50 percent of the click volume is the most cost-efficient level of brand competitiveness for profitability; spending more to raise competitiveness beyond that point may make no difference.

Originality/value

There are many studies on brand competition, but most analyze brands' marketing strategies from the company's perspective. The limitation of such research is that the data are historical and fail to reflect time-variant competition. Some researchers have studied brand competition through consumer behavior, but those studies do not consider the sequentiality of consumer behavior as this study does. This study therefore contributes to the literature by using consumers' sequential clicking behavior, expanding the perspective of brand competition research to the consumer's angle. Simultaneously, this paper applies the LPRM to the relationship between consumer clicking behavior and brand competition for the first time, expanding the methodology accordingly.

Details

Industrial Management & Data Systems, vol. 119 no. 6
Type: Research Article
ISSN: 0263-5577

Keywords

Article
Publication date: 6 July 2015

Zeyu Ma, Jinglai Wu, Yunqing Zhang and Ming Jiang

The purpose of this paper is to provide a new computational method based on the polynomial chaos (PC) expansion to identify the uncertain parameters of load sensing proportional…


Abstract

Purpose

The purpose of this paper is to provide a new computational method based on the polynomial chaos (PC) expansion to identify the uncertain parameters of the load sensing proportional valve (LSPV), which is commonly used to improve the efficiency of the brake system in heavy trucks.

Design/methodology/approach

For this investigation, the mathematical model of the LSPV is constructed in state-space form. The estimation process is then implemented using experimental measurements. With the coefficients of the PC expansion obtained numerically, the output observation function can be transformed into a linear, time-invariant form. Recursive update functions for the uncertain parameters, based on Newton's method, can therefore be derived in a form suited to computer calculation. To improve estimation accuracy and stability, the Newton method is modified by employing an acceptance probability to escape from local minima during the estimation process.
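
As an illustration of the PC machinery described here, the sketch below projects a stand-in scalar model onto a one-dimensional Hermite chaos basis by Gauss-Hermite quadrature; the model g() and the expansion order are assumptions for illustration, not the LSPV model from the paper.

    import math
    import numpy as np
    from numpy.polynomial import hermite_e as He

    def g(theta):
        # stand-in nonlinear model response for parameter value theta
        return np.exp(0.3 * theta) + 0.1 * theta ** 2

    order = 5
    xi, w = He.hermegauss(30)            # nodes/weights for the weight exp(-xi^2/2)
    norm = math.sqrt(2.0 * math.pi)      # total mass of the Gaussian weight

    # PC coefficients c_k = E[g(xi) He_k(xi)] / E[He_k(xi)^2], with E[He_k^2] = k!
    coeffs = []
    for k in range(order + 1):
        basis = He.hermeval(xi, [0.0] * k + [1.0])     # He_k at the quadrature nodes
        coeffs.append(np.sum(w * g(xi) * basis) / norm / math.factorial(k))

    # The truncated expansion is linear in its coefficients, which is what
    # lets the output observation function be written in a linear form.
    approx_at_half = He.hermeval(0.5, coeffs)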

Findings

The accuracy and effectiveness of the proposed parameter estimation method are confirmed by model validation and comparison with other estimation methods. The influence of measurement noise on the robustness of the estimation methods is also considered, and the approach developed in this paper is shown to achieve impressive stability without compromising convergence speed or accuracy too much.

Originality/value

The model of the LSPV is original to this paper, and the authors propose a novel, effective strategy for recursively estimating the uncertain parameters of a complicated pneumatic system based on PC theory.

Details

Engineering Computations, vol. 32 no. 5
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 17 July 2009

Emmanuel Blanchard, Adrian Sandu and Corina Sandu

The purpose of this paper is to propose a new computational approach for parameter estimation in the Bayesian framework. A posteriori probability density functions are obtained…

Abstract

Purpose

The purpose of this paper is to propose a new computational approach for parameter estimation in the Bayesian framework. A posteriori probability density functions are obtained using the polynomial chaos theory for propagating uncertainties through system dynamics. The new method has the advantage of being able to deal with large parametric uncertainties, non‐Gaussian probability densities and nonlinear dynamics.

Design/methodology/approach

The maximum likelihood estimates are obtained by minimizing a cost function derived from the Bayesian theorem. Direct stochastic collocation is used as a less computationally expensive alternative to the traditional Galerkin approach to propagate the uncertainties through the system in the polynomial chaos framework.
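
As a schematic of this estimation step, the sketch below minimizes a Bayesian cost function (negative log-likelihood plus negative log-prior) for a single parameter of a toy exponential-decay response; the forward model, noise level and prior are assumptions for illustration, not the mechanical systems of the paper.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 5.0, 100)
    true_k = 2.0
    data = np.exp(-true_k * t) + rng.normal(0.0, 0.02, t.size)   # noisy response

    def bayesian_cost(theta, sigma=0.02, prior_mean=1.5, prior_std=1.0):
        residual = data - np.exp(-theta[0] * t)
        # negative log-likelihood (Gaussian noise) + negative log-prior (Gaussian)
        return (0.5 * np.sum(residual ** 2) / sigma ** 2
                + 0.5 * ((theta[0] - prior_mean) / prior_std) ** 2)

    est = minimize(bayesian_cost, x0=[1.0], method="Nelder-Mead")
    print(est.x)   # close to true_k when the data are informative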

Findings

The new approach is explained and applied to very simple mechanical systems in order to illustrate how the Bayesian cost function is affected by the noise level in the measurements, by undersampling, by non-identifiability of the system, by non-observability and by excitation signals that are not rich enough. When the system is non-identifiable and a priori knowledge of the parameter uncertainties is available, regularization techniques can still yield the most likely values among the combinations of uncertain parameters that result in the same time responses as those observed.

Originality/value

The polynomial chaos method has been shown to be considerably more efficient than Monte Carlo in the simulation of systems with a small number of uncertain parameters. This is believed to be the first time the polynomial chaos theory has been applied to Bayesian estimation.

Details

Engineering Computations, vol. 26 no. 5
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 21 March 2019

Fatemehalsadat Afsahhosseini

The theory of competitiveness of cities is based on Porter’s Diamond Theory. There is a relation between housing and urban competitiveness. The adequacy of land supply and…

Abstract

Purpose

The theory of the competitiveness of cities is based on Porter's Diamond Theory, and there is a relation between housing and urban competitiveness: the adequacy of land supply and the allocation of land for new housing development are integral to it. This paper aims to estimate the number of housing units required to meet housing needs in Tehran over the next four years, to 1400 H.Sh (2021 A.D.). The research methodology combines qualitative and quantitative approaches based on the given data. First, the population of Tehran in 1400 H.Sh is predicted using nonlinear quadratic polynomial, Gompertz and logistic models. Then, a logistic model is proposed to estimate the number of housing units in Tehran. The number of residential units calculated from the population predicted by the Gompertz model, 663,141 units, is suggested as a criterion for local authorities' future decision making and planning for urban development.

Design/methodology/approach

In terms of purpose, the present research is applied; in terms of nature and methodology it is descriptive; and in terms of attitude and approach toward the research problem it is descriptive-analytical (Hafeznia, 2013, pp. 58, 63 and 71). To provide the information required for the analytical stage, a documentary method drawing on domestic and international books and papers has been applied. First, the population of Tehran in 1400 H.Sh is estimated using three nonlinear models: quadratic polynomial, Gompertz and logistic. Among them, the options most consistent with the estimate of the new comprehensive plan of Tehran (1386 H.Sh), the most important plan of the city, were chosen. Then, using the logistic model, which is an appropriate expression of saturable phenomena and a suitable method for estimating the number of residential units in a city, the future of housing is predicted from the past trend and the number of required residential units is determined.
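
As a sketch of the population-projection step, the code below fits Gompertz and logistic growth curves to a handful of census counts and extrapolates to 2021; the sample counts are invented for illustration and are not Tehran's census figures.

    import numpy as np
    from scipy.optimize import curve_fit

    years = np.array([1996.0, 2006.0, 2011.0, 2016.0])
    pop_m = np.array([6.8, 7.8, 8.2, 8.7])        # population in millions (assumed)

    def gompertz(t, K, b, c):
        return K * np.exp(-b * np.exp(-c * (t - years[0])))

    def logistic(t, K, b, c):
        return K / (1.0 + b * np.exp(-c * (t - years[0])))

    pg, _ = curve_fit(gompertz, years, pop_m, p0=[12.0, 0.6, 0.05], maxfev=20000)
    pl, _ = curve_fit(logistic, years, pop_m, p0=[12.0, 0.8, 0.05], maxfev=20000)

    print(gompertz(2021.0, *pg), logistic(2021.0, *pl))   # compare 2021 projections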

Findings

To be competitive, a city must seek and develop a set of unique strategies and practices that distinguish it from other cities; no single course of action suits all cities. In fact, the most important challenge is to propose a unique value proposition and formulate a strategy that sets the city apart from the rest. Among the measures taken around the world is attention to infrastructure. From the point of view of competitiveness, different types of infrastructure investment matter for different types of cities and at different stages of a city's development. Large cities need targeted investment in housing to overcome the divides associated with poorer neighborhoods; without investment in desirable housing, there will be holes in competitive advantage.

In this paper, the number of residential units Tehran will need in 2021 was projected, based on a projection of the city's population for that year. Beyond the prediction models themselves, it is necessary to consider the area, the land use map, the future development lines and […] city, so that the city can continue to meet residents' diverse needs. No single prediction of the population, and consequently of the number of residential units, can simply be accepted; offering several predictions, from most likely to more prudent, yields different possible scenarios and hence flexibility in the resulting plans and programs. Among the models used to predict the population, the results of the second-order polynomial and Gompertz models were consistent with the estimate of the new comprehensive plan of Tehran (2007), whereas the logistic model's prediction exceeded that of the plan and was therefore not considered appropriate. The number of residential units required was computed for the populations predicted by the second-order polynomial and Gompertz models and for the population assumed in the new comprehensive plan of Tehran (2007). After the proposed population was finalized, the number of residential units needed in Tehran in 2021 was projected using the logistic model. Since these three estimates are close to each other, the Gompertz-based calculation, equivalent to 663,141 residential units, is proposed as the figure on which local authorities should plan land supply to achieve urban economic competitiveness.

As shown in the conceptual model of the paper in Figure 1, after determining the need for housing, it must be asked whether the adequacy of the supply and allocation of land, and the importance of preserving it for housing development, are clear to local authorities, and whether suitable planning exists for it. Despite the severe shortage of ready-to-build land in Tehran, a large stock of land is owned by natural and legal persons, in particular state-owned enterprises and semipublic and public institutions, and has lain unused and barren in the city for years. Under municipal management laws, municipalities can collect on such land the taxes and fees included in the annual budget of the Tehran Municipality. Given this study's finding that 663,141 residential units are needed in Tehran by 2021, large landowners in Tehran need to supply their land to the market.

According to the 2011 Population and Housing Census, there are 245,769 vacant units in Tehran. There are therefore two scenarios for the provision of residential units in the city in 2021: if these units enter the housing market, another 417,372 residential units will be needed for Tehran; otherwise, 663,141 residential units will be needed. Other possible […]

Originality/value

Tehran is the largest city and the capital of Iran, as well as the capital of Tehran Province. It lies in the southern foothills of the Alborz Mountains, between longitudes of 51 degrees and 2 minutes East and 51 degrees and 36 minutes East (an approximate length of 50 kilometers) and latitudes of 35 degrees and 34 minutes North and 35 degrees and 50 minutes North (an approximate width of 30 kilometers). The area of the city is 730 km2. It is one of the largest cities in West Asia, the 25th most populous city and the 27th largest city in the world. Iran's administrative structure is concentrated in the city, which is divided into 22 zones, 134 areas (including Rey and Tajrish) and 370 districts (Wikipedia). Housing in Tehran has always been one of the city's important problems, yet little has been planned for it; the result, owing to the city's excessive expansion and population growth, is a housing shortage, high housing prices and related problems.

Details

International Journal of Housing Markets and Analysis, vol. 12 no. 4
Type: Research Article
ISSN: 1753-8270

Keywords

Article
Publication date: 23 November 2010

Jeoung‐Nae Choi, Sung‐Kwun Oh and Hyun‐Ki Kim

The purpose of this paper is to propose an improved optimization methodology of information granulation‐based fuzzy radial basis function neural networks (IG‐FRBFNN). In the…

Abstract

Purpose

The purpose of this paper is to propose an improved optimization methodology for information granulation-based fuzzy radial basis function neural networks (IG-FRBFNN). In the IG-FRBFNN, the membership functions of the premise part of the fuzzy rules are determined by means of fuzzy c-means (FCM) clustering. A high-order polynomial is considered as the consequent part of the fuzzy rules, representing the input-output relation characteristic of each sub-space, and weighted least squares learning is used to estimate the polynomial coefficients. Since the performance of the IG-FRBFNN is affected by parameters such as the specific subset of input variables, the fuzzification coefficient of the FCM, the number of rules and the order of the polynomial in the consequent part of the rules, structural as well as parametric optimization of the network is needed. The proposed model is demonstrated on two examples: a nonlinear function approximation problem and the Mackey-Glass time-series data.

Design/methodology/approach

The type of polynomial of each fuzzy rule is determined by a selection algorithm that considers the local error as a performance index. In addition, a combined local error is introduced as a performance index that accounts for two kinds of parameters: the polynomial type of each rule and the number of polynomial coefficients of each rule. Besides this, the other structural and parametric factors of the IG-FRBFNN are optimized to minimize the global error of the model by means of a hierarchical fair competition-based parallel genetic algorithm.
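
As a compact illustration of two of the building blocks named above, the sketch below computes fuzzy c-means membership grades for fixed cluster centers and fits a linear consequent per rule by weighted least squares; the synthetic data, the fixed centers and the linear (rather than high-order) consequents are simplifying assumptions, not the paper's configuration.

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.uniform(-1.0, 1.0, (200, 1))
    y = np.sin(np.pi * X[:, 0]) + rng.normal(0.0, 0.05, 200)

    def fcm_memberships(X, centers, m=2.0):
        """FCM membership grades for given centers; rows sum to one."""
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        return inv / inv.sum(axis=1, keepdims=True)

    centers = np.array([[-0.6], [0.0], [0.6]])    # assumed to come from FCM iterations
    U = fcm_memberships(X, centers)

    # Per-rule consequent y ~ a0 + a1*x fitted by weighted least squares,
    # with the membership grades of that rule as the weights.
    Phi = np.column_stack([np.ones(len(X)), X[:, 0]])
    rule_coeffs = [
        np.linalg.solve(Phi.T @ (Phi * U[:, r:r + 1]), Phi.T @ (U[:, r] * y))
        for r in range(centers.shape[0])
    ]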

Findings

The performance of the proposed model is illustrated with the aid of two examples. The proposed optimization method leads to an accurate and highly interpretable fuzzy model.

Originality/value

The proposed hybrid optimization methodology is interesting for designing an accurate and highly interpretable fuzzy model. The hybrid optimization algorithm combines the combined local error with the global error.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 3 no. 4
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 2 August 2013

S. Iqbal, A. Javed, A.R. Ansari and A.M. Siddiqui

The authors' objective in this paper is to find the numerical solutions of obstacle, unilateral and contact second‐order boundary‐value problems.

Abstract

Purpose

The authors' objective in this paper is to find the numerical solutions of obstacle, unilateral and contact second‐order boundary‐value problems.

Design/methodology/approach

To achieve this, the authors formulate a spatially adaptive grid refinement scheme following Galerkin's finite element method based on a weighted-residual formulation. A residual-based a posteriori error estimation scheme is utilized to check the approximate solutions on various finite element grids, with the local element balance serving as the error assessment criterion. The approach employs piecewise linear approximations based on linear Lagrange polynomials. Numerical experiments indicate that local errors are large in regions where the gradients are large.
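
The adapt-and-refine idea can be pictured with a 1D toy loop: estimate a per-element indicator and bisect only the worst elements. The indicator below is a simple interpolation-error proxy standing in for the paper's residual-based estimator; the model function and the marking fraction are assumptions for illustration.

    import numpy as np

    u = lambda x: np.arctan(40.0 * (x - 0.5))     # model solution with a steep gradient

    nodes = np.linspace(0.0, 1.0, 6)
    for sweep in range(5):
        mids = 0.5 * (nodes[:-1] + nodes[1:])
        # element indicator: deviation of u at the midpoint from the linear interpolant
        eta = np.abs(u(mids) - 0.5 * (u(nodes[:-1]) + u(nodes[1:])))
        refine = eta > 0.3 * eta.max()            # mark the worst elements only
        new_nodes = [nodes[0]]
        for i in range(len(mids)):
            if refine[i]:
                new_nodes.append(mids[i])         # bisect the marked element
            new_nodes.append(nodes[i + 1])
        nodes = np.array(new_nodes)

    print(len(nodes), "nodes, clustered near x = 0.5 where the gradient is large")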

Findings

A comparison of spatially adaptive grid refinement with uniform meshing for second-order obstacle boundary value problems confirms the superiority of the scheme, which does not increase the number of unknown coefficients.

Originality/value

The authors believe the work has merit not only in terms of the approach but also of the problem solved in the paper.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 23 no. 6
Type: Research Article
ISSN: 0961-5539

Keywords

Article
Publication date: 30 October 2018

Long Thanh Cung, Nam Hoang Nguyen, Pierre Yves Joubert, Eric Vourch and Pascal Larzabal

The purpose of this paper is to propose an approach, which is easy to implement, for estimating the thickness of the air layer that may separate metallic parts in some…

Abstract

Purpose

The purpose of this paper is to propose an approach, which is easy to implement, for estimating the thickness of the air layer that may separate metallic parts in some aeronautical assemblies, by using the eddy current method.

Design/methodology/approach

Based on an experimental study of the coupling of a magnetic cup-core coil sensor with a layered metallic structure (consisting of a first metal layer, an air layer and a second metal layer), confirmed by finite element modelling simulations, an inversion technique relying on a polynomial forward model of the coupling is proposed to estimate the air layer thickness. The least squares and non-negative least squares algorithms are applied and analysed to obtain the estimation results.
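
A minimal sketch of the model-based inversion, under strong simplifications: a polynomial forward model is calibrated by least squares, and a measured response is then inverted by a bounded one-dimensional minimization, which here stands in for the paper's least squares / non-negative least squares treatment. The response curve is synthetic, not eddy current data.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Calibration: fit a polynomial forward model response = p(thickness)
    thickness = np.linspace(0.0, 2.0, 40)              # assumed range, in mm
    rng = np.random.default_rng(3)
    response = 1.0 - np.exp(-1.5 * thickness) + rng.normal(0.0, 0.005, 40)
    coeffs = np.polyfit(thickness, response, deg=3)    # least squares fit

    # Inversion: recover the thickness that reproduces a measured response,
    # constrained to the physical (non-negative) range.
    measured = 1.0 - np.exp(-1.5 * 0.8)                # response of a 0.8 mm air layer
    objective = lambda t: (np.polyval(coeffs, t) - measured) ** 2
    est = minimize_scalar(objective, bounds=(0.0, 2.0), method="bounded")
    print(est.x)                                       # close to 0.8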

Findings

The choice of an appropriate inversion technique to optimize the estimation results depends on the signal-to-noise ratio of the measured data. The estimation error obtained is smaller than a few percent for both simulated and experimental data. The proposed approach can be used to estimate the air layer thickness and the second metal layer thickness either simultaneously or separately.

Originality/value

This model-based approach is easy to implement and applicable to all types of eddy current sensors.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 38 no. 1
Type: Research Article
ISSN: 0332-1649

Keywords

Article
Publication date: 9 November 2012

Octavio Andrés González‐Estrada, Juan José Ródenas, Stéphane Pierre Alain Bordas, Marc Duflot, Pierre Kerfriden and Eugenio Giner

The purpose of this paper is to assess the effect of the statical admissibility of the recovered solution and the ability of the recovered solution to represent the singular…


Abstract

Purpose

The purpose of this paper is to assess the effect of the statical admissibility of the recovered solution and its ability to represent the singular solution, as well as the accuracy and the local and global effectivity of recovery-based error estimators for enriched finite element methods (e.g. the extended finite element method, XFEM).

Design/methodology/approach

The authors study the performance of two recovery techniques. The first is a recently developed superconvergent patch recovery procedure with equilibration and enrichment (SPR‐CX). The second is known as the extended moving least squares recovery (XMLS), which enriches the recovered solutions but does not enforce equilibrium constraints. Both are extended recovery techniques as the polynomial basis used in the recovery process is enriched with singular terms for a better description of the singular nature of the solution.
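
The recovery idea behind Zienkiewicz-Zhu type estimators can be shown in one dimension: the piecewise-constant gradient of a linear finite element solution is replaced by a smoother, node-averaged recovered gradient, and the mismatch between the two drives the error indicator. The singular enrichment terms of SPR-CX and XMLS are omitted in this sketch, and the nodal values are a toy example.

    import numpy as np

    x = np.linspace(0.0, 1.0, 11)
    u = x ** 2                                  # nodal values of a model solution
    grad_elem = np.diff(u) / np.diff(x)         # piecewise-constant FE gradient

    # Nodal recovery: average the gradients of the elements sharing each node
    grad_nodal = np.empty_like(x)
    grad_nodal[1:-1] = 0.5 * (grad_elem[:-1] + grad_elem[1:])
    grad_nodal[0], grad_nodal[-1] = grad_elem[0], grad_elem[-1]

    # ZZ-type element indicator: mismatch between recovered and raw gradients
    recovered_mid = 0.5 * (grad_nodal[:-1] + grad_nodal[1:])
    eta = np.abs(recovered_mid - grad_elem) * np.diff(x)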

Findings

Numerical results comparing the convergence and the effectivity index of both techniques with those obtained without the enrichment enhancement clearly show the need for the use of extended recovery techniques in Zienkiewicz‐Zhu type error estimators for this class of problems. The results also reveal significant improvements in the effectivities yielded by statically admissible recovered solutions.

Originality/value

The paper shows that both extended recovery procedures and statical admissibility are key to an accurate assessment of the quality of enriched finite element approximations.

Article
Publication date: 10 April 2007

Marc Schober and Manfred Kasper

This paper aims to show that simple geometry‐based hp‐algorithms using an explicit a posteriori error estimator are efficient in wave propagation computation of complex structures…

Abstract

Purpose

This paper aims to show that simple geometry‐based hp‐algorithms using an explicit a posteriori error estimator are efficient in wave propagation computation of complex structures containing geometric singularities.

Design/methodology/approach

Four different hp-algorithms are compared with common h- and p-adaptation on electrostatic and time-harmonic problems, with efficiency measured in the number of degrees of freedom and in runtime. An explicit a posteriori error estimator in the energy norm is used for the adaptive algorithms.

Findings

Residual-based error estimation is sufficient to control the adaptation process. A geometry-based hp-algorithm produces the smallest number of degrees of freedom and results in the shortest runtime. Predicted-error algorithms may choose an inappropriate kind of refinement depending on the p-enrichment threshold value. Achieving exponential error convergence is sensitive to the element-wise decision between h-refinement and p-enrichment.
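
The element-wise h-versus-p decision the findings describe can be caricatured as follows: expand the local solution in Legendre polynomials and enrich p where the coefficients decay fast (smooth behaviour), refine h where they do not (a singularity). The smoothness proxy and the threshold below are illustrative assumptions, not the paper's criterion.

    import numpy as np

    def choose_refinement(u_local, x_local, threshold=0.1):
        """Return 'p-enrich' for locally smooth data, 'h-refine' otherwise."""
        coeffs = np.polynomial.legendre.legfit(x_local, u_local, deg=4)
        decay = np.abs(coeffs[-1]) / (np.abs(coeffs).max() + 1e-14)
        return "p-enrich" if decay < threshold else "h-refine"

    x = np.linspace(-1.0, 1.0, 9)
    print(choose_refinement(np.cos(x), x))              # smooth -> "p-enrich"
    print(choose_refinement(np.abs(x) ** 0.5, x))       # kink   -> "h-refine"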

Research limitations/implications

The initial mesh size must be sufficiently small to confine the influence of the phase lag error.

Practical implications

Information is provided on the implementation of hp-algorithms and the use of an explicit error estimator in electromagnetic wave propagation.

Originality/value

The paper is a resource for developing efficient finite element software for high-frequency electromagnetic field computation that provides a guaranteed error bound.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 26 no. 2
Type: Research Article
ISSN: 0332-1649

Keywords

Article
Publication date: 1 August 2002

Kerstin Weinberg and Ulrich Gabbert

The paper presents a new technique for a compatible transition from a h‐refined to a p‐refined finite element mesh. At one or more faces of particularly designed pNh‐transition…

Abstract

The paper presents a new technique for a compatible transition from an h-refined to a p-refined finite element mesh. At one or more faces of specially designed pNh-transition elements, a low-order h-discretization may be combined with a usual p-mesh in the other parts of the elements. The pNh-elements are conforming finite elements that can be applied in an adaptive scheme controlled by a residual-based error estimate. Typical applications that strongly require local mesh refinement within a p-finite element mesh include the approximation of high gradients and the determination of contact areas. Numerical examples demonstrate the efficiency of the pNh-element technique for such problems.

Details

Engineering Computations, vol. 19 no. 5
Type: Research Article
ISSN: 0264-4401

Keywords
