Search results
1 – 10 of over 50,000

Robert Kozielski, Michał Dziekoński, Michał Medowski, Jacek Pogorzelski and Marcin Ostachowski
Abstract
Companies spend millions on training their sales representatives. Thousands of textbooks have been published; thousands of training videos have been recorded. Hundreds of pieces of good advice and tips for sales representatives have been presented, along with hundreds of sales methods and techniques. Probably the largest number of indicators and measures is applied in sales and distribution. On the one hand, this is because sales provide revenue and profit to a company; on the other hand, the concept of management by objectives turns out to be most effective in regional sales teams, as applied to sales representatives and their performance evaluation. As a result, a whole array of indices has been created which enable the evaluation of sales representatives' work and make it possible to manage goods distribution better.
The indices presented in this chapter are rooted in the consumer market and are applied most often to this type of market (particularly in relation to fast-moving consumer goods at the level of retail trade). Nevertheless, many of them can be used on other markets (services, means of production) and at other trade levels (wholesale).
Although the values of many indices presented herein are usually calculated by market research agencies and delivered to companies in the form of synthetic results, we have placed the emphasis on the ability to determine them independently, both in descriptive and exemplifying terms. We consider it important to understand the genesis of indices and to build the ability to interpret them on that basis. Significantly, the indices can be interpreted differently: the same index may provide a different assessment of a product's, brand's or company's position in the market depending on the parameters taken into account. Therefore, we strive to show a certain way of thinking rather than give ready-made recipes and cite 'proven' principles. Sales and distribution are dynamic phenomena, and confining them within the framework of 'one proper' interpretation would be an intellectual abuse.
Venugopal Haridoss and Kandasamy Subramani
Abstract
Purpose
The purpose of this paper is to present the optimal double sampling attribute plan using the weighted Poisson distribution.
Design/methodology/approach
For the given AQL and LQL, the sum of the producer's and consumer's risks has been obtained. Based on the weighted Poisson distribution, this sum has been optimized.
Findings
In the final inspection, the producer and the consumer represent the same party, so the sum of these two risks should be minimized. In this paper, the sum of risks has been tabulated using the weighted Poisson distribution for different operating ratios. These tabulated values are smaller than the corresponding sums of risks derived using the ordinary Poisson distribution.
Originality/value
The sampling plan presented in this paper is particularly useful for testing the quality of finished products in shop floor situations.
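The risk-sum arithmetic described above can be sketched numerically. The sketch below uses the ordinary Poisson model for lot defectives; the paper's weighted Poisson pmf would be substituted for `poisson_pmf`, and all plan parameters (n1, n2, c1, c2, AQL, LQL) are illustrative, not taken from the paper's tables.

```python
from math import exp, factorial

def poisson_pmf(d, lam):
    """Poisson probability of d defectives with mean lam = n * p."""
    return exp(-lam) * lam**d / factorial(d)

def poisson_cdf(c, lam):
    return sum(poisson_pmf(d, lam) for d in range(c + 1))

def double_plan_pa(p, n1, n2, c1, c2):
    """Acceptance probability of a double sampling plan: accept if
    d1 <= c1, reject if d1 > c2, otherwise draw the second sample
    and accept if d1 + d2 <= c2."""
    pa = poisson_cdf(c1, n1 * p)
    for d1 in range(c1 + 1, c2 + 1):
        pa += poisson_pmf(d1, n1 * p) * poisson_cdf(c2 - d1, n2 * p)
    return pa

def sum_of_risks(aql, lql, n1, n2, c1, c2):
    """Producer's risk at AQL plus consumer's risk at LQL -- the
    quantity the plan designer minimizes over candidate plans."""
    alpha = 1.0 - double_plan_pa(aql, n1, n2, c1, c2)
    beta = double_plan_pa(lql, n1, n2, c1, c2)
    return alpha + beta
```

A search over candidate plans (n1, n2, c1, c2) minimizing `sum_of_risks` would mimic the optimization step, with the weighted pmf swapped in.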
Kandasamy Subramani and Venugopal Haridoss
Abstract
Purpose
The purpose of this paper is to present the single sampling attribute plan for given acceptance quality level (AQL) and limiting quality level (LQL) involving minimum sum of risks using weighted Poisson distribution.
Design/methodology/approach
For the given AQL and LQL, the sum of the producer's and consumer's risks has been obtained. Based on the weighted Poisson distribution, the sum of these risks has been derived, along with the acceptance number and the rejection number. The operating characteristic function for the single sampling attribute plan, using the weighted Poisson distribution, has also been derived.
Findings
In the final inspection, the producer and the consumer represent the same party, so the sum of these two risks should be minimized. In this paper, the sum of risks has been tabulated using the weighted Poisson distribution for different operating ratios. These tabulated values are smaller than the corresponding sums of risks derived using the ordinary Poisson distribution.
Originality/value
The sampling plan presented in this paper is particularly useful for testing the quality of finished products in shop floor situations.
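A minimal sketch of the single-plan calculation, assuming the size-biased form of the weighted Poisson distribution (weight w(x) = x); the authors' exact weighting and tabulated plans may differ.

```python
from math import exp, factorial

def weighted_poisson_pmf(x, lam):
    """Size-biased Poisson pmf (weight w(x) = x): x * P_Poisson(x) / lam,
    which simplifies to exp(-lam) * lam**(x - 1) / (x - 1)! for x >= 1."""
    if x < 1:
        return 0.0
    return exp(-lam) * lam**(x - 1) / factorial(x - 1)

def oc_single(p, n, c):
    """Operating characteristic function: probability of accepting a lot
    with fraction defective p under plan (n, c) and the size-biased model."""
    lam = n * p
    return sum(weighted_poisson_pmf(x, lam) for x in range(c + 1))

def risk_sum(aql, lql, n, c):
    """Producer's risk at AQL plus consumer's risk at LQL."""
    return (1.0 - oc_single(aql, n, c)) + oc_single(lql, n, c)
```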
Xuelian Sun, Enmin Feng, Jianguo Liu and Bing Wang
Abstract
Purpose
The purpose of this paper is to study some evolving mechanisms for producing weighted networks, as well as to analyze the statistical properties of the networks.
Design/methodology/approach
A simple one‐parameter evolution model of weighted networks is proposed, in which the topological growth combines with the variation of weights. Based on weight‐driven dynamics, the model can generate scale‐free distributions of the degree, node strength and edge weight, as confirmed in many real networks.
Findings
The exponent of the edge weight distribution can be tuned over a wide range. A single parameter p controls the dynamical growth of edge weights. The authors also obtain the non-trivial weighted clustering coefficient and the weighted average nearest-neighbor degree.
Research limitations/implications
The accessibility and availability of data are the main limitations, and these apply to the figures.
Practical implications
The new evolving networks method may be beneficial for understanding real networks.
Originality/value
The paper proposes a new approach of explaining the evolving mechanisms of the real networks.
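A strength-driven growth model of this general kind (in the style of Barrat, Barthélemy and Vespignani's weighted-network model) can be sketched as follows; the paper's exact one-parameter rule is not reproduced here, and `delta` stands in for its tuning parameter.

```python
import random

def strength_driven_growth(steps, m=2, delta=1.0, seed=0):
    """Grow a weighted network: each new node attaches m edges (weight 1)
    to existing nodes chosen with probability proportional to their
    strength s_i (sum of incident edge weights); each new attachment then
    reinforces the target's existing edges by a total of delta."""
    rng = random.Random(seed)
    weights = {(0, 1): 1.0}     # (i, j) with i < j -> edge weight
    strength = [1.0, 1.0]       # start from a single edge 0 -- 1
    for _ in range(steps):
        new = len(strength)
        targets = set()
        while len(targets) < min(m, new):
            r = rng.uniform(0, sum(strength))
            acc = 0.0
            for i, s in enumerate(strength):
                acc += s
                if acc >= r:
                    targets.add(i)
                    break
        strength.append(0.0)
        for t in targets:
            s_t = strength[t]
            # spread an extra delta over t's existing edges, proportionally
            for a, b in [e for e in weights if t in e]:
                inc = delta * weights[(a, b)] / s_t
                weights[(a, b)] += inc
                strength[a] += inc
                strength[b] += inc
            weights[(min(t, new), max(t, new))] = 1.0
            strength[t] += 1.0
            strength[new] += 1.0
    return strength, weights
```

Simulating many steps and inspecting the tails of the degree, strength and weight distributions is how the scale-free behavior described above would be checked empirically.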
Vito Peragine and Laura Serlenga
Abstract
Purpose: This paper aims at studying the degree of equality of educational opportunity in the Italian university system.
Methodology: We build on the approaches developed by Peragine (2004, 2005) and Lefranc et al. (2006a, 2006b) and focus on the equality of educational opportunities for individuals of different social background. We propose different definitions of equality of opportunity in education. Then, we provide testable conditions with the aim of (i) testing for the existence of equality of opportunity (EOp) in a given distribution and (ii) ranking distributions on the basis of EOp. Definitions and conditions resort to standard stochastic conditions that are tested by using nonparametric tests developed by Beach and Davidson (1983) and Davidson and Duclos (2000).
Findings: Our empirical results show a strong family effect on students' performance in higher education and on graduates' transition into the labor market. Moreover, inequality of opportunity turns out to be more severe in the South than in the North-Center regions.
Originality: This work contributes to the literature in three ways: first, it proposes a definition of equality of educational opportunities. Second, the paper develops a methodology in order to test for the existence of equality of opportunity in a given distribution and to rank distributions according to equality of opportunity. Third, we present empirical evidence on the degree of equality of educational opportunity in the Italian university system.
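The dominance conditions used for ranking can be illustrated with a naive finite-sample check; the paper itself relies on the asymptotic tests of Beach and Davidson (1983) and Davidson and Duclos (2000), not this plug-in comparison.

```python
def ecdf(sample, x):
    """Empirical CDF of `sample` evaluated at x."""
    return sum(1 for v in sample if v <= x) / len(sample)

def first_order_dominates(a, b):
    """True if `a` first-order stochastically dominates `b`:
    F_a(x) <= F_b(x) at every point, with strict inequality somewhere.
    Under equality of opportunity, neither background type's outcome
    distribution should dominate the other's."""
    grid = sorted(set(a) | set(b))
    diffs = [ecdf(a, x) - ecdf(b, x) for x in grid]
    return all(d <= 1e-12 for d in diffs) and any(d < -1e-12 for d in diffs)
```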
Esfandiar Maasoumi and Le Wang
Abstract
Building on recent advances in inverse probability weighted identification and estimation of counterfactual distributions, the authors examine the history of wage earnings for women and their potential wage distributions in the United States. These potentials are two counterfactuals: what if women received men’s market “rewards” for their own “skills,” and what if they received the women’s rewards but for men’s characteristics? Using the Current Population Survey data from 1976 to 2013, the authors analyze the entire counterfactual distributions to separate the “structure” and human capital “composition” effects. In contrast to Maasoumi and Wang (2019), the reference outcome in these decompositions is women’s observed earnings distribution, and inverse probability methods are employed rather than the conditional quantile approaches. The authors provide decision theoretic measures of the distance between two distributions, to complement assessments based on the mean, median, or particular quantiles. They assess uniform rankings of alternative distributions by tests of stochastic dominance in order to identify evaluations robust to subjective measures. Traditional moment-based measures severely underestimate the declining trend of the structure effect. Nevertheless, dominance rankings suggest that the structure (“discrimination”?) effect is bigger than the effect of human capital characteristics.
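The reweighting idea behind such counterfactuals can be sketched with a discrete covariate; real applications estimate propensity scores, and the group labels and wage values below are purely illustrative.

```python
from collections import Counter

def dfl_weights(source_covs, target_covs):
    """DiNardo-Fortin-Lemieux-style reweighting for a discrete covariate:
    each source observation gets weight P_target(x) / P_source(x), so the
    reweighted source sample has the target group's covariate composition
    while keeping the source group's 'reward structure'."""
    p_src = Counter(source_covs)
    p_tgt = Counter(target_covs)
    n_src, n_tgt = len(source_covs), len(target_covs)
    return [(p_tgt[x] / n_tgt) / (p_src[x] / n_src) for x in source_covs]

def weighted_mean(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)
```

Reweighting men's wages to women's covariate mix gives a toy version of the "men's rewards, women's characteristics" counterfactual; the same weights can be applied to any quantile or distance measure, not just the mean.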
Mohd Azri Pawan Teh, Nazrina Aziz and Zakiyah Zain
Abstract
Purpose
This paper introduces group chain acceptance sampling plans (GChSP) for a truncated life test at a pre-assumed time using the minimum angle method. The proposed method is an approach in which both risks associated with acceptance sampling, namely the consumer's and the producer's risks, are considered. Currently, the GChSP considers only the consumer's risk (CR), which means the current plan protects only the consumer, not the producer, since it does not take the producer's risk (PR) into account at all.
Design/methodology/approach
There are six phases involved when designing the GChSP, which are (1) identifying the design parameters, (2) implementing the operating procedures, (3) deriving the probability of lot acceptance, (4) deriving the probability of zero or one defective, (5) deriving the proportion defective and (6) measuring the performance.
Findings
The findings show that the optimal number of groups obtained satisfies both parties, i.e. the consumer and the producer, whereas in the established GChSP the number of groups calculated satisfies only the consumer, not the producer.
Research limitations/implications
Three limitations are identified for this paper. The first is the distribution: this paper proposes the GChSP only for the generalized exponential distribution, and it can be extended to other distributions available in the literature. The second is that the paper uses the binomial distribution when deriving the probability of lot acceptance; it could also be derived using other distributions such as the weighted binomial, Poisson and weighted Poisson distributions. The final limitation is that the paper adopts the mean as the quality parameter; researchers have other options such as the median and percentiles.
Practical implications
The proposed GChSP should provide industrial practitioners with an alternative for inspection activity, as they have more sampling plans to choose from before finally selecting one.
Originality/value
This is the first paper to propose the minimum angle method for the GChSP, where both risks, CR and PR, are considered. The GChSP has been developed since 2015, but all the researchers only considered the CR in their papers.
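A rough sketch of the acceptance probability and the minimum-angle criterion, assuming a ChSP-1-style chain rule applied to g groups of r items and a binomial defect model; the authors' exact GChSP operating procedure and tables may differ.

```python
from math import comb, atan, degrees

def binom_pmf(d, n, p):
    return comb(n, d) * p**d * (1 - p)**(n - d)

def gchsp_pa(p, g, r, i):
    """ChSP-1-style acceptance probability for g groups of r items:
    accept on zero defectives, or on exactly one defective provided
    the preceding i lots were defect-free."""
    n = g * r
    p0 = binom_pmf(0, n, p)
    p1 = binom_pmf(1, n, p)
    return p0 + p1 * p0**i

def oc_angle(aql, lql, g, r, i):
    """Minimum-angle criterion: the angle (in degrees) of the chord
    joining the OC points at AQL and LQL; a steeper OC curve gives a
    smaller angle and better discrimination between good and bad lots."""
    dy = gchsp_pa(aql, g, r, i) - gchsp_pa(lql, g, r, i)
    dx = lql - aql
    return degrees(atan(dx / dy))
```

Minimizing `oc_angle` over candidate numbers of groups is the spirit of the search the paper describes, with both CR and PR constraints checked on the resulting plan.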
Francesco Caracciolo and Marilena Furno
Abstract
Purpose
Several approaches have been proposed to evaluate treatment effects, relying on matching methods, propensity scores, quantile regression, influence functions, the bootstrap and various combinations of the above. This paper considers two of these approaches to define the quantile double robust (DR) estimator: the inverse propensity score weights, to compare the potential outcomes of treated and untreated groups; and the Machado and Mata quantile decomposition approach, to compute the unconditional quantiles within each group – treated and control. Two Monte Carlo studies and an empirical application to the Italian labor market conclude the analysis. The paper aims to discuss these issues.
Design/methodology/approach
The DR estimator is extended to analyze the tails of the distribution, comparing treated and untreated groups and thus defining the quantile-based DR estimator. It allows us to measure the treatment effect along the entire outcome distribution. Such a detailed analysis uncovers heterogeneous impacts of the treatment along the outcome distribution: the computation of the treatment effect at the quantiles points out variations in the impact of treatment, and it is often the case that the impact in the tails sizably differs from the average treatment effect.
Findings
Two Monte Carlo studies show that, away from the average, the quantile DR estimator can be profitably implemented. In the real-data example, the nationwide results are compared with the analysis at the regional level. While at the median and at the upper quartile the nationwide impact is similar to the regional impacts, at the first quartile – the lower incomes – the nationwide effect is close to the North-Center impact but underestimates the impact in the South.
Originality/value
The computation of the treatment effect at various quantiles makes it possible to point out discrepancies between treatment and control along the entire outcome distributions. The discrepancy in the tails may differ from the divergence between the average values, and treatment can be more effective at the lower/higher quantiles. The simulations show the performance of the quantile DR estimator at the quartiles. In a wage equation comparing long- and short-term contracts, this estimator shows the presence of a heterogeneous impact of short-term contracts: their impact changes depending on the income level, the outcome quantile and the geographical region.
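One ingredient of such an estimator – the inverse-propensity-weighted quantile comparison – can be sketched as follows; the full quantile DR estimator also involves the Machado and Mata decomposition, which is not shown, and the propensity scores here are taken as given rather than estimated.

```python
def weighted_quantile(values, weights, q):
    """Quantile q of a weighted sample (inverse of the weighted ECDF)."""
    pairs = sorted(zip(values, weights))
    total = sum(w for _, w in pairs)
    acc = 0.0
    for v, w in pairs:
        acc += w
        if acc / total >= q:
            return v
    return pairs[-1][0]

def ipw_quantile_effect(y, d, pscore, q):
    """IPW quantile contrast: difference between the weighted q-quantile
    of treated outcomes (weights 1/e(x)) and of control outcomes
    (weights 1/(1 - e(x)))."""
    yt = [yi for yi, di in zip(y, d) if di == 1]
    wt = [1 / e for e, di in zip(pscore, d) if di == 1]
    yc = [yi for yi, di in zip(y, d) if di == 0]
    wc = [1 / (1 - e) for e, di in zip(pscore, d) if di == 0]
    return weighted_quantile(yt, wt, q) - weighted_quantile(yc, wc, q)
```

Evaluating `ipw_quantile_effect` at several values of q is what reveals the heterogeneity in the tails that a single average effect would hide.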
Naveen Donthu, Satish Kumar, Riya Sureka and Rohit Joshi
Abstract
Purpose
This study aims to map the major research constituents and trends for the Journal of Business and Industrial Marketing (JBIM) during its 34-year history (1986–2019). It also identifies JBIM’s thematic structure and the key factors affecting the impact of its articles.
Design/methodology/approach
The Scopus database is used to identify the bibliographic data of JBIM. The most prolific authors, institutions and countries in the journal are analyzed through weighted distributions of articles. The thematic structure of the journal is evaluated by means of bibliographic coupling analysis. The study also examines the factors influencing citations of JBIM articles through regression modeling.
Findings
JBIM publishes contributions from around the world, though the most prolific contributors are affiliated with the USA, UK and Finland. Thematic analysis divided JBIM articles into five major themes. Citation analysis reveals that article age, special issue appearance, number of author keywords and number of references are prominent factors explaining an article’s impact.
Research limitations/implications
This study uses data from the Scopus database, and limitations of the database have implications for the findings.
Originality/value
This is the first comprehensive study to identify the thematic structure and the factors affecting citations of JBIM articles.
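A weighted (fractional) distribution of articles of the kind used for productivity counts can be sketched as follows; whether the JBIM analysis uses exactly this fractional-counting rule is an assumption.

```python
from collections import defaultdict

def fractional_counts(articles):
    """Weighted (fractional) productivity count: an article with k
    authors adds 1/k to each author's score, so multi-authored papers
    are not over-counted relative to solo papers. `articles` is a list
    of author-name lists, one per article."""
    scores = defaultdict(float)
    for authors in articles:
        share = 1 / len(authors)
        for a in authors:
            scores[a] += share
    return dict(scores)
```

The same share-based counting applies to institutions and countries by replacing author names with affiliations.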
Naomi Friedman-Sokuler and Claudia Senik
Abstract
Using the American and the French time-use surveys, we examine whether people have a preference for a more diversified mix of activities, in the sense that they experience greater well-being when their time schedule contains many different activities rather than being concentrated on a very small number. This could be due to decreasing marginal utility, as is assumed for goods consumption, if each episode of time is conceived as yielding a certain level of utility per se. With returns to specialization, people would then face a trade-off between efficiency and diversity in choosing how to allocate time. We examine these issues and investigate potential gender differences, considering both instantaneous feelings and life satisfaction.
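One simple way to quantify how "diversified" a time schedule is – not necessarily the measure used by the authors – is the effective number of activities implied by the Shannon entropy of time shares:

```python
from math import log, exp

def activity_diversity(minutes):
    """Effective number of distinct activities in a day: exp(H), where H
    is the Shannon entropy of the shares of time spent per activity.
    Equals the raw activity count when time is split evenly, and 1 when
    the whole day is concentrated on a single activity."""
    total = sum(minutes.values())
    shares = [m / total for m in minutes.values() if m > 0]
    h = -sum(s * log(s) for s in shares)
    return exp(h)
```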