Search results

1 – 10 of over 11000
Article
Publication date: 30 April 2019

Shalini Srivastava, Ajay K. Jain and Sherry Sullivan

Abstract

Purpose

Although considerable research has been completed on employee voice, relatively few studies have investigated employee silence. The purpose of this paper is to examine the relationship between employee silence and job burnout as well as the possible mediating role of emotional intelligence (EI) on the silence-burnout relationship.

Design/methodology/approach

This paper reports the findings of an empirical study based upon the survey of 286 managers working in four different states in India. Correlational and mediated regression analyses were performed to test four hypotheses.

Findings

Contrary to findings from studies conducted in Western countries in which employee silence was positively related to undesirable work outcomes, in this study, employee silence was negatively related to job burnout. Additionally, results indicated that the relationship between employee silence and job burnout was mediated by EI. These findings suggest the importance of considering country context and potential mediating variables when investigating employee silence.

Practical implications

This study demonstrates how Indian employees may strategically choose employee silence in order to enhance job outcomes.

Originality/value

This study is one of the few efforts to investigate employee silence in a non-Western country. It is the first study to examine the role of EI as a mediating variable in the relationship between employee silence and job burnout in India.

Details

Personnel Review, vol. 48 no. 4
Type: Research Article
ISSN: 0048-3486

Article
Publication date: 7 November 2019

Andika Rachman and R.M. Chandima Ratnayake

Abstract

Purpose

Corrosion loop development is an integral part of the risk-based inspection (RBI) methodology. The corrosion loop approach allows a group of piping to be analyzed simultaneously, reducing non-value-adding activities by eliminating repetitive degradation-mechanism assessment for piping with similar operational and design characteristics. However, developing a corrosion loop is a rigorous process that consumes a considerable amount of engineering man-hours. Moreover, corrosion loop development is knowledge-intensive work that involves engineering judgement and intuition, causing the output to have high variability. The purpose of this paper is to reduce the time required for, and the output variability of, the corrosion loop development process by utilizing machine learning and group technology methods.

Design/methodology/approach

To achieve the research objectives, k-means clustering and a non-hierarchical classification model are utilized to construct an algorithm that enables a more automated, effective and efficient corrosion loop development process. A case study demonstrates the functionality and performance of the corrosion loop development algorithm on an actual piping data set.
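
The clustering step described above can be sketched in miniature. This is a hedged illustration, not the authors' implementation: the feature set (operating temperature, pressure, a corrosivity index) and the plain k-means routine below are assumptions chosen only to show how piping segments with similar characteristics fall into the same corrosion loop.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain k-means; returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each piping segment to its nearest centroid.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

# Illustrative piping features: [operating temperature, pressure, corrosivity index]
X = np.array([[120.0, 10.0, 0.20], [125.0, 11.0, 0.25],   # loop-A-like segments
              [300.0, 40.0, 0.80], [310.0, 42.0, 0.85]])  # loop-B-like segments
_, loops = kmeans(X, k=2)
print(loops)  # segments 0 and 1 share one loop; segments 2 and 3 share the other
```

Segments with similar design and operating conditions end up in the same loop, so their degradation mechanisms need be assessed only once per group.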

Findings

The results show that corrosion loops generated by the algorithm have lower variability and higher coherence than corrosion loops produced by manual work. Additionally, the utilization of the algorithm simplifies the corrosion loop development workflow, which potentially reduces the amount of time required to complete the development. The application of corrosion loop development algorithm is expected to generate a “leaner” overall RBI assessment process.

Research limitations/implications

Although the algorithm allows part of the corrosion loop development workflow to be automated, it is still necessary to incorporate the engineer's expertise, experience and intuition into the algorithm's outputs in order to capture tacit knowledge and refine the insights generated by the algorithm.

Practical implications

This study shows that advances in Big Data analytics and artificial intelligence can promote the substitution of machines for human labor in highly complex tasks requiring high qualifications and cognitive skills, including in the inspection and maintenance management area.

Originality/value

This paper discusses a novel way of developing a corrosion loop. Corrosion loop development is an integral part of the RBI methodology, but it has received little attention among scholars in inspection- and maintenance-related subjects.

Details

Journal of Quality in Maintenance Engineering, vol. 26 no. 3
Type: Research Article
ISSN: 1355-2511

Article
Publication date: 25 August 2020

Samta Jain, Smita Kashiramka and P.K. Jain

Abstract

Purpose

The purpose of this paper is to examine the long-term impact of cross-border acquisitions (CBAs) on the financial and operating performance of acquiring firms from emerging economies; the acquiring firms have been segregated into frequent (multiple) and first-time (single) acquirers based on their prior cross-border experience. The intent is to identify whether overseas activities bring an over-and-above advantage to multiple acquirers in terms of enhanced financial synergies and reduced costs, motivating them to engage in sequential international transactions.

Design/methodology/approach

The paper analyses the impact of CBAs announced and completed during 2004–2013 by Indian companies listed on the NIFTY 500 index. The post-acquisition financial and operating performance of Indian cross-border acquirers has been compared with their pre-acquisition performance. The average performance over the three years immediately preceding the acquisition year constitutes the benchmark for the post-acquisition performance. The post-acquisition period includes a year of integration followed by three successive post-integration years. Therefore, in operational terms, the research period extends from 2001 to 2017. The long-term performance of frequent (multiple) and first-time (single) Indian acquirers has been investigated comprehensively using a set of 16 financial ratios. The performance has been assessed using secondary data collected from the financial statements of acquiring companies; the financial statements and the list of CBAs by Indian companies have been obtained from Thomson Reuters' EIKON database.
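
The benchmarking scheme described above reduces to a small computation: the three-year pre-acquisition mean of a ratio serves as the benchmark against the mean of the three post-integration years, with the acquisition and integration years excluded. The helper below and the return-on-assets figures in it are purely illustrative (a hypothetical function and synthetic data, not taken from the paper).

```python
def performance_change(ratio_by_year, acq_year):
    """Post-integration 3-year mean minus the pre-acquisition 3-year benchmark.

    Years acq_year (acquisition) and acq_year + 1 (integration) are skipped,
    following the window definition in the abstract.
    """
    pre = [ratio_by_year[y] for y in range(acq_year - 3, acq_year)]
    post = [ratio_by_year[y] for y in range(acq_year + 2, acq_year + 5)]
    return sum(post) / 3 - sum(pre) / 3

# Illustrative return-on-assets series around a hypothetical 2008 acquisition.
roa = {2005: 0.12, 2006: 0.11, 2007: 0.13,   # pre-acquisition benchmark years
       2008: 0.10,                            # acquisition year (excluded)
       2009: 0.09,                            # integration year (excluded)
       2010: 0.08, 2011: 0.07, 2012: 0.09}    # post-integration years
print(round(performance_change(roa, 2008), 3))  # -0.04
```

A negative value, as in this toy series, corresponds to the deteriorating post-acquisition trend the study reports.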

Findings

The financial and operating performance of frequent as well as first-time acquirers depicts a similarly deteriorating trend during the post-acquisition period. These findings indicate that the international expansion of Indian companies is not guided by synergy-creation potential and may instead be driven by managers' overconfidence or over-optimism and agency conflicts. This, perhaps, indicates that firms are being imprudent in investing the free cash flows available to them.

Originality/value

The study is the first of its kind. No study, to the best of the authors' knowledge, has analysed the performance of acquiring firms by segregating them into frequent and first-time acquirers using accounting measures of performance. Moreover, an extensive analysis of the long-term financial and operating performance of acquiring companies is rare in the extant literature.

Details

Review of International Business and Strategy, vol. 30 no. 4
Type: Research Article
ISSN: 2059-6014

Article
Publication date: 26 June 2019

Mamta Kayest and Sanjay Kumar Jain

Abstract

Purpose

Document retrieval has become a hot research topic over the past few years and has attracted increasing attention for browsing and synthesizing information from different documents. The purpose of this paper is to develop an effective document retrieval method that reduces the time the navigator needs to retrieve a whole document based on the contents, themes and concepts of documents.

Design/methodology/approach

This paper introduces an incremental learning approach for text categorization using a Monarch Butterfly optimization–FireFly optimization based Neural Network (MB–FF based NN). Initially, feature extraction is carried out on the pre-processed data using Term Frequency–Inverse Document Frequency (TF–IDF) and holoentropy to find the keywords of the document. Then, cluster-based indexing is performed using the MB–FF algorithm, and finally, document retrieval is done by matching with a modified Bhattacharyya distance measure. In the MB–FF based NN, the weights of the NN are chosen using the MB–FF algorithm.
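
The TF–IDF step of the pipeline above can be sketched as follows. This is a minimal illustration under assumptions: the toy corpus and the log-based IDF variant are the author of this sketch's choices, and the holoentropy term the paper combines with TF–IDF is not reproduced here.

```python
import math
from collections import Counter

def tf_idf(docs):
    """Return a {term: tf-idf weight} map for each tokenized document."""
    n = len(docs)
    # Document frequency: in how many documents each term appears.
    df = Counter(t for doc in docs for t in set(doc))
    out = []
    for doc in docs:
        tf = Counter(doc)
        # Term frequency scaled by inverse document frequency.
        out.append({t: (c / len(doc)) * math.log(n / df[t])
                    for t, c in tf.items()})
    return out

docs = [["document", "retrieval", "method"],
        ["document", "clustering", "method"],
        ["neural", "network", "weights"]]
weights = tf_idf(docs)
# "retrieval" occurs in one of three documents, so it outweighs
# "document", which occurs in two and is therefore less distinctive.
print(weights[0]["retrieval"] > weights[0]["document"])  # True
```

Terms with high weights act as the document's keywords and feed the cluster-based indexing stage.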

Findings

The effectiveness of the proposed MB–FF based NN is demonstrated with an improved precision of 0.8769, recall of 0.7957, F-measure of 0.8143 and accuracy of 0.7815.

Originality/value

The experimental results show that the proposed MB–FF based NN is useful to companies that have a large workforce across the country.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 12 no. 3
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 5 June 2017

Ajay K. Dhamija, Surendra S. Yadav and P.K. Jain

Abstract

Purpose

Certified emission reduction (CER) survey studies in the literature are quite restrictive in scope. These studies are based on convenience sampling and, therefore, cannot be relied upon. The current study comprehensively surveys the strengths and weaknesses of, and suggestive measures for, the clean development mechanism (CDM). This paper aims to conduct the survey systematically on the top 50 companies in terms of CER volume.

Design/methodology/approach

The survey targets the top 50 companies, which account for 55 per cent of the total number of CERs across all Indian projects. The online survey link was sent to all 50 companies, and the finance managers were followed up regularly over a period of one year. Finally, 37 responses (a response rate of 72 per cent) were received.

Findings

“CER is cheaper than EUA for Emission Compliance” is rated as the topmost strength, and “Methodology of Financial Additionality is Subjective” is rated as the topmost weakness of the CER mechanism. Removal of quantitative restrictions on CERs is rated as the topmost suggestive measure for stabilization of the CER. Companies overwhelmingly favored the continuation of banking and the inclusion of carbon emission cost as one of the internal costs of business.

Practical implications

The current study throws light on future policy interventions for minimization of carbon footprint and efficient energy management.

Social implications

This study gives vital reflections for the stabilization of the CDM. This will support sustainable development, the generation of green energy, the mitigation of carbon emissions at least cost and employment generation in developing countries through CDM project development.

Originality/value

This study differs from earlier studies because it comprehensively surveys the pertinent issues relating to strengths, weaknesses and suggestive measures for the CDM. It also differs from them because it is not based on convenience sampling: it conducts the survey systematically on the top 50 companies in terms of CER volume. Therefore, unlike previous studies of questionable validity, the findings of this study can be safely considered for policy interventions.

Details

International Journal of Energy Sector Management, vol. 11 no. 2
Type: Research Article
ISSN: 1750-6220

Article
Publication date: 19 June 2017

Khai Tan Huynh, Tho Thanh Quan and Thang Hoai Bui

Abstract

Purpose

Service-oriented architecture is an emerging software architecture in which web services (WSs) play a crucial role. In this architecture, WS composition and verification are required when handling complex service requirements from users. When the number of WSs becomes very large in practice, the complexity of composition and verification is correspondingly high. In this paper, the authors propose a logic-based clustering approach that addresses this problem by separating the original repository of WSs into clusters. Moreover, they propose a so-called quality-controlled clustering approach to ensure the quality of the generated clusters within a reasonable execution time.

Design/methodology/approach

The approach represents WSs as logical formulas on which the authors conduct the clustering task. They also combine two of the most popular clustering approaches, hierarchical agglomerative clustering (HAC) and k-means, to ensure the quality of the generated clusters.
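
One common way to combine the two approaches named above is to let HAC choose seed centroids that k-means then refines. The toy sketch below follows that pattern under stated assumptions: the naive O(n³) average-linkage HAC, the one-dimensional feature vectors standing in for logic-formula distances, and the seeding strategy itself are illustrative simplifications, not the authors' WSCOVER implementation.

```python
import numpy as np

def hac(X, k):
    """Naive average-linkage agglomerative clustering down to k clusters."""
    clusters = [[i] for i in range(len(X))]
    while len(clusters) > k:
        best, pair = np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Average pairwise distance between the two candidate clusters.
                d = np.mean([np.linalg.norm(X[i] - X[j])
                             for i in clusters[a] for j in clusters[b]])
                if d < best:
                    best, pair = d, (a, b)
        a, b = pair
        clusters[a] += clusters.pop(b)   # merge the closest pair
    return clusters

def refine(X, seeds, iters=20):
    """k-means refinement starting from the HAC-derived seeds."""
    c = seeds.copy()
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - c[None], axis=2).argmin(axis=1)
        c = np.array([X[labels == j].mean(axis=0) for j in range(len(c))])
    return labels

# Toy 1-D feature values standing in for distances between logical formulas.
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.3]])
seeds = np.array([X[idx].mean(axis=0) for idx in hac(X, k=2)])
labels = refine(X, seeds)
print(labels[:3], labels[3:])  # the first three points share one cluster, the last three the other
```

HAC supplies deterministic, well-separated seeds, which is one way such a combination can control cluster quality relative to randomly initialized k-means.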

Findings

This logic-based clustering approach significantly increases the performance of WS composition and verification. Furthermore, the logic-based approach helps to maintain the soundness and completeness of the composition solution. Eventually, the quality-controlled strategy ensures the quality of the generated clusters in low time complexity.

Research limitations/implications

The work discussed in this paper is implemented only as a research tool known as WSCOVER. More work is needed to make it a practical, usable system for real-life applications.

Originality/value

In this paper, the authors propose a logic-based paradigm to represent and cluster WSs. Moreover, they propose a quality-controlled clustering approach that combines, and takes advantage of, two of the most popular clustering approaches: HAC and k-means.

Article
Publication date: 2 February 2015

Ajay K. Jain

Abstract

Purpose

The purpose of this paper is to investigate the impact of motives for volunteerism and organizational culture on organizational commitment (OC) and organizational citizenship behavior (OCB) in the Indian work context.

Design/methodology/approach

The data were collected from 248 middle and senior managers of a public sector organization in India. Self- and other-reported questionnaires were used to collect the data.

Findings

Results of hierarchical regression analysis show that the personal development dimension of volunteerism is a positive predictor of both OC and OCB. However, the career enhancement, empathy and community concern dimensions of volunteerism had mixed effects on both criterion variables. Furthermore, culture did not show a significant impact on OCB; however, it had a positive influence on affective and continuance commitment. Moreover, demographic variables (age, education and tenure) had a stronger impact on OC than on OCB.

Practical implications

OC and OCB are highly desirable forms of employee behavior in which motivation for volunteerism and organizational culture can play a significant role. However, OC and OCB are differentially predicted by these antecedent variables.

Originality/value

This is the first study to explore the impact of motives for volunteerism on OC and OCB in the field of organizational behavior in a non-Western work context such as India.

Article
Publication date: 2 October 2017

Ajay Kumar Dhamija, Surendra S. Yadav and P.K. Jain

Abstract

Purpose

The purpose of this paper is to find the best method for forecasting European Union Allowance (EUA) returns and to determine their price determinants. Previous studies in this area have focused on particular subsets of EUA data and do not account for multicollinearities. The authors take EUA data from all three phases and the continuous series, adopt principal component analysis (PCA) to eliminate multicollinearities and fit seven different homoscedastic models for a comprehensive analysis.

Design/methodology/approach

PCA is adopted to extract independent factors. Seven different linear regression and autoregressive integrated moving average (ARIMA) models are employed to forecast EUA returns and isolate their price determinants. The seven models are then compared, and the one with the minimum root mean square error is adjudged the best model.
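
The core of the pipeline above (orthogonal factors via PCA, then regression on those factors) can be sketched with synthetic data. This is a hedged illustration only: the collinear predictors, the two-component cut-off and the plain OLS step are assumptions of this sketch, and the paper's seven models and ARIMA variants are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
base = rng.normal(size=n)
# Two deliberately collinear predictors plus an independent one,
# mimicking correlated energy variables.
X = np.column_stack([base,
                     base + 0.01 * rng.normal(size=n),   # near-duplicate of base
                     rng.normal(size=n)])
y = 2.0 * base + 0.1 * rng.normal(size=n)   # synthetic "EUA returns"

# PCA via SVD of the centered predictors; the score columns are orthogonal,
# so the multicollinearity between the first two predictors disappears.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T

# OLS of returns on an intercept plus the two leading components.
Z = np.column_stack([np.ones(n), scores[:, :2]])
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
resid = y - Z @ beta
print(round(abs(float(np.corrcoef(scores[:, 0], scores[:, 1])[0, 1])), 6))  # 0.0
```

Because the factors are uncorrelated by construction, their regression coefficients can be interpreted independently, which is the point of eliminating multicollinearity before isolating price determinants.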

Findings

The best model for forecasting the EUA returns of all three phases is dynamic linear regression with lagged predictors, and the best for forecasting the EUA continuous series is linear regression with ARIMA errors. Latent factors such as switch to gas (STG) and the clean spread (capturing the effects of the clean dark spread, clean spark spread, switching price and natural gas price), National Allocation Plan announcement events, energy variables, the German Stock Exchange index and extreme temperature events have been isolated as the price determinants of EUA returns.

Practical implications

The current study contributes to effective carbon management by providing a quantitative framework for analyzing cap-and-trade schemes.

Originality/value

This study differs from earlier studies in three main aspects. First, instead of focusing on a particular subset of EUA data, it comprehensively analyses data from all three phases of the EUA along with the EUA continuous series. Second, it expressly adopts PCA to eliminate multicollinearities, thereby reducing the error variance. Finally, it evaluates both linear and non-linear homoscedastic models incorporating lags of predictor variables to isolate the price determinants of EUA returns.

Details

Journal of Advances in Management Research, vol. 14 no. 4
Type: Research Article
ISSN: 0972-7981

Book part
Publication date: 4 December 2020

Abstract

Details

Application of Big Data and Business Analytics
Type: Book
ISBN: 978-1-80043-884-2

Article
Publication date: 23 August 2022

Kamlesh Kumar Pandey and Diwakar Shukla

Abstract

Purpose

The K-means (KM) clustering algorithm is extremely sensitive to the selection of initial centroids, since the initial centroids determine computational effectiveness, efficiency and local-optima issues. Numerous initialization strategies have been proposed to overcome these problems through random or deterministic selection of initial centroids. The random initialization strategy suffers from local-optimization issues and the worst clustering performance, while the deterministic initialization strategy incurs high computational cost. Big data clustering aims to reduce computation costs and improve cluster efficiency. The objective of this study is to achieve better initial centroids for big data clustering on business management data without random or deterministic initialization, thereby avoiding local optima and improving clustering efficiency and effectiveness in terms of cluster quality, computation cost, data comparisons and iterations on a single machine.

Design/methodology/approach

This study presents the Normal Distribution Probability Density (NDPD) based K-means (NDPDKM) algorithm for big data clustering on a single machine to solve business-management-related clustering issues. The NDPDKM algorithm resolves the KM initialization problem via the probability density of each data point. It first identifies the most probable density data points using the mean and standard deviation of the dataset through the normal probability density. Thereafter, it determines K initial centroids using sorting and linear systematic sampling heuristics.
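
The initialization recipe outlined above (score points by normal density from the data's own mean and standard deviation, sort by that score, then draw K seeds by linear systematic sampling) can be sketched as follows. The exact scoring and sampling details of NDPDKM are assumptions here, reconstructed only from the abstract; the function below is illustrative, not the authors' code.

```python
import numpy as np

def ndpd_seeds(X, k):
    """Deterministic K seeds via normal-density scoring + systematic sampling."""
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    # Product of per-feature normal densities gives each point a density score.
    pdf = np.prod(np.exp(-0.5 * ((X - mu) / sigma) ** 2)
                  / (sigma * np.sqrt(2 * np.pi)), axis=1)
    order = np.argsort(pdf)        # sort points by density score
    step = len(X) // k             # linear systematic sampling interval
    return X[order[step // 2 :: step][:k]]

# Toy 1-D business data with three value ranges.
X = np.array([[1.0], [1.1], [1.2], [5.0], [5.1], [9.0], [9.2], [9.3]])
seeds = ndpd_seeds(X, k=3)
print(seeds.shape)  # (3, 1) — no random draw, so the seeds are reproducible
```

Because no random draw is involved, repeated runs yield identical seeds, which is the property the study relies on to avoid random initialization's local-optima issues without a costly deterministic search.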

Findings

The performance of the proposed algorithm is compared with the KM, KM++, Var-Part, Murat-KM, Mean-KM and Sort-KM algorithms using the Davies–Bouldin score, Silhouette coefficient, SD validity, S_Dbw validity, number of iterations and CPU time validation indices on eight real business datasets. The experimental evaluation demonstrates that the NDPDKM algorithm reduces iterations, local optima and computing costs, and improves cluster performance, effectiveness and efficiency with stable convergence compared to the other algorithms. The NDPDKM algorithm reduces the average computing time by up to 34.83%, 90.28%, 71.83%, 92.67%, 69.53% and 76.03%, and the average iterations by up to 40.32%, 44.06%, 32.02%, 62.78%, 19.07% and 36.74%, with reference to the KM, KM++, Var-Part, Murat-KM, Mean-KM and Sort-KM algorithms, respectively.

Originality/value

The KM algorithm is the most widely used partitional clustering approach in data mining, extracting hidden knowledge, patterns and trends for decision-making strategies in business data. Business analytics is one of the applications of big data clustering, where KM clustering is useful for various subcategories of business analytics such as customer segmentation analysis, employee salary and performance analysis, document searching, delivery optimization, discount and offer analysis, churn management, manufacturing analysis, productivity analysis, specialized employee and investor searching and other decision-making strategies in business.
