Search results
1 – 10 of 277
Lutz Bornmann, Adam Ye and Fred Ye
Abstract
Purpose
The purpose of this paper is to propose an approach for identifying landmark papers in the long run. These publications reach a very high level of citation impact and are able to remain on this level across many citing years. In recent years, several studies have been published which deal with the citation history of publications and try to identify landmark publications.
Design/methodology/approach
In contrast to other studies published hitherto, this study draws on a broad data set of papers published between 1980 and 1990 to identify landmark papers. The authors analyzed the citation histories of about five million papers across 25 years.
Findings
The results of this study reveal that 1,013 papers (less than 0.02 percent) are “outstandingly cited” in the long run. The cluster analyses of the papers show that they received the high impact level very soon after publication and remained on this level over decades. Only a slight impact decline is visible over the years.
Originality/value
For practical reasons, approaches for identifying landmark papers should be as simple as possible. The approach proposed in this study is based on standard methods in bibliometrics.
Yanni Ping, Alexander Buoye and Ahmad Vakil
Abstract
Purpose
The purpose of this study is to present a methodology for enhancing the quality and usefulness of online reviews for prospective customers to investigate how this contemporary form of instrumental support can be facilitated to strengthen customer-to-customer support.
Design/methodology/approach
This study develops an analytics framework with applications of machine learning models using customer review data from Amazon.com. Although linear regression is commonly used for review helpfulness and sales prediction, a Random Forest model is applied here because of its strong performance and reliability. To advance the methodology, a custom Python script generates Partial Dependence Plots for intensive exploration of the dependency interpretations of review helpfulness and sales. The authors also apply K-Means to cluster reviewers and use the results to generate reviewer qualification scores and collective reviewer scores, which are incorporated into the review facilitation process.
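The pipeline described above can be sketched roughly as follows: a Random Forest predicts review helpfulness, partial dependence exposes how the prediction varies with one feature, and K-Means groups reviewers. The feature names and synthetic data are illustrative assumptions, not the authors' dataset or code.

```python
# Hypothetical sketch of the analytics framework; features and data are invented.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import partial_dependence
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n = 500
# Invented review-level features: length, star rating, reviewer's helpfulness ratio
X = np.column_stack([
    rng.integers(20, 400, n),   # review length (words)
    rng.integers(1, 6, n),      # star rating
    rng.random(n),              # reviewer's average helpfulness ratio
])
y = 0.5 * X[:, 2] + 0.001 * X[:, 0] + rng.normal(0, 0.05, n)  # helpfulness score

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Partial dependence of predicted helpfulness on the helpfulness-ratio feature
pd_result = partial_dependence(rf, X, features=[2], grid_resolution=20)

# Cluster reviewers on (invented) profile features, e.g. ratio and review count
reviewers = rng.random((100, 2))
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(reviewers)
```

In practice the partial dependence curves would be plotted, and the cluster labels would feed the reviewer qualification scoring described in the abstract.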
Findings
The authors find the reviewer's average helpfulness ratio to be the most important determinant of reviewer qualification. The collective reviewer qualification for a product, created from reviewers' characteristics, is found to be important to customers' purchase intentions and can be used as a metric for product comparison.
Practical implications
The findings of this study suggest that service improvement efforts can be performed by developing software applications to monitor reviewer qualifications dynamically, bestowing a badge to top quality reviewers, redesigning review sorting interfaces and displaying the consumer rating distribution on the product page, resulting in improved information reliability and consumer trust.
Originality/value
This study adds to the research on customer-to-customer support in the service literature. As customer reviews perform as a contemporary form of instrumental support, the authors validate the determinants of review helpfulness and perform an intensive exploration of its dependency interpretation. Reviewer qualification and the collective reviewer qualification scores are generated as new predictors and incorporated into the helpfulness-based review facilitation services.
Avi Herbon, Shalom Moalem, Haim Shnaiderman and Joseph Templeman
Abstract
Purpose
The purpose of this paper is to develop a user‐oriented decision‐supporting applicable tool for selection of a single supplier out of a group of potential suppliers in a dynamic business environment over a finite planning horizon.
Design/methodology/approach
The methodology provides a qualitative and quantitative description of the impact of a change in one or several business environment parameters on current and future supplier choice, accompanied by a visual representation of those impacts for the decision maker. The paper presents extended simulation experiments to test the proposed methodology.
Findings
A strategy of replacing suppliers over a definite planning horizon based on a forecast of the business environment is significantly (2‐9 per cent) more efficient than a strategy of relying on a single leading supplier throughout the planning horizon. The more dynamic the business environment, the greater this efficiency gain.
Practical implications
The proposed methodology is applicable to a broad range of service and manufacturing organizations that operate in dynamic business environments and rely on complex purchasing systems. Thanks to its simplicity, it can be applied to very large systems with a broad range of selection and/or environmental parameters.
Originality/value
Although the supplier selection process has been extensively studied, the literature still lacks appropriate reference to the effects of a dynamic business environment on this process.
Shan Jiang, Xi Zhang, Yihang Cheng, Dongming Xu, Patricia Ordoñez De Pablos and Xuyan Wang
Abstract
Purpose
Social loafing in knowledge contribution (namely, knowledge contribution loafing [KCL]) usually happens in group contexts, especially in mobile collaboration tasks. KCL shows dynamic features over time. However, most previous studies rest on a static assumption, namely that KCL does not change over time. This paper aims to reveal the dynamics of KCL in mobile collaboration and to analyze how network centrality influences KCL states given the current loafing state.
Design/methodology/approach
This study uses an empirical design. Real mobile collaboration behavioral data related to knowledge contribution were collected to investigate the dynamic relationship between network centrality and KCL. In total, 4,127 chat contents were collected through Slack (a mobile collaboration app). The text data were first analyzed using text analysis methods and then modeled with a hidden Markov model, a machine learning method.
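The two steps above — TF-IDF-based text analysis of chat contents, then a hidden Markov model over latent loafing states — can be sketched in miniature. The messages, observation scheme, state set and all probabilities below are invented assumptions for illustration, not the authors' data or estimated parameters.

```python
# Toy sketch: TF-IDF weights per chat message, then Viterbi decoding of a
# 3-state (low/medium/high loafing) hidden Markov model. All values invented.
import math
from collections import Counter

docs = [
    "here is the draft and the data analysis for our report",
    "ok",
    "sounds good",
    "I added the references and fixed the model section today",
]

def tfidf(docs):
    """Per-document TF-IDF weights (plain tf * log(N/df))."""
    n = len(docs)
    tokenized = [d.lower().split() for d in docs]
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))
    return [{t: c / len(toks) * math.log(n / df[t]) for t, c in Counter(toks).items()}
            for toks in tokenized]

weights = tfidf(docs)

# Crude per-message observation: 0 = substantive contribution, 1 = sparse
obs = [0 if len(d.split()) >= 4 else 1 for d in docs]

states = ["low", "medium", "high"]          # loafing states
start = [0.5, 0.3, 0.2]                     # assumed initial distribution
trans = [[0.7, 0.2, 0.1],                   # assumed transition matrix
         [0.2, 0.6, 0.2],
         [0.1, 0.3, 0.6]]
emit = [[0.8, 0.2],                         # P(obs | low loafing)
        [0.5, 0.5],                         # P(obs | medium)
        [0.1, 0.9]]                         # P(obs | high loafing)

def viterbi(obs):
    """Most likely loafing-state sequence for the observation sequence."""
    v = [[start[s] * emit[s][obs[0]] for s in range(3)]]
    back = []
    for o in obs[1:]:
        row, ptr = [], []
        for s in range(3):
            best = max(range(3), key=lambda p: v[-1][p] * trans[p][s])
            row.append(v[-1][best] * trans[best][s] * emit[s][o])
            ptr.append(best)
        v.append(row)
        back.append(ptr)
    path = [max(range(3), key=lambda s: v[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return [states[s] for s in reversed(path)]

path = viterbi(obs)
```

A real analysis would estimate the transition and emission probabilities from the 4,127 messages rather than assume them.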
Findings
First, the results reveal the inner structure of KCL, showing that it has three states (low, medium and high). Second, network centrality positively influences individuals in the medium and high loafing states, while it negatively influences individuals in the low loafing state.
Research limitations/implications
The limitations concern the single machine learning method and the lack of subdivision of the social network. First, this paper uses only one text classification approach (TF-IDF) to categorize chat contents, which may not be superior to other classification models. Second, it considers only eigenvector centrality and does not further divide the social network into an advice network and an expressive network.
Practical implications
This study helps companies infer tendency of different KCL and dynamically re-organize a mobile collaborative team for better knowledge contribution.
Originality/value
First, previous studies treat KCL as static and assume that the relationship between loafing-reducing mechanisms and team members' KCL does not change over time. This study relaxes these static assumptions and allows KCL to change during the process of collaboration. Second, this study allows the impact of network centrality to differ when members are in different KCL states.
Yi-Hung Liu, Sheng-Fong Chen and Dan-Wei (Marian) Wen
Abstract
Purpose
Online medical repositories provide a platform for users to share information and dynamically access abundant electronic health data. It is important to determine whether case report information can assist the general public in appropriately managing their diseases. Therefore, this paper aims to introduce a novel deep learning-based method that allows non-professionals to make inquiries using ordinary vocabulary, retrieving the most relevant case reports for accurate and effective health information.
Design/methodology/approach
The dataset of case reports was collected from both the patient-generated research network and the digital medical journal repository. To enhance the accuracy of obtaining relevant case reports, the authors propose a retrieval approach that combines BERT and BiLSTM methods. The authors identified representative health-related case reports and analyzed the retrieval performance, as well as user judgments.
Findings
This study provides the functionalities needed to deliver relevant health case reports from queries in ordinary terms. The proposed framework includes features for health management, user feedback acquisition and ranking by weights to obtain the most pertinent case reports.
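The final ranking step can be illustrated independently of the encoder: assuming the BERT + BiLSTM model described above produces a vector per case report and per query, reports are ranked by similarity to the query. The report names, placeholder vectors and cosine similarity measure below are illustrative assumptions, not the authors' implementation.

```python
# Minimal ranking sketch: placeholder embeddings stand in for BERT + BiLSTM
# output; case reports are ranked by cosine similarity to the query vector.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

reports = {
    "report-A": [0.9, 0.1, 0.0],
    "report-B": [0.2, 0.8, 0.1],
    "report-C": [0.7, 0.3, 0.1],
}
query = [0.8, 0.2, 0.0]   # placeholder embedding of a lay-vocabulary query

ranked = sorted(reports, key=lambda r: cosine(query, reports[r]), reverse=True)
# report-A is most similar to this query, report-B least
```

User feedback and the weighting scheme mentioned in the abstract would adjust these similarity scores before the final ordering.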
Originality/value
This study contributes to health information systems by analyzing patients' experiences and treatments with the case report retrieval model. The results can greatly benefit members of the general public seeking treatment decisions and experiences from relevant case reports.
M.I. Okoroh, B.D. Ilozor and P.P. Gombera
Abstract
Purpose
To evaluate the use of neural networks in healthcare facilities risk management.
Design/methodology/approach
The data used to develop the input to the national health service facilities risk exposure system (NHSFRES) were solicited from 60 healthcare managers. The risk exposure system was developed using risk knowledge articulated by experienced healthcare operators through postal questionnaires and repertory grid interviews. This knowledge was then transformed and represented in Trajan 4.0, an expert system shell that uses artificial neural networks as its modelling technique.
Findings
It provides healthcare facilities operators with an avenue to evaluate their own risk management method (a point score system) based on their own healthcare business knowledge/judgment and corporate objectives for various FM service operations.
Research limitations/implications
The key issue that NHSFRES users should always note is that the concept of measuring or evaluating business risks will always be uncertain. Professional judgment, based on sound information, is an essential element in interpreting and using the system.
Practical implications
The model provides healthcare facilities managers with a vehicle for predicting pre‐ and post‐facilities risk factors in healthcare operations before they occur. A clear understanding of the risk signals means that an appropriate management course of action can be considered to improve FM operators' business performance.
Originality/value
The NHSFRES was developed using risk knowledge articulated by experienced healthcare operators through postal questionnaires and repertory grid interviews. It provides a reasonable early warning signal to healthcare managers and can be used by decision makers to evaluate the severity of risks to healthcare facilities' business operations.
Sara M. González-Betancor and Pablo Dorta-González
Abstract
Purpose
The two citation impact indicators most used in the assessment of scientific journals are, nowadays, the impact factor and the h-index. However, neither indicator is field normalized (both vary heavily depending on the scientific category). Furthermore, the impact factor is not robust to the presence of articles with a large number of citations, while the h-index depends on the journal size. These limitations are very important when comparing journals of different sizes and categories. The purpose of this paper is to propose an alternative citation impact indicator based on the percentage of highly cited articles in the journal.
Design/methodology/approach
This alternative indicator is empirically compared with the impact factor and the h-index, considering different time windows and citation percentiles (levels of citation for considering an article as highly cited compared to others in the same year and category). The authors use four journal categories (Clarivate Analytics Web of Science) which are quite different according to the publication profiles and citation levels (Information Science & Library Science, Operations Research & Management Science, Ophthalmology, and Physics Condensed Matter).
Findings
After analyzing 20 different indicators, depending on the citation percentile and the time window in which citations are counted, the indicator that seems to best homogenize the categories is the one that considers a time window of two years and a citation level of 10 percent.
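The indicator the study settles on — the share of a journal's articles in the top 10 percent most cited of their category and year — can be computed with a few lines. The citation counts and the nearest-rank percentile definition below are illustrative assumptions; the paper may use a different percentile convention.

```python
# Illustrative sketch of the proposed indicator: percentage of a journal's
# articles at or above the category-year's 90th citation percentile.
# All citation counts are invented.
import math

def top10_share(journal_cites, category_cites):
    """Share (%) of journal articles reaching the category's top 10% by citations."""
    ranked = sorted(category_cites)
    k = max(0, math.ceil(0.9 * len(ranked)) - 1)   # nearest-rank 90th percentile
    threshold = ranked[k]
    hits = sum(c >= threshold for c in journal_cites)
    return 100.0 * hits / len(journal_cites)

category = [0, 1, 1, 2, 3, 3, 4, 5, 8, 40]   # all articles in a category-year
journal = [1, 3, 8, 40]                       # one journal's articles that year
share = top10_share(journal, category)        # 2 of 4 articles reach the top 10%
```

Because the threshold is recomputed per category and year, the resulting percentage is directly comparable across fields, which is the property the abstract emphasizes.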
Originality/value
The percentage of highly cited articles in a journal is field normalized (comparable between scientific categories), independent of the journal size and also robust to the presence of articles with a high number of citations.
Ibrahim Yahaya Wuni and Geoffrey Qiping Shen
Abstract
Purpose
Modular integrated construction (MiC) projects are co-created by a network of organizations and players contributing different roles, information and activities throughout the supply chains. Hence, the successful delivery of MiC projects can hardly be decoupled from effective supply chain management (SCM). This study investigated the critical success determinants of effective SCM in MiC projects.
Design/methodology/approach
Comprehensive literature research and expert review identified 20 candidate success determinants, which formed the basis for a structured questionnaire survey of experts in eighteen countries. The study computed the mean scores, normalized mean values and significance indices of success determinants for SCM in MiC projects.
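The scoring step described above can be sketched as follows: mean survey scores per determinant, plus a min-max normalized value for comparison. The determinant names are taken from the abstract, but the ratings and the min-max normalization are illustrative assumptions; the study's normalization and significance indices may be computed differently.

```python
# Toy sketch of determinant ranking from survey ratings (5-point scale, invented).
ratings = {
    "design for SCM": [5, 4, 5, 4, 5],
    "communication & information sharing": [4, 4, 5, 4, 4],
    "organizational readiness": [3, 4, 4, 3, 4],
}

# Mean score per determinant
means = {k: sum(v) / len(v) for k, v in ratings.items()}

# Min-max normalization across determinants (one common convention)
lo, hi = min(means.values()), max(means.values())
normalized = {k: (m - lo) / (hi - lo) for k, m in means.items()}

# Determinants ranked by mean score, highest first
ranking = sorted(means, key=means.get, reverse=True)
```

With real data, the normalized values would feed the significance indices used to label determinants as critical.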
Findings
The analysis revealed that design for SCM, effective communication and information sharing, organizational readiness and familiarity with MiC, seamless integration and coordination of supply chain, early involvement of critical supply chain stakeholders and extensive supply chain planning are the top critical success determinants of effective SCM in MiC projects. The 20 success determinants are categorized into five groups: project strategy, bespoke competencies, process management, stakeholder management and risk management.
Research limitations/implications
The study has some limitations. The small sample size could affect the generalizability of the results, and the generalized analysis of the success determinants overlooks their sensitivity to specific contexts, industry climates and project types.
Originality/value
The study established a novel set of critical success determinants for SCM in MiC projects that have not been explicitly discussed in the MiC success literature and described their hypothetical dynamic linkages. It contributes to a better understanding of how best to manage the MiC project supply chain effectively.
Citra Ongkowijoyo and Hemanta Doloi
Abstract
Purpose
The purpose of this paper is to develop a novel risk analysis method named fuzzy critical risk analysis (FCRA) for assessing infrastructure risks from a risk-community network perspective. The basis of this new FCRA method is the integration of existing risk magnitude analysis with a novel risk impact propagation analysis performed on specific infrastructure systems, to assess the criticality of risk within a specific social-infrastructure network boundary.
Design/methodology/approach
The FCRA uses a number of scientific methods, such as failure mode effect and criticality analysis (FMECA), social network analysis (SNA) and fuzzy-set theory, to facilitate risk evaluation associated with the infrastructure and the community. The proposed FCRA approach integrates the fuzzy-based social network analysis (FSNA) method with the conventional fuzzy FMECA method to analyse the most critical risks based on risk decision factors and the risk impact propagation generated by various stakeholder perceptions.
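The fuzzy FMECA ingredient of the method can be sketched in miniature: stakeholders rate a risk with linguistic terms, the terms map to triangular fuzzy numbers, ratings are averaged, defuzzified by centroid, and combined into a criticality score. The terms, fuzzy numbers and defuzzification rule below are illustrative assumptions, not the authors' calibration.

```python
# Toy fuzzy FMECA sketch: linguistic ratings -> triangular fuzzy numbers ->
# averaged -> defuzzified (centroid) -> severity x occurrence x detection.
# All values are invented for illustration.
TFN = {"low": (0, 1, 3), "medium": (2, 5, 8), "high": (7, 9, 10)}

def avg_tfn(terms):
    """Element-wise average of the triangular fuzzy numbers for the given terms."""
    tfns = [TFN[t] for t in terms]
    return tuple(sum(x) / len(x) for x in zip(*tfns))

def centroid(tfn):
    """Centroid defuzzification of a triangular fuzzy number (a, b, c)."""
    a, b, c = tfn
    return (a + b + c) / 3

def criticality(severity_terms, occurrence_terms, detection_terms):
    """Crisp criticality score from three groups of stakeholder ratings."""
    return (centroid(avg_tfn(severity_terms))
            * centroid(avg_tfn(occurrence_terms))
            * centroid(avg_tfn(detection_terms)))

# Two stakeholders rate one infrastructure risk
score = criticality(["high", "medium"], ["medium", "medium"], ["low", "medium"])
```

In the full FCRA, these criticality scores would be combined with the FSNA-derived impact propagation before ranking risks.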
Findings
The application of FSNA is considered highly relevant for investigating the risk impact propagation mechanism based on various stakeholder perceptions within the infrastructure risk interrelation and community networks. Although conventional FMECA methods can produce a reasonable risk ranking based on risk magnitude within the traditional risk assessment method, their failure to consider the domino effect of infrastructure risk impacts, the varying degrees of community dependency and the uncertainty of stakeholder perceptions makes such methods grossly ineffective for decision-making in risk prevention (and mitigation) and resilience contexts.
Research limitations/implications
The validation of the model is currently based on a hypothetical case; future work should apply the model empirically to a real case study.
Practical implications
Effective functioning of the infrastructure systems for seamless operation of society is highly crucial. Yet extreme events resulting in failure scenarios often undermine efficient operations and consequently affect the community at multiple levels. Current risk analysis methodologies fail to address issues related to diverse impacts on communities and the propagation of risk impacts within the infrastructure system from multi-stakeholder perspectives. The FCRA developed in this research has been validated on a hypothetical infrastructure case. The proposed method will potentially assist decision-making regarding risk governance, managing the vulnerability of the infrastructure and increasing both infrastructure and community resilience.
Social implications
The new approach developed in this research addresses several infrastructure risk assessment challenges by considering not only the risk events associated with the infrastructure systems but also the dependencies of various types of communities and the cascading effect of risks within specific risk-community networks. Such a risk-community network analysis provides a good basis for community-based risk management in the context of mitigating disaster risks and building better community resilience.
Originality/value
The novelty of the proposed FCRA method lies in its ability to improve estimation accuracy and decision-making based on multi-stakeholder perceptions. Assessing the most critical risks in the hypothetical case project demonstrated the superior performance of the FCRA method compared with conventional risk analysis methods. This research contributes to the literature in several ways. First, based on a comprehensive literature review, this work established a benchmark for the development of a new risk analysis method within infrastructure and community networks. Second, this study validates the effectiveness of the model by integrating fuzzy-based FMECA with FSNA. The approach is a useful methodological advance when prioritizing similar or competing risk criticality values.
Marshal H. Wright, Mihai C. Bocarnea and Julie K. Huntley
Abstract
This study examined donor development processes in a faith-based, 501(c)(3) publicly-supported, tax-exempt organizational setting. The conceptual framework is relationship marketing theory as informed by a systems theory alignment perspective. Organization-public relationship (OPR) dynamically predicts donor willingness to contribute unrestricted funds. It is proffered that the discrepancy variable, “values-fit incongruence,” significantly affects this dynamic. This contention is explored by asking two questions: (a) does donor-organization values-fit incongruence significantly negatively predict donor willingness to contribute unrestricted funds, and (b) is the OPR construct strengthened by the patent inclusion of values-fit incongruence as an interactive moderator variable? Results suggest values-fit incongruence significantly negatively predicts donor willingness to contribute unrestricted funds. The results also suggest the OPR model is not strengthened by patently including the values-fit incongruence variable, as it may already be latently accounted for.