Search results
1–10 of over 15,000

Anna Kosmützky and Georg Krücken
Abstract
Traditional studies in the sociology of science have highlighted the self-organized character of the academic community. This article focuses on recent interrelated changes that alter that distinctive governance structure and its related patterns of competition and cooperation. The changes that we identify here are contractualization and large-scale cooperative research. We use different data sources to exemplify these new patterns and discuss the illustrative role of research clusters in German academia. Research clusters as funded by the German Research Foundation (DFG) are both a highly prestigious scarce good in the competition for reputation and resources and a means of fostering cooperation. Our analysis of this German example reveals that this new institutional configuration of universities as organizations, academic researchers, and the state has a profound effect on organizational practices. We discuss the implications of our empirical findings with regard to collegiality in academia. Ultimately, we anticipate a further weakening of collegial bonds, not only because universities and the state have become more active in shaping the nature of academic competition and cooperation but also because of the increasing strategic and individualistic orientation of academic researchers. In the final section, we summarize our findings and address the need for further research and an international comparative perspective.
Abstract
Purpose
The main purpose of the paper is to offer a personal view on the development of documentation/information and documentation (IuD) in Germany, while pointing out the need to further investigate the specific features of its development paths. The methodology is based on critical review of the available literature sources in the German language.
Design/methodology/approach
The paper uses the method of critical review of published documents in journals (especially in Nachrichten für Dokumentation), books and reports of state and provincial administrations that are directly related to monitoring and/or encouraging the development of the young field of documentation.
Findings
The paper offers a review and interpretation of the most significant development phases, the contributions of individuals and the influence of the official state and information policy based on the consulted sources.
Research limitations/implications
This research is limited to literature written in the German language.
Practical implications
The paper could be of interest to researchers and professionals who are interested in the development of documentation.
Social implications
The paper covers the period from the end of World War II to the end of the 1980s, which is especially interesting from a social point of view in divided Germany.
Originality/value
To the author’s knowledge, there is no comprehensive history of documentation in German-speaking countries written in English. This paper is the result of a research project started three years ago with colleagues from Germany, Austria and Switzerland, which aims to cover all phases of the emergence and development of information science in German-speaking countries; it can be read as an introduction to the papers planned to follow.
Abstract
Purpose
The purpose of this study is to propose a methodological approach for modeling catastrophic consequences caused by black swan events, based on complexity science, and framed on Feyerabend’s anarchistic theory of knowledge. An empirical application is presented to illustrate the proposed approach.
Design/methodology/approach
Thom’s nonlinear differential equations of morphogenesis are used to develop a theoretical model of the impact of catastrophes on international business (IB). The model is then estimated using real-world data on the performance of multinational airlines during the SARS-CoV-2 (COVID-19) pandemic.
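The abstract does not say which of Thom's elementary catastrophes the paper estimates; as a hedged illustration only, the sketch below uses the cusp, the form most common in applied catastrophe modelling. Its equilibria are the real roots of the cubic dV/dx = x³ − bx − a, and moving the control parameters into the bifurcation set creates coexisting stable states between which small shocks can trigger abrupt jumps, the kind of discontinuity the paper associates with black swan events.

```python
# Hedged sketch of Thom's cusp catastrophe (an assumption: the paper's
# exact model specification is not given in the abstract).
import numpy as np

def cusp_equilibria(a, b):
    """Real roots of dV/dx = x**3 - b*x - a = 0 for the cusp
    potential V(x) = x**4/4 - b*x**2/2 - a*x."""
    roots = np.roots([1.0, 0.0, -b, -a])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9)

# Outside the bifurcation set: a single stable state ("normal conditions").
print(cusp_equilibria(a=0.0, b=-1.0))
# Inside the bifurcation set: two stable states separated by an unstable
# one, so small parameter shifts can produce abrupt, discontinuous jumps.
print(cusp_equilibria(a=0.0, b=1.0))
```

The count of equilibria is the qualitative signature: one root means smooth response to shocks, three roots mean bistability and the possibility of catastrophic transitions.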
Findings
The catastrophe model exhibits a remarkable capability to simultaneously capture complex linear and nonlinear relationships. Through empirical estimations and simulations, this approach enables the analysis of IB phenomena under normal conditions, as well as during black swan events.
Originality/value
To the best of the author’s knowledge, this study is the first attempt to estimate the impact of black swan events in IB using a catastrophe model grounded in complexity theory. The proposed model successfully integrates the abrupt and profound effects of catastrophes on multinational corporations, offering a critical perspective on the theoretical and practical use of complexity science in IB.
Tiago F.A.C. Sigahi and Laerte Idal Sznelwar
Abstract
Purpose
The purpose of this paper is twofold: (1) to map and analyze existing complexity typologies and (2) to develop a framework for characterizing complexity-based approaches.
Design/methodology/approach
This study was conducted in three stages: (1) initial identification of typologies related to complexity following a structured procedure based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol; (2) backward and forward review to identify additional relevant typologies and (3) content analysis of the selected typologies, categorization and framework development.
Findings
Based on 17 selected typologies, a comprehensive overview of complexity studies is provided. Each typology is described considering key concepts, contributions and convergences and differences between them. The epistemological, theoretical and methodological diversity of complexity studies was explored, allowing the identification of the main schools of thought and authors. A framework for characterizing complexity-based approaches was proposed including the following perspectives: ontology of complexity, epistemology of complexity, purpose and object of interest, methodology and methods and theoretical pillars.
Originality/value
This study examines the main typologies of complexity from an integrated and multidisciplinary perspective and, based on that, proposes a novel framework to understanding and characterizing complexity-based approaches.
Yanqing Shi, Hongye Cao and Si Chen
Abstract
Purpose
Online question-and-answer (Q&A) communities serve as important channels for knowledge diffusion. The purpose of this study is to investigate the dynamic development process of online knowledge systems and explore the final or progressive state of system development. By measuring the nonlinear characteristics of knowledge systems from the perspective of complexity science, the authors aim to enrich the perspective and method of the research on the dynamics of knowledge systems, and to deeply understand the behavior rules of knowledge systems.
Design/methodology/approach
The authors collected data from the programming-related Q&A site Stack Overflow for a ten-year period (2008–2017) and included 48,373 tags in the analyses. The number of tags is taken as the time series, the correlation dimension and the maximum Lyapunov index are used to examine the chaos of the system and the Volterra series multistep forecast method is used to predict the system state.
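One of the two chaos diagnostics named above, the maximum Lyapunov exponent, measures how fast nearby trajectories diverge; a positive value signals chaos. The Stack Overflow tag series itself is not reproduced here, so the hedged sketch below illustrates the diagnostic on the logistic map, a standard test system, rather than on the paper's data.

```python
# Hedged sketch: maximum Lyapunov exponent of the logistic map
# x -> r*x*(1-x), computed as the orbit average of log|f'(x)|.
# Illustrative only; not the paper's Stack Overflow time series.
import math

def logistic_lyapunov(r, x0=0.1, n=100_000, burn=1_000):
    """Average log|f'(x)| = log|r*(1-2x)| along an orbit."""
    x, acc = x0, 0.0
    for i in range(n + burn):
        if i >= burn:                       # discard the transient
            acc += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return acc / n

print(logistic_lyapunov(4.0))   # positive (chaotic regime)
print(logistic_lyapunov(3.2))   # negative (stable 2-cycle)
```

For empirical series such as tag counts, the exponent must instead be estimated from delay-embedded data (e.g. the Rosenstein method), but the interpretation is the same: a positive maximum exponent indicates a chaotic, yet bounded, system.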
Findings
There are strange attractors in the system, the whole system is complex but bounded and its evolution is bound to approach a relatively stable range. Empirical analyses indicate that chaos exists in the process of knowledge sharing in this social labeling system, and the period of change over time is about one week.
Originality/value
This study contributes to revealing the evolutionary cycle of knowledge stock in online knowledge systems and further indicates how this dynamic evolution can help in the setting of platform mechanics and resource inputs.
Chao Zhang, Fang Wang, Yi Huang and Le Chang
Abstract
Purpose
This paper aims to reveal the interdisciplinarity of information science (IS) from the perspective of the evolution of theory application.
Design/methodology/approach
Eight representative IS journals were selected as data sources, the theories mentioned in the full texts of their research papers were extracted, and the annual interdisciplinarity of IS was then measured through theory co-occurrence network analysis, diversity measures and evolution analysis.
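The abstract names a "diversity measure" without specifying which one. A common choice for interdisciplinarity studies is Rao-Stirling diversity, which weights each pair of source disciplines by their shares and their cognitive distance; the sketch below is a minimal illustration with made-up discipline shares and distances, not the paper's data.

```python
# Hedged sketch: Rao-Stirling diversity over the source disciplines of
# applied theories (shares and distances below are illustrative only).
import itertools

def rao_stirling(proportions, distance):
    """Sum over discipline pairs of p_i * p_j * d_ij."""
    return sum(
        proportions[i] * proportions[j] * distance[frozenset((i, j))]
        for i, j in itertools.combinations(proportions, 2)
    )

shares = {"psychology": 0.40, "computer science": 0.35, "economics": 0.25}
dist = {
    frozenset(("psychology", "computer science")): 0.8,
    frozenset(("psychology", "economics")): 0.5,
    frozenset(("computer science", "economics")): 0.6,
}
print(rao_stirling(shares, dist))
```

A falling value over time would match the finding below that IS is converging on a few neighboring disciplines.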
Findings
As a young and vibrant discipline, IS has been continuously absorbing and internalizing external theoretical knowledge and has thus developed a high degree of interdisciplinarity. With the continued application of a few kernel theories, the interdisciplinarity of IS appears to be decreasing, gradually converging on a few neighboring disciplines. Influenced by big data and artificial intelligence, the research paradigm of IS is shifting from a theory-centered one to a technology-centered one.
Research limitations/implications
This study helps to understand the evolution of the interdisciplinarity of IS over the past 21 years. The main limitation is that the data were collected from eight journals indexed by the Social Sciences Citation Index, so a small number of theories might have been omitted.
Originality/value
This study identifies the kernel theories in IS research, measures the interdisciplinarity of IS based on the evolution of the co-occurrence network of theory source disciplines and reveals the paradigm shift taking place in IS.
Xin (Robert) Luo and Fang-Kai Chang
Abstract
Purpose
The purpose of this study is to demonstrate that Strategic Enterprise Management (SEM) and Business Intelligence (BI) have the potential to integrate management decisions vertically through an organization’s hierarchy. This study also aims to present a design theory framework and build a model dimension using eight principles serving as mid-range theories.
Design/methodology/approach
This study uses a design science perspective to posit how organizations can successfully implement SEMBI (a union of SEM and BI). This study then completes the design theory by building the method dimension using two principles. Finally, the study presents testable hypotheses for the theory and an evaluation using stakeholder attitudes and judgments as proxies for objective measures.
Findings
In the search for a prescription for SEMBI success, this study finds that the notion of the Capability Maturity Model (CMM) is a good artifact with which to organize the principles the authors are seeking. CMM has since been adapted to suit different contexts by incorporating relevant principles from those domains. Hereafter, this study refers to SEMBI–CMM as the adapted solution for SEMBI's success.
Originality/value
This study coins and uses the term SEMBI to represent the union of SEM and BI. This term retains its distinct identities and principles and forms a holistic and integrated view of SEM and BI implementation strategies. In an effort to advance this line of research, this study employs a design science perspective to address the question of how an organization can successfully implement SEMBI.
Reza Edris Abadi, Mohammad Javad Ershadi and Seyed Taghi Akhavan Niaki
Abstract
Purpose
The overall goal of the data mining process is to extract information from an extensive data set and make it understandable for further use. When working with large volumes of unstructured data in research information systems, it is necessary to divide the information into logical groupings after examining their quality before attempting to analyze it. On the other hand, data quality results are valuable resources for defining quality excellence programs of any information system. Hence, the purpose of this study is to discover and extract knowledge to evaluate and improve data quality in research information systems.
Design/methodology/approach
Clustering in data analysis and exploiting the outputs allows practitioners to gain an in-depth and extensive look at their information to form some logical structures based on what they have found. In this study, data extracted from an information system are used in the first stage. Then, the data quality results are classified into an organized structure based on data quality dimension standards. Next, clustering algorithms (K-Means), density-based clustering (density-based spatial clustering of applications with noise [DBSCAN]) and hierarchical clustering (balanced iterative reducing and clustering using hierarchies [BIRCH]) are applied to compare and find the most appropriate clustering algorithms in the research information system.
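The findings below rank K-Means, DBSCAN and BIRCH by the silhouette coefficient. As a hedged, numpy-only sketch of that ranking metric (the clustering algorithms themselves are taken as given, and the toy points below stand in for the paper's research-information-system data):

```python
# Hedged sketch: the silhouette coefficient used to compare clustering
# algorithms. For each point, a = mean distance to its own cluster,
# b = smallest mean distance to any other cluster; s = (b-a)/max(a,b).
import numpy as np

def silhouette(X, labels):
    X, labels = np.asarray(X, float), np.asarray(labels)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    scores = []
    for i, li in enumerate(labels):
        same = labels == li
        if same.sum() < 2:
            scores.append(0.0)          # singletons score 0 by convention
            continue
        a = d[i, same].sum() / (same.sum() - 1)   # excludes d[i, i] = 0
        b = min(d[i, labels == lj].mean()
                for lj in set(labels) if lj != li)
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

# Two tight, well-separated toy clusters -> silhouette close to 1.
X = [[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]]
print(silhouette(X, [0, 0, 0, 1, 1, 1]))
```

Values near 1 indicate compact, well-separated clusters; values near 0 or below indicate overlapping or misassigned points, which is how the coefficient ranks the three algorithms in the findings.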
Findings
This paper showed that quality control results of an information system could be categorized through well-known data quality dimensions, including precision, accuracy, completeness, consistency, reputation and timeliness. Furthermore, among different well-known clustering approaches, the BIRCH algorithm of hierarchical clustering methods performs better in data clustering and gives the highest silhouette coefficient value. Next in line is the DBSCAN method, which performs better than the K-Means method.
Research limitations/implications
In the data quality assessment process, the discrepancies identified and the lack of proper classification for inconsistent data have led to unstructured reports, making the statistical analysis of qualitative metadata problems difficult and thus making it impossible to root out the observed errors. Therefore, in this study, the evaluation results of data quality have been categorized into various data quality dimensions, based on which multiple analyses have been performed using data mining methods.
Originality/value
Although several pieces of research have been conducted to assess data quality results of research information systems, knowledge extraction from obtained data quality scores is a crucial work that has rarely been studied in the literature. Besides, clustering in data quality analysis and exploiting the outputs allows practitioners to gain an in-depth and extensive look at their information to form some logical structures based on what they have found.
Han Sun, Song Tang, Xiaozhi Qi, Zhiyuan Ma and Jianxin Gao
Abstract
Purpose
This study aims to introduce a novel noise filter module designed for LiDAR simultaneous localization and mapping (SLAM) systems. The primary objective is to enhance pose estimation accuracy and improve the overall system performance in outdoor environments.
Design/methodology/approach
Distinct from traditional approaches, the proposed module, MCFilter, emphasizes enhancing point cloud data quality at the pixel level. This framework hinges on two primary elements. First, the D-Tracker, a tracking algorithm grounded in multiresolution three-dimensional (3D) descriptors, maintains a balance between precision and efficiency. Second, the R-Filter introduces a pixel-level attribute named motion-correlation, which effectively identifies and removes dynamic points. Furthermore, designed as a modular component, MCFilter ensures seamless integration into existing LiDAR SLAM systems.
Findings
Based on rigorous testing with public data sets and real-world conditions, the MCFilter reported an increase in average accuracy of 12.39% and reduced processing time by 24.18%. These outcomes emphasize the method’s effectiveness in refining the performance of current LiDAR SLAM systems.
Originality/value
In this study, the authors present a novel 3D descriptor tracker designed for consistent feature point matching across successive frames. The authors also propose an innovative attribute to detect and eliminate noise points. Experimental results demonstrate that integrating this method into existing LiDAR SLAM systems yields state-of-the-art performance.