Search results

1 – 10 of 848
Article
Publication date: 1 February 2002

Alexander Backlund

Abstract

The definition of complexity is discussed, both in general and, mostly, in relation to organisations and information systems. Considering the subjective nature of complexity, complexity is defined as the effort (as it is perceived) that is required to understand and cope with something, but several other definitions and characterisations are also considered. Complexity is related to variety and information. The infological relation is reversed, and it is argued that there is a certain connection between information loss and the complexity of an information system. It is also argued that organisations and information systems can benefit from being simple, even if that would mean a decrease in variety. There seems to be a (somewhat obscure) connection between the limitations of our short‐term memory and what we perceive as complex. It is desirable to empirically verify the characterisations of complexity that previous researchers have made.

Details

Kybernetes, vol. 31 no. 1
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 27 May 2014

Carson Grubaugh

Abstract

Purpose

To ask whether humans have a categorical obligation to maintain narratives, especially personal narratives of identity, or whether we are better off evolving past such things. This question is closely related to a discussion of how and why humans tend to place value on a certain amount of organization in content, semantic and otherwise. The paper aims to discuss these issues.

Design/methodology/approach

Examples from a personal art project set the stage for the major questions of the paper regarding the ordering of content.

Findings

One can create a Cartesian mapping of content, with Shannon entropy and Kolmogorov complexity as the two axes, thus constructing a histogram that is useful for the analysis of content of all sorts. Humanity seems to value, and find useful, a certain sector of this histogram. The reason for this appears to supervene on biological imperatives for survival. These commitments are now put under pressure by an increase in variety and complexity in personal and societal narratives, causing unease about the future.
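
The abstract names the two axes but gives no formulas. A minimal sketch of how such a mapping could be computed, using empirical Shannon entropy and, since Kolmogorov complexity is uncomputable, compressed length as a standard practical proxy; the sample strings are invented for illustration:

```python
import math
import zlib

def shannon_entropy(s: str) -> float:
    """Empirical Shannon entropy (bits per symbol) of a string."""
    freq = {}
    for ch in s:
        freq[ch] = freq.get(ch, 0) + 1
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in freq.values())

def complexity_proxy(s: str) -> float:
    """Kolmogorov complexity is uncomputable; use zlib-compressed
    length per symbol as a common practical stand-in."""
    return len(zlib.compress(s.encode())) / len(s)

# Each string becomes one point (entropy, complexity) in the mapping
samples = {
    "repetitive": "ab" * 200,            # low on both axes
    "varied":     "kq9x!vz2p#j8w" * 30,  # higher on both axes
}
for name, s in samples.items():
    print(name, round(shannon_entropy(s), 3), round(complexity_proxy(s), 3))
```

Plotting many content samples as such points would populate the sector of the plane the author argues humanity values.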

Research limitations/implications

This paper formulates the relevant moral questions by providing a structure within which to ask them. Answering these questions is left for another time.

Practical implications

The histogram helps situate various degrees of content organization and is thus a useful tool for analyzing content. It could lead to something akin to Birkhoff's Aesthetic Measure, but for content of all sorts rather than just aesthetic objects.

Originality/value

The format is singular for a journal article.

Article
Publication date: 20 February 2007

Vladimir S. Lerner

Abstract

Purpose

Science of systems requires a specific and constructive mathematical model and language, which jointly describe such systemic categories as adaptation, self‐organization, complexity, and evolution, and bring applied tools for building a system model for each specific object of a diverse nature. This formalism should be connected directly with the world of information and with computer applications of the systemic model developed for a particular object. The considered information systems theory (IST) aims at building a bridge between the mathematical systemic formalism and information technologies, to develop a constructive systemic model that reveals information regularities and a specific information code for each object.

Design/methodology/approach

To fulfill this goal and the considered systems' definition, the IST joins two main concepts: unified information description of interacted flows, initiated by the sources of different nature, with common information language and systems modeling methodology, applied to distinct interdisciplinary objects; general system's information formalism for building the model, which allows expressing mathematically the system's regularities and main systemic mechanisms.

Findings

The formalism of informational macrodynamics (IMD), based on the minimax variational principle, reveals the system model's main layers: microlevel stochastics, macrolevel dynamics, a hierarchical dynamic network (IN) of information structures, its minimal logic, and an optimal code of the communication language generated by the IN hierarchy, dynamics, and geometry. The system's complex dynamics give rise to information geometry and evolution, with the functional information mechanisms of ordering, cooperation, mutation, stability, diversity, adaptation, self‐organization, and the double helix's genetic code.

Practical implications

The developed IMD's theoretical, computer‐based methodology and software have been applied to such areas as technology, communications, computer science, intelligent processes, biology, economics, management, and other nonphysical and physical subjects.

Originality/value

The IMD's macrodynamics of uncertainties connect randomness and regularities, stochasticity and determinism, reversibility and irreversibility, symmetry and asymmetry, stability and instability, structurization and stochastization, and order and disorder, as a result of micro‐macrolevel interactions in an open system, where the external environment can change the model's structure.

Details

Kybernetes, vol. 36 no. 2
Type: Research Article
ISSN: 0368-492X

Open Access
Article
Publication date: 16 October 2018

Jing Liu, Zhiwen Pan, Jingce Xu, Bing Liang, Yiqiang Chen and Wen Ji

Abstract

Purpose

With the development of machine learning techniques, artificial intelligence systems such as crowd networks are becoming more autonomous and smart. Therefore, there is a growing demand for a universal intelligence measurement so that the intelligence of artificial intelligence systems can be evaluated. This paper aims to propose a more formalized and accurate machine intelligence measurement method.

Design/methodology/approach

This paper proposes a quality–time–complexity universal intelligence measurement method to measure the intelligence of agents.

Findings

By observing the interaction process between the agent and the environment, we abstract three major factors for intelligence measurement: quality, time and complexity of the environment.
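
The abstract identifies the three factors but does not state how they are combined. The sketch below uses a hypothetical complexity-weighted, time-discounted aggregate, not the paper's actual formula; all names and numbers are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Episode:
    quality: float         # task success score in [0, 1]
    time: float            # time taken (seconds); lower is better
    env_complexity: float  # difficulty weight of the environment, > 0

def intelligence_score(episodes, time_scale=1.0):
    """Illustrative aggregate: complexity-weighted quality, discounted
    by elapsed time. The weighting and discount forms are assumptions."""
    num = sum(e.env_complexity * e.quality / (1.0 + e.time / time_scale)
              for e in episodes)
    den = sum(e.env_complexity for e in episodes)
    return num / den

# Two hypothetical agent-environment interaction episodes
eps = [Episode(quality=0.9, time=2.0, env_complexity=1.0),
       Episode(quality=0.6, time=5.0, env_complexity=3.0)]
print(intelligence_score(eps))
```

The design choice here is that harder environments count for more and slower completions count for less, which matches the three factors the abstract names.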

Originality/value

This paper proposes a calculable universal intelligence measurement method that considers more than two factors, and the correlations between those factors, in an intelligence measurement.

Details

International Journal of Crowd Science, vol. 2 no. 2
Type: Research Article
ISSN: 2398-7294

Book part
Publication date: 25 March 2010

S.B. von Helfenstein

Abstract

As global economic systems become increasingly complex and dynamic and the universal language of historical accounting is being profoundly altered, the theory and tools we use in neo-classical economics, traditional finance, and valuation are beginning to prove inadequate to the tasks being required of them. Hence, there is a need to consider new avenues of thought and new tools. In this conceptual chapter, I explore the use of real options "in" engineering systems design as a means to achieve more rigorous and insightful results in the design and valuation of economic systems, particularly that of the firm. In the process, I gain further insight into the causes of, and cures for, systemic disturbances generated by the presence and selection of real options in economic systems.

Details

Research in Finance
Type: Book
ISBN: 978-1-84950-726-4

Article
Publication date: 1 December 2005

Thomas J. Housel and Sarah K. Nelson

Abstract

Purpose

The purpose of this paper is to provide a review of an analytic methodology (knowledge valuation analysis, i.e. KVA), based on complexity and information theory, that is capable of quantifying value creation by corporate intellectual capital. It aims to use a real‐world case to demonstrate this methodology within a consulting context.

Design/methodology/approach

The fundamental assumptions and theoretical constructs underlying KVA are summarized. The history of the concept, a case application, limitations, and implications for the methodology are presented.

Findings

Although well‐known financial analytic tools were used to justify IT investment proposals, none provided a satisfying result because none offered an unambiguous way to tie IT performance to value creation. KVA provided a means to count the amount of corporate knowledge, in equivalent units, required to produce the outputs of client core processes. This enabled stakeholders to assign revenue streams to IT, develop IT ROIs, and decide with clarity where to invest.

Practical implications

When stakeholders can assign revenue streams to sub‐corporate processes, they have a new context for making IC investment decisions. “Cost centers” and decisions based on cost containment can be replaced. Concepts such as a knowledge market, the knowledge asset pricing model, k‐betas, and a sub‐corporate equities market can be developed and applied. Some of the limitations related to real options analysis can be resolved.

Originality/value

This paper introduces an approach to measuring corporate intellectual capital that solves some long‐standing IC valuation problems.

Details

Journal of Intellectual Capital, vol. 6 no. 4
Type: Research Article
ISSN: 1469-1930

Open Access
Article
Publication date: 2 July 2020

Zheming Yang and Wen Ji

Abstract

Purpose

The multiple factors of intelligence measurement are critical in intelligent science. An intelligence measurement is typically built as a model based on multiple factors. Different agents are generally difficult to measure because of the uncertainty among these factors. The purpose of this paper is to solve the problem of uncertainty among multiple factors and to propose an effective universal intelligence measurement method for different agents.

Design/methodology/approach

In this paper, the authors propose a universal intelligence measurement method based on meta-analysis for crowd networks. First, the authors obtain study data through keyword searches in the database and delete low-quality data. Second, they compute effect values using the odds ratio, relative risk and risk difference. Third, they test homogeneity with the Q-test and analyze bias with funnel plots. Fourth, they select fixed-effect and random-effect statistical models. Finally, through the meta-analysis of time, complexity and reward, the weight of each factor in the intelligence measurement is obtained and the meta measurement model is constructed.
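
The meta-analytic quantities named above (odds ratio, inverse-variance fixed-effect pooling, Cochran's Q) are standard statistics. A minimal sketch with invented 2x2 study counts, not data from the paper:

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table: events/non-events in the
    treated group (a, b) and the control group (c, d)."""
    return (a * d) / (b * c)

def fixed_effect_and_Q(effects, variances):
    """Inverse-variance fixed-effect pooled estimate and Cochran's Q
    homogeneity statistic for per-study effects (e.g. log odds ratios)."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    Q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    return pooled, Q

# Two hypothetical studies; variance of a log OR is the sum of
# reciprocals of the four cell counts
log_ors = [math.log(odds_ratio(20, 80, 10, 90)),
           math.log(odds_ratio(30, 70, 15, 85))]
variances = [1/20 + 1/80 + 1/10 + 1/90,
             1/30 + 1/70 + 1/15 + 1/85]
pooled, Q = fixed_effect_and_Q(log_ors, variances)
print(round(pooled, 3), round(Q, 3))
```

A large Q relative to its chi-square reference would argue for the random-effect model the abstract mentions; how the paper maps these statistics onto factor weights is not specified here.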

Findings

This paper studies the relationship among time, complexity and reward through meta-analysis and effectively combines the measurement of heterogeneous agents such as humans, machines, enterprises, governments and institutions.

Originality/value

This paper provides a universal intelligence measurement model for crowd networks, which can also provide a theoretical basis for research in crowd science.

Details

International Journal of Crowd Science, vol. 4 no. 3
Type: Research Article
ISSN: 2398-7294

Article
Publication date: 2 October 2007

Xiangyang Li and Charu Chandra

Abstract

Purpose

Large supply and computer networks contain heterogeneous information and correlation among their components, and are distributed across a large geographical region. This paper aims to investigate and develop a generic knowledge integration framework that can handle the challenges posed in complex network management. It also seeks to examine this framework in various applications of essential management tasks in different infrastructures.

Design/methodology/approach

Efficient information and knowledge integration technologies are key to capably handling complex networks. An adaptive fusion framework is proposed that takes advantage of dependency modelling, active configuration planning and scheduling, and quality assurance of knowledge integration. The paper uses cases of supply network risk management and computer network attack correlation (NAC) to elaborate the problem and describe various applications of this generic framework.

Findings

Information and knowledge integration becomes increasingly important, enabled by technologies to collect and process data dynamically, and faces enormous challenges in handling escalating complexity. Representing these systems in an appropriate network model and integrating the knowledge in the model for decision making, directed by information and complexity measures, provides a promising approach. The preliminary results based on a Bayesian network model support the proposed framework.
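
The abstract mentions a Bayesian network model without detailing its structure. A hypothetical two-node fragment, with invented probabilities, illustrates the kind of inference such a model supports for supply network risk:

```python
# Minimal two-node Bayesian-network fragment: a supplier-failure node
# influencing a delivery-delay node. Structure and numbers are
# illustrative, not taken from the paper.
p_supplier_fail = 0.1
p_delay_given_fail = 0.8
p_delay_given_ok = 0.05

# Predictive inference: marginal risk of delay via the law of
# total probability
p_delay = (p_delay_given_fail * p_supplier_fail
           + p_delay_given_ok * (1 - p_supplier_fail))

# Diagnostic inference via Bayes' rule: given an observed delay,
# how likely is a supplier failure?
p_fail_given_delay = p_delay_given_fail * p_supplier_fail / p_delay
print(round(p_delay, 4), round(p_fail_given_delay, 4))
```

Scaling this up to many dependent nodes is where the dependency modelling and complexity measures the framework proposes come into play.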

Originality/value

First, the paper discusses and defines the challenges and requirements faced by knowledge integration in complex networks. Second, it proposes a knowledge integration framework that systematically models various network structures and adaptively integrates knowledge, based on dependency modelling and information theory. Finally, it uses a conceptual Bayesian model to elaborate the application of this promising framework to supply chain risk management and computer NAC.

Details

Industrial Management & Data Systems, vol. 107 no. 8
Type: Research Article
ISSN: 0263-5577

Book part
Publication date: 19 September 2019

Charles F. Hofacker

Abstract

Given that value exchange in virtually every sector of the economy is increasingly dominated by software, the goals of this chapter are to bring software to the attention of the academic marketing community, to discuss the unusual product attributes of software, and to therefore suggest some research topics related to software as a product attribute. Software allows service to be physically stored and allows physical objects to perform services. Managing products that have evolved into software products creates difficult challenges for managers as software does not resemble either tangible goods or intangible services in terms of production, operations, cost structure, or prescribed strategy. Every time a business replaces an employee with an e-service interaction, and every time a business adds a line of code to a previously inert object, the nature of that business changes. And as software gets more capable, its nature as a product changes as well by adding unique product characteristics summarized as complexity, intelligence, autonomy, and agency.

Details

Marketing in a Digital World
Type: Book
ISBN: 978-1-78756-339-1

Article
Publication date: 18 January 2016

Paweł Fiedor and Artur Hołda

Abstract

Purpose

This paper aims to present a framework enriching currency risk analyses based on information theory.

Design/methodology/approach

Information-theoretic measures of predictability (entropy rate) and co-dependence (mutual information) are used to enhance existing methods of analysing and measuring currency risk.
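
Mutual information between discretised return series can be computed directly from empirical frequencies. The currency-pair sequences below are invented, and the entropy-rate (predictability) side of the analysis is omitted for brevity:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Mutual information (bits) between two aligned discrete sequences,
    estimated from empirical joint and marginal frequencies."""
    n = len(xs)
    px = Counter(xs)
    py = Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Toy discretised daily return signs for two currency pairs:
# U = up move, D = down move (invented data)
eurusd = ["U", "D", "U", "U", "D", "D", "U", "D"]
gbpusd = ["U", "D", "U", "D", "D", "D", "U", "D"]  # mostly co-moves
print(round(mutual_information(eurusd, gbpusd), 3))
```

High mutual information between two rates signals non-linear co-dependence that a correlation-only analysis of a currency basket could miss, which is the motivation for the network approach mentioned in the findings.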

Findings

Currency exchange rates have varying degrees of predictability, which should be accounted for in currency risk analyses. In the case of baskets of currencies, a network approach rooted in portfolio theory may be useful.

Research limitations/implications

The currency exchange rate time series must be discretised for the information-theoretic analysis (although the results are robust). An agent-based simulation may be needed as a further study to show the impact of accounting for predictability in managing currency risk.

Practical implications

Practical analyses measuring currency risk should take predictability of currency rate changes into account wherever the currency exposure is actively managed.

Originality/value

The paper introduces predictability into measuring currency risk, which has previously been ignored, despite the nature of the risk being inherently tied to uncertainty of the currency rate changes. The paper also introduces a portfolio theory-based approach to quantifying currency risk, which accounts for non-linear co-dependence in the currency markets.

Details

The Journal of Risk Finance, vol. 17 no. 1
Type: Research Article
ISSN: 1526-5943
