Search results

1–10 of over 19,000
Article
Publication date: 1 April 1981

LAWRENCE J. MAZLACK

Abstract

It is often argued that anything observable may be simulated on a computer. Using this as a basis, workers in artificial intelligence (AI) often go on to maintain that machines can be made intelligent by machine simulation of human intelligence processes. There are two difficulties with this concept. The first lies in the limited knowledge of human intelligence processes that we presently have and can expect to obtain in the near future. The more basic question concerns the sufficiency of the concept itself: simulation in itself is not sufficient to produce intelligent action, where modelling perhaps might be, and there are fundamental difficulties in establishing an adequate mapping function. It is held that there is insufficient correspondence between human and machine intelligence processes to allow human intelligence to be modelled on existing digital computers.

Details

Kybernetes, vol. 10 no. 4
Type: Research Article
ISSN: 0368-492X

Open Access
Article
Publication date: 16 August 2019

Morteza Moradi, Mohammad Moradi, Farhad Bayat and Adel Nadjaran Toosi

Abstract

Purpose

Human or machine: which is more intelligent and powerful for performing computing and processing tasks? Over the years, researchers and scientists have spent significant amounts of money and effort to answer this question. Nonetheless, despite some outstanding achievements, replacing humans in intellectual tasks is not yet a reality. Instead, to compensate for the weakness of machines in some (mostly cognitive) tasks, the idea of putting the human in the loop has been introduced and widely accepted. In this paper, the notion of collective hybrid intelligence is introduced as a new and comprehensive computing framework.

Design/methodology/approach

Given the wide acceptance and efficiency of crowdsourcing, hybrid intelligence and distributed computing concepts, the authors have come up with the (complementary) idea of collective hybrid intelligence. Besides providing a brief review of the efforts made in related contexts, the conceptual foundations and building blocks of the proposed framework are delineated. Moreover, some discussion of architectural and realization issues is presented.

Findings

The paper describes the conceptual architecture, workflow and schematic representation of a new hybrid computing concept. Moreover, three sample scenarios are introduced to explain its benefits, requirements, practical roadmap and architectural notes.

Originality/value

The major contribution of this work is introducing the conceptual foundations for combining and integrating the collective intelligence of humans and machines to achieve higher efficiency and (computing) performance. To the best of the authors’ knowledge, this is the first study in which such an integration is considered. It is therefore believed that the proposed computing concept could inspire researchers toward realizing such unprecedented possibilities in practical and theoretical contexts.

Details

International Journal of Crowd Science, vol. 3 no. 2
Type: Research Article
ISSN: 2398-7294

Article
Publication date: 17 October 2022

Kirill Krinkin, Yulia Shichkina and Andrey Ignatyev

Abstract

Purpose

This study aims to show the inconsistency of the approach that develops artificial intelligence as an independent tool (just one more tool that humans have developed); to describe the logic and concept of intelligence development regardless of its substrate, whether human or machine; and to prove that the co-evolutionary hybridization of machine and human intelligence will make it possible to solve problems that have so far been inaccessible to humanity (global climate monitoring and control, pandemics, etc.).

Design/methodology/approach

The global trend in artificial intelligence development was set at the Dartmouth seminar in 1956. The main goal was to define characteristics and research directions for an artificial intelligence comparable to, or even outperforming, human intelligence: it should be able to acquire and create new knowledge in a highly uncertain, dynamic environment (the real-world environment is an example) and apply that knowledge to solving practical problems. Nowadays artificial intelligence outperforms human abilities in some areas (playing games, speech recognition, search, art generation, extracting patterns from data, etc.), but these examples show that developers have come to a dead end: narrow artificial intelligence has no connection to real human intelligence and in many cases cannot even be used successfully, owing to a lack of transparency, explainability, computational effectiveness and many other limits. A development model for strong artificial intelligence can be discussed independently of the substrate, in terms of the development of intelligence and the general properties inherent in that development. Only then can it be clarified which cognitive functions can be transferred to an artificial medium. The process of developing intelligence, understood as the mutual development (co-development) of human and artificial intelligence, should correspond to the property of increasing cognitive interoperability. The degree of cognitive interoperability is assessed in the same way as the strength of intelligence: it is greater when knowledge can be transferred between different domains at a higher level of abstraction (Chollet, 2018).

Findings

The key factors behind the development of hybrid intelligence are interoperability, the ability to create a common ontology in the context of the problem being solved and to plan and carry out joint activities, and co-evolution, which ensures growth of aggregate intellectual ability without the loss of subjectness by either substrate (human or machine). The rate of co-evolution depends on the rate of knowledge interchange and on the manufacturability of this process.

Research limitations/implications

Resistance to the idea of developing co-evolutionary hybrid intelligence can be expected from agents and developers who have bet on and invested in data-driven artificial intelligence and machine learning.

Practical implications

Revision of the approach to intellectualization through the development of hybrid intelligence methods will help bridge the gap between the developers of specific solutions and those who apply them. Co-evolution of machine intelligence and human intelligence will ensure seamless integration of smart new solutions into the global division of labor and social institutions.

Originality/value

The novelty of the research lies in a new look at the principles of the development of machine and human intelligence in a co-evolutionary style. Also new is the claim that the development of intelligence should take place within a framework integrating four domains: global challenges and tasks, concepts (general hybrid intelligence), technologies, and products (specific applications that satisfy market needs).

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 27 June 2008

Somparn Promta and Kenneth Einar Himma

Abstract

Purpose

The purpose of this paper is to explore the possibility and desirability of artificial intelligence (AI) by considering western literature on AI and Buddhist doctrine.

Design/methodology/approach

The paper argues that these issues can best be examined from a variety of philosophical and religious viewpoints and that their resolution depends on the point of view from which the questions are addressed. There are a number of philosophical questions involving AI usually considered by philosophers: what is the definition of AI; what is the status of an AI as compared with human intelligence; is there a legitimate purpose for creating AI; and, if so, what is that purpose? Buddhism is a religion that is deeply philosophical and, perhaps to the surprise of western readers, has a lot to say about the nature of the human mind and human intelligence. Although Buddhism does not talk explicitly about AI, the richness of its philosophical views concerning human nature and the nature of the physical world sheds considerable light on the philosophical questions stated above.

Findings

The paper explains how Buddhist teaching would answer the four questions above.

Originality/value

The paper is the first to clarify the Buddhist position on AI, and perhaps represents the first attempt to explore the relationships between any major religion and the AI agenda.

Details

Journal of Information, Communication and Ethics in Society, vol. 6 no. 2
Type: Research Article
ISSN: 1477-996X

Book part
Publication date: 6 December 2021

Phoebe V. Moore

Abstract

Most scholarly and governmental discussions about artificial intelligence (AI) today focus on a country’s technological competitiveness and try to identify how this supposedly new technological capability will improve productivity. Some discussions look at AI ethics. But AI is more than a technological advancement. It is a social question and requires philosophical inquiry. The producers of AI, who are software engineers and designers, and its users, who are human resource professionals and managers, unconsciously as well as consciously project direct forms of intelligence onto machines themselves, without considering in any depth the practical implications of this when weighed against humans’ actual or perceived intelligences. Neither do they think about the relations of production required for the development and production of AI and its capabilities, in which data-producing human workers are expected not only to accept the intelligences of machines, now called ‘smart machines’, but also to endure particularly difficult working conditions for bodies and minds while creating and expanding the datasets required for the development of AI itself. This chapter asks, who is the smart worker today and how does she contribute to AI through her quantified, but embodied labour?

Details

The Quantification of Bodies in Health: Multidisciplinary Perspectives
Type: Book
ISBN: 978-1-80071-883-8

Article
Publication date: 1 January 1986

Emerson Hilker

Abstract

We have long been obsessed with the dream of creating intelligent machines. This vision can be traced back to Greek civilization, and the notion that mortals somehow can create machines that think has persisted throughout history. Until this decade these illusions have borne no substance. The birth of the computer in the 1940s did cause a resurgence of the cybernaut idea, but the computer's role was primarily one of number-crunching, and realists soon came to respect the enormous difficulties in crafting machines that could accomplish even the simplest of human tasks.

Details

Collection Building, vol. 7 no. 3
Type: Research Article
ISSN: 0160-4953

Article
Publication date: 17 November 2021

Andrea Paesano

Abstract

Purpose

This study aims to investigate the use of artificial intelligence (AI), and the man-machine relationship, in organizational behavior. In particular, it analyzes whether current AI is also used to replace humans in “creative” activities.

Design/methodology/approach

This study is based on a qualitative and exploratory approach. A review of the literature was conducted using the “Scopus” and “Web of Science” databases. The research fields are AI, organizational behavior, the man-machine relationship and creativity.

Findings

The paper analyzes whether the intensive use of AI in organizational behavior can replace human work in creative activities.

Research limitations/implications

The connection of AI with creative activities within organizations is only just beginning to be studied. For this reason, other sources, such as Harvard Business Review, public reports and professional papers found on the internet, have been considered. The most important limitation of this paper is that the results presented here do not come from a single case study.

Practical implications

This paper presents some examples that show the use of AI in creative activities; however, they do not capture the full situation facing companies in any sector, because the AI technologies used within enterprises are constantly evolving. Further research in this field remains possible.

Originality/value

The paper is meaningful because it highlights the development of AI toward creative activities that are typical of human resources. It is also interesting because it analyzes the exploratory use of AI in increasingly human work, generating positive and negative externalities.

Details

International Journal of Organizational Analysis, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1934-8835

Article
Publication date: 28 May 2019

Anthon P. Botha

Abstract

Purpose

The purpose of this paper is to address the possible future evolution of innovation from a human-only initiative, to human–machine co-innovation, to autonomous machine innovation and to arrive at a conceptual mind model that outlines the role of innovation regimes and innovation agents.

Design/methodology/approach

This is a concept paper where a theoretical “thought experiment” is done, using future thinking principles and data that originate from the literature.

Findings

A conceptual mind model is developed to facilitate a better understanding of complexity at the edge of innovation where intelligent machines will emerge as innovators of the cyber world. It was found that innovation will gradually evolve from a human-only activity, to human–machine co-innovation, to incidences of autonomous machine innovation, based on the growth of machine intelligence and the adoption of human–machine partnership management models in future.

Research limitations/implications

Very little information is available in the literature on intelligent machines doing innovation. The work is based on a theoretical approach that presents new concepts to be debated; these have not been tested in engineering and technology management practice, except through a conference presentation and academic discussion.

Practical implications

The current world view is that future “smartness” is possible only through the creative abilities that humans have. However, as machines enter the workplace and our daily lives, not only as static robots on a manufacturing line but as intelligent systems with the potential to replace lawyers and accountants, doctors and teachers, companions and partners, their role in innovation in complex environments needs to be explored.

Social implications

Human–machine interaction is often an emotionally charged social concern, particularly regarding the replacement of human intelligence with machine intelligence. It should be asked whether humans will, or should, remain in control of innovation. Artificial intelligence (AI) may complement and even substitute for human intelligence, but huge value is embedded in the new goods, services and innovations AI will enable, especially in manufacturing, where the value embedded in the project becomes complex and dynamic.

Originality/value

The thinking presented in this paper is original; it should provoke debate about how innovation systems will work in future and inspire thinking about AI and innovation.

Details

Journal of Manufacturing Technology Management, vol. 30 no. 8
Type: Research Article
ISSN: 1741-038X

Article
Publication date: 8 October 2018

Karim Jebari and Joakim Lundborg

Abstract

Purpose

The claim that superintelligent machines constitute a major existential risk was recently defended in Nick Bostrom’s book Superintelligence and forms the basis of the sub-discipline of AI risk. The purpose of this paper is to critically assess the philosophical assumptions that are of importance to the argument that AI could pose an existential risk and, if so, the character of that risk.

Design/methodology/approach

This paper distinguishes between “intelligence”, the cognitive capacity of an individual, and “techne”, a more general ability to solve problems using, for example, technological artifacts. While human intelligence has not changed much over historical time, human techne has improved considerably. Moreover, the fact that human techne varies more across individuals than human intelligence suggests that if machine techne were to surpass human techne, the transition is likely to be prolonged rather than explosive.

Findings

Some constraints for the intelligence explosion scenario are presented that imply that AI could be controlled by human organizations.

Originality/value

If true, this argument suggests that efforts should focus on devising strategies to control AI rather than on strategies that assume that such control is impossible.

Details

foresight, vol. 21 no. 1
Type: Research Article
ISSN: 1463-6689
