Search results

1–10 of over 55,000
Article
Publication date: 4 April 2023

Byron W. Keating and Marjan Aslan

Abstract

Purpose

The service recovery literature provides little guidance to firms on how users of self-service technology (SST) perceive assistance provided by human and non-human service agents following a service obstacle. This research responds by addressing two important research questions about SST recovery: (1) how are perceptions of assistance provided following a service obstacle influenced by a customer's psychological needs? and (2) does supporting the psychological needs of customers positively impact continuance intentions following a service obstacle?

Design/methodology/approach

Data are collected to address the research questions via five experiments that explore how assistance provided by a non-human (vs human vs no assistance) service agent contributes to perceptions of psychological support and continuance intentions following a service obstacle encountered while volitionally using SST.

Findings

The results show that while users of SST would prefer to complete their task without encountering an obstacle that requires the intervention of a service agent, when assistance is required the psychological need support elicited from a non-human service agent is vital to an effective recovery. Further, the findings highlight some boundary conditions for this relationship: the impact of customer-perceived need support on continuance intentions was found to be sensitive to the fit between the task and the assistance provided, and to the complexity of the task being completed.

Originality/value

Much of the prior service recovery literature has emphasized the different types of tactics that can be used (e.g. apologizing, monetary compensation and explaining what happened) while failing to appreciate the role of different types of service agents or the underlying psychological processes that explain the relative merit of such tactics. The present research shows that for these tactics to influence continuance intentions, they must be provided by a relevant service agent and support a customer's psychological needs for autonomy, competence and relatedness. The hypothesized impact of psychological need support on continuance intentions was also observed to be contingent upon the fit between the task and the type of assistance provided, with the level of task complexity attenuating this fit.

Article
Publication date: 7 April 2020

Dale Richards

Abstract

Purpose

The ability of an organisation to adapt and respond to external pressures is beneficial for optimising efficiency and increasing the likelihood of achieving set goals. It can also be suggested that this very ability to adapt to one's surroundings is one of the key factors of resilience. The capacity to respond dynamically to sudden change and then return to an efficient state may be termed plasticity. The use of agent-based systems to assist organisational processes may play a part in facilitating an organisation's plasticity, and computational modelling has often been used to try to predict both agent and human behaviour. Such models also promise the ability to examine the dynamics of organisational plasticity through the direct manipulation of key factors. This paper discusses the use of such models in application to organisational plasticity, with particular attention to the relevance of agent-based modelling to human behaviour and perception. The use of analogies for explaining organisational plasticity is also discussed, particularly in relation to modelling. When the authors consider the means by which theories of this behaviour can be adopted, models tend to focus on aspects of predictability. This in turn loses a degree of realism, given the complex nature of human behaviour and, more so, of human-agent behaviour.

Design/methodology/approach

The methodology and approach used for this paper take the form of a review of the relevant literature and research.

Findings

The use of human-agent behaviour models in organisational plasticity is discussed in this paper.

Originality/value

The originality of this paper rests on the importance of considering human-agent-based models. In contrast to agent-based modelling approaches, analogy is used as a narrative device in this paper.

Details

Evidence-based HRM: a Global Forum for Empirical Scholarship, vol. 9 no. 2
Type: Research Article
ISSN: 2049-3983

Article
Publication date: 1 February 1999

D.M. Wilkes, A. Alford, M.E. Cambron, T.E. Rogers, R.A. Peters and K. Kawamura

Abstract

For the past ten years, the Intelligent Robotics Laboratory (IRL) at Vanderbilt University has been developing service robots that interact naturally, closely and safely with human beings. Two main issues for research have arisen from this prior work. The first is how to achieve a high level of interaction between the human and robot. The result has been the philosophy of human directed local autonomy (HuDL), a guiding principle for research, design, and implementation of service robots. The human‐robot relationship we seek to achieve is symbiotic in the sense that both the human and the robot work together to achieve goals, for example as aids to the elderly or disabled. The second issue is the general problem of system integration, with a specific focus on integrating humans into the service robotic system. This issue has led to the development of the Intelligent Machine Architecture (IMA), a novel software architecture specifically designed to simplify the integration of the many diverse algorithms, sensors, and actuators necessary for intelligent interactive service robots.
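
The abstract does not detail the internals of IMA; as a minimal sketch of the kind of component integration it describes, the following Python example wires independent sensor, behaviour and actuator components through a shared message bus. All names and the bus design are assumptions for illustration, not the actual IMA API.

    # Hypothetical sketch of agent-style component integration (not the IMA
    # API). Each resource (sensor, behaviour, actuator) is an independent
    # component that communicates over a shared publish/subscribe bus.

    from collections import defaultdict

    class MessageBus:
        def __init__(self):
            self.subscribers = defaultdict(list)

        def subscribe(self, topic, handler):
            self.subscribers[topic].append(handler)

        def publish(self, topic, payload):
            for handler in self.subscribers[topic]:
                handler(payload)

    class SonarSensor:
        def __init__(self, bus):
            self.bus = bus

        def read(self, distance_m):
            self.bus.publish("obstacle", distance_m)

    class AvoidanceBehaviour:
        def __init__(self, bus):
            self.bus = bus
            bus.subscribe("obstacle", self.on_obstacle)

        def on_obstacle(self, distance_m):
            if distance_m < 0.5:                 # too close: request a stop
                self.bus.publish("motor_command", "stop")

    class MotorActuator:
        def __init__(self, bus):
            bus.subscribe("motor_command", self.on_command)

        def on_command(self, command):
            print(f"motor: {command}")

    bus = MessageBus()
    sonar = SonarSensor(bus)
    AvoidanceBehaviour(bus)
    MotorActuator(bus)
    sonar.read(0.3)                              # prints "motor: stop"

Loose coupling of this kind is one way a single architecture can absorb many diverse algorithms, sensors and actuators, which is the integration problem the abstract highlights.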

Details

Industrial Robot: An International Journal, vol. 26 no. 1
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 13 November 2017

Salama A. Mostafa, Mohd Sharifuddin Ahmad, Aida Mustapha and Mazin Abed Mohammed

Abstract

Purpose

The purpose of this paper is to propose a layered adjustable autonomy (LAA) as a dynamically adjustable autonomy model for a multi-agent system. It is mainly used to efficiently manage humans’ and agents’ shared control of autonomous systems and maintain humans’ global control over the agents.

Design/methodology/approach

The authors apply the LAA model in an agent-based autonomous unmanned aerial vehicle (UAV) system. The UAV system implementation consists of two parts: software and hardware. The software part comprises the controller and the cognitive components, and the hardware comprises the computing machinery and the actuators of the UAV system. The UAV system performs three experimental scenarios of dance, surveillance and search missions. The selected scenarios demonstrate different behaviors in order to create a suitable test plan and ensure significant results.

Findings

The results of the UAV system tests prove that segregating the autonomy of a system into multi-dimensional and adjustable layers enables humans and/or agents to perform actions at convenient autonomy levels. This reduces the drawbacks of adjustable autonomy: constraining the autonomy of the agents, increasing humans' workload and exposing the system to disturbances.

Originality/value

The application of the LAA model in a UAV manifests the significance of implementing dynamic adjustable autonomy. Assessing autonomy within the three phases of an agent's run cycle (task selection, action selection and action execution) is an original idea that aims to direct agents' autonomy toward performance competency. The agents' abilities are well exploited when an incompetent agent is switched with a more competent one.
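
As an illustration only, the following Python sketch shows how autonomy might be assessed at each of the three run-cycle phases and how an incompetent agent might be switched for a more competent one. The class names, competency scores and threshold are assumptions for the sketch, not the authors' implementation.

    # Illustrative sketch of layered adjustable autonomy (hypothetical design,
    # not the authors' code). Autonomy is checked at each of the three
    # run-cycle phases; a human override at any layer retakes control.

    PHASES = ("task_selection", "action_selection", "action_execution")

    class Agent:
        def __init__(self, name, competency):
            self.name = name
            self.competency = competency        # per-phase scores in [0, 1]

        def competent(self, phase, threshold=0.6):
            return self.competency[phase] >= threshold

    def run_cycle(agents, human_override=None):
        """Delegate each phase to the most competent agent, else the human."""
        for phase in PHASES:
            if human_override == phase:
                print(f"{phase}: human takes control")
                continue
            best = max(agents, key=lambda a: a.competency[phase])
            if best.competent(phase):
                print(f"{phase}: delegated to {best.name}")
            else:
                print(f"{phase}: no competent agent, escalate to human")

    agents = [
        Agent("uav_1", {"task_selection": 0.9, "action_selection": 0.4,
                        "action_execution": 0.8}),
        Agent("uav_2", {"task_selection": 0.5, "action_selection": 0.7,
                        "action_execution": 0.9}),
    ]
    run_cycle(agents)                                     # agents handle all phases
    run_cycle(agents, human_override="action_execution")  # human retakes one layer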

Details

International Journal of Intelligent Computing and Cybernetics, vol. 10 no. 4
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 14 March 2017

Dale Richards

Abstract

Purpose

The increasing use of robotics within modern factories and workplaces not only sees us becoming more dependent on this technology but also introduces innovative ways by which humans interact with complex systems. As agent-based systems become more integrated into work environments, the traditional human team becomes more integrated with agent-based automation and, in some cases, autonomous behaviours. This paper discusses these interactions in terms of team composition and how a human-agent collective can share goals via the delegation of authority between human and agent team members.

Design/methodology/approach

This paper highlights the increasing integration of robotics in everyday life and examines how novel teams may be constructed with the use of intelligent systems and autonomous agents.

Findings

Areas of human factors and human-computer interaction are used to discuss the benefits and limitations of human-agent teams.

Research limitations/implications

There is little research on human–robot (H–R) teamwork, especially from a human factors perspective.

Practical implications

Advancing the author’s understanding of the H–R team (and associated intelligent agent systems) will assist in the integration of such systems in everyday practices.

Social implications

H–R teams raise a great many social and organisational issues that need further exploration. Only through understanding this context can advanced systems be fully realised.

Originality/value

This paper is multidisciplinary, drawing on areas of psychology, computer science, robotics and human–computer interaction. Specific attention is given to the emerging field of autonomous software agents, which are growing in use. This paper discusses the uniqueness of the human-agent teaming that results when human and agent members share a common goal within a team.

Details

Team Performance Management: An International Journal, vol. 23 no. 1/2
Type: Research Article
ISSN: 1352-7592

Article
Publication date: 31 August 2005

Keith Miller and David Larson

Abstract

Traditionally, philosophers have ascribed moral agency almost exclusively to humans (Eshleman, 2004). Early writing about moral agency can be traced to Aristotle (Louden, 1989) and Aquinas (1997). In addition to human moral agents, Aristotle discussed the possibility of moral agency of the Greek gods and Aquinas discussed the possibility of moral agency of angels. In the case of angels, a difficulty in ascribing moral agency was that it was suspected that angels did not have enough independence from God to ascribe to the angels genuine moral choices. Recently, new candidates have been suggested for non‐human moral agency. Floridi and Sanders (2004) suggest that artificial intelligence (AI) programs that meet certain criteria may attain the status of moral agents; they suggest a redefinition of moral agency to clarify the relationship between artificial and human agents. Other philosophers, as well as scholars in Science and Technology Studies, are studying the possibility that artifacts that are not designed to mimic human intelligence still embody a kind of moral agency. For example, there has been a lively discussion about the moral intent and the consequential effects of speed bumps (Latour, 1994; Keulartz et al., 2004). The connections and distributed intelligence of a network is another candidate being considered for moral agency (Allen, Varner & Zinser, 2000). These philosophical arguments may have practical consequences for software developers, and for the people affected by computing. In this paper, we will examine ideas about artificial moral agency from the perspective of a software developer.

Details

Journal of Information, Communication and Ethics in Society, vol. 3 no. 3
Type: Research Article
ISSN: 1477-996X

Book part
Publication date: 30 December 2004

Barry G. Silverman

Abstract

The fields of virtual reality and microworld simulation have advanced significantly in the past decade. Today, computer-generated personas or agents that populate these worlds and interact with human operators are used in many endeavors and avenues of investigation. A few of many example application areas are Hollywood animations for movies, cartoons, and advertising (von Neumann & Morgenstern, 1947); immersive industrial and safety training simulations (Fudenberg & Tirole, 2000; Silverman et al., 2001); distributed, interactive military war games and mission rehearsals (Johns & Silverman, 2001); and personal assistant agents to reduce technologic complexity for the general public (Weaver, Silverman, Shin & Dubois, 2001), among others.

Details

The Science and Simulation of Human Performance
Type: Book
ISBN: 978-1-84950-296-2

Book part
Publication date: 20 September 2018

Arthur C. Graesser, Nia Dowell, Andrew J. Hampton, Anne M. Lippert, Haiying Li and David Williamson Shaffer

Abstract

This chapter describes how conversational computer agents have been used in collaborative problem-solving environments. These agent-based systems are designed to (a) assess the students’ knowledge, skills, actions, and various other psychological states on the basis of the students’ actions and the conversational interactions, (b) generate discourse moves that are sensitive to the psychological states and the problem states, and (c) advance a solution to the problem. We describe how this was accomplished in the Programme for International Student Assessment (PISA) for Collaborative Problem Solving (CPS) in 2015. In the PISA CPS 2015 assessment, a single human test taker (15-year-old student) interacts with one, two, or three agents that stage a series of assessment episodes. This chapter proposes that this PISA framework could be extended to accommodate more open-ended natural language interaction for those languages that have developed technologies for automated computational linguistics and discourse. Two examples support this suggestion, with associated relevant empirical support. First, there is AutoTutor, an agent that collaboratively helps the student answer difficult questions and solve problems. Second, there is CPS in the context of a multi-party simulation called Land Science in which the system tracks progress and knowledge states of small groups of 3–4 students. Human mentors or computer agents prompt them to perform actions and exchange open-ended chat in a collaborative learning and problem-solving environment.
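
The chapter abstract outlines what the agents do rather than how; purely as a hedged sketch, the following Python fragment illustrates one possible shape of an agent turn: estimate the student's knowledge state from the latest contribution, then choose a discourse move sensitive to both the psychological state and the problem state. The function names, move labels and thresholds are invented for illustration and are not the PISA or AutoTutor implementation.

    # Hypothetical agent turn: assess the student's state, then pick a
    # discourse move sensitive to that state (not the PISA/AutoTutor code).

    def assess_knowledge(student_answer, expected_keywords):
        """Crude knowledge estimate: fraction of expected ideas mentioned."""
        hits = sum(kw in student_answer.lower() for kw in expected_keywords)
        return hits / len(expected_keywords)

    def choose_move(knowledge, frustration):
        if frustration > 0.7:
            return "empathize"    # address the psychological state first
        if knowledge < 0.3:
            return "hint"         # advance the solution with a prompt
        if knowledge < 0.8:
            return "pump"         # elicit more from the student
        return "summarize"        # consolidate and close the problem

    answer = "The water evaporates and then condenses into clouds"
    knowledge = assess_knowledge(answer, ["evaporat", "condens", "precipitat"])
    print(choose_move(knowledge, frustration=0.2))    # -> "pump"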

Details

Building Intelligent Tutoring Systems for Teams
Type: Book
ISBN: 978-1-78754-474-1

Article
Publication date: 25 February 2020

Isabella Seeber, Lena Waizenegger, Stefan Seidel, Stefan Morana, Izak Benbasat and Paul Benjamin Lowry

Abstract

Purpose

This article reports the results from a panel discussion held at the 2019 European Conference on Information Systems (ECIS) on the use of technology-based autonomous agents in collaborative work.

Design/methodology/approach

The panelists (Drs Izak Benbasat, Paul Benjamin Lowry, Stefan Morana, and Stefan Seidel) presented ideas related to affective and cognitive implications of using autonomous technology-based agents in terms of (1) emotional connection with these agents, (2) decision-making, and (3) knowledge and learning in settings with autonomous agents. These ideas provided the basis for a moderated panel discussion (the moderators were Drs Isabella Seeber and Lena Waizenegger), during which the initial position statements were elaborated on and additional issues were raised.

Findings

Through the discussion, a set of additional issues was identified. These issues related to (1) the design of autonomous technology-based agents in terms of human–machine workplace configurations, as well as transparency and explainability, and (2) the unintended consequences of using autonomous technology-based agents in terms of the de-evolution of social interaction, the prioritization of machine teammates, psychological health, and biased algorithms.

Originality/value

The key issues identified, relating to the affective and cognitive implications of using autonomous technology-based agents, their design, and their unintended consequences, highlight contemporary research challenges and compelling questions that can guide further research in this field.

Article
Publication date: 26 September 2008

Debajyoti Chakrabarty

Abstract

Purpose

The purpose of this paper is to provide a theory that can explain the persistence of inequality in an economy where household agents are identical in terms of their preferences and have access to the same production technology.

Design/methodology/approach

An overlapping generations model is developed in which agents are imperfectly altruistic and face uncertain lifetimes. The rate of time preference of an agent depends on her probability of survival, which is increasing in her level of consumption. An agent's initial endowment of human capital jointly determines her patience and her willingness to accumulate human capital and other productive assets.
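
As a minimal sketch in conventional overlapping-generations notation, the endogenous time preference described above might be written as follows; the functional forms are assumed for illustration and are not taken from the paper.

    % Survival probability pi(c) is increasing in consumption, so the
    % effective discount factor pi(c_t) * beta is endogenous (assumed form).
    \[
      U_t = u(c_t) + \pi(c_t)\,\beta\,u(c_{t+1}), \qquad \pi'(c) > 0,
    \]

Here $\pi(c_t)$ is the agent's probability of surviving into the second period and $\beta$ is an exogenous discount factor. Because $\pi$ rises with consumption, better-endowed agents are effectively more patient, which is consistent with the threshold behaviour in human capital investment reported in the findings below.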

Findings

It was found that inequality may persist in the economy as a result of the endogenous rate of time preference. Agents with a low initial endowment of human capital are impatient and choose not to invest in human capital. Agents with an initial endowment of human capital above a certain threshold choose to invest in human capital, as they expect to survive long enough to reap the benefits of their investment.

Originality/value

This paper adds to the existing literature by providing an alternative mechanism to explain the persistence of inequality. In future research, this model framework can be used to evaluate the impact of alternative government and tax policies on the long-run distributions of income and wealth.

Details

Indian Growth and Development Review, vol. 1 no. 2
Type: Research Article
ISSN: 1753-8254
