The purpose of this paper is to examine how robotics program developers perceived the role of emulation of human ethics when programming robots for use in educational settings. The study drew on a purposive sample of online robotics program developer professional sites which focused on this role. Content related to robotics program developers’ perceptions of educational uses of robots and ethics was analyzed.
The design for this study was a qualitative summative content analysis. The researchers analyzed keywords related to a phenomenon: the emulation of human ethics programmed in robots. Articles selected for analysis in this study were published by robotics program developers who focused on robots and ethics in education. All articles analyzed in this study were posted online, and the public has complete access to them.
Robotics program developers recognized the importance of situational interpretations and implementations of human ethics. To facilitate flexibility, robotics program developers programmed robots to search computer-based ethics-related research, frameworks and case studies. Robotics program developers acknowledged the importance of human ethics, but they felt more flexibility was needed in how classroom human ethical models were created, developed and used. Some robotics program developers expressed questions and concerns about the implementation of flexible robot ethical accountability levels and behaviors in the educational setting. Robotics program developers argued that educational robots were not designed or programmed to emulate human ethics.
One limitation of the study was that only 32 online, public articles written by robotics program designers were analyzed through qualitative content analysis to find themes and patterns. In qualitative content analysis studies, findings may not be as generalizable as in quantitative studies. Another limitation was that only a limited number of articles written by robotics program developers existed which addressed robotics and the emulation of human ethics in the educational setting.
The significance of this study is the need for a renewed global initiative in education to promote debates, research and on-going collaboration with scientific leaders on ethics and programming robots. The implication for education leaders is to provide ongoing professional development on the role of ethics in education and to create best practices for using robots in education to promote increased student learning and enhance the teaching process.
The implications of this study are global. All cultures will be affected by the robotics’ shift in how students are taught ethical decision making in the educational setting. Robotics program developers will create computational educational moral models which will replace archetypal educational ethics frameworks. Because robotics program developers do not classify robots as human, educators, parents and communities will continue to question the use of robots in educational settings, and they will challenge robotics ethical dilemmas, moral standards and computational findings. The examination of robotics program developers’ perspectives through different lenses may help close the gap and establish a new understanding among all stakeholders.
Four university doctoral faculty members conducted this content analysis study. After discussions on robotics and educational ethics, the researchers discovered a gap in the literature on the use of robots in the educational setting and the emulation of human ethics in robots. Therefore, to explore the implications for educators, the researchers formed a group to study the topic. No personal gains resulted from the study. All research was original.
Fedock, B., Paladino, A., Bailey, L. and Moses, B. (2018), "Perceptions of robotics emulation of human ethics in educational settings: a content analysis", Journal of Research in Innovative Teaching & Learning, Vol. 11 No. 2, pp. 126-138. https://doi.org/10.1108/JRIT-02-2018-0004
Emerald Publishing Limited
Copyright © 2018, Barbara Fedock, Armando Paladino, Liston Bailey and Belinda Moses
Published in Journal of Research in Innovative Teaching & Learning. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode
Though scientific leaders strive to make advances in robotics at a steady, ongoing pace, these leaders fail to fully investigate the wide-ranging implications of programming robots to emulate human ethics, especially in the educational setting (Ashrafian, 2015). Educators play a significant role in the formation of learner ethical perspectives and models; however, the ethical effects of the use of robots in education leave unanswered questions (Kubilinskiene, 2017).
Questions arise related to how ethical standards will be determined and what model will be used to emulate human ethics. If educational leaders fail to evaluate, monitor and play a dominant role in the creation of human ethics in robots, negative outcomes as well as some positive results may influence the determination of ethical standards for future generations (Hersh, 2014). When researchers’ findings and perceptions related to the use of human ethics to program robots for learning are not examined, issues and challenges related to programmed ethics may be underestimated or ignored (Bogue, 2014). Therefore, in this qualitative study, a content analysis was conducted of robotics program developers’ perceptions of the role of emulation of human ethics when programming robots for use in educational settings.
The purpose of this qualitative summative content analysis was to examine how robotics program developers perceived the role of emulation of human ethics when programming robots for use in educational settings. A purposive sampling of online robotics program developer professional sites which focused on this role was included in the study. Content related to robotics program developers’ perceptions of educational uses of robots and ethics was analyzed.
For this study’s purpose, the qualitative summative content analysis was the most appropriate. In the qualitative method, perceptions of a phenomenon are examined, and content is analyzed to find themes and patterns (Yin, 2003). A summative content analysis approach is appropriate when little is known about a topic and the identification of keywords will be useful in the identification of themes or patterns (Nandy and Sarvela, 1997). In this study, a content analysis was used to examine keywords related to the phenomenon of emulation of human ethics programmed in robots used in educational settings. No participants were interviewed. In total, 32 online robotics program developer professional sites which focused on the role of emulation of human ethics used when programming robots for use in educational settings were selected, and the content was analyzed to find themes and patterns related to the perceptions of robotics program developers who shared online findings or implications on the role of emulating human ethics in robots in educational settings.
Four university doctoral faculty members conducted this content analysis study. After discussions on robotics and educational ethics, the researchers discovered a gap in the literature on the use of robots in the educational setting and the emulation of human ethics in robots. Therefore, to explore the implications for educators, they formed a group to study the topic. No personal gains resulted from the study.
Robotics in education
Robotics is a means of maximizing the daily used skill sets of collaboration, problem solving, project management and critical thinking, and of inspiring individuals to work through mundane tasks at hand (Eguchi, 2015). Robotics in education as a learning tool promotes knowledge at all levels, and the use of robotics helps engage learners in the learning process. In the literature, implications for the use of robotics in education include findings that support the promotion of robotics to develop the traits needed to succeed in a twenty-first century world of innovators, such as is demanded in a highly driven technology society. Educational robotics provides an environment conducive to a hands-on learning modality in which students confront critical thinking scenarios that inspire them to develop new and nontraditional solutions, test and validate results using robotics, and apply a cyclical process to successfully solve or resolve problems of a more technical nature. Gura (2013) considered educational robotics to be “the most perfect instructional approach currently available” (para. 2). However, Gura noted that improvements in the learning environment must be made to accommodate the transformation needed to meet twenty-first century requirements.
Accordingly, the successful adoption of an educational innovation such as robotics has enhanced educators’ ability to meet these challenges. However, studies on developing curriculum content and on how to incorporate the use of robotics into an educational framework are limited in the literature. Robotics is only one tool; therefore, the use of robotics must be aligned with appropriate theories of learning, educational philosophy, comprehensive curricula and supportive learning documentation to ensure a cohesive and successful approach (Shannon, 2015). The limited studies that exist on the role of emulation of human ethics programmed in robots identified educational challenges, including the need to shift from technology usage to collaboration with human intelligence in education while changing pedagogy to embrace technology and learning theories such as constructivism and constructionism (Kubilinskiene, 2017).
In the literature, the consensus is that robotic technologies should not be viewed as ordinary tools but as tools to promote the development of innovative ways to enhance the learning environment. Though few studies exist in which educational leaders’ perceptions of the role of ethics and the use of robotics are investigated, educators in the literature tend to agree that robotics learners engage in opportunities that enhance critical thinking skills (Julià and Antolí, 2016). Additionally, educators note the importance of students’ adaptation to a technically inspired society. In existing findings, researchers demonstrated that students afforded the opportunity to enhance technical abilities will augment learning strategies and problem solving within twenty-first century workforce requirements (Somyürek, 2015).
In existing robotic studies, a paradigm shift is evident from traditional education, in which curriculum guidance failed to adequately address technical requirements, to the use of robotics throughout the curriculum. Leaders have shown how the growth of robotic use in education changed the educational landscape (Rihtarsic et al., 2016). In the past, educators tended to view the use of technology in the classroom as mechanical instruments, which were referred to as black box technology. Black box technology is the use of computers, laptops or iPods to find information and engage in activities programmed to provide information, questions and answers. With the advancement of robotics, black box technology tends to be less effective than white box technology, which is designed to stimulate higher levels of creativity and engagement (Alimisis, 2012).
Artificial intelligence (AI)
Artificial intelligence is the capability of a computer to function in ways that emulate human intelligence. AI focuses on the integration of the computer science, mathematics, psychology, cognition and biology fields. In the literature, an extensive number of studies have been conducted on how AI uses a range of knowledge from these fields to promote the rapid progress of emulation of human intelligence traits (Nimberka et al., 2016).
Educational ethical standards
Few studies exist on educational leaders and ethical standards. In existing studies, findings are limited and not directly related to educators’ perceptions of ethical standards. Additionally, data on the impact of ethical standards on changes in education are scarce. Though ethical standards are included in policy sections of handbooks, educational leaders tend to focus on specific legalities rather than on innovative learning techniques and possible ethical issues related to learning outcomes, such as the use of robotics or AI. Two ethical standards are commonly outlined in most educational institutes: one area is instructor performance, and the second is centered on criminal-like behaviors (Zirkel, 2014). Generally, in educational institutes, ethical standards are defined and classified under three umbrellas: civil rights; specific local, state and national laws; and contractual standards (Umpstead et al., 2013).
Educational ethical frameworks
With the infusion of technology in the 1990s, the educational learning environment changed rapidly, and educational leaders failed to keep up with the need to review and re-evaluate ethical issues and challenges related to technological advances and robotics (Bottino, 2016). Though changes may have included the emulation of human ethics in robots, few studies in the literature are focused on ethical issues and the need for updated theoretical and conceptual frameworks for ethics in education. In the limited number of published studies, findings indicate that a need exists for balanced ethical frameworks which address twenty-first century technological issues and challenges in education (Martinov-Bennie and Mladenovic, 2015).
Educational leaders tend to select theoretical and conceptual frameworks which include moral decision-making theories and ethical guidelines that can be adapted for the educational setting (McBride and Hoffman, 2016). Kitchener (1984) created a moral principles decision-making framework which addresses five basic principle-based guidelines: individual autonomy, justice for all, caring for others, nonmaleficence and trustworthiness. Kitchener’s (1984) principles are highly favored by educational leaders. Though the framework was created before the emulation of human ethics was used in robots in educational settings, educators tend to create ethical policies based on Kitchener’s framework (Bates, 2004).
Human ethics in the educational setting
Baxter et al. (2015) identified three areas for research when using robots in the classroom:
Pedagogy: how to enhance the teaching environment.
Methodology: how to prevent bias in the teaching environment caused by inflating children’s expectations prior to their interaction with the robots.
Ethics: what are the roles of robots in class? Is the robot a peer or someone (or something) that cannot be trusted?
What should the robot’s behavior in the classroom be? Gelin (2013) cited Isaac Asimov’s three laws of robotics, suggesting they should be the norm: “1 – A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2 – A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; 3 – A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws” (p. 69). These laws are easy to conceptualize, but programming them into robots is not, and those expectations can only be met by programming robots that can act and behave like human beings (Jordon and McDaniel, 2014).
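The gap between conceptualizing and programming the three laws can be illustrated with a deliberately naive sketch (hypothetical, not drawn from the article): the laws reduce to a priority-ordered rule check, but each rule depends on a predicate such as "harms a human" that the toy code simply accepts as a boolean flag.

```python
# Hypothetical toy sketch: Asimov's three laws as a priority-ordered
# rule check. The flags ("harms_human", etc.) stand in for judgments
# that a real system would somehow have to compute -- which is exactly
# where the difficulty of emulating human ethics lies.

def permitted(action):
    """Return True if the action is allowed under a naive reading
    of Asimov's three laws, evaluated in priority order."""
    # First Law: never harm a human, by action or inaction.
    if action.get("harms_human", False):
        return False
    # Second Law: obey human orders, unless obeying would violate the First Law.
    if action.get("disobeys_order", False) and not action.get("order_harms_human", False):
        return False
    # Third Law: preserve itself, unless a higher law requires otherwise.
    if action.get("endangers_self", False) and not action.get("required_by_higher_law", False):
        return False
    return True
```

The simplicity of the sketch is the point: the conditional structure is trivial, while everything contentious has been pushed into flags the code assumes are already decided.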
Ethical considerations for the use of robots in the classroom
The ethical use of robots to teach is an area that merits greater exploration and study. Since the 1950s and 1960s, when B.F. Skinner pioneered concepts related to teaching machines, educators have contemplated the efficiencies that a computer or robot might provide to better optimize student learning in the classroom (Rutherford, 2003). A constant question debated by roboticists is the extent to which robots should be designed to nudge users in their thinking and decision making (Borenstein and Arkin, 2016). For example, researchers have contemplated the effectiveness of humanoid robots within a socio-cognitive paradigm intended to support knowledge acquisition; like Skinner’s teaching machines and programmed instruction, robots can provide forms of feedback and other learning advantages for students in the classroom (Mazzoni and Benvenuti, 2015).
One recent comparative study of robots in school-based learning found that the use of an actual humanoid robot in the classroom provided qualitative advantages related to social impact over just using simulators (Bacivarov and Ilian, 2012). Robots potentially allow course and training designers to add a humanoid embodiment and tools for social interaction to the learning context, beyond a PC-based computer program. In addition, anthropomorphic features of humanoid robots, along with characteristics related to repeatability, flexibility and digitization, may be engaging as well as motivating to young students within the classroom (Chih-Wei et al., 2010; Toh et al., 2016; Kazakoff et al., 2013).
The ability of robots to make ethical decisions is also a consideration for their use in situations that involve humans. For example, when it comes to artificial agents (including robots), an emerging field of machine ethics has developed around the somewhat elusive goal of programming ethical patterns of thought into systems under development (Torrance, 2008). Moreover, roboticists continue to debate the prospect of robots autonomously making ethical decisions, because machines do not have emotions and are unable to consider the emotions of individuals who may be adversely impacted by their actions (Anderson and Anderson, 2010).
Method and design appropriateness
Qualitative researchers explore phenomena by trying to make sense of them (Yin, 2003). In this study, online scholarly researchers’ perceptions of a phenomenon were analyzed. The quantitative method was not appropriate for this study. Quantitative researchers use numbers, instead of the examination of phenomena, to investigate the statistical significance of an event (Yin, 2003).
Unlike in quantitative studies, numbers or statistics would not be significant when a researcher examines perceptions about an event such as the role of the emulation of human ethics programmed in robots used in educational settings. The use of numbers would be appropriate for a quantitative researcher who wanted to begin with hypotheses and theories to describe a statistically significant comparison or correlation between types of robots and ethics.
The design for this study was a qualitative summative content analysis. In summative content analysis, the researcher analyzes keywords related to a phenomenon (Nandy and Sarvela, 1997). In this study, the phenomenon was the emulation of human ethics programmed in robots. Articles selected for analysis were published by robotics program developers who focused on robots and ethics in education. In 2001, in the USA, the use of robotics in the educational setting increased: Science, Technology, Engineering and Mathematics (STEM) schools were established, and STEM leaders promoted the adoption and use of robotics in the STEM school curriculum (Holbrook, 2005). All articles in this study were posted online, and the public has complete access to the studies.
Neither the case study nor the phenomenological design was suitable for this study. Participants’ perceptions were not examined, and interviews were not conducted. To identify themes and patterns, the focus was on the analysis of keywords related to robots and ethics in online articles selected by the researchers. In contrast, case study researchers examine participants’ perceptions of external events, and phenomenological researchers explore how participants feel internally about an event (Yin, 2003). In this study, a range of keywords related to robots and ethics was analyzed.
The sampling method was purposive sampling, and 32 articles were selected for this study; to find themes and patterns, Weber (1985) recommended the use of at least 30 documents. The researchers conducted an inquiry into scholarly researchers’ perceptions of the emulation of human ethics programmed in robots used in educational settings. To obtain perceptions from the selected articles, the content analysis method was used to collect data from the 32 articles: 10 on robotics, 12 on ethics and robotics, and 10 on ethics and robots used in educational settings.
An informed consent form was not appropriate for this qualitative summative content analysis; the researchers were the primary instrument in this qualitative study. Hsieh and Shannon (2005) defined qualitative content analysis as an organized approach to code and classify text content and analyze data. The content analysis approach was purposefully designed to aid the researcher in interpreting words, phrases and sentences in a subjective manner.
To code content analysis study data, five steps are appropriate (Weber, 1985). In this study, units to be used for recording such as words, phrases, sentences were established in the first step, and categories were created in the second step. The third step was the identification of samples to be used for data coding. During the fourth step, peer debriefing was used to test themes and pattern agreement. In the fifth step, content analysis was re-analyzed, and revisions were made.
In this study, the researchers critically read the selected studies and took notes from each study. Keywords and phrases in each study related to the use of robots and ethics were carefully noted. For each study, a list was created to show the different types of information discovered. List entries were descriptively categorized, and categories were closely analyzed to determine whether they were related and, if so, how. Related categories were combined, and major themes, as well as minor themes, were identified. The major and minor themes were compared, and this process was conducted for each study. When major and minor themes were determined for each study, the themes found in each study were collected, compared and contrasted. Similar major themes were merged, and similar minor themes were combined. Careful analysis was conducted to ensure relevancy and fit. After major and minor themes were identified, the researchers closely examined the content of the selected studies at least five times to assure a thorough content analysis.
After the researchers analyzed keywords and phrases for themes and patterns, NVivo 10 qualitative software was used to help the researchers identify and organize themes and patterns. In content analysis studies, triangulation is used for the collection of data from different sources, such as revision and retesting, use of raw data to compare interpretations of data, varied observations and member checking (Lincoln and Guba, 1985). This study was triangulated through rich thematic descriptions, and quotations from articles were used to support themes. For member checking, the researchers re-checked words, phrases, sentences, categories, themes and patterns for the content analysis of each study, which means that data in each study were analyzed by all four researchers. After each analysis was completed, the researchers conducted a peer debriefing, which acted as a measure to test agreement on themes and patterns. The study is not generalizable to other content analysis studies; in a qualitative study, the sample is small and findings are not as generalizable as in quantitative studies.
The central research question guided the study. The qualitative researcher’s findings at the end of data collection and analysis provide answers to the central research question and the sub-questions. In this study, the central research question was as follows:
How do robotics program developers perceive the role of emulation of human ethics when programming robots for use in educational settings?
The open-ended sub-questions were the following:
How do robotics program developers perceive the role of human ethics?
How do robotics program developers perceive the role of human ethics in educational settings?
How do robotics program developers perceive the role of emulation of human ethics in robots?
Educational leaders are concerned with moral decision making and how to establish ethical guidelines for all educational stakeholders (McBride and Hoffman, 2016). Kitchener (1984) created a moral principles decision-making framework which addressed five areas. The first principle deals with the rights of autonomy and how individuals act: Kitchener argued individuals have the right to act in the way they deem appropriate, unless they impede the well-being of others. In this study, findings on robotics in education and the emulation of human ethics were explored, and themes resulting from the content analyses included robotic programmers’ ethical considerations, how the educational community perceives ethical coding, and how students view the role of ethical emulation.
In the second principle, individuals do not have the right to cause “physical or psychological harm” to other people (Kitchener, 1984). Articles explored in this study focused on the role of the reproduction of human ethics in robots used for educational purposes and psychological implications for students. Robotics program developers’ views on robots, ethics and dangers of physical harm to students in the classroom were examined.
The third principle is focused on helping other people. Individuals are responsible for the development and “enhancement of welfare of others” (Kitchener, 1984). In this study, the examination of robotics program developers’ perceptions provided insights on how ethics were programmed in robots. Keywords were analyzed to find themes related to specific categories and outcomes, which included categories on robotic behaviors and data which influenced ethical decision making.
The Golden Rule is the basis for the fourth principle. All people must be treated fairly and equally (Kitchener, 1984). The role of human ethical imitations in robots used in education may be a theme that results in this study. Equality is an ethical goal included in educational mission statements, and the role of fairness and the use of robotics in the classroom may be examined by researchers. If equality for all students is not a resulting theme, educational leaders may use the study as a guideline to re-evaluate mission statements and the use of robots in the classroom.
Trust is the foundation for the last principle. All persons must respect others, be truthful and keep their word (Kitchener, 1984). When robotics is used in the classroom, educational leaders must trust the robotics program developers who create ethical standards for robotic-based learning. How trust is perceived was a pattern which arose in the content analyses of this study.
The principles are idealistic; however, in real-world ethical situations, issues and dilemmas are constantly in flux, and no constant standards can be maintained. Thus, applications may vary, and principles may be forsaken if a “higher moral purpose” is sought (Kitchener, 1984). Given educators’ lack of knowledge of ethical designs for programming robots, leaders may need to analyze each moral principle area to evaluate which principles need enhancement, clarification or a new direction when considering ethical standards for the emulation of human ethics programmed in robots used in educational settings.
Theme 1: computational educational moral agents versus human educational moral agents
Theme 1 was computational educational moral agents versus human educational moral agents. In this study, robotics program developers agreed that educational robots were moral agents who made moral judgments based on a multitude of collected right and wrong ideas. Robotics program developers defined the educational robotic moral agent model as “a moral decision-making procedure modeled by a computational process.” During the moral decision process, educational robots are designed to search a wide range of online research-based frameworks and accountability levels, and highly effective moral decisions are made by matching individual situations to research-based data. Other programmers defined educational robots “[…] as better moral creatures which can make decisions more consistently than humans.” A third of the robotics program developers contended robots used in the educational setting act as reliable data-based moral agents who lack inclinations to make ethics decisions based on cultural bias. One developer explained the differences between robots and human educators by noting that “age, ethnicity, educational levels, gender and personal preferences play major roles in how humans view themselves as moral agents.” The computational versus human educational moral agent theme was a major argument which influenced robotics program developers’ perceptions of the use of robots in educational settings and the emulation of human ethics.
The robotics moral agent model decision-making process espouses rational, unbiased decision making. In the computational educational moral agents’ model, robotics program developers found that educational robots minimized possibilities of personal or cultural influences. According to one programmer, “Unlike human-based ethics, robot computational moral reasoning decisions generate from globally recognized and accepted philosophical, educational, and psychological critical thinking and problem-solving frameworks.” In the computational moral education agent model, educational robotics program developers tend to eliminate human-based ethical decision making based on personal choices and environmental influences.
Robots as bias-free moral agents will affect how future generations view ethics. One programmer posited that robotics educational program developers “are actually changing society,” and that the robotics moral agent change model will determine how ethics are taught in educational settings. To help educators understand the impact of the computational moral agent model, other programmers suggested that educators work in conjunction with computer science and robotics education program developers to gain a basic understanding of human ethics programming. Unbiased robots acting as moral agents will serve as generational change agents.
Though questions of robotic moral agency and status remain for educators, the benefits of using robots in educational settings outweigh the concerns. One group of programmers argued that the potential benefit of robots acting as moral agents to support both individual and collaborative learning activities exceeds concerns related to traditional, often non-situational, educational moral agent frameworks. Robotics program developers acknowledged educators’ questions related to the ability of robots to make teacher decisions and control children’s behaviors in the classroom. However, given the vast array of case studies and research from which robots made decisions, robotics program developers perceived robot teachers as highly capable of moral decision making.
Theme 2: perceived lack of robotics accountability
Theme 2 focused on a perceived lack of robotics accountability needed to maintain ethical standards and privacy in the educational setting. Because computational searches involve data-based searches of a vast amount of material, robotics educational program developers noted that they do not attempt to design, regulate or monitor specific robotic accountability levels. After reflecting on robotics accountability levels, one programmer noted that an ongoing question robotics program developers must consider in terms of accountability is, “Can the robot be used as a form of human replacement in the classroom?” When discussing accountability and privacy classroom concerns related to vulnerable populations such as the poor or disabled, other programmers asked a similar question. Summing up robotics program developers’ concerns on the perceived lack of robotic accountability and privacy, a programmer noted that information stored on robots might be “subsequently accessed by others.” Robotics program developers recognize that they must continue to ask questions and monitor how accountability and privacy levels are perceived.
In robotics, accountability is not based on predetermined, rigid scales. Educational robotics program developers indicated computational ethics robots are programmed to utilize flexible, situational moral agency skills. Several programmers posited robotic brains rely on basic problem-solving and critical-thinking accountability models, and educational robotic ethical accountability depends on outcomes supported by searches of computational theoretical frameworks, case studies and research-based studies. Another programmer noted humans are “accountable for actions taken and having the ability to individually make right and wrong decisions.” When educators perceive robots in human terms, accountability levels appear diminished.
Lingering, unanswered accountability questions and educator perceptions of the importance of establishing accountability levels create concerns about the use of robots in the educational setting. Robotics developers identified specific questions of concern, and they acknowledged why educators perceived a possible lack of accountability. One programmer agreed that in the classroom “ethical regulations and traceability of robot actions is of concern.” Because robotics program developers program robots to serve as classroom tutors, tools or peers, F7 believed the answer to the question “Who will bear ethical and social responsibility for robot behaviors?” has not been determined. Ongoing, unanswered accountability questions increase concerns related to the use of robots in classrooms.
Designs of robots used in the classroom impact accountability perceptions. Robotic program developers perceive a positive relationship between classroom robots and students. F29 explained, “Computationally, educational robots are designed to appear as if they understand and care for learners and masquerade as friends and companions” by promoting appropriate social and ethical behaviors. Because robots are programmed to emulate caring, ethical behaviors, students tend to perceive a genuine affection. F12 emphasized that “children tend to form strong attachments to robot companions and robot teachers.” Learners’ positive perceptions and attachments to robots create a high level of accountability expectations.
Theme 3: robots as non-human
The third theme was robots as non-human. Traditionally, educational ethics are defined by a set of rules established for all stakeholders in all situations. Robotics program developers visualized the role of ethics through a different lens. Programmers emphasized that “robots are not humans,” and robotics objectives are not based on the emulation of human educational ethics. F16 firmly acknowledged that robots emulate ethics based on a “multitude of research, theoretical frameworks, and case study situations.” Robotics program developers believed the use of the computational process stimulated increased flexibility in ethical decision making. F9 noted a need for ethical decision making which is “flexible in a changeable environment.” Robotic program developers perceived a need for a more flexible ethical decision-making process.
Emulation of human ethics to program non-human robots used in the educational setting is not a robotics program developers’ objective. To help educators understand the focus on non-human ethical objectives, robotics programmers created “roboethics” roadmaps for educators. According to F25, the roadmaps will “provide a systematic assessment of the ethical issues involved in Robotics R&D, increase understanding of the problems at stake, and promote further studying and transdisciplinary research.” Robotics program developers recognize that robots are non-human, and the emulation of human ethics does not meet robotic objectives.
Non-human moral agents play a distinct and needed role in the educational setting. Robotic developers argued the importance of non-human robotic moral agency. F11 posited, “Robot ethics depends upon the notion that robots might in some sense be moral agents in their own rights.” Educational robots are accountable for recognizing, synthesizing and processing well-known and highly appraised models which are the most appropriate for ethical situations. The role of non-human moral agents in educational settings is a role which robots can fulfill.
The study findings answered the main research question. The main research question is as follows:
How do robotics program developers perceive the role of emulation of human ethics when programming robots for use in educational settings?
Robotic program developers perceived robots in the educational setting as non-human moral agents. Therefore, the role of robotic program developers was not to emulate human ethics in the educational setting. Robotics program developers viewed the need for flexible ethical decision making in the educational setting, and, to avoid the emulation of human ethics, they perceived the role of robots was to use a multitude of database searches of research, theoretical frameworks and case studies for ethical decision making.
The findings addressed RQ1a. Robotics program developers viewed the importance of situational human ethics interpretations and implementations. To facilitate flexibility, robotics program developers programmed robots to search computer-based ethics-related research, frameworks and case studies.
For the second sub-question, RQ1b, robotics program developers acknowledged the importance of human ethics, but they felt more flexibility was needed in the role of how classroom human ethical models were created, developed and used. However, some robotic program developers expressed questions and concerns about the implementations of flexible robot ethical accountability levels and behaviors in the educational setting.
In response to the last sub-question on how robotics program developers perceive the role of emulation of human ethics in robots, robotics program developers did not perceive that a role existed. Robotics program developers argued that educational robots were not designed or programmed to emulate human ethics.
Limitations, significance and implications of study
One limitation of the study was that only 32 online, public articles written by robotics program designers were analyzed through qualitative content analysis to find themes and patterns. In qualitative content analysis studies, findings may not be as generalizable as in quantitative studies. Another limitation was that only a limited number of articles written by robotics program developers existed which addressed robotics and the emulation of human ethics in the educational setting.
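The summative content analysis applied to the 32 articles rests on first counting occurrences of keywords related to the phenomenon across the corpus and then interpreting them in context (Hsieh and Shannon, 2005). A minimal sketch of the counting step, with hypothetical keywords and two invented excerpts standing in for the actual articles:

```python
from collections import Counter
import re

# Hypothetical keyword set related to the phenomenon under study.
KEYWORDS = {"ethics", "moral", "accountability", "privacy"}

# Invented stand-ins for the corpus of 32 developer-authored articles.
articles = [
    "Robot ethics depends on moral agency and accountability.",
    "Privacy and accountability concerns shape classroom ethics.",
]

# Tally each keyword occurrence across all articles.
counts = Counter()
for text in articles:
    for word in re.findall(r"[a-z]+", text.lower()):
        if word in KEYWORDS:
            counts[word] += 1

print(counts.most_common())
```

In the actual study the counting stage would only be the starting point; the qualitative, interpretive reading of each keyword in its surrounding context is what yields the themes reported above.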
The significance of this study is the need for a renewed global initiative in education to promote debates, research and on-going collaboration with scientific leaders on ethics and programming robots. The implication for education leaders is to provide ongoing professional development on the role of ethics in education and to create best practices for using robots in education to promote increased student learning and enhance the teaching process (Vollmer et al., 2016).
Traditional and widely accepted educational conceptual frameworks are idealistic, and ethical models related to the use of robotics are non-existent (Hersh, 2014). Educational leaders must initiate a change program which will include the re-evaluation of mission statements and the possible implications of the role of emulation of human ethics by robots used in the educational setting. During the re-evaluation, issues of possible “physical or psychological harm” to students and the learning community must be examined, as well as how the use of robots by educators may benefit learners (Kitchener, 1984).
Change programs must include all stakeholders who understand the need for investigating and establishing high levels of expectations for the emulation of human ethics by robots. Leaders must convince stakeholders that a moral reason for change is needed, and leaders must demonstrate how change will benefit global educational leaders, instructors, students, parents, the community and robotic programmers. A trusting foundation for collaboration among all stakeholders, including robotic programmers, must be built, and methods for on-going communications must be established and developed (Fullan, 2006).
The implications of this study are global. All cultures will be affected by the shift robotics brings to how students are taught ethical decision making in the educational setting. Robotics program developers will create computational educational moral models which will replace archetypal educational ethics frameworks. Because robotics program developers do not classify robots as human, educators, parents and communities will continue to question the use of robots in educational settings, and they will challenge robotics ethical dilemmas, moral standards and computational findings. The examination of robotics program developers’ perspectives through different lenses may help close the gap and establish a new understanding among all stakeholders (Borenstein and Arkin, 2016).
Alimisis, D. (2012), “Robotics in education & education in robotics: shifting focus from technology to pedagogy”, Presented at 3rd International Conference on Robotics in Education, Prague.
Anderson, M. and Anderson, S.L. (2010), “Robot be good”, Scientific American, Vol. 303 No. 4, pp. 72-77.
Ashrafian, H. (2015), “AIonAI: a humanitarian law of artificial intelligence and robotics”, Science and Engineering Ethics, Vol. 21 No. 1, pp. 29-40.
Bacivarov, I. and Ilian, V. (2012), “The paradigm of utilizing robots in the teaching process: a comparative study”, International Journal of Technology & Design Education, Vol. 22 No. 4, pp. 531-540, doi: 10.1007/s10798-011-9157-5.
Bates, R. (2004), “A critical analysis of evaluation practice: the Kirkpatrick model and the principle of beneficence”, Evaluation and Program Planning, Vol. 27 No. 3, pp. 341-347.
Baxter, P., Ashurst, E., Kennedy, J., Senft, E., Lemaignan, S. and Belpaeme, T. (2015), “The wider supportive role of social robots in the classroom for teachers”, Presented at 7th International Conference on Social Robotics, Paris.
Bogue, R. (2014), “Robot ethics and the law: part one: ethics”, Industrial Robot: An International Journal, Vol. 41 No. 4, pp. 325-339.
Borenstein, J. and Arkin, R. (2016), “Robotic nudges: the ethics of engineering a more socially just human being”, Science & Engineering Ethics, Vol. 22 No. 1, pp. 31-46, doi: 10.1007/s11948-015-9636-2.
Bottino, R. (2016), “Societal challenges and new technologies: education in a changing world”, International Journal of Cyber Ethics in Education, Vol. 4 No. 1, pp. 46-55.
Chih-Wei, C., Jih-Hsien, L., Po-Yao, C., Chin-Yeh, W. and Gwo-Dong, C. (2010), “Exploring the possibility of using humanoid robots as instructional tools for teaching a second language in primary school”, Journal of Educational Technology & Society, Vol. 13 No. 2, pp. 13-24.
Eguchi, A. (2015), “Integrating educational robotics to enhance learning for gifted and talented students”, in Lennex, L. and Nettleton, K. (Eds), Cases on Instructional Technology in Gifted and Talented Education, IGI Global, Hershey, PA, pp. 54-90.
Fullan, M. (2006), “Change theory: a force for school improvement”, Centre for Strategic Education, pp. 3-14.
Gelin, R. (2013), “Robotics supporting autonomy”, Journal International de Bioéthique, Vol. 24 No. 4, pp. 59-70, 180-181.
Gura, M. (2013), Student Robotics and the K-12 Curriculum, George Lucas Educational Foundation, available at: www.edutopia.org/blog/student-robotics-k-12-curriculum-mark-gura
Hersh, M. (2014), “Science, technology and values: promoting ethics and social responsibility”, AI & Society, Vol. 29 No. 2, pp. 167-183.
Holbrook, B. (2005), “Assessing the science–society relation: the case of the US National Science Foundation’s second merit review criterion”, Technology in Society, Vol. 27 No. 4, pp. 437-451.
Hsieh, H.-F. and Shannon, S.E. (2005), “Three approaches to qualitative content analysis”, Qualitative Health Research, Vol. 15 No. 9, pp. 1277-1288.
Jordon, M.E. and McDaniel, R.R. (2014), “Managing uncertainty during collaborative problem solving in elementary school teams: the role of peer influence in robotics engineering activity”, Journal of Learning Sciences, Vol. 23 No. 4, pp. 490-537.
Julià, C. and Antolí, J.Ò. (2016), “Spatial ability learning through educational robotics”, International Journal of Technology and Design Education, Vol. 26 No. 2, pp. 185-203, available at: http://dx.doi.org/10.1007/s10798-015-9307-2
Kazakoff, E., Sullivan, A. and Bers, M. (2013), “The effect of a classroom-based intensive robotics and programming workshop on sequencing ability in early childhood”, Early Childhood Education Journal, Vol. 41 No. 4, pp. 245-255, doi: 10.1007/s10643-012-0554-5.
Kitchener, K.S. (1984), “Intuition, critical evaluation, and ethical principles: the foundation for ethical decisions in counseling psychology”, Counseling Psychologist, Vol. 12 No. 3, pp. 43-55.
Kubilinskiene, S. (2017), “Applying robotics in school education: a systematic review”, Journal of Peer Learning, Vol. 5 No. 1, pp. 50-69.
Lincoln, Y.S. and Guba, E.G. (1985), Naturalistic Inquiry, Sage Publications, Beverly Hills, CA.
McBride, N. and Hoffman, R.R. (2016), “Bridging the ethical gap: from human principles to robot instructions”, IEEE Intelligent Systems, Vol. 31 No. 3, pp. 76-82.
Martinov-Bennie, N. and Mladenovic, R. (2015), “Investigation of the impact of an ethical framework and an integrated ethics education on accounting students’ ethical sensitivity and judgment”, Journal of Business Ethics, Vol. 127 No. 1, pp. 189-203.
Mazzoni, E. and Benvenuti, M. (2015), “A robot-partner for preschool children learning English using socio-cognitive conflict”, Journal of Educational Technology & Society, Vol. 18 No. 4, pp. 474-485.
Nandy, B.R. and Sarvela, P.D. (1997), “Content analysis reexamined: a relevant research method for health education”, American Journal of Health Behavior, Vol. 21 No. 3, pp. 222-234.
Nimberka, R., Khairnar, S. and Mhatre, M. (2016), “Artificial intelligence in robotics”, Imperial Journal of Interdisciplinary Research, Vol. 2 No. 11, pp. 158-161.
Rihtarsic, D., Avsec, S. and Kocijancic, S. (2016), “Experiential learning of electronics subject matter in middle school robotics courses”, International Journal of Technology and Design Education, Vol. 26 No. 2, pp. 205-224, available at: http://dx.doi.org/10.1007/s10798-015-9310-7
Rutherford, A. (2003), “B. F. Skinner’s technology of behavior in American life: from consumer culture to counterculture”, Journal of the History of the Behavioral Sciences, Vol. 39 No. 1, pp. 1-23, doi: 10.1002/jhbs.10090.
Shannon, L. (2015), “BEST robotics practices”, International Journal of Information and Education Technology, Vol. 5 No. 3, pp. 179-183, available at: http://dx.doi.org/10.7763/IJIET.2015.V5.498
Somyürek, S. (2015), “An effective educational tool: construction kits for fun and meaningful learning”, International Journal of Technology and Design Education, Vol. 25 No. 1, pp. 25-41, available at: http://dx.doi.org/10.1007/s10798-014-9272-1
Toh, L.P.E., Causo, A., Tzuo, P.W., Chen, I.M. and Yeo, S.H. (2016), “A review on the use of robots in education and young children”, Educational Technology & Society, Vol. 19 No. 2, pp. 148-163.
Torrance, S. (2008), “Ethics and consciousness in artificial agents”, AI & Society, Vol. 22 No. 4, pp. 495-521, doi: 10.1007/s00146-007-0091-8.
Umpstead, R., Brady, K., Lugg, E., Klinker, J. and Thompson, D. (2013), “Educator ethics: a comparison of teacher professional responsibility laws in four states”, Journal of Law & Education, Vol. 42 No. 2, pp. 183-225.
Vollmer, A., Wrede, B., Rohlfing, K. and Oudeyer, P. (2016), “Pragmatic frames for teaching and learning in human-robot interaction: review and challenges”, Frontiers in Neurorobotics, Vol. 10, pp. 1-20.
Weber, R.P. (1985), Basic Content Analysis, Sage, Beverly Hills, CA.
Yin, R.K. (2003), Case Study Research: Design and Methods, 3rd ed., Sage, Thousand Oaks, CA.
Zirkel, P.A. (2014), “State ethical codes for school leaders”, Journal of Law and Education, Vol. 43 No. 4, pp. 503-534.
Altin, H. and Pedaste, M. (2013), “Learning approaches to applying robotics in science education”, Journal of Baltic Science Education, Vol. 12 No. 3, pp. 365-377.