Abstract
Purpose
This exploratory study innovates the pedagogy of undergraduate business research courses by integrating Generative Artificial Intelligence (GAI) tools, guided by human-centered artificial intelligence, social-emotional learning, and authenticity principles.
Design/methodology/approach
An insider case study approach was employed to examine an undergraduate business research course where 72 students utilized GAI for coursework. Thematic analysis was applied to their meta-reflective journals.
Findings
Students leverage GAI tools as brainstorming partners, co-writers, and co-readers, enhancing research efficiency and comprehension. They exhibit authenticity and human-centered AI principles in their GAI engagement. GAI integration imparts relevant AI skills to students.
Research limitations/implications
Future research could explore how teams collectively interact with GAI tools.
Practical implications
Incorporating meta-reflections can promote responsible GAI usage and develop students' self-awareness, critical thinking, and ethical engagement.
Social implications
Open discussions about social perceptions and emotional responses surrounding GAI use are necessary. Educators can foster a learning environment that nurtures students' holistic development, preparing them for technological challenges while preserving human learning and growth.
Originality/value
This study fills a gap in exploring the delivery and outcomes of AI-integrated undergraduate education, prioritizing student perspectives over the prevalent focus on educators' viewpoints. Additionally, it examines the teaching and application of AI for undergraduate research, diverging from current studies that primarily focus on research applications for academics.
Citation
Aure, P.A. and Cuenca, O. (2024), "Fostering social-emotional learning through human-centered use of generative AI in business research education: an insider case study", Journal of Research in Innovative Teaching & Learning, Vol. 17 No. 2, pp. 168-181. https://doi.org/10.1108/JRIT-03-2024-0076
Publisher
Emerald Publishing Limited
Copyright © 2024, Patrick Adriel Aure and Oriana Cuenca
License
Published in Journal of Research in Innovative Teaching & Learning. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode
Introduction
Artificial Intelligence (AI) has long been used to supplement learning in Higher Education Institutions (HEIs). AI is utilized through learning analytics, aiding in curriculum sequencing, instructional design, and student clustering while leveraging big data analysis to enhance teaching strategies (Crompton and Burke, 2023). In recent years, a new generation of powerful Large Language Models (LLMs) has emerged with the potential to revolutionize higher education through their ability to generate human-like text that is indistinguishable from writing produced by humans (Alshater, 2022; Benuyenah, 2023; Rudolph et al., 2023; Zhai, 2022).
Given the growing accessibility and advancing capabilities of these tools, new opportunities emerge for their utilization in applied learning, such as conducting research. Generative Artificial Intelligence (GAI) can enhance research efficiency by assisting in the analysis of data, generating simulations, and communicating findings effectively (Alshater, 2022), while also optimizing the research process through proofreading, citation management, and manuscript editing (Abd-Elsalam and Abdel-Momen, 2023). However, existing studies primarily focus on GAI’s utilization by established academics, leaving limited exploration of its potential for research instruction, particularly at the undergraduate level, where students are at the preliminary stages of building proficiency.
Furthermore, there are notable concerns regarding the integration of GAI in education, which present challenges to its use as a learning tool. Researchers have noted issues with biases (Belenguer, 2022), lack of transparency in how the tools generate text, and limited ability to reason or show actual thinking (Alshater, 2022; Dehouche, 2021; Skavronskaya et al., 2023). There are also risks of academic misconduct should students utilize GAI chatbots like ChatGPT to generate whole papers that remain undetectable by plagiarism detection software (Dehouche, 2021; Rudolph et al., 2023; Zhai, 2022), thereby compromising the ability of academic staff to assess the student’s understanding and analysis of the subject matter (Perkins, 2023). Several recommendations have been made to create pedagogical strategies that embrace generative AI tools in constructive ways (Cotton et al., 2023; Farrelly and Baker, 2023; Skavronskaya et al., 2023; Zhai, 2022). A key challenge with these studies lies in their speculative nature or their reliance on recommendations derived from existing research that have not been tested for efficacy in real classroom settings. As a result, the implications and challenges of these pedagogical strategies must be explored further. Compounding the issue, Strzelecki (2023) underscores a prevalent scholarly focus on the viewpoints of educators and researchers, with insufficient attention paid to students' perspectives.
Thus, this exploratory paper aims to bridge the gap by examining the delivery and outcomes of an undergraduate business research program where students were guided to utilize GAI to fulfill their coursework requirements. This study endeavors to enhance current pedagogical practices for responsible AI use by analyzing students' reflections on their utilization of GAI during the course, guided by a concept of responsibility grounded in Human-Centered Artificial Intelligence (HCAI) and authenticity. To achieve these aims, the study was steered by the following research question:
How can instructors innovate the pedagogy of undergraduate business research courses guided by the principles of human-centered artificial intelligence, social-emotional learning, and authenticity?
Literature review
Capabilities, impacts, and limitations of generative AI tools
GenAI tools like ChatGPT hold substantial promise for reshaping higher education (Alshater, 2022; Mollick and Mollick, 2023). Existing literature on AI integration in education primarily focuses on utilizing AI as a teaching aid and supplementary learning tool. In Kim and Bennekin's (2016) study, AI chatbots were used to tailor responses to students' needs in goal formation, initiation, action control, and emotional regulation, aiming to foster persistence in academic pursuits and enhance performance. Notably, AI chatbots have shown particular efficacy as conversation partners in language learning, facilitating the understanding of local terminology and idioms within context (Farrelly and Baker, 2023).
Some studies have found applications of generative AI in the academic research writing process. First, these tools exhibit remarkable efficiency in data processing and analysis, enabling the handling of vast datasets and the identification of relevant information and patterns, which may streamline labor-intensive tasks for researchers (Alshater, 2022). Second, these tools excel at generating realistic scenarios, offering researchers and students diverse testing grounds for theories and possibly catalyzing new insights and advancements in scientific understanding (Alshater, 2022). Third, ChatGPT and similar chatbots are adept at articulating research findings clearly and succinctly: by providing summaries, explanations, and examples, AI can enhance the accessibility of complex ideas and facilitate more effective dissemination of research outcomes (Alshater, 2022; Mollick and Mollick, 2023). Fourth, generative AI has been found to streamline information acquisition, compilation, and consolidation, which may aid in literature searches, summarizing readings, and even generating hypotheses (Chan and Hu, 2023). Fifth, AI has been used by researchers for mentorship on complex technical issues through tutorials tailored to the context and skill level of the researcher (Berg, 2023).

However, researchers have also noted limitations in generative AI tools that temper their potential impacts (Alshater, 2022; Skavronskaya et al., 2023). GenAI tools cannot evaluate the accuracy of the content they produce or identify any false or misleading information in it (Lubowitz, 2023). ChatGPT and similar models show an inability to deeply analyze data and properly interpret findings (Alshater, 2022). They also lack the ability to reason or show any sign of “actual thinking,” instead regurgitating information they have been trained on (Skavronskaya et al., 2023) or providing outputs that are generic and lack both breadth and depth (Rudolph et al., 2023). Further, the effectiveness of AI depends on the quality of its training data, which may lack domain-specific knowledge, posing a potential challenge in academic contexts (Alshater, 2022).
Towards ethical, responsible, and authentic use of generative AI in business research
In higher education, a significant concern that has garnered considerable scholarly attention and raised alarm is the potential for academic dishonesty associated with AI usage. Researchers have expressed worries about the possibility of students utilizing ChatGPT to generate papers that evade detection by plagiarism software and present them as their own work (Dehouche, 2021; Rudolph et al., 2023). ChatGPT is capable of producing text that is coherent, accurate, and structured, yet appears to be original, posing a challenge to the integrity of written assessment practices (Zhai, 2022). Universities perceive AI as an external tool that may constrain students' independent efforts and intellectual contributions (Luo, 2024).
These concerns have prompted higher education instructors and administrators to view AI with skepticism, but as Sison et al. (2023) argue, Generative AI cannot have malicious intentions because it is incapable of intent. GenAI does not consider nor can it distinguish between truth and falsehood but rather predicts text based on statistical correlations. Artificial Intelligence is a tool, and its pitfalls are a product of misunderstanding and misuse. Shneiderman (2022) proposes a possible bridge between ethical concerns and practical realities through a focus on Human-Centered Artificial Intelligence.
Human-centered artificial intelligence (HCAI)
Schmidt (2020) defines Human-Centered AI (HCAI) as a principle for designing systems with a clear human-centric purpose. The overarching aim is to leverage innovative algorithms and methodologies to enhance individual and societal capacities, thereby maximizing the utility of AI for humanity. HCAI emphasizes placing humans at the core of systems design thinking, shifting the perspective away from perceiving AI as intelligent autonomous teammates and instead portraying them as potent tool-like appliances (Shneiderman, 2020).
While the emphasis lies on the technical aspects of AI development, this definition of HCAI implicitly underscores the user’s responsibility not to cede control entirely to AI systems but rather to remain vigilant and critical of their outputs. Human-centeredness can only be achieved if humans are willing to exert their autonomy and take accountability for the goals and outcomes of AI usage. As Sison et al. (2023) contend, AI is ultimately inanimate, so the responsibility is, in part, on the user to strive for HCAI. In essence, HCAI strives to promote human flourishing (Sison et al., 2023) and may be considered the ideal for how AI should be designed by its creators and regarded by its users. Therefore, this paper defines responsible AI usage as the utilization of AI aligned with HCAI principles and goals.
Authenticity
On an empirical level, assessing whether individuals are truly adhering to the principles of Human-Centered AI (HCAI) usage can be challenging, as it involves examining unobservable mental processes. For instance, merely analyzing written excerpts from students may not reveal the depth of critical thought applied. Students themselves may not recognize their own thinking processes or realize when they have ceded autonomy. To operationalize HCAI usage, this study adopts Lonergan's framework of Authenticity.
Authenticity, as outlined by Coghlan (2022), involves continual self-appropriation through the transcendental precepts: be attentive to experience, intelligent in understanding, reasonable in judging, and responsible in deciding and acting. It is the constant exercise of critical self-reflection on the alignment between the acts of knowing and doing, and its imperative construction encourages vigilance towards the cognitional processes. Authenticity provides structure to the mental mechanisms of engaging with AI, and “authentic” AI usage is understood as usage that has undergone a critical and reflective process grounded in the transcendental precepts with the goal of adherence to HCAI principles.
Social-emotional learning (SEL)
Social and emotional learning (SEL) is the process of cultivating skills to recognize and regulate emotions, demonstrate empathy, make responsible decisions, foster positive relationships, and navigate challenging situations adeptly (CASEL, 2003). While traditionally applied to childhood learners, Conley (2015) argues that the lack of structure in higher education compels students to transition from external to internal responsibility, which highlights the value of SEL competencies for positive adjustment.
Synthesis
The integration of AI in education can be effectively guided by a framework that synthesizes the principles of SEL, authenticity, and HCAI. This approach prioritizes holistic student development, emphasizing human agency, ethics, and positive human-AI interactions in AI integration within education. Table 1 summarizes the relationship between the three lenses.
The intersections among SEL, authenticity, and HCAI form a strong basis for the ethical and effective incorporation of AI in education. SEL competencies such as self-awareness, self-management, and responsible decision-making, as adapted by Conley (2015) from CASEL's framework (CASEL, 2003) to suit higher-education demographics, closely resonate with the principles of authenticity, which advocate being attentive in experience, intelligent in understanding, reasonable in judgment, and responsible in decision and action.
These competencies and principles empower students to engage critically with AI tools, maintaining control over their learning process and making informed decisions about AI usage. SEL principles, such as social awareness and relationship skills, align closely with HCAI principles of considering societal impact and fostering positive human-AI collaboration. By cultivating these competencies, students may approach AI as a tool for enhancing human capabilities and promoting social good, rather than replacing human intelligence. Framing AI integration through SEL, authenticity, and HCAI lenses enables educators to create empowering and socially responsible learning environments.
Methodology
Content delivery
The course discussed in this paper is a 14-week undergraduate business research course where students form research groups to produce a final paper following the latest American Psychological Association (APA) style. The instructor provided comprehensive guidance through the research process, covering foundational topics such as philosophy of science, research question formulation, literature review, and research design (both qualitative and quantitative). The goal was for students to develop essential skills in designing measurement instruments, collecting and processing primary and secondary data, and applying analytic approaches to derive theoretical and practical insights.
AI pedagogical innovation
At the course’s outset, students were introduced to diverse AI tool applications in research, covering literature gathering, summarization, drafting, and copyediting. The pedagogical approach involved the professor facilitating learning through face-to-face and online feedback sessions. Social media group chats tailored to research topics were used for close monitoring and consultations. To promote responsible AI use, students were required to maintain meta-reflective journals, fostering SEL, authenticity, and HCAI principles. This comprehensive approach prioritizes student holistic development while emphasizing human agency, ethics, and positive human-AI interactions. Instructor reviews of journals assessed student progress, with students appending prompts and AI tools used in their research methodology section.
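To make the journaling mechanism concrete, the sketch below shows one way a meta-reflective entry could be structured around the elements described above (the task, the AI tool, the prompts used, and the student's evaluation). The field names are illustrative assumptions; the paper does not publish the course's actual journal template.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MetaReflectionEntry:
    """One journal entry for a single AI-assisted task.

    Field names are illustrative assumptions, not the course's
    actual instrument.
    """
    task: str                # e.g. "literature summarization"
    ai_tool: str             # e.g. "ChatGPT", "Perplexity", "Elicit"
    prompts_used: List[str]  # appended to the methodology section
    output_quality: str      # student's evaluation of the AI output
    inefficiencies: str      # where the tool fell short
    decision: str            # what was kept, revised, or redone by hand

entry = MetaReflectionEntry(
    task="summarize a journal article on consumer trust",
    ai_tool="ChatGPT",
    prompts_used=["Summarize the key findings and limitations of ..."],
    output_quality="accurate on findings, vague on methodology",
    inefficiencies="needed two follow-up prompts to surface limitations",
    decision="kept the summary skeleton; rewrote the critique myself",
)
```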
Design of the study
A qualitative case study involves a thorough investigation of a system, such as an activity or individuals, through extensive data collection within a bounded timeframe (Creswell, 2014). This methodology examines the unit of analysis within its natural environment. Insider case studies, conducted by researchers embedded in the context, offer nuanced insights based on firsthand experience (Brannick and Coghlan, 2007). This design allows for a deeper understanding of student performance and concerns through close communication and supervision. Previous studies have utilized insider case studies to explore pedagogical innovations (Lisewski, 2004; Unluer, 2012).
Data gathering
Data was collected from meta-reflective journals written by 72 undergraduate students and the observations/meta-reflections of the research professor. Reflection questions focused on evaluating content co-created with AI tools, including prompt effectiveness, output quality, and identifying inefficiencies. Student meta-reflections serve as valuable data because they capture the learner’s voice and perspective, aligning with SEL and authenticity principles. They enable tracking of students' cognitive development, knowledge integration, emotional responses, and growth in critical analysis skills over time, offering insights into the learning process.
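As an illustration of the analytic step, the sketch below shows one way coded journal excerpts could be grouped into candidate themes. The toy data, column names, and codes are hypothetical; the paper does not specify the authors' actual coding procedure or software.

```python
import pandas as pd

# Hypothetical coded excerpts: one row per journal excerpt, with a
# researcher-assigned code (toy data, not the study's actual corpus).
excerpts = pd.DataFrame({
    "student_id": [1, 1, 2, 3],
    "excerpt": [
        "AI summaries made compiling my review much faster.",
        "I had to fact-check the citations it gave me.",
        "Perplexity helped narrow down relevant sources.",
        "Using AI sometimes felt like cheating.",
    ],
    "code": ["efficiency", "verification", "curation", "ethical unease"],
})

# Cluster codes into candidate themes for review against the full corpus.
theme_map = {
    "efficiency": "Benefits of AI in research writing",
    "curation": "Benefits of AI in research writing",
    "verification": "Authentic, human-centered usage",
    "ethical unease": "Reservations about AI",
}
excerpts["theme"] = excerpts["code"].map(theme_map)
print(excerpts.groupby("theme")["excerpt"].count())
```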
Results and discussions
Benefits of AI in the research writing process
Table 2 illustrates key themes and quotes depicting how students utilized AI tools in research writing. The data demonstrates that students employ AI to initiate the writing process, sift through academic sources, conduct preliminary data analysis, and refine their manuscripts. Interestingly, the role of AI as a learning support tool was also applicable in the research writing context. Students used AI as a virtual tutor to enhance their technical skills for statistical analysis and to familiarize themselves with theories relevant to their studies.
Apart from functional advantages, the data underscores several experiential benefits. The primary value proposition of AI tools for students was savings in time and effort. Students consistently highlighted that “utilizing AI in our research has greatly accelerated some traditionally lengthy parts of the process,” allowing them more time for “deeper analysis and critical thinking.” AI also mitigated the sense of overwhelm experienced during the research process. One student discussed being prone to distraction and tangential digressions, leaving them feeling “lost in a sea of information”; the AI tools introduced a direct and focused approach that carried them through this state of inundation. Finally, the act of conversing with AI tools tested students’ foundational knowledge and revealed gaps in their understanding (“I had a difficult time phrasing what I wanted to ask the AI. It made me realize I lacked foundational knowledge about methodological approaches”), which they then supplemented with the guidance of AI.
Authentic and human-centered AI usage aligned with SEL
The data indicates that students exhibited authenticity and HCAI principles in their engagement with AI. They displayed attentiveness to AI output, expressing concerns over its potential for generating “hallucinations and false information.” Moreover, they acknowledged the trade-off between the efficiency of using AI tools and “nuance or accuracy,” and they emphasized the importance of “fact-checking AI-generated content.” The students also noted limitations in AI output, describing it as providing “overly general” and “Western-centric” recommendations for relevant literature. Students demonstrated intelligence by assessing AI output against their own knowledge and against the purpose and direction of their writing to gauge its applicability. Upon identifying AI’s limitations, students reoriented their perspective to see it as “merely a supplement” for completing auxiliary tasks. They further displayed responsibility in acting by opting to manually complete tasks they deemed conducive to meaningful learning experiences. Students also ultimately made the judgment call to include only AI output they deemed factual, relevant, ethical, and reflective of their research goals. This commitment to authenticity is vividly portrayed in the following quote:
I first attempted to see how Perplexity would interpret our findings. What I observed was slightly similar to the insights provided by AI; however, I find that when they do analyses, it’s mostly empty and not as comprehensive. When I saw the first few answers weren’t that helpful, I decided to find patterns myself. From there, I compiled the transcript to group into the themes and used Perplexity to aid in building on the idea and patterns. I found it more convenient to use AI this way, to help me speed up the process of writing. After that, the analysis I had written was not assisted by AI since I had already gotten a clear view of what I wanted to present.
While students generally demonstrated authentic usage of AI, some lapses occurred in their application, particularly in assuming that AI can verify information. Students attempted to ask AI to “point out any missing information or flaws in logic to double-check quality” and in some cases would “use multiple AI tools and cross-reference their outputs.” They also sought AI’s assessment of whether their output “made sense and if it was correct.” This poses an issue because determining flaws or confirming accuracy requires AI to make judgments of right and wrong, which it is incapable of doing by itself at the time of writing.
Impact of AI-integrated academic programs
Encouraging students to use AI led to enhancements in both their research writing skills and their proficiency in AI usage. Through hands-on experience, students grasped the importance of effective prompting and learned to craft better prompts for their purposes: “I gradually learned through trial and error how to better target my prompts, but crafting high-quality prompts that resulted in valuable output required continual refinement of my skills.” They also learned various techniques to optimize their interactions with AI, including providing context in their prompts, using appropriate jargon, avoiding overly broad or overly specific prompts, and approaching prompting as an iterative process. Furthermore, students learned to assess the strengths of AI tools and select those that best fit their needs. For example, one student found that “Perplexity encouraged users to delve deeper into broad topics” whereas Elicit was suitable for “swiftly identifying widely cited sources.”
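To illustrate the iterative prompting process students described, the snippet below sketches how a vague prompt might be progressively refined with context, domain jargon, and format constraints. The prompts and the `ask_ai` helper are hypothetical stand-ins, not prompts reported by participants.

```python
def ask_ai(prompt: str) -> str:
    """Hypothetical stand-in for a call to a chatbot such as ChatGPT."""
    ...

# Iteration 1: too broad, so the output tends to be generic.
draft = ask_ai("Tell me about consumer behavior.")

# Iteration 2: add context and domain-appropriate jargon.
draft = ask_ai(
    "I am writing an undergraduate business research proposal on repeat "
    "purchases in e-commerce. Summarize key theories of post-purchase "
    "consumer behavior relevant to customer retention."
)

# Iteration 3: constrain scope and format so the output is usable.
draft = ask_ai(
    "List three theories of post-purchase consumer behavior, each with a "
    "one-sentence definition and a seminal citation for me to verify."
)
```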
In contrast, a notable finding is that students harbor reservations about AI utilization. Some expressed concerns about the ethical implications, with one student finding it “difficult at times to use AI because it felt ‘illegal’”. Another student articulated: “concerns about potentially crossing ethical boundaries and compromising academic integrity restrained me, leading me to stick with familiar methods until late in the process.” Additionally, students are apprehensive about the potential consequences of over-reliance on AI, fearing it could “potentially distort or negatively impact the quality” of their work and “lead to reduced creativity and knowledge”. One student conveyed these reservations in the following quote:
The increasing possibility of ‘dependency’ on these AI tools if we use them too often is a recurring fear of mine, especially when I find myself using AI tools too often, even on tasks that I would otherwise be able to do on my own before tools like ChatGPT and Claude were created.
This skepticism aligns with Chan and Hu's (2023) findings, which suggest that students are wary of plagiarism and anticipate that dependence on AI could degrade growth, skills, and intellectual development.
Discussion
The instructor’s meta-reflections on integrating AI tools into a research methods course, triangulated with in-class observations of student behaviors and assessment of research paper performance, reveal a complex interplay of opportunities and challenges.
Students' reflections demonstrated varying success with prompting AI tools, highlighting the need for effective prompt engineering to promote self-management and responsible decision-making in AI-assisted learning. Some students demonstrated intelligent understanding by crafting effective prompts, leading to relevant outputs, while others struggled to articulate their needs clearly, resulting in generic responses. This raises questions about teaching and evaluating the skill to create effective prompts as a crucial competency.
Through the reflection process, students were prompted to consider the societal impact and ethical implications of AI, challenging preconceived notions about AI-assisted research. Some students expressed skepticism and a preference for traditional methods, and their reservations highlight the importance of fostering critical thinking and agency in AI-assisted learning.
Educators enhancing research writing courses with AI should emphasize its role as a collaborative tool and ensure instructor guidance to prevent AI from overtaking researchers' duties. True HCAI usage occurs when AI aids critical thinking, prompting students to evaluate and justify its outputs (Sison et al., 2023). Integrating AI should prioritize human flourishing by aiming for efficiency while allowing space for rational thinking and independent judgment. To strike this balance, the following roles for AI in the research writing process are recommended.
AI as brainstorming partner. Brainstorming entails freely exploring diverse possibilities to generate a wide array of ideas for subsequent refinement. AI is well-suited for brainstorming due to its dual capabilities: leveraging its inherent knowledge base from training data and accessing the vast resources of the internet. This combination enables AI to generate curated collective knowledge or “a significantly larger net of insights from which [students] can filter,” thereby facilitating various tasks such as compiling research articles relevant to a topic, identifying trends and patterns for initial data analysis, or outlining sections of a written work.
AI as co-writer. Generative AI may be used for tasks related to copy-editing, including enhancing language, readability, cohesion, spelling, and grammar. It is capable of reorganizing and restructuring the content of a rough draft and assisting writers in articulating their ideas through a conversational process of iteration. AI can generate coherent preliminary text from unstructured user input, providing a foundation for writers to build upon and revise according to their needs.
AI as co-reader. AI can simplify complex academic texts, making them easier to understand. It aids students' comprehension of scholarly work by extracting key points, summarizing relevant papers, and generating simplified explanations for jargon. Students can then ask follow-up questions to deepen their understanding of the topic. Furthermore, students can leverage AI to digest multiple papers simultaneously, facilitating a quicker grasp of the current scholarly conversation on a topic. The role of AI as co-reader holds significant implications for business research which often incorporates theories from diverse disciplines. Resources are typically designed for scholars with foundational expertise in a field. AI then emerges as a vital guide for simplifying complex concepts and enabling a practical comprehension necessary for undergraduate business research applications.
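The three roles can be read as distinct prompt patterns. The templates below are hedged illustrations of what each role might look like in practice; they are illustrative assumptions, not prompts drawn from the study's data.

```python
# Illustrative prompt patterns for the three recommended roles of AI in
# research writing. These templates are assumptions for exposition,
# not prompts reported by the study's participants.
ROLE_PROMPTS = {
    "brainstorming partner": (
        "Suggest ten possible research angles on employee retention in "
        "family-owned firms, noting which appear well studied."
    ),
    "co-writer": (
        "Improve the grammar, cohesion, and readability of this draft "
        "paragraph without changing its claims: {draft}"
    ),
    "co-reader": (
        "Explain the key argument of this passage in plain language and "
        "define any specialized jargon: {passage}"
    ),
}

print(ROLE_PROMPTS["co-reader"].format(passage="<pasted excerpt>"))
```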
Integrating AI into learning tasks, such as business research, imparts relevant AI skills to students and is a viable means for developing AI competencies. AI-involved learning tasks familiarized students with AI functionality and improved their skills in prompting, but most crucially, trained them to use AI for applied problem-solving. In the face of advancing AI technology, simply learning to use existing tools is insufficient; adaptability is key. Through classroom exploration, students gained fundamental skills for real-world application of AI, enhancing their comfort and confidence in navigating the evolving AI landscape.
Thus, given the depth of insights produced by this study, educators are encouraged to incorporate reflection papers as a pedagogical tool for promoting responsible AI usage. The benefits are twofold. First, progressive student meta-reflections give educators opportunities to catch problematic uses early and take corrective action, as in the case of using AI for verification. Second, structured reflections help students develop self-awareness and ethical engagement (McGuire et al., 2009). By framing reflection around authenticity, students gain guided insight into their thoughts and develop their skills in reflective practice. Encouraging constant awareness and vigilance in AI engagement fosters internal responsibility, which may guide students in spaces beyond the jurisdiction of university policy. As Schön (1983, as cited in Leigh and Bailey, 2013) contends, real-world situations are often uncertain, and excellence lies in transforming ambiguity into clarity through reflective practice.
Achieving student acceptance is paramount for effectively integrating AI into the learning process (Chan and Hu, 2023). However, the study’s findings reveal a challenge: students have ethical reservations and worry that relying on AI tools, even responsibly, might compromise the quality of their learning. For higher education institutions to successfully impart AI skills to students, they may need to redefine their role as “AI champions” and implement policies that foster a sense of safety and trust in utilizing AI.
Conclusions, implications, and provocations
This study examines the integration of AI tools in an undergraduate business research course, assessing their impact on research efficiency, comprehension, and skill development. Successful integration of AI requires a shift in traditional roles, with students becoming active co-creators of knowledge, teachers serving as facilitators, and AI tools augmenting human capabilities. The results suggest that AI can enhance students' research capabilities, equipping them for real-world applications.
The results of this study suggest several key implications. Practically, meta-reflections are recommended to promote responsible AI usage, self-awareness, critical thinking, and ethical engagement among students, with structured reflections yielding particularly positive outcomes. Socially, the study highlights the need for open discussions about the social perceptions and emotional responses to AI use, fostering an environment for authentic, reflective, and emotionally intelligent engagement with AI. As AI evolves in education, it raises questions about perceptions of guilt and cheating, teacher expectations, and prioritizing authenticity and emotional intelligence. Addressing these issues requires collaboration among educators, researchers, and policymakers to harness AI’s potential while preserving human learning and growth.
For policy applications, this paper supports the call of preceding studies (Chan and Hu, 2023; Perkins, 2023; Rudolph et al., 2023) for a comprehensive “living” AI policy, driven by the belief that clear guidance may alleviate concerns about AI and encourage adoption. As a living policy, it should be subject to ongoing discussion that pays attention not only to technical use but also to normative and social perceptions of AI tools. The emphasis is on human-centered agency and critical thinking, with recommended pedagogical tools such as meta-reflections and multiple rounds of formative, iterative feedback from teachers.
This study has advanced research on AI integration in higher education. However, the novelty of the field inspires numerous promising avenues for future research. As AI platforms begin to develop features for collaboration, studies may investigate more thoroughly the social dimensions of SEL and explore how teams can synchronously interact with AI tools. Examining group dynamics, shared responsibility, and collaborative decision-making in AI-assisted learning could yield valuable insights into fostering positive human-AI interactions and promoting social awareness and relationship skills.

Furthermore, longitudinal studies tracking students' AI engagement, skill development, and long-term outcomes could provide a more comprehensive understanding of the impact of AI integration on student growth and success. Researchers may explore how early exposure to AI in undergraduate education influences students' career trajectories, innovation capabilities, and ethical decision-making in professional settings.

Additionally, future research could delve deeper into the ethical and emotional dimensions of AI use in education. Qualitative studies exploring students' reservations, anxieties, and coping strategies could inform the development of supportive interventions and resources. Investigating the psychological factors underlying resistance to AI adoption and strategies for cultivating trust and acceptance could contribute to more effective AI integration policies and practices.
Table 1. Synthesis of conceptual lenses
| Social-emotional learning | Authenticity | Human-centered AI |
| --- | --- | --- |
| Self-awareness: Accurately recognizing one’s thoughts, emotions, and their impact on behavior, as well as assessing one’s strengths and limitations and maintaining a well-grounded sense of self-esteem, self-efficacy, confidence, perceived control, and optimism | Be attentive in experiencing | Recognizing the role of human awareness and understanding in AI usage |
| Self-management: Regulating one’s thoughts, emotions, and behaviors; managing stress; savoring emotional well-being; and employing skills such as coping, problem-solving, mindfulness, relaxation, and positive, productive thinking | Be intelligent in insighting and understanding; be reasonable in judging | Maintaining human control and regulation over AI systems; considering the societal impact and ethical implications of AI; fostering collaboration and positive relationships between humans and AI |
| Responsible decision-making: Making constructive, responsible, and ethical choices that promote the well-being of oneself and others while effectively managing goals, time, and tasks | Be responsible in deciding and acting | Making human-centric, ethical choices in AI development and usage |

Source(s): Authors’ own work
Table 2. Main themes on the benefits of AI for research writing
| Learning outcome | Theme | Illustrative quotes |
| --- | --- | --- |
| LO1: Write a critical review of related literature on a current or emerging topic of interest in the field of business or management, using the prescribed standards of the American Psychological Association (APA) | Extracting key findings from scholarly articles | “Being able to get the key points of complex research papers through AI summarization made compiling my own review so much faster.” |
| | Demystifying complex reading | “In reviewing different studies, I’ve found ChatGPT to be incredibly helpful in explaining content, especially when dealing with challenging jargon.” |
| | Aligning with current scholarly dialogue | “By providing detailed prompts asking for summaries of key concepts, seminal studies, open questions, etc., I was able to get an overview of the landscape efficiently.” |
| | Curating relevant articles | “To identify which literature was pertinent to the topic, I asked Perplexity to help find the most relevant research articles. This helped ease my research process as it was able to narrow down the sources from the numerous research papers online.” |
| LO2: Write a research proposal that clearly shows how the research question and objectives will be addressed by testing a business and/or management theory using an appropriate methodology | Identifying relevant theories and frameworks | “Navigating the vast landscape of theoretical frameworks and philosophies within scholarly space was something I found to be quite an arduous task. Starting the writing process becomes particularly challenging when confronted with an overwhelming multitude of options—some irrelevant and others that may transform the potential of my academic papers—that I may be yet to be aware of. In this regard, AI proved as an invaluable tool in presenting a curated list of such options and providing concise summaries for each.” |
| | Clarifying linkages within the theoretical framework | “AI provided a simplified, yet insightful summary and schematic that clearly outlined the key variables and causal relationships. This enabled me to efficiently evaluate the applicability of the framework to my specific research context.” |
| LO3: Write a research paper that resulted from the implementation of the research proposal, and one that benefits any organization in achieving its goals | *Methodology* | |
| | Designing survey instruments | “AI accelerated the development of the survey methodology by automatically generating an initial draft of suitable sociodemographic questions tailored to my target groups.” |
| | *Results, Discussion, and Analysis* | |
| | Transcribing recordings | “For the qualitative data, our group used Buzz Captions to automatically transcribe our interviews. Generally, it did a good job in transcribing the interview, but there were still some flaws or misworded phrases. Therefore, we made sure to double check the whole transcription again before analyzing it.” |
| | Coaching on statistical analysis | “The integration of AI significantly expedited the analysis process, providing insights and summaries that would have otherwise required extensive manual efforts. Without AI, I would have resorted to watching more YouTube videos to grasp the nuances of linear regression analysis.” |
| | Formulating initial statistical findings | “Claude AI helped me digest statistical data more. When all of the analysis tools needed from jamovi were completed, I ran all of them through Claude to get a better understanding since statistics, as I have said, is not really my strong suit.” |
| | Narrativizing quantitative data | “It has proven particularly valuable in the tasks of paraphrasing content and converting numerical data into a more narrative format.” |
| | Identifying patterns in qualitative data | “This AI tool efficiently identified potential themes, aiding in the creation of codes and sub-themes. While I did not rely solely on the AI-generated results, they served as a foundational guide, streamlining the process and ensuring a more structured and comprehensive analysis.” |
| | Providing perspective for cross-validation | “By comprehensively extracting themes systemic to the full interview data set, I was able to cross-check if my initial impressions of patterns matched what Claude computationally identified across the qualitative responses. This allowed a more balanced interpretation.” |
| | *Overall Research Writing Process* | |
| | Creating initial drafts | “The drafting process was made quicker through using AI for an initial paper framework. My process shifted from staring at a blank page to reworking existing text.” |
| | Refining ideas efficiently | “Before ChatGPT, I would spend 30 min on my word vomit, and about 1 h trying to find a way to connect it all together and write it cohesively.” |
| | Enhancing writing quality | “Additionally, I utilized ChatGPT to enhance the structure of my sentences, ensuring a seamless flow and incorporating a richer vocabulary, as I often found myself relying on the same words.” |
Source(s): Authors’ own work
References
Abd-Elsalam, K.A. and Abdel-Momen, S.M. (2023), “Artificial intelligence's development and challenges in scientific writing”, Egyptian Journal of Agricultural Research, Vol. 101 No. 3, pp. 714-717, doi: 10.21608/ejar.2023.220363.1414.
Alshater, M.M. (2022), “Exploring the role of artificial intelligence in enhancing academic performance: a case study of ChatGPT”, SSRN Scholarly Paper No. 4312358, doi: 10.2139/ssrn.4312358.
Belenguer, L. (2022), “AI bias: exploring discriminatory algorithmic decision-making models and the application of possible machine-centric solutions adapted from the pharmaceutical industry”, AI and Ethics, Vol. 2 No. 4, pp. 771-787, doi: 10.1007/s43681-022-00138-8.
Benuyenah, V. (2023), “Commentary: ChatGPT use in higher education assessment: prospects and epistemic threats”, Journal of Research in Innovative Teaching and Learning, Vol. 16 No. 1, pp. 134-135, doi: 10.1108/JRIT-03-2023-097.
Berg, C. (2023), “The case for generative AI in scholarly practice”, SSRN Scholarly Paper No. 4407587, doi: 10.2139/ssrn.4407587.
Brannick, T. and Coghlan, D. (2007), “In defense of being ‘native’: the case for insider academic research”, Organizational Research Methods, Vol. 10 No. 1, pp. 59-74, doi: 10.1177/1094428106289253.
Chan, C.K.Y. and Hu, W. (2023), “Students' voices on generative AI: perceptions, benefits, and challenges in higher education”, International Journal of Educational Technology in Higher Education, Vol. 20 No. 1, p. 43, doi: 10.1186/s41239-023-00411-8.
Coghlan, D. (2022), “Developing interiority through insider inquiry: enabling students' learning from organizational experiences”, The International Journal of Management Education, Vol. 20 No. 3, 100696, doi: 10.1016/j.ijme.2022.100696.
Collaborative for Academic, Social, and Emotional Learning (2003), “Safe and sound: an educational leader's guide to evidence-based social and emotional learning (SEL) programs”, CASEL, available at: https://casel.org/safe-and-sound-guide-to-sel-programs/
Conley, C.S. (2015), “SEL in higher education”, in Durlak, J.A., Domitrovich, C.E., Weissberg, R.P. and Gullotta, T.P. (Eds), Handbook of Social and Emotional Learning: Research and Practice, The Guilford Press, pp. 197-212.
Cotton, D.R.E., Cotton, P.A. and Shipway, J.R. (2023), “Chatting and cheating: ensuring academic integrity in the era of ChatGPT”, Innovations in Education and Teaching International, Vol. 61 No. 2, pp. 1-12, doi: 10.1080/14703297.2023.2190148.
Creswell, J.W. (2014), Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, 4th ed., SAGE Publications.
Crompton, H. and Burke, D. (2023), “Artificial intelligence in higher education: the state of the field”, International Journal of Educational Technology in Higher Education, Vol. 20 No. 1, p. 22, doi: 10.1186/s41239-023-00392-8.
Dehouche, N. (2021), “Plagiarism in the age of massive generative pre-trained transformers (GPT-3)”, Ethics in Science and Environmental Politics, Vol. 21, pp. 17-23, doi: 10.3354/esep00195.
Farrelly, T. and Baker, N. (2023), “Generative artificial intelligence: implications and considerations for higher education practice”, Education Sciences, Vol. 13 No. 11, p. 1109, doi: 10.3390/educsci13111109.
Kim, C. and Bennekin, K.N. (2016), “The effectiveness of volition support (VoS) in promoting students' effort regulation and performance in an online mathematics course”, Instructional Science, Vol. 44 No. 4, pp. 359-377, doi: 10.1007/s11251-015-9366-5.
Leigh, J. and Bailey, R. (2013), “Reflection, reflective practice and embodied reflective practice”, Body, Movement and Dance in Psychotherapy, Vol. 8 No. 3, pp. 160-171, doi: 10.1080/17432979.2013.797498.
Lisewski, B. (2004), “Implementing a learning technology strategy: top–down strategy meets bottom–up culture”, ALT-J, Vol. 12 No. 2, pp. 175-188, doi: 10.1080/0968776042000216228.
Lubowitz, J.H. (2023), “ChatGPT, an artificial intelligence chatbot, is impacting medical literature”, Arthroscopy, Vol. 39 No. 5, pp. 1121-1122, doi: 10.1016/j.arthro.2023.01.015.
Luo, J. (2024), “A critical review of GenAI policies in higher education assessment: a call to reconsider the ‘originality’ of students' work”, Assessment and Evaluation in Higher Education, pp. 1-14, doi: 10.1080/02602938.2024.2309963.
McGuire, L., Lay, K. and Peters, J. (2009), “Pedagogy of reflective writing in professional education”, Journal of the Scholarship of Teaching and Learning, Vol. 9 No. 1, pp. 93-107.
Mollick, E. and Mollick, L. (2023), Let ChatGPT Be Your Teaching Assistant: Strategies for Thoughtfully Using AI to Lighten Your Workload, Harvard Business School Publishing, available at: https://hbsp.harvard.edu/inspiring-minds/let-chatgpt-be-your-teaching-assistant
Perkins, M. (2023), “Academic integrity considerations of AI large language models in the post-pandemic era: ChatGPT and beyond”, Journal of University Teaching and Learning Practice, Vol. 20 No. 2, doi: 10.53761/1.20.02.07.
Rudolph, J., Tan, S. and Tan, S. (2023), “ChatGPT: bullshit spewer or the end of traditional assessments in higher education?”, Journal of Applied Learning and Teaching, Vol. 6 No. 1, pp. 342-363, doi: 10.37074/jalt.2023.6.1.9.
Schmidt, A. (2020), “Interactive human centered artificial intelligence: a definition and research challenges”, Proceedings of the International Conference on Advanced Visual Interfaces, pp. 1-4, doi: 10.1145/3399715.3400873.
Shneiderman, B. (2020), “Human-centered artificial intelligence: three fresh ideas”, AIS Transactions on Human-Computer Interaction, Vol. 12 No. 3, pp. 109-124, doi: 10.17705/1thci.00131.
Shneiderman, B. (2022), Human-centered AI, Oxford University Press.
Sison, A.J.G., Daza, M.T., Gozalo-Brizuela, R. and Garrido-Merchán, E.C. (2023), “ChatGPT: more than a ‘weapon of mass deception’ ethical challenges and responses from the human-centered artificial intelligence (HCAI) perspective”, International Journal of Human–Computer Interaction, pp. 1-20, doi: 10.1080/10447318.2023.2225931.
Skavronskaya, L., Hadinejad, A.H. and Cotterell, D. (2023), “Reversing the threat of artificial intelligence to opportunity: a discussion of ChatGPT in tourism education”, Journal of Teaching in Travel and Tourism, Vol. 23 No. 2, pp. 253-258, doi: 10.1080/15313220.2023.2196658.
Strzelecki, A. (2023), “To use or not to use ChatGPT in higher education? A study of students' acceptance and use of technology”, Interactive Learning Environments, pp. 1-14, doi: 10.1080/10494820.2023.2209881.
Unluer, S. (2012), “Being an insider researcher while conducting case study research”, Qualitative Report, Vol. 17, pp. 1-14, available at: https://eric.ed.gov/?id=EJ981455
Zhai, X. (2022), “ChatGPT user experience: implications for education”, SSRN Scholarly Paper No. 4312418, doi: 10.2139/ssrn.4312418.