Future-proofing quality education using integrated assessment systems

Lucy Tambudzai Chamba (Faculty of Management Sciences, Department of Entrepreneurial Studies and Management, Durban University of Technology, Durban, South Africa and TDEAM, Zimbabwe School Examinations Council, Harare, Zimbabwe)
Namatirai Chikusvura (Department of Mathematics, College of Education, University of South Africa, Pretoria, South Africa)

Quality Education for All

ISSN: 2976-9310

Article publication date: 26 August 2024

Issue publication date: 16 December 2024


Abstract

Purpose

Current assessment models in education have focused solely on measuring knowledge and fail to address the goals of Sustainable Development Goal 4 (SDG4) for a well-rounded, future-proof education. While SDG4 emphasizes quality education, traditional assessments do not account for the diverse skills and intelligence learners possess. This gap between assessment and the needs of SDG4 presents a conundrum for educators: How can we develop assessment strategies that encompass multiple intelligences and prepare learners for the future while ensuring the delivery of quality education as outlined by SDG4? This paper aims to propose integrated assessment strategies as a solution, examining their effectiveness in assessing multiple intelligences and supporting the future-proofing agenda within quality education.

Design/methodology/approach

The study used a qualitative research design. Interviews were held up to saturation point with 60 teachers and students purposively selected from schools in ten provinces across the country. Data from interviews were analysed using thematic network analysis. The interview data were complemented by documentary analysis of Ministry of Primary and Secondary Education (Zimbabwe) documents, including curriculum frameworks and policy documents, as well as a systematic literature review.

Findings

Results indicated that integrated assessment systems provide an avenue for testing deeper learning and help students acquire competencies needed in the world of work, such as problem-solving and teamwork. However, certain conditions militate against the effective implementation of integrated assessment in schools.

Research limitations/implications

This study used a qualitative research methodology; hence, the results may not be generalizable to other settings. The data collected were coded and analysed manually. However, coding the data manually allowed the researchers to be fully immersed in the emerging themes, enriching the study and ensuring in-depth engagement with the data.

Practical implications

The paper concludes that integrated assessment provides authentic assessment which prepares learners for the future. The study recommends that the government improve the teaching and learning environment in schools so that integrated assessment systems can be implemented effectively, more than one regime of intelligence is tested and the future-proofing of quality education is guaranteed.

Originality/value

The research contributes to increasing the motivation to deliver quality education by investing in integrated assessment systems.

Citation

Chamba, L.T. and Chikusvura, N. (2024), "Future-proofing quality education using integrated assessment systems", Quality Education for All, Vol. 1 No. 1, pp. 240-255. https://doi.org/10.1108/QEA-11-2023-0014

Publisher

Emerald Publishing Limited

Copyright © 2024, Lucy Tambudzai Chamba and Namatirai Chikusvura.

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial & non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


Introduction

Quality education from an assessment perspective denotes deeper learning and imparting 21st-century skills to learners. Assessment policy and practices are vital for educational improvement, and assessment data are necessary to monitor, evaluate and improve the education system. Assessments can provide useful information to learners, teachers, parents and policy-makers and can also shape educational efforts because of the consequences attached to learner performance. Braun and Kanjee (2006) argued that assessment is not only used to measure student progress but also to evaluate school and education system quality. Therefore, modern educators should use assessments that measure 21st-century skills through complex tasks and richer learning experiences. Integrated Assessment Systems (IAS) combine formative and summative assessment, allowing learners and teachers to understand learning processes (Birenbaum, 2007). IAS test multiple intelligences, making them part of the future-proofing agenda in education. Future-proofing students means equipping them with skills to succeed in complex global workplaces (O’Shea et al., 2022). This in essence means enabling students to acquire 21st-century skills and competencies.

Background

Over the past two decades, Zimbabwe has experienced a shift in the perception of education, with a greater emphasis on preparing students for employment. In January 2017, the country implemented a competence-based curriculum that uses a combination of formative and summative assessments to provide a holistic view of student learning. The assessment framework comprises continuous assessment (CA), which makes up 30% of a student’s performance outcome in the Grade 7, Form 4 and Form 6 public examinations, and summative assessment, which makes up the remaining 70%. This is a departure from the previous emphasis on standardized testing at the end of a course. The competency-based curriculum incorporates continuous assessment learning activities (CALA) or projects, which test skills not evaluated in summative exams (Assessment Framework, 2015; Makamure and Jojo, 2023). This hybrid approach also assesses soft skills rather than only cognitive skills. This article aims to establish the extent to which IAS have contributed to the future-proofing agenda in the provision of quality education through the assessment of multiple intelligences in learners. This was done using a case study of the competence-based curriculum in Zimbabwe. This research is relevant in the context of future-proofing education, which seeks to equip students with a diverse range of skills to thrive in complex global workspaces. The article contributes to the educational assessment literature and provides insights for educational policy planners on the benefits of IAS.

Theoretical framework

Two theories guide the study: the P21 Framework and Howard Gardner’s theory of multiple intelligences (the MI perspective). Gardner (2013) defines intelligence as a bio-psychological potential to process information that can be activated in a cultural setting to solve problems or create products that are of value to a culture. From this perspective, intelligence is a potential that is emergent and responsive, a departure from the traditional view which imagines intelligence to be innate. Gardner’s view considers the creation of products, such as sculpture or computer programs, to be as important an expression of intelligence as abstract problem-solving. From this standpoint, the MI perspective suggests that educators should use diverse assessment methods that measure different intelligences, as this renders a more accurate picture of a student’s strengths and weaknesses. Educators may use multiple assessment methods, such as self-reflection, observation, portfolios, performances and interviews, to evaluate learners’ progress in the various intelligences. The implications of the MI perspective are that the curriculum and assessment should include activities and learning experiences that address different intelligences, such as hands-on experiments for kinesthetic learners, audio-visual resources for spatial learners and research projects for linguistic learners. In addition, the curriculum should allow students to select learning activities that appeal to their strengths and interests.

Gardner (2006), in his article “Tapping into Multiple Intelligences”, postulates that the human mind possesses nine distinct types of intelligence. These include verbal-linguistic intelligence, which implies proficiency in language and an acute awareness of the sounds, meanings and rhythms of words. Logical–mathematical intelligence denotes an aptitude for abstract thinking and the ability to discern numerical and logical patterns. Spatial–visual intelligence allows individuals to conceptualize and visualize images and objects with great accuracy. Bodily–kinesthetic intelligence is the capability to control one’s body movements and handle objects with finesse. Interpersonal intelligence allows one to accurately detect and respond to the emotional states, motivations and desires of others, whereas intrapersonal intelligence involves being self-aware and in tune with one’s inner feelings, values, beliefs and thinking processes. Naturalist intelligence, on the other hand, refers to the ability to recognize and categorize plants, animals and other objects in nature. Finally, existential intelligence pertains to the sensitivity and capacity to ponder profound questions about human existence, such as the purpose of life, the inevitability of death and the origins of humanity. All these types of intelligence should be assessed in learners in the quest for quality education.

Recent advances in brain and cognitive science have highlighted the need to move away from current testing models and towards performance assessments that encourage deeper learning and multiple intelligences. Recent research has demonstrated that the human brain is malleable and that individuals are capable of developing many skills and capacities previously thought to be fixed. Intelligence is no longer viewed as a single, unchanging attribute, but rather as varied and multi-dimensional, capable of development over time with the right stimulation. Moreover, students’ attitudes towards learning academic material are just as crucial as their aptitude. According to Cascallar (2004), there is a clear need for a new assessment methodology that addresses current concerns while incorporating both existing and new technologies and methods. Despite the significant changes in modern societies with the development of technology and information systems, many educational systems still rely on an outdated information transmission model for their teaching practices. As a result, assessments often fail to meet the needs of today’s learners and the complex, globalized societies they inhabit. Consequently, assessments tend to be summative evaluations rather than formative assessments of learning, which do not allow learners to improve their learning and develop deeper learning skills. This spells out the need for integrated assessment systems to address the needs of modern learners.

The Partnership for 21st Century Learning (P21) framework spells out the urgent need for educators to impart 21st-century skills to learners because of the dual forces of globalisation and technological change that together are transforming the needs of the world of work. The theory highlights the necessity of reviewing existing practices in educational assessment so that students are empowered to meet the demands of the labour market. The teaching and assessing of 21st-century skills is an attempt to future-proof students for trends in the global workspace and life. This is because 21st-century society has undergone an unprecedented and accelerated pace of change in economy and technology, demanding that the educational system prepare students for the workforce. The 21st-century skills comprise skills, abilities and learning dispositions that have been identified by educators, business leaders, academics and governmental agencies as being required for success in 21st-century society and workplaces. This is part of a growing international movement focusing on the skills students must master in preparation for success in a rapidly changing, digital society. Many of these skills are also associated with deeper learning, which is based on mastering skills such as analytic reasoning, complex problem-solving and teamwork. These skills differ from traditional academic skills in that they are not primarily content knowledge-based. The skills have been grouped into three main categories, that is, learning and innovation skills (see Figure 1), digital literacy skills and career and life skills. Learning and innovation skills include critical thinking and problem-solving, communication and collaboration, and creativity and innovation, while digital literacy skills comprise information literacy, media literacy and information and communication technologies (ICT) literacy. Career and life skills comprise flexibility and adaptability, initiative and self-direction, social and cross-cultural interaction, productivity and accountability.

Many of these skills are also identified as key qualities of progressive education, or the 7C skills, as described by P21 senior fellows Bernie Trilling and Charles Fadel. These include critical thinking and problem-solving, creativity and innovation, cross-cultural understanding, communication, information and media literacy, computing and ICT literacy, and career learning and self-reliance. The P21 organization also conducted research that identified deeper learning competencies and skills it called the Four Cs of 21st-century learning: collaboration, communication, critical thinking and creativity. This resonates with the 2015 World Economic Forum report titled “New Vision for Education: Unlocking the Potential of Technology”, which focused on the pressing issue of the 21st-century skills gap and ways to address it through technology.

For years, test designers have claimed that they can accurately assess students’ abilities. However, recent research shows that these tests may discourage students from putting in sustained effort to learn and may thwart efforts to achieve the future-proofing agenda. Additionally, the idea that the human brain operates like a library, with neatly organized information that can be easily recalled, has been challenged. In reality, the brain processes information based on its relevance. As high school students work to deepen their understanding of important concepts and skills, they should apply them to new challenges. To support this kind of learning, teachers need access to assessment tools that measure more than just recall. Forward-thinking policies aim to integrate assessment into the curriculum, using a combination of formative and summative assessment to gain insight into the learning process. IAS have the potential to benefit both learners and teachers by facilitating formative assessment and promoting learner autonomy and problem-solving skills (Cascallar, 2004).

Key aspects of integrated assessment systems

IAS are not entirely new but are currently being developed in a range of areas and settings. It is useful to outline some generic features of the general vision of an IAS; the description of these features is adapted from Birenbaum (2004). IAS serve a dual purpose: to support learning and to evaluate progress. They meet the needs of both learners and teachers while incorporating a quality control mechanism and e-assessment systems. Research findings inform the development, evaluation and refinement of assessments based on feedback from learners and teachers. IAS foster deep learning and conceptual understanding, critical in today’s knowledge and information societies. They also consider factors that impact learning outcomes, including intellectual abilities, resource utilization, learning opportunities, assessment modes, learning approaches and perspectives on learning. IAS are also cost-effective, alleviating the workload of overburdened examining organizations.

The significance of assessment policy is widely acknowledged in the realm of education, as it requires that educational indicator systems encompass not only inputs but also outputs. As a result, there is a natural inclination to demand higher standards of quality and validity from assessments, given their increased prominence in policy-making (Rotherham and Willingham, 2009). Educationists are compelled to step beyond the limitations of existing test instruments and reconsider the entire design and development process, which incorporates local and national values and goals, as well as 21st-century skills. Integrated assessment systems have a crucial role to play in this regard, ensuring the evaluation of multiple intelligences or competencies.

In Zimbabwe, the Assessment Framework (2017), which is an IAS, was developed by the Curriculum Development and Training Services, Ministry of Primary and Secondary Education, together with the assessment body, the Zimbabwe School Examinations Council (ZIMSEC). In this competency-based framework, CA accounts for 30% of the final assessment mark of the learner across all subjects, whereas summative assessment carries a weight of 70%. This shift towards integrated assessment systems was fuelled by the increasing diversity of today’s student population and their growing expectation for high-quality, timely and constructive feedback on their work, as well as the desire of national leaders to produce graduates who are well-suited for the workforce (Curriculum Framework, 2015). It is also an attempt by Zimbabwean educators to test multiple intelligences in schools while equipping learners with 21st-century skills.
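As a minimal illustration of the weighting described above, and assuming both components are reported on the same percentage scale, a learner’s final mark can be expressed as

\[ \text{Final mark} = 0.30 \times \text{CA} + 0.70 \times \text{Summative} \]

so that, for example, a learner scoring 80% in continuous assessment and 60% in the summative examination would obtain 0.30(80) + 0.70(60) = 66%.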

Assessment strategies for integrative assessment systems

A wide range of assessment strategies exists for assessing multiple intelligences and imparting 21st-century skills. Firstly, rubrics are both a tool to measure students’ knowledge and ability and an assessment strategy. A rubric allows teachers to measure certain knowledge at a fixed moment in time (Reeves and Stanford, 2009). Rubrics enhance the entire learning process from start to finish by serving several purposes, including communicating expectations for an assignment and providing focused feedback on a project still in process. Rubrics encourage self-monitoring and self-assessment and give structure to the final grade on an end product. A rubric can be developed collaboratively by teachers and students, which promotes cooperation as they work together to build and use the tool (Ashton and Lee, 2009). The merit of rubrics is that students are more comfortable with them because they feel some ownership in the process, recognize that their opinion is valued and are more successful because they know what is expected of them. Meta-cognition can also lead to more self-directed learning through self-monitoring and self-assessment by students (Olivier, 2021).

Performance-based assessments (PBA), also known as project-based or authentic assessments, are typically used as a final evaluation method not only to measure what students have learned about a subject but also to determine whether they can apply that knowledge in a real-world scenario (VanTassel-Baska, 2021). By asking them to create an end product, PBA pushes students to synthesize their knowledge and apply their skills to a potentially unfamiliar set of circumstances that is likely to occur beyond the confines of a controlled classroom setting (Palm, 2008). Examples include designing and constructing a model; developing, conducting and reporting on a survey; carrying out a science experiment; writing a letter to the editor of a newspaper; creating and testing a computer program; and outlining, researching and writing an in-depth report. PBA allows for differentiation of assessment so that all students, including those in special education, have space to demonstrate understanding.

Portfolios are a collection of student work that is gathered over time and is primarily used as a method of evaluation. Unlike standardized tests, which provide a snapshot of a student’s knowledge at a single point in time, portfolios showcase a student’s effort, development and achievement over a period. Portfolios measure a student’s ability to apply knowledge rather than just memorize it. They are also considered to be both student-centred and authentic assessments of learning that can effectively teach and test at the same time, without taking away from instruction if implemented properly. According to Torphy Knake (2021), portfolios are an effective way to supplement instruction and measure a student’s progress.

Self-assessment is another type of IAS strategy, which occurs when students judge their work to improve performance, as they identify discrepancies between current and desired performance (Schultz et al., 2021). This method aligns well with standards-based education because it provides clear targets and specific criteria against which students or teachers can measure learning. E-assessment, also referred to as digital assessment, online assessment, technology-based assessment or computer-based assessment, is a versatile strategy for assessing multiple intelligences in students. It can be combined with all the aforementioned types of assessment, for example, using gaming for formative assessment and online exams for summative assessment, thereby encouraging higher-order thinking skills.

Challenges of traditional assessment

Traditional assessment has various drawbacks. One of the primary complaints is that it frequently relies on narrow-scale exams that cannot adequately reflect a student’s success or failure and are insufficient for testing higher-level skills. Furthermore, traditional assessment can result in a disagreement between grades and assessment standards, causing issues (Avitabile, 2022). Another drawback is a lack of feedback for pupils, which limits their potential to progress (Susanti et al., 2020). Furthermore, typical assessment tasks frequently consist of narrow intellectual exercises and are unable to match the demands of many study areas, particularly subjects requiring practical skills, such as agriculture. This emphasizes the importance of IAS, which can provide a more comprehensive evaluation of students’ knowledge and skills, as traditional assessments focus more on end-of-programme learning.

When talking about traditional assessment, one often refers to formal tests that evaluate students’ ability to remember and reproduce course content. These tests are usually timed and standardized, applying the same conditions to all students (Chen, 2023). However, there are other typical features of traditional assessment that have become entrenched in education practices and are seen as the “correct” way to assess learners. For example, traditional assessment is often used summatively to check learning at the end of a term, such as through final exams. These assessments typically focus on finding one single right answer, leaving little room for doubt, discussion and critical thinking (Zipperle, 2021). As a result, they are easily graded and yield highly reliable results but produce students who are not well prepared for higher education or industry. Scores are heavily emphasized in traditional assessment, often being the sole feedback students receive (McCune and Rhind, 2014). This focus on the end product of the assessment rather than the learning process is a hallmark of traditional assessment. This approach tends to promote extrinsic motivation in learners, as they are more concerned about passing or failing rather than focusing on what they have learned, how effective the teaching has been and what skills or competencies need attention.

Reasons to rethink the use of traditional assessment

Looking at the qualities of a traditional standardized assessment listed above, one may be able to raise concerns about its usefulness, particularly in a teaching context that promises to use a more competence-based approach. In a more traditional paradigm, there may be little room for assessing true learning, as this type of assessment often promotes simple content replication and frequently ignores the acquisition of competencies (Schneiter, 2024). As a result, the validity of traditional examinations is likely to be low because of their summative character and the emphasis placed on scores.

As tests are frequently used simply to measure the result of the term (looking back at what has been accomplished) and the most important goal seems to be getting approval to move on to the next level, they end up not being perceived as opportunities to gather information about strengths and areas for improvement (Maxlow et al., 2021). As a result, what learners still need to know and what they can do to improve becomes unclear. Likewise, how the teaching might be enhanced is not examined. Therefore, in a summative approach to assessment, results are not used to inform further learning, and teachers do not have the chance to personalize lessons that better address learners’ specific needs (Beasley, 2024). Neither do teachers go on to make informed decisions about what needs to change in their lessons so that further and more effective learning can be fostered.

Traditional assessment limits a student’s potential and prevents unconventional thinking (Majola, 2023). As a result, this strategy cannot be considered particularly inventive. It provides only a surface preview of a student’s ability based on the norms established by the regular curriculum. Not every individual is suited to a specific format, and traditional examinations are extremely restrictive because pupils have few options. It is a more theoretical approach that may not always encourage a healthy learning and inclusive environment, putting pressure on the provision of quality education for today’s learners. This method is not very diverse, thus it fails to meet the unique demands of each individual, resulting in the need for IAS.

Methodology

Research design

The study used a qualitative methodology, which aims to gain a deeper understanding and insight into a phenomenon, rather than just examining its surface features (Chimbganda et al., 2022). According to Taherdoost (2021), this type of research methodology seeks to answer “how and why” questions in a research study and mostly covers data regarding feelings, perceptions and emotions, using unstructured approaches such as interviews for data collection. Nkengbeza and Shava (2019) added that the goal of qualitative research is to interpret and understand both the meanings that events have for the people who experience them and how the researchers have interpreted those meanings. The researchers adopted a qualitative methodology, as it offers a vital perspective on the human experience of using IAS, ultimately leading to a more comprehensive understanding of its merits.

To provide a comprehensive overview of the existing research on this topic, the researchers conducted a systematic literature review (Badampudi et al., 2022a, 2022b). This involved a thorough search of relevant databases, such as Google Scholar, Scopus and Web of Science, using keywords related to the research question. The inclusion criteria were peer-reviewed articles published in English within the past five years. After screening the titles, abstracts and full-text articles, 25 studies were selected for in-depth analysis. The key findings from this review were synthesized and used to inform the design and analysis of the primary qualitative data collection.

The researchers used a purposive sampling method to determine the informants. Purposive sampling involves selecting samples based on the researcher’s judgement. In simple terms, it means choosing samples specifically tailored to the requirements of the research (Lohr, 2021a, 2021b). Purposive sampling was used to select teachers and students from schools that implemented the updated curriculum, that is, public schools and mission schools. In Zimbabwe, private schools and trust schools register their students for the University of Cambridge International Education examinations (International General Certificate of Secondary Education, Advanced Subsidiary Level and Advanced Level); hence, their students are not exposed to the local curriculum and do not sit for the ZIMSEC national examinations. The sample size was determined using the Yamane sample size calculator, with 20 schools from 10 provinces participating. A total of 60 teachers and 30 students were interviewed from the three school types predominantly found in Zimbabwe, that is, rural, day and boarding schools. The participants were given information and consent letters, and their anonymity and confidentiality were preserved. This is in line with the recommendations of Badampudi et al. (2022a, 2022b), who stressed the importance of following proper procedures to prevent any harm to human subjects in empirical studies. Clearance was obtained from the Ministry of Primary and Secondary Education to conduct research in the schools. Interviews were held to the point of saturation, when the researchers no longer received new information from the interviewees.
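For reference, Yamane’s simplified sample size formula, on which such calculators are conventionally based, is

\[ n = \frac{N}{1 + Ne^{2}} \]

where n is the required sample size, N is the population size and e is the desired margin of error; the specific population size and margin of error used in this study are not reported here, so the expression is offered only as a general sketch.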

Data from interviews were analysed using thematic analysis, triangulated with a document analysis of the framework literature from the Ministry of Primary and Secondary Education and the Curriculum Development Services Department, and presented in themes. Although time-consuming, performing a thematic analysis manually has some advantages over using software, most notably flexibility, depth of comprehension and researcher interaction with the data. First, manual thematic analysis allowed the researchers to adjust their methodology as they worked with the data. They could change codes and themes in real time based on information gathered during the analysis process. This reflexivity resulted in a more nuanced interpretation of the data, as the researchers were not bound by the predefined settings commonly established in software programmes (Chimbganda et al., 2022).

The process also allowed for more in-depth engagement with the data. As the researchers manually evaluated the data, they built a stronger relationship with the material. This engagement improved their capacity to detect subtle themes and patterns that automated processes may miss (Darbyshire and Baker, 2012). The researchers’ experience as educational practitioners and assessment professionals is critical in evaluating meanings and context, especially in qualitative research.

While software can automate the coding process, it may introduce biases based on its algorithms and the training it has received. Manual analysis allows researchers to be aware of their biases and actively work to mitigate them, providing a more personal and potentially more accurate interpretation of the data. For smaller data sets, manual analysis can be more practical and cost-effective. Researchers can avoid the expenses associated with purchasing and learning new software, making manual analysis a suitable option for projects, like this one, that are not externally funded (Terry et al., 2017). Manual thematic analysis allows researchers to think critically about their data, potentially leading to more insightful results. This technique can reveal surprising themes and connections that automated analysis, which is frequently more concerned with efficiency than depth, may miss. In conclusion, while software may substantially speed up thematic analysis and handle larger data sets, manual analysis remains a desirable method because of its flexibility, depth of involvement and ability to generate subtle insights.

Discussion and findings

The interviews revealed that IAS test multiple intelligences better than summative assessment regimes do. Several themes emerged from the interviews held with both teachers and students. The themes point to the superior performance of the updated curriculum, an IAS, compared with traditional assessment in providing authentic assessment.

The first theme that emerged from the interview data was that IAS produce more authentic evaluations. Teachers emphasized that the implementation of CALA at both the primary and secondary levels enables the testing of multiple intelligences because of the inclusion of formative assessment activities that students complete under the supervision of teachers and that contribute to learners’ final certification. This is preferable to using pen-and-paper assessments, which emphasize teaching to the exam and do not measure deeper understanding. Interviewees agreed that IAS offer more valid assessments for students in the cognitive, affective and psycho-motor domains than summative regimes. Most teachers praised the IAS for measuring a broader range of skills in comparison to the previous summative approach.

A teacher from one school remarked:

The Updated Curriculum has provided a gateway to testing a variety of skills, than the Old curriculum. Previously our assessment regime concentrated on assessment of only the cognitive domain in most of the subjects across the curriculum, except for practical subjects. The introduction of CALAs has seen the teaching and testing of a variety of skills such as creativity and innovation, collaboration, and computer literacy skills.

(Interviewee 1, Harare Metropolitan Province).

The interview findings concur with the views of Alan et al. (2017) that IAS provide flexible, authentic, context-embedded learning experiences which are considered consistent with existing values, experiences and needs, as expressed by the P21 Framework. These views are also reflected in the Curriculum Framework (2015), which highlights that IAS also fulfil the aspirations of the nation by providing learning and assessments which produce learners with the competences, capabilities, skills and knowledge to drive economic growth and development.

The second theme to emerge from interviews is that the adoption of IAS has resulted in students becoming active participants in their learning and assessment, as is encouraged in reflective learning. Interviewees concur that this type of assessment is more engaging.

One participant commented that:

The hybrid assessment method has promoted greater student involvement in the learning and assessment process. Learners are no longer passive recipients of assessment instruments as fostered by summative assessment regimes but now control the direction of assessment. This results in the production of mature, self-directed learners.

(Interviewee 2, Harare Metropolitan Province).

Most interviewees also highlighted that IAS promotes fairness in assessment. A teacher commented that:

The Updated Curriculum has advanced the concept of fairness in assessment since it assesses not only the cognitive domain but also other domains in learners. Students who are not gifted cognitively do not feel like outcasts and have better hope of achieving better grades. CALAs have ensured that nearly all students have a pass mark, as it is almost impossible to fail this type of assessment. It is only in cases where students and parents do not cooperate with the teacher that a learner fails a CALA task, hence the assessment ground is level in this type of assessment method.

(Interviewee 6, Mashonaland Central Province).

These findings confirm Gardner’s (2013) theory, which states that every learner has their own distinct set of strengths and weaknesses in the various forms of intelligence. Assessments must be crafted in a way that brings these strengths and weaknesses to light. Abbasi (2022) firmly agrees with this viewpoint, emphasizing that assessments must allow students to demonstrate their comprehension of the subject matter.

Another theme to emerge from the interview results is that IAS provide a more flexible assessment method and a foolproof way of monitoring learner progression.

A student remarked:

The CALA approach to assessment introduces a less stressful form of assessment as compared to sitting for an examination. We as students can be assessed in our comfort zone, as the work is done in our context and not under exam conditions which brings panic to most of us.

(Interviewee 35, Matabeleland South Province).

A teacher applauded IAS for enabling better monitoring of student performance during the learning process:

There is greater room for guidance on appropriate learning strategies or providing remedial or additional lessons during formative assessment activities. This allows for all students to be carried along the learning process unlike in summative assessment where slow or below-average students are left behind. This goes along with the country’s developmental mantra of ‘No Child Left Behind’.

(Interviewee 47, Mashonaland West Province).

A student also highlighted the same view and said:

The Updated Curriculum has encouraged teachers to focus on all learners. Previously only fast learners were accommodated as teachers did not have time for slow learners. Now we get a chance to consult while working on CALA activities and teachers get to hear our concerns and help us in areas where we need help.

(Student 18, Manicaland Province).

Both teachers and learners concurred that IAS test multiple intelligences and are a productive form of assessment that has exposed the innovation capabilities of students through a vast array of CALA products:

A lot of innovations at school have been witnessed due to introduction of the new assessment methods. This dimension was not brought up in the previous traditional assessment system which was largely pen and pen. The learners have developed great products which are consumed by people in society, and some have solved problems that have been affecting communities.

(Interviewee 58, Matabeleland North Province).

A student echoed this view:

CALA allowed me to design and develop objects that I had no idea I could do. Even in subjects that I found challenging, I was able to perform better because the assessment got me to come up with solutions to problems being faced every day in our communities. I was able to improve on innovation, which is something I never thought I would be able to do in my life.

(Student 15, Bulawayo Metropolitan Province).

This is in line with findings by Birenbaum (2007), who asserts that IAS is an assessment mechanism that is adaptive and innovative and provides a more accurate measure of the outputs of the learning process through testing multiple intelligences. Thus, IAS is an innovation in assessment which ticks the boxes of providing relative competitive advantage in the provision of quality education. This aligns with the findings of Vagnani and Volpe (2017), who posit that relative advantage is measured in terms of economy, social achievement, comfort and satisfaction. The interview results confirm that IAS is central to the provision of quality education in the country.

Challenges of implementing integrated assessment systems in Zimbabwe

Interview results also revealed that while IAS is a progressive assessment, it is also beset with challenges in the Zimbabwean context. The first theme to emerge under challenges is the huge financial outlay needed to provide for this type of assessment effectively. The implementation of the IAS system in Zimbabwe has brought to light the need for substantial financial investment in assessing multiple intelligences within schools. While cognitive assessment can be done with traditional pen-and-paper methods, the testing of multiple intelligences demands a wider range of materials and a greater reliance on technology.

A teacher from Midlands Province commented:

While the Updated Curriculum is a welcome development, testing of multiple intelligence comes with the drawback that it requires a lot of capital injection which schools do not have. Multiple Intelligence assessment requires materials which schools do not have and are then supplied by parents. This has proven to be a burden to most parents who are financially constrained and have in most instances failed to supply materials, such that result teachers end up designing pen and paper assessments because of lack of materials and the burden of large class sizes. this is the most practical way to assess students with the present workloads.

(Interviewee 56, Midlands Province).

This is in line with the comment by Rusmiati et al. (2020) that government funding per student has decreased over the past 15 years, with more resources being allocated to administration, quality assurance, national initiatives and information technology. Consequently, assessment costs are on the rise and beyond the reach of many schools, calling for collaboration from parents and external stakeholders.

Another finding from the data is that large class sizes have a negative impact on IAS performance. During interviews, participants made an intriguing observation that IAS take longer to complete because they demand more teacher–pupil engagement and collaboration than summative exams do. However, this requirement is at odds with Zimbabwe’s enormous class sizes.

One teacher lamented that:

The government did not review class sizes, teacher loads as well the requirements in several exercises and tests given per subject to students by subject teachers. This has taken a toll on the teachers who are now heavy-laden and cannot juggle both assessment regimes for the benefit of the learners. This has resulted in sub-standard assessments which are a replica of pen and pen (summative assessment) as teachers try to balance the documentation requirements by school and ministry authorities together with increased time of assessment.

(Interviewee 17, Harare Metropolitan Province).

This challenge is further compounded by an increase in the number of subjects in the curriculum, which leads to less academic time per student and heightened pressure to produce research (Popov, 2021):

In some subjects, our CALA tasks come in the form of written exercises just like essays, and tests because teachers claim they have no time, some even do not have time to consult and just want us to submit our work and they mark:

(Student, Mashonaland East Province).

The interview findings suggest that large class sizes can impede the effective application of IAS, compromising educational quality. Owing to resource constraints, teachers may use less time-consuming and less effective assessment methods, such as memory exams and multiple-choice questions, which do not evaluate multiple intelligences and fail to impart 21st-century skills. The interview findings are consistent with a study by Winarti (2019), which shows that efficient implementation of IAS necessitates low teacher–pupil ratios.

Another issue that affects IAS implementation in Zimbabwe is a lack of teacher motivation. Teachers’ motivation poses a huge hurdle to IAS operation and the effective delivery of education. Teachers in Zimbabwe are poorly compensated, which reduces their willingness to implement this innovation. Many see IAS as an additional workload that consumes time they could otherwise use to earn extra income for their families. This lack of motivation impedes the successful implementation of IAS and the provision of quality education (Ohene, 2023).

A participant commented:

The Updated Curriculum has increased the effect of poverty and further decreased the quality of life of teachers in the country. Currently, teachers are ranked as the most poorly paid professionals, with our salaries incapable of making us attain a decent living. We need to be involved in ventures that complement our incomes so that we can live normal lives in a country which has a very high cost of living. Now IAS brings in an assessment that requires us to spend a whole day till sunset at work in the name of assisting students. This is unacceptable to teachers as our pockets do not allow us to spend the whole day at work, lest our families go hungry and our children drop out of school. Genuinely speaking teachers are not motivated at all to take part in this IAS, it has been imposed on us, thus we cut corners so that it’s a win-win situation.

(Interviewee 3, Harare Metropolitan Province).

Another interviewee commented:

I am not at any level to participate in IAS because I am poorly paid, and the introduction of these CALA only means more work and no better remuneration. There should be efforts by the government to improve the teacher’s standard of living before pulling up additional work. It’s an unwelcome innovation to us, though it is good for learners and improves our education system.

(Interviewee 49, Beitbridge, Matabeleland South Province).

The interview findings are compatible with the study conducted by Yarmanelis et al. (2022), which found that the effectiveness of IAS is strongly dependent on the amount of dedication and drive demonstrated by teachers as collaborators, as well as the leadership offered by school heads. The researchers discovered that the key to a successful IAS is ensuring that all stakeholders are satisfied with the evaluation process. This involves not just the pupils under evaluation, but also the teachers, parents and school administrators. As a result, teachers and principals must collaborate to develop a successful IAS that fulfils the needs of all parties involved.

Conclusion

This study provided evidence that IAS offer a valuable approach to assessing multiple intelligences and preparing learners for the complexities of modern society. Key findings demonstrated that IAS go beyond assessing cognitive skills, encompassing essential soft skills such as computer literacy, creativity, collaboration and communication, which are crucial for workplace success. Furthermore, integrating formative assessment into the final evaluation process fosters deeper learning and the development of 21st-century skills. Traditional summative assessments, solely focused on cognition, often left learners unprepared for the demands of a globalized and knowledge-driven world. IAS, by contrast, contribute to “future-proofing” education by emphasizing the assessment and development of these critical 21st-century competencies, fostering well-rounded learners.

However, maximizing the potential of IAS requires supportive policy measures. Policymakers should create an environment that encourages educators and learners to embrace these methodologies. This could involve providing professional development opportunities for teachers, allocating resources for implementing IAS effectively and fostering a culture of continuous improvement within educational institutions.

By implementing these recommendations, we can ensure that IAS contributes to a success story in quality education. Imagine a future where assessments produce “deep” learners – highly motivated individuals equipped with a range of transferable skills who actively participate in the learning process. This future is achievable through the strategic implementation of IAS.

Figures

Figure 1. The P21 framework

References

Abbasi, P. (2022), “Effect of cognitive & metacognitive strategy developing reading comprehension emphasizing students’ linguality”, doi: 10.21203/rs.3.rs-1218693/v1.

Ashton, M.C. and Lee, K. (2009), “The HEXACO–60: a short measure of the major dimensions of personality”, Journal of Personality Assessment, Vol. 91 No. 4, pp. 340-345.

Assessment Framework (2015), Ministry of Primary and Secondary Education (MoPSE), Zimbabwe.

Avitabile, C. (2022), “The impact of computer assisted learning in higher education: evidence from an at scale experiment in Ecuador”, AEA Randomized Controlled Trials, doi: 10.1257/rct.9036-1.0.

Badampudi, D., Fotrousi, F., Cartaxo, A. and Usman, M. (2022a), “Guidelines for conducting systematic literature reviews in software engineering”, Information and Software Technology, Vol. 143, p. 106757, doi: 10.37190/e-inf220109.

Badampudi, D., Fotrousi, F., Cartaxo, B. and Usman, M. (2022b), “Reporting consent, anonymity and confidentiality procedures adopted in empirical studies using human participants”, E-Informatica Software Engineering Journal, Vol. 16 No. 1, p. 220109.

Beasley, J.G. (2024), “Summative assessment”, Technology Integration and Differentiation for Meeting the Needs of Diverse Learners, pp. 3186-3618, doi: 10.4018/978-1-4666-9600-6.les9.

Birenbaum, M. (2007), “Evaluating the assessment: sources of evidence for quality assurance”, Studies in Educational Evaluation, Vol. 33 No. 1, pp. 29-49, doi: 10.1016/j.stueduc.2007.01.004.

Braun, H. and Kanjee, A. (2006), “Using assessment to improve education in developing nations”, Educating All Children, pp. 303-354, doi: 10.7551/mitpress/2638.003.0007.

Cascallar, E.C. (2004), “A new advanced integrated assessment system”, Paper presented at the Avignon international invitational conference on assessment, Avignon, France (September 6–8).

Chen, Y. (2023), “Underrepresented minority students' college access after the university of California’s standardized tests suspension”, Proceedings of the 2023 AERA Annual Meeting. doi: 10.3102/2016425.

Chimbganda, S., Matare, F., Shava, G.N. and Zinyama, M. (2022), “Analysing qualitative data in research, processes and features”, International Journal of Innovative Science and Research Technology, Vol. 7 No. 10, pp. 924-938.

Darbyshire, D. and Baker, P. (2012), “A systematic review and thematic analysis of cinema in medical education”, Medical Humanities, Vol. 38 No. 1, pp. 28-33.

Gardner, H. (2006), “The science of multiple intelligences theory”, Educational Psychologist, Vol. 41 No. 4, pp. 227-232.

Gardner, H. (2013), “Frequently asked questions—multiple intelligences and related educational topics”, Howard Gardner | Hobbs Professor of Cognition and Education/Harvard Graduate School of Education, available at: https://howardgardner01.files.wordpress.com/2012/06/faq_march2013.pdf

Lohr, S.L. (2021a), “Simple probability samples”, Sampling, pp. 31-78, doi: 10.1201/9780429298899-2.

Lohr, S.L. (2021b), Sampling: Design and Analysis, Chapman and Hall/CRC, New York, NY.

McCune, V. and Rhind, S. (2014), “12 Understanding students’ experiences of being assessed: the interplay between prior guidance, engaging with assessments and receiving feedback”, Advances and Innovations in University Assessment and Feedback, pp. 246-263, doi: 10.1515/9780748694556-016.

Majola, X.M. (2023), “Thinking ‘out of the box’ when designing formative assessment activities for the E-portfolio”, Journal of Curriculum Studies Research, Vol. 5 No. 3, pp. 113-130, doi: 10.46303/jcsr.2023.34.

Makamure, C. and Jojo, Z.M. (2023), “The role of continuous assessment learning activities (CALA) in enhancing mathematics competency and proficiency in secondary school learners”, Mathematics Education Journal, Vol. 7 No. 1, pp. 1-15, doi: 10.22219/mej.v7i1.24017.

Maxlow, K.W., Sanzo, K.L. and Maxlow, J.R. (2021), “Creating, grading, and using traditional assessment strategies”, Creating, Grading, and Using Virtual Assessments, pp. 1-24, doi: 10.4324/9781003200093-1.

Nkengbeza, D. and Shava, G.N. (2019), “Qualitative research paradigm: a design for distance education researchers”, Namibia CPD Journal for Educators, Vol. 6 No. 1, pp. 1-15.

Ohene, N. (2023), “An analysis of factors affecting the successful implementation of educational policies in developing countries”, Journal of Education Review Provision, Vol. 1 No. 3, pp. 30-35, doi: 10.55885/jerp.v1i3.216.

Olivier, J. (2021), “Self-directed multimodal assessment: towards assessing in a more equitable and differentiated way”, NWU Self-Directed Learning Series, pp. 51-69, doi: 10.4102/aosis.2021.bk280.03.

O’Shea, M.A., Bowyer, D. and Ghalayini, G. (2022), “Future proofing tomorrow's accounting graduates: skills, knowledge and employability”, Australasian Business, Accounting and Finance Journal, Vol. 16 No. 3, pp. 55-72, doi: 10.14453/aabfj.v16i3.05.

Popov, Y. (2021), “Digitalisation in the German Mittelstand”, doi: 10.5771/9783828876859.

Reeves, S. and Stanford, B. (2009), “Rubrics for the classroom: assessments for students and teachers”, Delta Kappa Gamma Bulletin, Vol. 76 No. 1.

Rotherham, A.J. and Willingham, D. (2009), “What Will It Take?”, Curriculum Framework, Curriculum Design Services, Zimbabwe.

Rusmiati, M.N., Deti, S., Sukmana, S.F., Dewi, D.A. and Furnamasari, Y.F. (2020), “Penerapan Teknologi Informasi untuk Meningkatkan Motivasi Belajar Siswa dalam Pembelajaran P Kn di Sekolah Dasar”, Aulad: Journal on Early Childhood, Vol. 4 No. 3, pp. 150-157.

Schneiter, R.W. (2024), “No more tests: extending cooperative learning to replace traditional assessment tools”, 2004 Annual Conference Proceedings, doi: 10.18260/1-2–13971.

Schultz, K., McGregor, T., Pincock, R., Nichols, K., Jain, S. and Pariag, J. (2021), “Discrepancies between preceptor and resident performance assessment: using an electronic formative assessment tool to improve residents’ self-assessment skills”, Academic Medicine, Vol. 97 No. 5, pp. 669-673, doi: 10.1097/acm.0000000000004154.

Susanti, M., Herfianti, M., Damarsiwi, E.P. and Perdim, F.E. (2020), “Project-based learning model to improve students ‘ability”, International Journal of Psychosocial Rehabilitation, Vol. 24 No. 2, pp. 1378-1387, doi: 10.37200/ijpr/v24i2/pr200437.

Taherdoost, H. (2021), “Sampling methods in research methodology; how to choose a sampling technique for research”, International Journal of Academic Research in Management, Vol. 5 No. 2, pp. 18-27.

Terry, G., Hayfield, N., Clarke, V. and Braun, V. (2017), “Thematic analysis”, The SAGE Handbook of Qualitative Research in Psychology, 2nd ed., SAGE Publications, London, pp. 17-37.

Torphy Knake, K. (2021), “Pinterest curation and student achievement: the impacts of mathematics resources on students' learning over time”, Proceedings of the 2021 AERA Annual Meeting, doi: 10.3102/1681803.

Vagnani, G. and Volpe, L. (2017), “Innovation attributes and managers' decisions about the adoption of innovations in organizations: a meta-analytical review”, International Journal of Innovation Studies, Vol. 1 No. 2, pp. 107-133, doi: 10.1016/j.ijis.2017.10.001.

VanTassel-Baska, J. (2021), “Using performance-based assessment to document authentic learning”, Alternative Assessments, pp. 285-308, doi: 10.4324/9781003232988-14.

Winarti, W. (2019), “The effect of pair and group work in collaborative pre-writing discussion on students’ writing quality”, Journal of English for Academic and Specific Purposes (JEASP), Vol. 2 No. 2, pp. 12-24, doi: 10.18860/jeasp.v2i2.7782.

Yarmanelis, W., Rahman, S., Junaedi, A.T. and Momin, M.M. (2022), “The effect of commitment, motivation, and leadership on heads and teachers performance in the junior high school in Rimba Melintang”, Journal of Applied Business and Technology, Vol. 3 No. 3, pp. 226-234, doi: 10.35145/jabt.v3i3.106.

Zipperle, I. (2021), “A field experiment on adaptive learning applying a machine learning algorithm”, AEA Randomized Controlled Trials, doi: 10.1257/rct.7637-1.0.

Further reading

Abbas, M., Shahid Nawaz, M., Ahmad, J. and Ashraf, M. (2017), “Undefined”, Cogent Business and Management, Vol. 4 No. 1, p. 1312058, doi: 10.1080/23311975.2017.1312058.

Birenbaum, M., Breuer, K., Cascallar, E., Dochy, F., Dori, Y., Ridgway, J., Wiesemes, R. and Nickmans, G. (2006), “A learning integrated assessment system”, Educational Research Review, Vol. 1 No. 1, pp. 61-67.

Carron, G. and Chau, T.N. (1996), Quality of Primary Schools in Different Development Contexts.

Cascallar, A.S. and Cascallar, E.C. (2024), “Setting standards in the assessment of complex performances: the optimized extended-response standard setting method”, Optimising New Modes of Assessment: In Search of Qualities and Standards, pp. 247-266, doi: 10.1007/0-306-48125-1_10.

Chinapah, V., H’ddigui, E.M., Kanjee, A., Falayajo, W., Fomba, C.O., Hamissou, O., Rafalimanana, A. and Byomugisha, A. (2000), “With Africa for Africa: towards – quality education for all”, 1999 MLA Project, Human Sciences Research Council, Private Bag X41, Pretoria, available at: www.hsrc.ac.za

Chinapah, V. and UNESCO. (1997), “Undefined”, UNESCO.

Conley, D. (2015), “A new era for educational assessment”, Education Policy Analysis Archives, Vol. 23, p. 8, doi: 10.14507/epaa.v23.1983.

Coombs, P.H. (1985), The World Crisis in Education: The View from the Eighties, Oxford University Press, pp. xiv + 353.

Delors, J. (1996), “Learning: the treasure within, report to UNESCO of the international commission pocket edition”, UNESCO.

Eko Handoyo, Ali Masyhar, K.J. (2021), “Performance of educational assessments: integrated assessment as an assessment innovation during the COVID-19 pandemic”, Turkish Journal of Computer and Mathematics Education (TURCOMAT), Vol. 12 No. 6, pp. 2708-2718, doi: 10.17762/turcomat.v12i6.5777.

Gardner, H.E. (1993), “Multiple intelligences: the theory in practice, a reader”, Undefined, Basic Books, New York, NY.

Learning styles (2024), “Vanderbilt university”, available at: https://cft.vanderbilt.edu/guides-sub-pages/learning-styles-preferences/

Luo, Y. (2019), “The effects of formative assessments on MOOC students' performance in summative assessments”, Proceedings of the 2019 AERA Annual Meeting, doi: 10.3102/1440853.

Madanchian, M., Hussein, N., Noordin, F. and Taherdoost, H. (2021), “Effects of leadership on organizational performance”, Earth Sciences and Human Constructions, Vol. 1, pp. 58-62, doi: 10.37394/232024.2021.1.10.

Rasmitadila, R., Aliyyah, R.R., Rachmadtullah, R., Samsudin, A., Syaodih, E., Nurtanto, M. and Tambunan, A.R. (2020), “The perceptions of primary school teachers of online learning during the COVID-19 pandemic period: a case study in Indonesia”, Journal of Ethnic and Cultural Studies, pp. 90-109, doi: 10.29333/ejecs/388.

Sánchez, T., Gilar-Corbi, R., Castejón, J., Vidal, J. and León, J. (2020), “Students’ evaluation of teaching and their academic achievement in a higher education institution of Ecuador”, Frontiers in Psychology, Vol. 11, p. 233, doi: 10.3389/fpsyg.2020.00233.

Acknowledgements

This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.

Corresponding author

Lucy Tambudzai Chamba can be contacted at: lucyteechamba@gmail.com
