A recommended closed-loop assessment of learning outcomes process for hospitality programs: The experience of two programs, Part 2

Allen Z. Reich (School of Hotel and Restaurant Management, Northern Arizona University, Flagstaff, Arizona, USA)
Galen R. Collins (School of Hotel and Restaurant Management, Northern Arizona University, Flagstaff, Arizona, USA)
Agnes L. DeFranco (Conrad N. Hilton College of Hotel and Restaurant Management, University of Houston, Houston, Texas, USA)
Suzanne L. Pieper (Office of Curriculum, Learning Design and Academic Assessment, Northern Arizona University, Flagstaff, Arizona, USA)

International Hospitality Review

ISSN: 2516-8142

Article publication date: 31 May 2019

Issue publication date: 8 July 2019


Abstract

Purpose

Because of the increasingly high expectations of accrediting organizations, calls for greater accountability from state governments and students’ demand for an education that prepares them for a career, most hospitality programs are now required to have an effective assessment of learning outcomes process. The increasing popularity of this process is viewed as highly positive because it can be considered a best practice in higher education. The paper aims to discuss this issue.

Design/methodology/approach

This is Part 2 of a two-part article that provides an overview of the justifications for implementing an assessment of learning outcomes process, the steps that were developed by two hospitality programs, and the experiences of the two programs during implementation.

Findings

The steps in a closed-loop assessment of learning outcomes process are relatively detailed; however, because of changes in expectations of stakeholders and the requirements of accreditors, they are now mandatory for most hospitality programs. Therefore, the choice is not whether to implement them, but when. From a competitive standpoint, it is to the program’s advantage to begin as soon as possible. Another factor to consider is that the implementation of a closed-loop assessment of learning outcomes process will take several years to complete.

Originality/value

This paper presents a critical view of one of the most important concepts in higher education, if not the most important: the closed-loop assessment of learning outcomes process. Hopefully, the information on the process provided here and the experiences of the two programs can shorten the learning curve for other hospitality programs.

Citation

Reich, A.Z., Collins, G.R., DeFranco, A.L. and Pieper, S.L. (2019), "A recommended closed-loop assessment of learning outcomes process for hospitality programs: The experience of two programs, Part 2", International Hospitality Review, Vol. 33 No. 1, pp. 53-66. https://doi.org/10.1108/IHR-03-2019-0003

Publisher: Emerald Publishing Limited

Copyright © 2019, Allen Z. Reich, Galen R. Collins, Agnes L. DeFranco and Suzanne L. Pieper

License

Published in International Hospitality Review. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


Part 1 of this article includes an introduction to the closed-loop assessment of learning outcomes process and the first three steps of the process. Part 2 includes steps four through seven of the process and the conclusion.

4. Design assessment measures and measure the results

The most important activity of faculty is to make sure that students achieve specific program and course learning outcomes (note: program learning outcomes incorporate university learning outcomes). While the previous step focused on teaching methods and student learning, assessment focuses on determining if the specified learning outcomes were achieved. That is, assessment asks if students learned what was intended, if faculty effectively taught the learning outcomes, and if not, why not (Trotter, 2006). It is important to have an efficient and accurate assessment; otherwise, the assessment plan may end up being a bureaucratic waste of time and effort. Ideally, this focus on assessment will lead to a reconsideration of the effectiveness of current assessment methods, a reassessment that involves a team approach and increases cooperation among instructors. Proper assessment methods and their results can also provide both a formative assessment to facilitate the improvement of current learning efforts (Craddock and Mathias, 2009) and a summative picture of academic progression (Boud and Falchikov, 2006; Case, 2007).

There are two broad categories of assessment: direct, in which faculty or the students themselves assess the student’s performance (e.g. in a senior seminar/capstone/survey course, students could reflect on what they have learned), and indirect, in which the assessment is based on the opinions of external stakeholders, such as recruiters, employers and internship coordinators at participating businesses. There are likewise two broad categories of direct assessment: objective or quantitative (e.g. multiple choice, true/false, matching and math or financial/accounting problems) and subjective or qualitative (e.g. short answer or essay exams, presentations and various types of written reports and projects). While subjective assessments are generally considered to be more effective indicators of learning, especially for individual students, objective assessments provide results that can be more effectively and efficiently compared to established program standards and to the performance results of other students (Suskie, 2010). Externally created standardized tests, generally externally administered objective assessments, allow the program to establish baseline performance measures and to benchmark students’ individual and collective performance against national norms (i.e. external validity) (Prus and Johnson, 1994). For these reasons, objective assessments have historically been the most commonly used assessment method. However, locally developed performance assessments appear to be gaining in popularity over commercial “nationally normed” multiple-choice (objective) instruments (Kelly-Riley and Elliot, 2014).

It is always wise to first determine whether existing assessment methods can be used, such as the examples in the previous paragraph, or whether new ones must be selected or developed. New methods should be tested to determine their validity and reliability and how well individual questions or assessment activities allow students to demonstrate what they know (Mery et al., 2011). To improve the validity and reliability of the assessment of something as complex as student learning, Boud and Falchikov (2006) recommend different assessment options in each unit of study. For example, scoring well on a multiple-choice examination related to procedural or communication skills (e.g. making a sauce, writing/preparing a report or giving a speech) may not accurately measure one’s ability to perform those skills. This is especially true for program learning outcomes such as written communication, leadership and technical skills, where instructors for different courses will focus on different levels of scaffolding for the same program learning outcome (i.e. lower to higher hierarchical learning outcomes).

Because the goal is to assess the students’ performance relative to specific learning outcomes, the vast majority of assessments should be direct assessments. Typical forms of direct assessments include qualitative/subjective assessments (e.g. essays, written or oral reports, case studies, work performance on an internship) and quantitative/objective measurements (e.g. multiple choice, true/false, matching, fill-in-the-blank). For some assessments, to improve consistency and to speed grading, a rubric or scoring guide that specifies the level of expected performance or minimum criterion for success for the learning outcome can be created (Gibson, 2011) (e.g. no grammatical errors=4; 1 or 2 grammatical errors=3; 3 or 4 grammatical errors=2; 5 or more grammatical errors=1). Other considerations include indirect measurements (Maki, 2004), such as the students’ own evaluations and reflections of learning and the opinions of others regarding student learning, such as recruiters and employers. When determining assessment methods, it is important to be aware of the cross-functionality of teaching and assessment methods. For example, a student presentation is both a teaching method and an assessment method for a variety of program learning outcomes such as critical thinking, problem solving, speaking skills, leadership, and more.
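To illustrate how such a scoring guide could be applied consistently and quickly, the short Python sketch below encodes the grammatical-error example above; the function name, the sample error counts and the idea of automating the guide are illustrative assumptions rather than part of either program’s actual practice.

```python
def grammar_rubric_score(error_count: int) -> int:
    """Map a count of grammatical errors to the 1-4 scale described above
    (4 = no errors, 3 = 1-2 errors, 2 = 3-4 errors, 1 = 5 or more errors)."""
    if error_count == 0:
        return 4
    if error_count <= 2:
        return 3
    if error_count <= 4:
        return 2
    return 1

# Hypothetical per-paper error counts for a small set of student papers.
error_counts = [0, 3, 1, 7, 2]
scores = [grammar_rubric_score(c) for c in error_counts]
print(scores)                     # [4, 2, 3, 1, 3]
print(sum(scores) / len(scores))  # class mean on this criterion: 2.6
```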

Just as active teaching and learning methods are shown to increase student success, so are assessment methods in which students are active players (Boud and Falchikov, 2006). For example, some professors give take-home examinations or assignments, give open-book examinations or involve students in designing examination questions (Papinczak et al., 2012). Students cannot design a question and supply the correct answer if they have not studied the material. At the same time, students who are less prepared often complain that open-book examinations are more difficult, while students who are well prepared welcome the open-book format and simply use the books sparingly to check certain details and facts. All these methods can lead to better learning than instructors simply giving hints or passing out review sheets with potential test questions. The goal is to have students be aware of and participate in their own learning.

While aggregate diagnosis and reporting of assessment criteria is often the primary focus of assessment, it is also important to track and locate deficiencies for individual students. Individual scores can help students understand their strengths and weaknesses and how they measure up to other students, as well as motivate them to put forth sufficient effort to demonstrate their newly acquired knowledge and skills. These scores can also help the program keep track of each student’s progress (e.g. red flagging subpar performance) and, if used in promotional materials (e.g. on the program’s website), can create a significant competitive advantage (Dwyer et al., 2006). Though this may seem time-consuming and costly, if the assessment is effectively embedded in the course (e.g. it already exists as a graded assignment), then providing a snapshot of an individual student’s progress will take very little extra time and effort. For example, when an assessment of overall student performance on a specific program learning outcome, such as quantitative skills, is desired, the performance of students on relevant quantitative exams from specific classes could be quickly generated with little effort. This strategy can help avoid the time, effort and cost of grading lengthy projects that are separately designed for a specific program learning outcome.
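The sketch below illustrates how embedded course scores might be rolled up into a per-student snapshot for a single program learning outcome; the student names, scores and the flagging threshold are invented for illustration (only the course numbers echo those used elsewhere in this article).

```python
from statistics import mean

# Hypothetical embedded quantitative-skills scores (0-100) by course.
embedded_scores = {
    "Student A": {"HA240": 82, "HA260": 91, "HA270": 77},
    "Student B": {"HA240": 64, "HA260": 70, "HA270": 58},
}

FLAG_BELOW = 75  # assumed "red flag" threshold for subpar performance

for student, courses in embedded_scores.items():
    avg = mean(courses.values())
    flag = "  <-- red flag" if avg < FLAG_BELOW else ""
    print(f"{student}: quantitative skills mean = {avg:.1f}{flag}")
```

Because the scores already exist as graded coursework, producing this kind of snapshot adds little extra effort beyond keeping the embedded results in one place.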

The process of gathering assessment data typically takes place in the last half of the fall and spring semesters. To make the assessment of learning outcomes a habitual part of each faculty member’s responsibilities, it should be done on an ongoing basis as a regular aspect of each course. To improve the chances of success, the program will need to identify someone with a passion for the assessment of learning outcomes concept and at least a moderate knowledge of research theory for gathering the data. To make sure that the assessment is completed in a timely manner, various types of matrices can be developed (see Tables I and II for examples). Finally, determine whether the assessment will be a snapshot of student performance at one point in time, a recurring and comparative assessment of student achievement over time, or a combination of the two based on specific considerations.

According to Banta and Blaich (2011), if most of an assessment program’s resources are devoted to gathering evidence, little change will occur. Effective institutional assessment will not happen without faculty engagement in ongoing research regarding the assessment process. Engagement in ongoing research and educational support systems is required because, even though the process itself will lead to change (e.g. changes in pedagogy, creating effective learning outcomes), knowledge of all aspects of the assessment of learning outcomes process and of learning itself constantly evolves (National Institute for Learning Outcomes Assessment, 2016). Under a title similar to faculty development, most campuses promote the sharing of evidence-informed teaching that can help maximize faculty and student potential (Faculty Professional Development, 2016).

The experience

Different instructors prefer different types of assessments (e.g. a test vs a written report). Since some forms of assessment are less effective than others and some take longer to grade, persuading some faculty to change what they had been doing for many years was sometimes a little challenging. As mentioned before, since most accrediting bodies currently require an effective assessment of learning outcomes process, at some point it is the administrator’s responsibility to make sure that instructors support the program’s efforts. Additionally, some inconsistency was found in grading rigor and feedback in how individual faculty members assess student performance relative to learning outcomes. Identifying and addressing this issue through more consistent assignments and grading rubrics ended up being one of the key advantages of implementing the assessment of learning outcomes process. At NAU, these differences have narrowed over time. However, because of the many different teaching philosophies and teaching strategies of faculty, such differences will probably never be eliminated.

As addressed above, the assessment of course learning outcomes can be objective/quantitative or subjective/qualitative, based on the particular learning outcome. For example, in accounting related courses, most assessments will be quantitative – the answer is right or wrong. These assessments tended to be more consistent than qualitative assessments, such as marketing plans and other similar assignments. This is understandable because faculty must assess abstract concepts such as whether the chosen strategies are compatible with the firm’s strengths and weaknesses and specific characteristics of the customer and other environmental factors (e.g. competitors’ strategies and the economy). At NAU, to allow for a more consistent assessment of course learning outcomes from one semester to the next and for a comparison of different faculty teaching the same course, the focus is primarily on quantitative assessments, with qualitative assessments being used as appropriate. For the assessment of program learning outcomes, an end-of-program objective exam is used to assess the knowledge and understanding attained by students in core courses. The scoring standard is:

  • acceptable (score >75 percent meets the standard): because the time between when the material was taught and when the program assessment occurs could be several years, it was determined that a score of 75 percent was appropriate; and

  • unacceptable (score < 75 percent does not meet standard, which requires corrective curricular and instructional actions).

Specific exam questions to be used to assess program learning outcomes are submitted by the faculty responsible for teaching the core courses. The assessment committee reviews the exam questions and then, with the cooperation of these faculty members, selects those deemed most appropriate for the exam. The end-of-program objective exam has been useful for identifying curricular deficiencies in program learning outcomes (i.e. core competencies). An important note is that the assessment committee has found that some of the exam questions submitted by faculty lacked clarity and relevance. As a result, the assessment committee arranged a faculty workshop on test construction offered through the university’s Office of Curriculum, Learning Design and Academic Assessment. In the future, the paper-based objective exam will be replaced with a computer-based exam to improve efficiency and to allow for item and test analysis capabilities for improving assessment fairness, validity and reliability.
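The following sketch shows how end-of-program exam results could be screened against the 75 percent standard described above; the outcome labels echo the HA240 competencies discussed later (see Table V), but the scores and the screening routine itself are illustrative assumptions rather than the program’s actual reporting tool.

```python
# Hypothetical mean percent-correct per assessed competency on the
# end-of-program exam; the 75 percent cut score follows the scoring
# standard described above.
STANDARD = 75.0

exam_means = {
    "Proper sanitation in a food service operation": 68.0,
    "Key operating statistics": 81.5,
    "Restaurant planning, design, and equipment": 73.0,
}

for competency, score in exam_means.items():
    status = ("acceptable" if score > STANDARD
              else "unacceptable: corrective curricular/instructional action")
    print(f"{competency}: {score:.1f}% ({status})")
```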

To increase the objectivity and quantifiability of typically qualitative course and program learning outcomes, various rubrics are used. Rubric scores result in what are essentially subjective ratings that can be quantified and analyzed statistically (Suskie, 2005). Table III shows a rubric for evaluating student performances on written case assignments that address ethics and professional responsibility.

Another example shows the rubric criteria (see Table IV) that hospitality recruiters use to evaluate the interviewing skills of graduating seniors. The mean scores in 2018 revealed that students met expectations for first impressions, appearance, general attitude/responsiveness and closing but needed slight improvement in preparation for the interview. These findings were corroborated by recruiters’ written comments.
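As a small illustration, the sketch below ranks the 2018 means reported in Table IV to identify the criterion most in need of attention; only the dictionary of means reproduces reported values, while the ranking step is an assumed helper rather than the recruiters’ actual procedure.

```python
# 2018 mean criterion scores reported in Table IV.
criterion_means = {
    "First impressions": 18.46,
    "Appearance": 18.24,
    "Preparation": 17.18,
    "Attitude": 17.54,
    "Closing": 18.24,
}

# Rank criteria from weakest to strongest to focus improvement efforts.
for name, score in sorted(criterion_means.items(), key=lambda kv: kv[1]):
    print(f"{name}: {score:.2f}")
# Preparation (17.18) sorts first, consistent with the recruiters' feedback
# that interview preparation needed slight improvement.
```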

Making the program’s Assessment Committee a subcommittee of the Curriculum Committee has been extremely helpful in NAU’s effort to implement a closed-loop assessment of learning outcomes process. The annual findings and recommended actions of the Assessment subcommittee are reported to the Curriculum Committee, faculty and the university’s Office of Curriculum and Assessment, and are informally shared with other stakeholders, such as advisory board members. An effective assessment program also requires the active participation and engagement of faculty, which can be facilitated through a supportive administration and professional development programs that help faculty gain expertise in assessment as well as understand, develop, implement, communicate, and use evidence of student learning (Kelly et al., 2010). For example, a staff member from the university’s Office of Curriculum and Assessment conducted a workshop for all faculty on a rubric for assessing student writing skills and how to use it for generating accurate, clear, consistent, and meaningful writing evaluations. Assessment is initially time-consuming but becomes easier and quicker with practice and supportive professional development opportunities. The program’s Assessment Committee receives a summer stipend for performing data analysis, preparing reports and publishing its findings.

With the Southern Association of Colleges and Schools as the accrediting body for UH, the attainment of student learning outcomes is reported to the university on an annual basis. Instead of using a committee, once the faculty agreed on how and what to assess, one faculty member was tasked as the college’s representative to work with the university in this process. This faculty member works with the faculty whose courses are being assessed to gather the data and compile the report. Subsequently, the university’s Associate Provost for Institutional Research and her/his staff work with all the college representatives to compile the university-level report and to assist each college representative with any questions they may have.

5. Include learning outcomes, learning activities, and assessment methods in the syllabus

The purpose of establishing program and course student learning outcomes is to provide an agreed-upon foundation for faculty to design their course and program learning opportunities for students and for the assessments of student learning (Education Resources Institute, Pathways to College Network, 2012). Equally important, however, are the benefits of learning outcomes to students. Student learning outcomes make clear what students should expect from their educational experience and encourage students to be intentional learners who direct and monitor their own learning.

The most efficient, effective and, coincidentally, required means of communicating specific learning outcomes to students is the course’s syllabus (Hirsch, 2010). In addition to learning outcomes, the syllabus should also contain the specific learning activities and assessment methods that will be used to help students achieve a specified level of competence and understanding of program and course learning outcomes. Educating students on the reasons for focusing on specific learning outcomes and on the supportive learning process helps them understand why each specific learning outcome is important, why each teaching method is being used, and why it is important for them to play an active role in the process. Bain (2004), whose approach Hirsch (2010) describes in The Promising Syllabus Enacted: One Teacher’s Experience, views this process as three promises:

  1. what the course promises the student (e.g. key learning outcomes and a learner-centered environment);

  2. the student’s role in fulfilling that promise (i.e. the student’s responsibility to be an active learner); and

  3. how the student and instructor will work together in a cooperative manner, with the instructor as a mentor, rather than evaluator.

Slattery and Carlson (2005) specifically expressed the importance of the syllabus and its relationship to learning outcomes:

Most, if not all, colleges require faculty to share syllabi with their students. Although doing so is often an administrative requirement, seeing it as only that underestimates the importance of syllabi. A strong syllabus facilitates teaching and learning. It communicates the overall pattern of the course so a course does not feel like disjointed assignments and activities, but instead an organized and meaningful journey. In particular, a good syllabus clarifies the relationship between goals and assignments. Students who read a good syllabus are more likely to feel that course strategies have been designed to help them reach their goals, rather than merely as busywork or, worse, to torture them.

(Slattery and Carlson, 2005, p. 159)

Related to the importance of the syllabus in communicating course learning outcomes and how they will be learned is the concept that learning is more effective when students play an active role in their education and when they understand how to learn (Shen and Liu, 2011). This concept is often referred to as metacognition, the ability to make inferences between a current challenge and previous knowledge and skills and subsequently the ability to assess one’s performance and the learning strategies that influenced it (Gagne, 1985). Wang et al. (1990) found that metacognition ranked highest among student-related variables in their study of factors that influence learning. Dimensions of metacognition included concepts such as planning, monitoring the success of attempts, testing, changing strategies, assessing learning strategies and the ability to make generalizations about the experience. Allan and Clarke (2007) proposed that learning how to learn forms the basis for student success in higher education and that students must take responsibility for this process. They must become independent learners and be able to direct and improve their own progress.

A very closely related construct is that of autonomous learning. As in metacognition, students play an active role in the learning process. Autonomous learning adds a slightly more psychological orientation, in that it also focuses on intrinsic motivation, where the student takes on the primary responsibility of learning (Macaskill and Denovan, 2013). In other words, faculty who include learning outcomes, learning activities, and assessment methods in their syllabi would not only encourage the typical cognitive aspects of learning, but also the affective and motivational aspects that heavily influence what a student learns and retains. Both metacognition and autonomous learning lead to self-directed, life-long learning (Thompson et al., 2005).

The experience

For the majority of students’ college experience, instructors have designed similar syllabi with information regarding reading the chapters, doing the homework and often a semester project of some type. At both NAU and UH, when instructors began to use syllabi based on specific learning outcomes, learning activities and assessments, and began holding students accountable for learning, some students at first appeared to be a little shocked at the change. However, once they understood that they would be learning topics in a manner that would help their careers, their level of motivation, that is, their inclination to be metacognitive and autonomous learners, blossomed. This led to increases in student confidence regarding their managerial ability, in instructors’ job satisfaction, in student evaluations of faculty, and in recruiters noticing that students were more knowledgeable, articulate and assertive in interviews. In support of the importance of the syllabus, Texas House Bill 2504 requires that syllabi of all undergraduate-level courses be posted before classes begin. On the UH website, one can search and view all UH undergraduate syllabi. This portal also serves as a resource for students to prepare for the first class or find information about classes they need or want to take in the future.

6. Close the assessment loop

A good business process is not complete unless there is a control process for measuring, monitoring and improving important activities. The business world has long subscribed to the concept of benchmarking for process improvements (Camp, 1995). However, it is not just finding the best process. Once the program’s current process is assessed and compared to the benchmark, it is the next step – the actions taken for improvement – closing the loop, that are most important and that lead to incremental improvements. Unfortunately, higher education has not been as effective as it should be in closing the loop. A study of assessment practices at 146 institutions revealed that very few could provide examples in which the use of assessment findings resulted in improved student learning (Banta et al., 2009).

The most crucial part of the assessment process occurs when the assessment results are translated into actions for improving student learning. Closing the loop begins with faculty and other stakeholders reviewing the assessment results and then collaborating on possible improvement strategies that will be shared as common goals for the entire program. Important questions to answer are:

  1. Were the specific program and course learning outcomes met?

  2. What strengths and weaknesses were identified in any of the steps of the process? For example, were effective program and course learning outcomes set? Were they communicated to students and to other stakeholders? Was the curriculum map effectively organized? Were the appropriate teaching methods utilized for each learning outcome? Were the assessments carried out in a valid and reliable manner?

  3. How did the results – the students’ ability to understand and to apply specific program and course learning outcomes – compare with current goals and with previous assessment cycles (e.g. better, worse or about the same)?

Once the assessment is complete, the results should be used to shape improvements in the overall assessment process and in the assessment of specific program and course learning outcomes. Faculty will bear much of the responsibility for assessment improvements.

While even areas of strength should be assessed to determine if improvements are possible, the key focus of closing the loop is on how weaker areas can be improved. Improvement actions should directly relate to the learning outcome evaluated and be effective, logical and prepared with the appropriate participation of all stakeholders (see Tables V and VI). Once actions are implemented, they need to be reviewed to determine their impact on student learning. This closes the loop for the assessment cycle. Thus, the assessment cycle begins anew, systematically looking for new opportunities to improve student learning (Maki, 2002). Additionally, the assessment results should be shared with students to help them understand their strengths and weaknesses and to reflect on what they need to do to improve. Measurement and accountability will increase confidence in the program among key stakeholders as they begin to understand the program’s earnest desire to give students the best education possible and to provide employers with graduates who have the skills they need.

Obviously, assessment results can provide stakeholders with tangible evidence of the program’s performance and help to prepare accreditation reports for accountability and continuous improvement. While in this step we are looking for potential shortcomings and ways to improve the assessment process, we also need to take time to stop and smell the roses and to celebrate the many small and large successes of the assessment process. We are teachers, and this effort helps us become better at our craft, so one result of this process is that we can be proud of what we have achieved.

The experience

NAU’s hospitality program has been engaged in assessment for almost 20 years and has received numerous awards and various other forms of recognition. However, when compared against a comprehensive closed-loop process and recent updates to university and accreditation requirements, those efforts needed to be improved. Partially because of the historic efforts and the intense focus on meeting new university and accreditation requirements, the university has been pleasantly surprised by the results and the extent of what has been accomplished and learned in this process.

A common initial finding was that the closed-loop assessment for some learning outcomes pointed to a need to improve teaching methods or pedagogy. For example, in the first writing assessment, student performance was less than ideal in a large percentage of the papers evaluated because of excessive use of quotes; this learning outcome required a more suitable and better-constructed writing assignment. After a few more assessment cycles, the assessment design will ideally produce even more meaningful results for informing various program changes and improvements. In order to reap the full benefits of conducting the assessment, programs should present their assessment results to key internal and external stakeholders and involve them in curricular and instructional decision making whenever possible.

At the end of each summer at UH, the Academic Program Assessment Report is submitted by a designated faculty member to UH central administration. Learning outcomes relative to performance standards are discussed, the entire assessment process is analyzed, and a program improvement plan is prepared. These steps ensure that the assessment process is not static. If a goal is met, the improvement plan might be for faculty to establish a more challenging goal for the next academic year.

7. Disseminate assessment results

After the economic downturn of 2008 and the rise of academic capitalism, all areas of higher education have placed a greater focus on measuring what we do and how well we do it (Watson, 2011). Other calls for change were directed at the historic reliance on educational reputation rather than educational performance (i.e. the image of the university as opposed to the actual quality of the university) (Lingenfelter, 2007). Argyrous (2012, p. 457) proposed that virtually any aspect of an organization “will improve if standards of transparency and accountability are followed in the process of gathering, analyzing, interpreting, and presenting evidence for policy.”

It follows that without transparency there will be limited accountability, and without accountability, faculty are less likely to put forth their best effort. In the initial investigation for this research (Reich et al., 2016), it was found that only 3 of 25 randomly selected high-performing hospitality programs posted their program’s learning outcomes, and none posted their assessment results on their program’s site (one university did post some program learning outcomes on the university site). Though no one knows for certain that these programs do not effectively assess their learning outcomes, it is known that they do not post the results. The likelihood of something as involved as the assessment of learning outcomes occurring is very small without transparency and accountability; we need to see tangible results.

Managing students’ education without an effective measure of academic performance is like managing a business without an effective operational assessment or an income statement. Managing a business without knowing how well the company is doing operationally or financially would be the height of irresponsibility and would almost certainly lead to significant problems. If learning outcomes are not being assessed, then managing the program’s educational performance will be limited to window dressing. Management in such an environment becomes focused on trying to make faculty, students and other stakeholders feel good in spite of the fact that no one knows the program’s educational performance or level of quality. The National Institute for Learning Outcomes Assessment considers transparency to be important enough that it refers to its overall assessment process as the transparency framework. The organization states that evidence of student learning should be:

  • specific to institutional level and/or program level;

  • clearly expressed and understandable by multiple audiences;

  • prominently posted at or linked to multiple places across the website;

  • updated regularly to reflect current outcomes; and

  • receptive to feedback or comments on the quality and utility of the information provided (National Institute for Learning Outcomes Assessment, 2016).

To make sure that each program is complying with all applicable requirements, there should be a location where the results are prominently displayed to show either the results of the assessment or at least the fact that the assessment was completed, something to give readers confidence in the process. Alternatively, if the assessment was not completed, this fact should be communicated to all pertinent parties. As an additional oversight, the results can be reviewed by various stakeholders, such as the dean and perhaps the provost’s office and/or members of the program’s advisory board. There should also be a third-party review of the results of each program and course assessment to make sure that the entire process is implemented in an effective, efficient, fair, ethical and transparent manner. The third-party review will generally be provided on an annual basis by the university’s own office of assessment and, at the university, college and program levels, by applicable accreditors.

The experience

The first step in transparency and accountability should be the posting of program and course learning outcomes on the program’s website. About six years ago at NAU, the senate voted on and passed a policy regarding the posting of program learning outcomes. Subsequently, the curriculum and assessment website requested that each program also post a curriculum map, a table that shows where, when and how program learning outcomes are assessed. The next step will be the posting, analysis and interpretation of assessment outcomes. The posting of the assessment of course learning outcomes has been considered. However, because this might expose specific instructors and be interpreted erroneously (e.g. a mean score of 3.8 for quantitative reasoning could be thought by some to be extremely low, when it may actually be very good), it was decided not to post individual course learning outcomes, at least not until a more effective process is created.

Posting of learning outcomes alone is obviously not the ultimate objective and, if not taken seriously, will do nothing for students or the program. However, in a well-run hospitality program, it does signal to faculty, current and prospective students, parents, administrators, advisory board members, industry recruiters, state governing bodies and other stakeholders that the program is serious about the quality of the education that it provides. In fact, as small a step as it is, it will likely be the only tangible evidence stakeholders have regarding educational quality. Table VII shows what the assessment results of program learning outcomes for a hypothetical hospitality program might look like.
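As one illustration of how the “% demonstrating competence” figures in a table such as Table VII might be produced from per-student rubric means, the sketch below applies the “mean of 2 or above” competence rule noted under that table to an invented set of student scores; the data and routine are purely hypothetical.

```python
# Hypothetical per-student mean rubric scores (4-point scale) for one
# program learning outcome; competence = mean of 2 or above, per the
# note under Table VII.
student_means = [2.9, 1.8, 3.4, 2.1, 2.6, 1.6, 3.0, 2.4]

competent = sum(1 for m in student_means if m >= 2)
pct = 100 * competent / len(student_means)

print(f"Mean score: {sum(student_means) / len(student_means):.2f}")
print(f"% demonstrating competence: {pct:.1f}%")  # 75.0% for this sample
```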

For UH, the results of the annual Academic Program Assessment are posted on a website accessible to all college faculty. This is also an agenda item at the first faculty meeting each fall, where faculty review the report for the last reporting period to ensure accountability and transparency. The Program Improvement Plan is included in the Academic Program Assessment. Table VIII shows the quantitative skill results compared to the minimum standard of 75 percent.

The minimum standard was exceeded in each of the last four years, but the process of compiling the information was found to be overly time-consuming. Because timely feedback is critically important, faculty are meeting to determine whether quantitative skills can be measured more efficiently. One suggestion for increasing efficiency was to minimize assessment time by creating a process that requires only one assessment for both quantitative and other skills. This would allow the program to reduce the effort and time involved and to provide feedback on a more timely basis.

Conclusion

The steps in a closed-loop assessment of learning outcomes process are relatively detailed; however, because of changes in expectations of stakeholders and the requirements of accreditors, they are now mandatory for most hospitality programs. Therefore, the choice is not whether to implement them, but when. Since some unknown number of competing hospitality programs will be acting sooner than others, it is certainly to the program’s advantage to begin as soon as possible. Another factor to consider is that the implementation of a closed-loop assessment of learning outcomes process will take several years to complete, so a program cannot wait too long before faculty and administrators begin working on it.

While the authors have attempted to be as thorough as possible, there is no way to include everything that a program may require to develop its own closed-loop assessment of learning outcomes process. Your university assessment committee, the assessment efforts of other universities, research on the topic and information from organizations such as the National Institute for Learning Outcomes Assessment can provide resources and support for developing an assessment process.

Lastly, each of the authors of this paper takes great pride in the teaching profession and continually strives to offer the best education possible to students. This effort has given each of the authors an improved understanding of and appreciation for the profession. More importantly, it has helped both the NAU and UH hospitality programs contribute more effectively to student success.

Table I. Example of a work plan with assigned responsibilities and timeline

Assessment tool | Data collection: responsible person | Data collection: when/where | Data analysis: responsible party | Data analysis: when | Discussion of findings: participants | Discussion of findings: when
End-of-program objective exam | Collins | Spring 2016, two sections of senior seminar | HRM Assessment Committee | Summer 2018 | Faculty, staff and external stakeholders | Fall 2018 semester
Writing rubric | Cauvin | Spring 2016, two sections of senior seminar | HRM Assessment Committee | Summer 2018 | Faculty, staff and external stakeholders | Fall 2018 semester

Table II. Example of a plan for analyzing and interpreting the evidence

Assessment question(s) and/or program student learning outcome addressed by the measure | Assessment tool name | Standard(s) | How the tool and standards will address the assessment question(s) and/or outcomes
Does the coursework in HA210, HA240, HA260 and HA270 prepare students with the necessary technical skills needed for a career in hotel and restaurant management? | End-of-program objective exam | Multiple choice/true-false questions developed by faculty for key course competencies; acceptable (score >75%), marginally acceptable (score 60–74%, needs more reinforcement) and unacceptable (score <60%, needs significant reinforcement) | Mean exam item scores will provide actionable information for evaluating student technical knowledge and curricular deficiencies

Table III. Ethics rubric

Performance criteria | Below expectations | Meets expectations | Exceeds expectations
Behavioral awareness | Unaware that an ethics issue exists | Identifies ethical dimensions, but leaves out facts that are ethically relevant | Identifies all relevant ethical dimensions
Professional awareness | Unaware that a professional issue exists | Identifies professional aspects of the situation but leaves out professionally relevant factors | Identifies all relevant professional factors
Awareness of stakeholders | Consideration of only one stakeholder (e.g. oneself) relevant to the ethical decision | Identifies and considers many or most potential stakeholders to the ethical decision but leaves out some significant stakeholders | Identifies and considers all potential stakeholders relevant to the ethical decision
Ethical reasoning | Only legal compliance or selfish thinking used to determine and resolve the ethical issue(s) | Applies only two ethical decision rules/tests/approaches in an effort to resolve the ethics issue(s) | Applies more than two ethical decision rules/tests/approaches in an effort to resolve the ethics issue(s)
Ethical decision making | Does not arrive at an ethical decision | Decision coheres with the problem, interested parties and/or general situation | Arrives at an insightful, comprehensive decision that coheres with the problem, interested parties and situation

Table IV. Interviewing rubric results in 2018 (scoring key: meets expectations >17, needs improvement 14–17, below expectations <14)

 | First impressions | Appearance | Preparation | Attitude | Closing | Overall score (out of 100)
All students | 18.46 | 18.24 | 17.18 | 17.54 | 18.24 | 89.19

Table V. Example of HRM Assessment Committee findings and recommended actions for HA240, Restaurant Management

Course | Technical course competencies not met | Recommended actions | Action taken (TBD after review by faculty, other stakeholders and the HRM Curriculum Committee)
HA240 | Describe the guidelines and principles necessary for proper sanitation in a food service operation | Additional course materials and/or learning activities on sanitation to ensure this topic is adequately covered; make the ServSafe Certification a requirement | Students must take and pass the ServSafe Sanitation Certification Exam for Managers
HA240 | Compute and interpret key operating statistics for a food service operation | More practice sets using spreadsheet software on basic restaurant statistical calculations | More practice sets on basic restaurant statistical calculations were added
HA240 | Identify the key elements in restaurant planning, design, and equipment | Additional course materials and/or learning activities on basic restaurant layout and design to ensure this topic is adequately covered | Additional course material was added. A new elective (Design and Layout for Restaurant Facilities) was also created to provide more in-depth layout and design knowledge for students interested in restaurant careers

Table VI. Example of HRM Assessment Committee findings and recommended actions for program learning outcomes

Writing assessment
The mean rubric scores of the 10–15 page “Leadership Portfolios” of 42 graduating seniors did not reveal significant writing deficiencies, although five percent of the students had an overall average of less than “2” (i.e. below expectations)

 | Content | Organization | Style | Grammar | APA | Professionalism | Visual aids
All graduating seniors | 2.66 | 2.44 | 2.59 | 2.42 | 2.53 | 2.74 | 2.85

Scoring key: exceeds expectations = 2.8–3, meets expectations = 2–2.7 and below expectations = 0–1.9

Recommendations
  • The writing rubric used in this assessment should be introduced in HA100, so that students at the very beginning of their academic careers understand what is important and what standards have been set by faculty
  • Instructors with writing assignments should provide students with copies of the writing skills rubric and incorporate them into performance evaluations whenever possible
  • The use of a writing rubric by faculty provides students with more constructive feedback and enables faculty to more easily distinguish between different levels of performance and to more readily identify students needing writing assistance
  • Faculty should receive information on what university services are available for students with writing and other deficiencies
  • HRM should consider mandating a section on all course syllabi that explains available college and university tutorial services
  • Faculty should require that all writing assignments be written utilizing the APA (American Psychological Association) style guide

Table VII. Assessment results for the school of hospitality

 | Mean score (4-point scale) | | | | % demonstrating competence | | |
Assessment period | 2013 | 2014 | 2015 | 2016 | 2013 (%) | 2014 (%) | 2015 (%) | 2016 (%)
Critical thinking | 2.49 | 2.57 | 2.75 | 2.93 | 81.3 | 81.8 | 83.3 | 83.2
Written communication | 2.36 | 2.39 | 2.56 | 2.68 | 74.5 | 73.9 | 75.2 | 74.9
Oral communication | 2.35 | 2.56 | 2.59 | 2.61 | 79.1 | 82.3 | 82.4 | 82.6
Quantitative reasoning | 2.87 | 2.95 | 3.13 | 3.08 | 73.2 | 74.1 | 75.3 | 75.4

Notes: Contrived data. Competence = mean of 2 or above

Table VIII. Assessment of quantitative skill and program improvement plans

Year | % of students who scored 75% or above on the quantitative questions
2017–2018 | 86
2016–2017 | 82
2015–2016 | 84
2014–2015 | 79

References

Allan, J. and Clarke, K. (2007), “Nurturing supportive learning environments in higher education through the teaching of study skills: to embed or not to Embed?”, International Journal of Teaching and Learning in Higher Education, Vol. 19 No. 1, pp. 64-76.

Argyrous, G. (2012), “Evidence based policy: principles of transparency and accountability”, Australian Journal of Public Administration, Vol. 71, pp. 457-468.

Bain, K. (2004), What the Best College Teachers Do, Harvard University Press, Boston, MA.

Banta, T.W. and Blaich, C. (2011), “Closing the assessment loop”, Change, Vol. 43, pp. 22-27.

Banta, T.W., Jones, E.A. and Black, K.E. (2009), Designing Effective Assessment: Principles and Profiles of Good Practice, Jossey-Bass, San Francisco, CA.

Boud, D. and Falchikov, N. (2006), “Aligning assessment with long-term learning”, Assessment & Evaluation in Higher Education, Vol. 31 No. 4, pp. 399-413.

Camp, R. (1995), Business Process Benchmarking: Finding and Implementing Best Practices, ASQC Quality Press, Milwaukee, WI.

Case, S. (2007), “Reconfiguring and realigning the assessment feedback processes for an undergraduate criminology degree”, Assessment & Evaluation in Higher Education, Vol. 32 No. 3, pp. 285-299.

Craddock, D. and Mathias, H. (2009), “Assessment options in higher education”, Assessment & Evaluation in Higher Education, Vol. 34 No. 2, pp. 127-140.

Dwyer, C.A., Millet, C.M. and Payne, D.G. (2006), “A culture of evidence: postsecondary assessment and learning outcomes”, ETS. ets.org.

Education Resources Institute, Pathways to College Network (2012), “Student learning outcomes for institutional accountability and transparency”, Pathways to College Network Brief, Lumina Foundation, p. 6.

Faculty Professional Development (2016), Mission and Overview, Northern Arizona University, available at: http://nau.edu/Provost/Faculty-Development/Mission-and-Overview/

Gagné, R.M. (1985), The Conditions of Learning and Theory of Instruction, College Publishing, New York, NY.

Gibson, J.W. (2011), “Measuring course competencies in a school of business: the use of standardized curriculum and rubrics”, American Journal of Business Education, Vol. 4, pp. 1-6.

Hirsch, C.C. (2010), “The promising syllabus enacted: one teacher’s experience”, Communication Teacher, Vol. 24 No. 2, pp. 78-90.

Kelly, C., Tong, P. and Choi, B. (2010), “A review of assessment of student learning programs at AACSB schools: a dean’s perspective”, Journal of Education for Business, Vol. 85, pp. 299-306.

Kelly-Riley, D. and Elliot, N. (2014), “The WPA outcomes statement, validation, and the pursuit of localism”, Assessing Writing, Vol. 21, July, pp. 89-103.

Lingenfelter, P.E. (2007), “How should states respond to ‘A Test of Leadership’?”, Change: The Magazine of Higher Education, Vol. 39, pp. 13-19.

Macaskill, A. and Denovan, A. (2013), “Developing autonomous learning in first-year university students using perspectives from positive psychology”, Studies in Higher Education, Vol. 38 No. 1, pp. 124-142.

Maki, P. (2004), Assessing for Learning: Building a Sustainable Commitment Across the Institution, Stylus, Sterling, VA.

Maki, P.L. (2002), “Developing an assessment plan to learn about student learning”, The Journal of Academic Librarianship, Vol. 28 No. 21, pp. 8-13.

Mery, Y., Newby, J. and Peng, K. (2011), “Assessing the reliability and validity of locally developed information literacy test items”, Reference Services Review, Vol. 39 No. 1, pp. 98-122.

National Institute for Learning Outcomes Assessment (2016), Transparency Framework, University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA), Urbana, IL, available at: www.learningoutcomesassessment.org/TransparencyFramework.htm

Papinczak, T., Peterson, R., Babri, A., Ward, K., Kippers, V. and Wilkinson, F. (2012), “Using student-generated questions for student-centered assessment”, Assessment & Evaluation in Higher Education, Vol. 37 No. 4, pp. 439-452.

Prus, J. and Johnson, R. (1994), “A critical review of student assessment options”, in Bers, T.H. and Mittler, M.L. (Eds), Assessment & Testing Myths and Realities, Vol. 88, pp. 69-83.

Reich, A.Z., Collins, G.R. and DeFranco, A.L. (2016), “Is the road to effective assessment of learning outcomes paved with good intentions? Understanding the roadblocks to improving hospitality education”, Journal of Hospitality, Leisure, Sport & Tourism Education, Vol. 18, pp. 21-32.

Shen, C.Y. and Liu, H.C. (2011), “Metacognitive skills development: a web-based approach in higher education”, The Turkish Online Journal of Education Technology, Vol. 10, pp. 140-150.

Slattery, J.M. and Carlson, J.G. (2005), “Preparing an effective syllabus: current best practices”, College Teaching, Vol. 53, pp. 159-164.

Suskie, L. (2005), Assessing Student Learning, Anker Publishing Company, Boston, MA.

Suskie, L. (2010), Assessing Student Learning: A Common Sense Guide, 2nd ed., John Wiley & Sons, San Francisco, CA.

Thompson, N.S., Alford, E.M., Changyong, L., Johnson, R. and Matthews, M.A. (2005), “Integrating undergraduate research into engineering: a communications approach to holistic education”, Vol. 94, pp. 297-307.

Trotter, E. (2006), “Student perceptions of continuous summative assessment”, Assessment & Evaluation in Higher Education, Vol. 31 No. 5, pp. 505-521.

Wang, M.C., Haertel, G.D. and Walberg, H.J. (1990), “What influences learning? A content analysis of review literature”, Journal of Educational Research, Vol. 84 No. 1, pp. 30-43.

Watson, C. (2011), “Accountability, transparency, redundancy: academic identities in an era of ‘excellence’”, British Educational Research Journal, Vol. 37, pp. 955-971.

Corresponding author

Allen Z. Reich can be contacted at: allen.reich@nau.edu
