Measuring learning outcomes: bridging accreditation requirements and LMS functionalities

Hesham El Marsafawy (Department of Architectural and Interior Design Engineering, Gulf University, Sanad, Bahrain and Faculty of Applied Arts, Helwan University, Giza, Egypt)
Rumpa Roy (Department of Administrative Sciences, Gulf University, Sanad, Bahrain)
Fahema Ali (Quality Assurance and Development Center, Gulf University, Sanad, Bahrain)

Quality Assurance in Education

ISSN: 0968-4883

Article publication date: 12 August 2022

Issue publication date: 28 September 2022

Abstract

Purpose

This study aims to identify the gap between the requirements of the accreditation bodies and the widely used learning management systems (LMSs) in assessing the intended learning outcomes (ILOs). In addition, this study aims to introduce a framework, along with the evaluation of the functionality of the LMS, for measuring the ILO.

Design/methodology/approach

A qualitative method was deployed to examine the gap between the requirements of the accreditation standards and the LMS functionalities. The researchers collaborated to design a mechanism, develop a system architecture to measure the ILO in alignment with the accreditation standards and guide the development of the Moodle plugin. The appropriateness and effectiveness of the plugin were evaluated within the scope of assessment mapping and design. Focus group interviews were conducted to collect feedback from the instructors and program leaders regarding its implementation.

Findings

The results of this study indicate that there is no standardized mechanism to measure course and program ILO objectively, using the existing LMS. The implementation of the plugin shows the appropriateness and effectiveness of the system in generating ILO achievement reports, which was confirmed by the users.

Originality/value

This study proposed a framework and developed a system architecture for the objective measurement of the ILO through direct assessment. The plugin was tested to generate consistent reports during the measurement of course and program ILO. The plugin has been implemented across Gulf University’s program courses, ensuring appropriate reporting and continuous improvement.

Citation

El Marsafawy, H., Roy, R. and Ali, F. (2022), "Measuring learning outcomes: bridging accreditation requirements and LMS functionalities", Quality Assurance in Education, Vol. 30 No. 4, pp. 555-570. https://doi.org/10.1108/QAE-11-2021-0186

Publisher: Emerald Publishing Limited

Copyright © 2022, Hesham El Marsafawy, Rumpa Roy and Fahema Ali.

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

Measuring the intended learning outcome (ILO) has become an integral practice of higher education institutions (HEIs) to ensure continuous improvement and accountability in preparing work-ready graduates. However, there is no unanimous solution for measuring the ILO among national and international institutions with comparable standards at course, program and institutional levels (Martin and Mahat, 2017). The purpose behind measuring the ILO at the program level is to meet the expectations of the stakeholders and the requirements of the accreditation bodies and to measure the effectiveness of the program in terms of curriculum, pedagogy and students’ progress. As reflected in published research (Nusche, 2008), ratings and rankings of HEIs do not truly reflect the knowledge, skills and competences of their graduates or the educational quality of their academic programs. Direct assessment of the ILO depends on the assessed work of the students in the courses, standardized tests of disciplinary knowledge and skills, student portfolios and other such evidence. Indirect assessment relies on surveys (current students, exit students and alumni), interviews, graduate destinations and the like. Countries across the world, in their endeavors to ensure quality education, have developed sets of guidelines and principles for HEIs at program and institutional levels. For example, the “Tuning Project,” launched in Europe in 2001, focused on developing reference points at subject level expressed in terms of the ILO (Wagenaar, 2008).

Within the context of outcome-based education (OBE), HEIs either develop in-house mechanisms or rely on existing learning management systems (LMSs) to measure students’ achievement of learning outcomes. This applies to the direct assessment of learning outcomes, under the assumption that course assessments are linked to the learning outcomes, implying the alignment of students’ grades with course-intended learning outcomes (CILOs). This extends to the measurement of program-intended learning outcomes (PILOs), as courses are mapped to PILOs. The popular LMSs, namely, Moodle, Canvas and Blackboard, measure the achievement of learning outcomes through their own independent mechanisms. However, there is no standardized measure of the ILO that can be applied consistently across HEIs in alignment with accreditation bodies. Institutions are required, and expected, to develop explicit criteria and processes for assessing the ILO at course and program levels, which are objective, specific and measurable in quantitative terms.

The aim of this research is to identify the gap between the requirements of the accreditation bodies and the existing LMSs for measuring students’ achievement of learning outcomes. The research further aims to provide a mechanism that equips LMSs with the functions needed to measure learning outcomes consistently.

The research demonstrates the collaboration of the research team to introduce an approach, design a mechanism and develop a system architecture for measuring learning outcomes aligned with accreditation standards. This system architecture was used to develop a plugin that was integrated with Moodle, the LMS used by Gulf University (GU), for measuring the ILO consistently and accurately. Finally, the developed system was tested within the scope of assessment mapping and design, where accurate measurement of the ILO was achieved. Focus group interviews were conducted with course instructors and program leaders, who shared their experience of using the system for measuring the ILO.

Within this context, the following research questions were developed and, subsequently, addressed:

RQ1. Is there a gap between the requirements of the accreditation bodies and the available learning management systems towards objective measurement of the intended learning outcome?

RQ2. How do universities measure the intended learning outcome?

RQ3. How does the learning management system support the measurement of the intended learning outcome at course and program levels?

RQ4. Does the plugin facilitate the measurement of the intended learning outcome on Moodle through direct assessment?

2. Background literature

2.1 Outcome-based education

OBE is a common trend in HEIs that considers the learner, rather than teaching and the delivery of course material, as the focal point. The transition to OBE responds to the industry’s need to hire graduates with twenty-first-century skills. Bloom’s Taxonomy, as an OBE model, has been widely applied for assessing performance, resulting in better capability and self-esteem among graduates (Agarkhed, 2017). OBE clearly defines what students should demonstrate or achieve on course completion. It focuses on curriculum development and the knowledge, skills and attributes gained by the students, not the educational process. In addition, it provides an appropriate description of course learning outcomes, program aims and program learning outcomes, which are interlinked (Rani, 2020).

2.1.1 Standards of accreditation bodies.

Accreditation plays a crucial role in HEIs achieving academic excellence. Recognition is targeted at both program and institutional levels to enhance the quality of educational programs and services. Meeting the standards of accreditation bodies, at local and international levels, is a comprehensive and rigorous process. However, the benefits of accreditation accrue to all the stakeholders of the institution and add value to the academic programs (Due et al., 2019). Meeting the accreditation standards is challenging and requires careful process planning and the development of a common understanding. It is imperative to strike a balance between flexibility and precision in accredited programs while ensuring quality standards (Cura and Ahmed Alani, 2018).

2.1.2 Why measure learning outcomes.

There is a growing debate around the logic behind measuring learning outcomes: whether it is to ensure accountability, evaluate performance, meet quality assurance expectations or drive continuous improvement in teaching, learning and the student experience. Whether internal or external quality assurance systems should be used to set the learning outcomes of HEIs is a further matter for deliberation, as the vision, mission and core values of an institution go beyond measuring learning outcomes at the program level (Howson and Buckley, 2020).

Developing and implementing ILOs at course and program levels ensures transparency, consistency and reliability in the assessment process. It highlights the role of the students in demonstrating actual achievement and learning, as students are made responsible for their own learning through performance-based tasks supported by clear assessment criteria (Holmes, 2019). The link between higher education and employment is manifested in the provision of competence-based education. Employers seek both generic and specific competences when recruiting graduates from relevant disciplines (Shah et al., 2017). Thus, designing an integrated assessment system calls for systematic mapping, in four phases, at course and program levels, as practiced at GU, Bahrain. This addresses not only the assessment gap but also quality assurance standards, the requirements of regulatory bodies, the expectations of the labor market and international best practices (El Marsafawy et al., 2018).

Well-balanced and structured assessment methods motivate students to engage in the learning process, thereby attaining the ILO. Traditional tests are designed to assess ILOs in the cognitive domain. However, ILOs in the affective and psychomotor domains, such as communication, leadership, teamwork, ethics, professionalism and time management, cannot be measured using traditional, time-bound, proctored examinations (Rahmat, 2011). Well-designed rubrics, defined indicators and comparable results provide better solutions to such practical issues (Caspersen et al., 2017).

2.1.3 How to measure learning outcomes.

There are multiple ways of measuring learning outcomes, depending on the motive of measurement. The Student Assessment of Learning Gains (SALG) instrument provides feedback on teaching and learning as well as accountability measures for external stakeholders. It helps collect aggregate data on students’ reported learning outcomes against parameters such as students’ understanding, skills, cognition, attitude and integration of learning, appropriate to the learning activities and course objectives. This supports accountability, accreditation and continuous improvement of the teaching-learning process (Scholl and Olsen, 2014).

The application of Searchlight™, a web-based performance assessment tool, supports data-driven measurement of learning outcomes using rubrics. The software offers various course assessment methods, data import, assessment metrics, rubric or gradebook entry, performance review, a course progress tracker, system-generated course assessment reports, and faculty training and support. This enables the course instructor to improve the course and supports the program leader in planning for continuous enhancement (James-Okeke et al., 2013).

ILO assessment is carried out over time with advanced tools and technologies. However, the contribution of a course toward achieving the PILO is underpinned by the assessment of students’ performance and course effectiveness. It is not enough to develop learning outcomes and adopt OBE; it is also crucial to systematically record and analyze assessment data. This confirms students’ competency, reflected in the CILO, and informs subsequent improvements in curriculum and course delivery (Anwar et al., 2012).

Extant literature indicates that various countries have developed ILO measurement systems based on their unique requirements. Australian universities focused on identifying a comprehensive set of generalized learning outcomes across the bachelor’s degree programs and a set of criterion-based standards for measuring learning outcomes (Martin and Mahat, 2017). The study proposed generalized assessment tasks for courses and used constructive alignment between courses, curriculum and PILOs for measuring the PILOs for different universities. The framework for the assessment of the learning outcomes reflects a bottom-up approach that starts with direct assessment tasks for CILOs, followed by PILOs, where CILO and PILO are interlinked. Subsequently, university-wide learning outcomes were developed, in alignment with the PILO, which were further linked to sector learning outcomes and standards.

In Saudi Arabia, research indicated different types of assessment – direct, indirect, quantitative and qualitative – for measuring the achievement of student learning outcomes and CILO (Alzubaidi, 2017). The study proposed a comprehensive combinational approach to measure the CILO, using the average, threshold and performance vector approach. The average and threshold approaches consider the success criteria of students’ grades during assessments.
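To make the distinction concrete, the following is a minimal sketch, in Python, of how the average and threshold approaches described above might be computed from per-student CILO percentages. The 70% success criterion, the function names and the sample scores are illustrative assumptions rather than values from the cited study.

```python
# Illustrative only: two ways of aggregating per-student CILO percentages
# into a course-level figure (names and the 70% criterion are assumptions).

def average_approach(scores):
    """Course-level CILO achievement as the mean of per-student percentages."""
    return sum(scores) / len(scores)

def threshold_approach(scores, threshold=70.0):
    """Percentage of students whose CILO achievement meets the success criterion."""
    passed = sum(1 for s in scores if s >= threshold)
    return 100.0 * passed / len(scores)

cilo_scores = [93.3, 80.0, 74.5, 65.0]               # hypothetical per-student percentages
print(round(average_approach(cilo_scores), 2))        # 78.2
print(round(threshold_approach(cilo_scores), 2))      # 75.0 (3 of 4 students at or above 70%)
```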

In Romania, the importance of measuring learning outcomes relates to students’ information literacy. Web-based portfolio assessment was proposed to assess the learning outcomes against a set of criteria. The students were expected to meet the requirements for graduation by achieving the competencies in accordance with the criteria. This process incorporates both the assessment of portfolio content and the results of learning (Lile and Bran, 2014).

A case study from Canada shows that measuring learning outcomes acts as an important tool for enhancing the educational quality and accountability of HEIs. The synergy between the learning outcomes, students’ learning experience and assessment tasks meets the requirements of quality assurance. This approach further ensures improvement in teaching and learning, as evidenced by student learning. The Higher Education Quality Council of Ontario outlined four domains of learning outcomes: basic cognitive skills, disciplinary content, higher-order cognitive skills and transferable life skills (Brumwell et al., 2017).

Research on measuring the PILOs of a baccalaureate degree in Nursing at a college in the UAE indicates innovative ways of collecting course evaluation data and final assessment results for Year 4 courses (Al Hmaimat et al., 2021). The conceptual framework set the expected student achievement at 70% at the course level. By mapping the assessment tasks to the CILOs and aligning the CILOs with the PILOs, the framework applied the same 70% benchmark to the average percentage achievement of each PILO in order to provide curriculum enhancement recommendations.

2.2 Learning management system in measuring learning outcomes

An LMS is a digital platform integrating pedagogical and course administration tools to facilitate the delivery of courses. Universities across the world use either open-source or subscription-based platforms, which can be compared based on features, capabilities and technical facets. An LMS is used for synchronous and asynchronous communication between stakeholders (instructors and students), delivery of course material, assessment (online and asynchronous) and management of class activities (including attendance and lesson and timetable planning) (Croitoru and Dinu, 2016). Open-source LMSs, in particular, prove cost-effective, allow users to engage and interact on the digital platform and leverage digital fluency and innovative pedagogy (Kant et al., 2021). A standard LMS focuses on the curriculum, the learning process (including lesson plans), course material, student assessment, content management and students’ perception. For example, Canvas is used worldwide because of its accessibility, simple architecture, reliability and openness, enhancing the effectiveness of student learning both before and after the COVID-19 pandemic (Wicaksono et al., 2021). Moodle is used as a cost-effective solution, as it is an open-source LMS with user-friendly functionalities and interface.

Students’ perception of and satisfaction with the LMS in HEIs are critical to the effectiveness of a blended learning framework (Patel and Patel, 2017). Moonsamy and Govender (2018) examined the use of Blackboard among staff at a South African university.

The idea of using student assessment data from the LMS to generate course- and program-level assessments of web-based courses underlines the need for an automated process to align course activities and assignments with program objectives and the institutional mission. A pilot study was conducted to develop a systematic and automated learning outcome assessment report integrated with the LMS. The System Development Life Cycle process was conceived in three phases: identification, design and implementation, and system validation (Tello and Motiwalla, 2010).

3. Methodology

Qualitative research was conducted for this study, and the researchers reviewed the program accreditation standards for HEIs in Bahrain, Oman, Saudi Arabia and the UAE. The research relied on the information available in the published handbooks (relevant to the program) of the accreditation bodies in the above-mentioned countries. The standards and indicators of the accreditation bodies relevant to the measurement of learning outcomes were reviewed. Subsequently, the functionalities of popular LMSs – Moodle, Canvas and Blackboard – for measuring learning outcomes were compared. The official websites of Moodle, Canvas and Blackboard, videos related to assessing learning outcomes and descriptions of the system mechanisms were considered within the set criteria. The gap between the requirements of the accreditation standards and the functionalities of the compared LMSs was determined. A single case study research method was deployed to focus on a contemporary phenomenon within a real-life situation where how and why questions are addressed (Yin, 2017). Within this context, the researchers concentrated on Moodle, as implemented by GU.

In addition, the researchers introduced a comprehensive mapping approach at both course and program levels, designed a mechanism and developed a system architecture to accurately measure the achievement of learning outcomes through direct assessment, aligned with the relevant standards of the accreditation bodies. Accordingly, a plugin was developed that generated the measurement reports at course and program levels for each student and the entire cohort, provided the students were graded on Moodle for all the courses. The researchers contributed significantly to implementing the plugin, where grades of sample students in sample courses were entered and aligned with the learning outcomes. A sample CILO report was successfully generated for each course within this scenario. Subsequently, a PILO report was generated using the plugin based on the CILO-to-PILO mapping.

Following the implementation of the plugin in one semester, a deductive approach was taken to collect qualitative data through focus group interviews with course instructors and program leaders for feedback on the usability of the system, the accuracy of the CILO report and the impact of the CILO and PILO reports on course improvement and program effectiveness. A purposive sampling technique was deployed to capture the perceptions of the instructors (eight) in one session, followed by the program leaders (four) and deans (two) of the colleges in another. Two sets of structured questions were prepared for the instructors and program leaders, with some common questions, on the usability of the plugin, the accuracy of the report, challenges and suggestions. Content analysis was performed to interpret the interview transcripts within identified categories, reflecting user satisfaction in measuring learning outcomes (Gundumogula, 2020).

4. Results

4.1 Accreditation/quality assurance standards oriented towards measuring learning outcomes

This section presents the review of the accreditation standards relevant to the assessment and measurement of learning outcomes in the selected countries. In Bahrain, HEIs must comply with the Higher Education Council regulations and meet the requirements of the Education and Training Quality Authority at both program and institutional levels. In addition, institutions and their academic programs are listed on the National Qualifications Framework as per the institutional listing standards. The Education and Training Evaluation Commission, Saudi Arabia, published the standards for program accreditation, which specify that program learning outcomes should be assessed through different assessment tools. The Oman Academic Accreditation Authority Program Standards Handbook articulates the required standards, with clear criteria for measuring students’ achievement of learning outcomes. Similarly, the Procedural Manual for Initial Program Accreditation, published by the Commission for Academic Accreditation, Ministry of Education, UAE, specifies the criteria for delivering any program. This is followed by the Renewal of Program Accreditation after the graduation of the first cohort from the program. Both manuals include specific indicators for assessing student learning outcomes, assessment design and mapping. A comparative analysis of the accreditation standards of the mentioned countries shows that measuring the achievement of learning outcomes is an important criterion or indicator for program accreditation/quality assurance review. To meet the requirements, HEIs must focus on implementing robust mechanisms to measure the achievement of learning outcomes at both course and program levels. This is reflected in assessment mapping and design, which ensures the direct assessment of learning outcomes through students’ work.

4.2 Comparative analysis of learning management systems

To understand the LMS mechanisms for measuring course and program learning outcomes and their compatibility with the required accreditation standards, the researchers considered the available published information about popular LMSs – Moodle, Canvas and Blackboard. A comparative analysis of their features summarizes the assessment mechanisms. In Canvas, learning outcomes/goals are stated at course, program and institutional levels. Assessment activities are aligned with the learning outcomes, and a criterion rating (“exceeds expectation,” “meets expectation” and “does not meet expectation”) is used. A Learning Mastery report for students is generated at course and program levels. Blackboard uses the term “goals” at course and program levels for outcomes. Students are assessed based on rubrics, which are used for rating or scoring assessments within a course. Students’ achievement of learning outcomes at the course level is generated by aligning assessments and learning outcomes. In Moodle, an outcome (a subcomponent of a goal) is rated using a scale of categories (“fully satisfied,” “somewhat satisfied” and “unsatisfied”). While grading an activity, a rating is assigned based on student performance, followed by the outcome report at the course level.

ILOs are measured at the course level in Canvas, Blackboard and Moodle. They have mechanisms to measure learning outcomes and generate reports, which are comparable to a certain extent. However, the mechanisms differ while sharing a similar ranking perspective, and they do not directly reflect the alignment of student grades in each assessment with the linked ILO to calculate the overall achievement of course learning outcomes. Some HEIs develop in-house mechanisms and assess the ILO within the platform, which lack objective measurement. Learning outcome measurement may also take place outside the LMS, using various rubrics. In some cases, ILOs are ranked using a scale reflecting the subjective judgment of the assessors.

Because of the gap between the requirements of the accreditation bodies and the LMS functionalities, it becomes necessary to develop a digital application for accurate and systematic direct assessment of ILOs.

4.3 Measuring achievement of course-intended learning outcome and program-intended learning outcome: the Gulf University approach

Academic programs offered at GU are designed based on a quality approach where program learning outcomes are met through objective measurement.

The approach adopted by GU to measure the achievement of learning outcomes, at course and program levels, is summarized as follows:

  • PILOs are clear, measurable and meet the requirements of the National Qualifications Framework.

  • Each program consists of courses and all the courses are mapped to PILOs.

  • Each course has CILOs, which are measurable and mapped to PILOs.

  • Each course has specific assessment methods linked to CILOs.

  • Each assessment method (quiz, assignment, midterm/final exam, project or case study) is linked to one or more CILO. Moreover, each assessment comprises one or more tasks, and each task is linked to a CILO.

Once the student is graded for each assessment, the percentage achievement of each CILO is calculated. In other words, the measurement of CILO includes the following:

  • percentage achievement of CILOs for each student in the course; and

  • percentage achievement of CILOs for all the students in the course (average).

Once the cohort has graduated, the percentage achievement of the PILO for each student and the average of the whole cohort are calculated. The CILO report is calculated consistently for each course. As each CILO is linked to one of the PILOs, all the courses (achievement of CILO for each course) collectively contribute to the achievement of the PILO.
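As an illustration of this calculation, the following is a minimal sketch in Python using hypothetical task names, marks and two students: each assessment task carries a maximum mark and is linked to exactly one CILO, a student’s CILO achievement is the marks earned on that CILO’s tasks divided by the marks available, and the course-level figure is the average across students. This is a simplified model of the mechanism described above, not the plugin’s code.

```python
# Minimal model of the CILO calculation (hypothetical tasks, marks and students).
# Each assessment task has a maximum mark and is linked to exactly one CILO.
tasks = {
    "quiz1_q1":   ("CILO1", 10),
    "quiz1_q2":   ("CILO2", 10),
    "midterm_t1": ("CILO1", 20),
    "project_t1": ("CILO3", 30),
}

# Marks earned by each student on each task.
grades = {
    "student1": {"quiz1_q1": 9, "quiz1_q2": 7, "midterm_t1": 16, "project_t1": 21},
    "student2": {"quiz1_q1": 6, "quiz1_q2": 8, "midterm_t1": 12, "project_t1": 27},
}

def cilo_achievement(student_marks):
    """Percentage achievement of each CILO for one student."""
    earned, available = {}, {}
    for task, mark in student_marks.items():
        cilo, max_mark = tasks[task]
        earned[cilo] = earned.get(cilo, 0) + mark
        available[cilo] = available.get(cilo, 0) + max_mark
    return {cilo: 100 * earned[cilo] / available[cilo] for cilo in earned}

per_student = {s: cilo_achievement(marks) for s, marks in grades.items()}

# Course-level achievement of each CILO: average of the per-student percentages.
cilos = sorted({c for report in per_student.values() for c in report})
course_report = {c: sum(r[c] for r in per_student.values()) / len(per_student) for c in cilos}

print(per_student)    # e.g. student1: CILO1 ~83.3%, CILO2 70.0%, CILO3 70.0%
print(course_report)  # e.g. CILO1 ~71.7%, CILO2 75.0%, CILO3 80.0%
```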

Within this approach, the researchers developed the mechanism, designed the system architecture and guided the development of the Moodle plugin for measuring the achievement of the learning outcomes, which was implemented across the academic programs.

4.4 System architecture

The system architecture is underpinned by both top-down and bottom-up approaches. The top-down approach sets up the system and is evident in the mechanism to develop program aims and PILOs, design the curriculum, develop course aims and CILOs, design assessment methods for each course and develop assessment tasks within a particular assessment method. The design and development of the ILOs and the relevant mappings start at the program level and continue at the course level, along with the assessment design, where the assessments and CILOs are interlinked. The bottom-up approach shows the functioning of the system, reflected in the assessment at the course level, the grading of each assessment for each student within a particular course and the CILO achievement report for each student and the average for each course. Finally, the CILO reports for all the program courses are generated, along with the PILO report for each student and the cohort.
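The bottom-up aggregation from course reports to a PILO report can be sketched as follows, using hypothetical course results and a hypothetical CILO-to-PILO mapping. Averaging the achievement of all CILOs mapped to a PILO is one plausible reading of the aggregation; the plugin’s exact formula may differ.

```python
# Hypothetical per-course CILO achievement (course averages, %) as produced by
# the course-level reports, and the CILO-to-PILO mapping defined top-down.
cilo_results = {
    "COURSE_A": {"CILO1": 84.2, "CILO2": 73.7, "CILO3": 80.0},
    "COURSE_B": {"CILO1": 65.0, "CILO2": 72.5},
}
cilo_to_pilo = {
    ("COURSE_A", "CILO1"): "PILO1",
    ("COURSE_A", "CILO2"): "PILO2",
    ("COURSE_A", "CILO3"): "PILO1",
    ("COURSE_B", "CILO1"): "PILO3",
    ("COURSE_B", "CILO2"): "PILO2",
}

def pilo_report(results, mapping):
    """Average the achievement of every CILO mapped to each PILO (one plausible
    reading of the bottom-up aggregation; the plugin's formula may differ)."""
    buckets = {}
    for (course, cilo), pilo in mapping.items():
        if cilo in results.get(course, {}):
            buckets.setdefault(pilo, []).append(results[course][cilo])
    return {pilo: sum(values) / len(values) for pilo, values in buckets.items()}

print(pilo_report(cilo_results, cilo_to_pilo))
# -> PILO1 ~82.1, PILO2 ~73.1, PILO3 = 65.0
```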

4.5 Outcome of system implementation: the Gulf University experience

GU has implemented the plugin for all the courses within all the programs. Figure 1 presents the CILO achievement report generated by the plugin for a sample course scenario, which can be replicated for all the courses. The result below was derived from a sample scenario in which the program consisted of five courses and four students were enrolled in each course. Each sample course had four or five CILOs, each of which was mapped to one of the relevant PILOs.

Figure 1 illustrates that course ABC, in the sample scenario, included five CILOs and four students who completed all the assessments. The CILO report generated by the plugin shows the achievement of each CILO, whereby the achievement of CILO1-K-PA is the highest at 84.17% and the achievement of CILO4-S-IT is the lowest at 65%.

The system further generated an achievement report for each CILO for each student in the course, which is presented in Figure 2.

Figure 2 illustrates the percentage achievement of the CILOs for each student in the sample course and the overall achievement of each CILO for all the students in the course. For example, the achievement of Student 1 for CILO1-K-PA was 93.3%, CILO2-K-TU was 80%, CILO3-S-A was 70%, CILO4-S-IT was 60% and CILO5-C-A was 70%. The overall achievement for the course for CILO1-K-PA was 84.17%, CILO2-K-TU was 73.7%, CILO3-S-A was 80%, CILO4-S-IT was 65% and CILO5-C-A was 72.5%.

Finally, Figure 3 shows the sample PILO report generated by the plugin within the perspective of CILO to PILO mapping and linking students’ grades with the CILO for each course.

Figure 3 demonstrates that program XYZ comprised five courses and that the CILOs were mapped to one of the ten PILOs. It was assumed that only one student was enrolled in the program. The average achievement of each PILO was generated by the plugin, assuming that the student had completed all the assessments in each of the program courses. The highest achievement was observed for PILO6-S-C and the lowest for PILO8-C-A.

The CILO reports for all the courses offered in fall semester 2021–2022 at GU were generated using the plugin. The actual PILO report will be generated once the system completes the successful implementation for all the courses in each program.

Hence, it can be derived that the developed plugin was successfully implemented and relevant reports were generated.

4.6 Focus group interview with the course instructors and program leaders

The researchers conducted focus group interviews with eight instructors, representing different programs. Another set of focus group interviews was conducted with four program leaders and two deans of the colleges. Interview transcripts were analyzed to interpret the results in the given context, using deductive qualitative analysis:

  • The plugin is user-friendly and appropriate for instructors from different backgrounds, requiring minimum technical skills. It was indicated that the system functioned so smoothly that even part-time instructors managed it appropriately.

  • Course instructors received training on inserting CILO, conducting mapping between CILO and PILO, aligning assessment with CILO, creating gradebook and generating CILO report. There was no specific usability challenge. One of the instructors mentioned that where assessment is conducted offline, manual entry of grades consumes time.

  • The system provides systematic, accurate and consistent measurement reports based on the mapping. Assessments are aligned with the CILOs, with each question/task linked to a CILO. Once a student’s work is assessed, the system automatically generates the CILO report. One instructor indicated that sample cases were checked manually to verify the accuracy of the measurement. One of the program leaders added that the plugin is reliable, as it was developed and tested by experts before actual implementation. According to the instructors, the CILO report also matches the expected achievement of the learning outcomes.

  • The system includes all types of assessments within a course. If the assessment is conducted on Moodle, then it directly contributes to gradebook and CILO achievement. Other assessments, which are not conducted in LMS, are entered manually under general assessment. Instructors can enter grades for all types of assessments, and all the assessments are aligned with CILO.

  • Instructors and program leaders acknowledged that the CILO report provided valuable input for improving the course. The course report provided recommendations to increase the achievement of a CILO if it fell below the threshold level. One of the program leaders indicated that the CILO report provides opportunities to redesign teaching, learning and assessment practices, for example, by giving more practical/performance-based tasks, problem-based learning and so on. This was reflected in the session with the instructors, where opportunities for improving the course in the following semester were discussed. The deans added that notable feedback and recommendations from the CILO report, at the course level, fed into the annual program report.

  • In the session with program leaders and deans, there was a discussion around how the course report inputs are reflected in the annual program review and how an improvement plan is derived accordingly. The tool was seen as providing valuable input for revising practices in teaching, learning and assessment, with an emphasis on continuous improvement. One of the program leaders commented that this tool highlighted student progression, the appropriateness of preparatory courses and the comparative analysis of student batches or courses to identify skill achievement and competence-oriented learning outcomes. He proposed considering an acceptable percentage of students in each course and program when calculating learning outcome achievement. Another program leader proposed adding a criterion for the threshold level of achievement.

  • The CILO and PILO reports contribute directly to program review. The deans and program leaders mentioned that the achievement reports supported evidence-based decision-making and measured the effectiveness of the program in terms of teaching, learning and assessment strategies and the ability of the staff to deliver the program and to develop and empower the students in acquiring knowledge, skills and attributes. One of the deans added that the achievement of learning outcomes provides input for revising the admission criteria, learning resources and curricular activities.

  • Regarding the employability of the graduates, program leaders commented that the achievement of PILOs can elevate the profile of the graduates. Employers might be interested not only in the grades but also in graduates’ skills and competencies. The PILO achievement report reflects the suitability of the graduates in terms of acquired knowledge, skills and competences and their readiness to cope with labor market challenges. One of the program leaders commented that if the achievement of the CILOs in courses such as Graduation Project and Internship were provided to employers, it would be useful for the students. Another program leader appreciated this and added that these courses measure skills and competencies appropriate for various professions.

  • All the participants, in both the sessions, acknowledged that this mechanism met quality assurance requirements. Through comprehensive mapping, the direct assessment contributed to CILO achievement. The PILO achievement report provided valuable input for program review and tracked the performance of the graduates. Program leaders further commented that it encouraged them to rethink teaching, learning and assessment practices in the light of quality assurance.

  • One of the instructors proposed that once the entry of CILOs and the mapping between CILOs and PILOs are done for a course, these should be copied to the same course in other semesters. Program leaders suggested avoiding the mapping review for the same course in each semester. Another program leader suggested generating the PILO report after each semester or year so that corrective actions could be taken based on partial achievement.

  • In the absence of this mechanism, the assessment of learning outcomes would be done manually, which lacks consistency and reliability and would be time-consuming and subjective. One of the deans highlighted the risk of missing data in the course report and restricted opportunities for improvement without this mechanism.

Within this approach, content analysis was conducted (Table 1) to reflect the interpretation of the narratives collected from the focus groups. The analysis identified the categories underpinning the user experience of the plugin.

The qualitative analysis of the interview results categorically highlights the user-friendly interface and the hassle-free implementation of the plugin. Participants in the focus groups confirmed that, from mapping to grading assessments and generating CILO reports, the plugin does not require technical expertise. The instructors were able to use the plugin with basic training. The plugin is useful in program evaluation and is effective in generating consistent, accurate and objective measurement reports at course and program levels. Hence, the analysis of the focus group interviews confirms the effectiveness of the mechanism towards best practices in quality assurance.

4.7 Potential vulnerability of the mechanism

Focus group interviews with the users at GU show the successful implementation of the mechanism in generating objective and accurate measurement of the learning outcomes. This is applicable to all the programs across HEIs using Moodle, irrespective of the requirements of the accreditation bodies. The developed plugin is currently accessible only to the users at GU for generating CILO and PILO reports. This can be extended to any course or program in HEIs, provided the plugin is developed and implemented in alignment with the mechanism.

5. Discussion

5.1 Context analysis of higher education institutions in Bahrain

The higher education landscape in Bahrain is experiencing a paradigm shift towards internationalization, partnership with industries and local and international accreditation in preparing global citizens. The Higher Education Council oversees the higher education offered by universities and higher education institutes in Bahrain. The Education and Training Quality Authority monitors and oversees the quality of education and training in HEIs through institutional and program reviews based on set standards. Currently, there are 4 public universities and 13 private universities and HEIs offering degrees, diplomas and certificates at the bachelor’s and master’s levels in different specializations.

5.2 Measuring learning outcome in Moodle: the Gulf University experience

The researchers demonstrated the endeavor of GU in developing a plugin, which can objectively and consistently measure learning outcomes. The implementation of the plugin shows the appropriateness and effectiveness of the system in generating CILO and PILO achievement reports for each course and for the entire program.

Focus group interviews with the instructors, program leaders and deans clearly demonstrate the effectiveness of the plugin from the users’ perspective. The CILO report supports instructors in reflecting on course enhancement in terms of teaching, learning and assessment practices. Program leaders perceive the tool as effective in program evaluation, in identifying areas of improvement and in supporting the employability of graduates. The system has been consistently implemented across all the courses of all the programs. This is a remarkable achievement for GU and an example of best practice in meeting quality assurance standards. The results of the study support the findings of previous studies and present a more accurate and systematic measure of the ILO, applicable to any HEI meeting the standards of accreditation.

5.3 Continuous quality enhancement

The developed plugin satisfies the standards of accreditation bodies and ensures continuous quality enhancement. The CILO achievement report is important for preparing course reports. It provides the opportunity to identify areas of improvement if a particular CILO falls below the threshold level of 60%–70% for a course. This is supported by the research cited in the background literature, which specifies an expected student achievement of 70% at the course level. The PILO achievement report for each student and the cohort provides valuable feedback to the program leaders while conducting program reviews. Improvement plans can then be prepared for subsequent deliveries of the course/program.
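For illustration, the short sketch below flags CILOs that fall below a threshold, using the overall course achievement figures from the sample scenario in Figure 2. The 70% cut-off is an assumption chosen from the 60%–70% band mentioned above, and the flagging logic is illustrative rather than the plugin’s implementation.

```python
# Flag CILOs falling below an assumed 70% threshold, using the overall course
# achievement figures from the sample scenario (Figure 2).
course_report = {"CILO1-K-PA": 84.17, "CILO2-K-TU": 73.7, "CILO3-S-A": 80.0,
                 "CILO4-S-IT": 65.0, "CILO5-C-A": 72.5}
THRESHOLD = 70.0  # assumed cut-off within the 60%-70% band mentioned above

for cilo, achieved in course_report.items():
    if achieved < THRESHOLD:
        print(f"{cilo}: {achieved:.1f}% is below {THRESHOLD:.0f}% - "
              "include an improvement action in the course report")
```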

6. Limitations and recommendations for further research

The developed plugin is applicable to the current version of Moodle. Upgrading to another version would require technical adjustments in the plugin. The scope for further research lies in measuring learning outcomes using both direct and indirect assessments.

7. Conclusion

The research reviewed the requirements of the accreditation bodies and compared the functionalities of Canvas, Moodle and Blackboard for measuring learning outcomes. Measuring learning outcomes at HEIs is an evidence-based and policy-driven practice that ensures quality assurance. Widely used LMSs such as Canvas, Moodle and Blackboard follow their own mechanisms to measure learning outcomes. However, there is a gap between the accreditation standards and the systematic assessment of student learning at course and program levels in LMSs through direct assessment. Within GU practice, the researchers developed the mechanism and designed the system architecture of a plugin. The plugin was implemented across GU program courses in the academic year 2021–2022. The achievement of learning outcomes provides valuable input to instructors and program leaders for continuous enhancement. User feedback highlighted that the plugin functions smoothly and requires minimal technical skill. This ensures appropriate reporting and continuous improvement in student learning. The mechanism serves as an exemplary case study for implementation at HEIs with a similar approach.

Figures

Figure 1. Sample CILO achievement report

Figure 2. CILO report showing achievement per student

Figure 3. Sample achievement of PILO report

Table 1. Content analysis of interview transcripts (categories, identified issues and interpretation)
Usability of the plugin
  • User friendly interface

  • Easy and smooth

  • Minimum technical skill

  • Easy to access and implement

  • Digital expertise not mandatory

Measurement reports
  • Systematic, accurate and consistent

  • System architecture and mapping

  • Inclusive of all assessments

  • Training and videos supporting implementation

  • Appropriate, objective and reliable measure

  • Course- and program-level achievement for each student and average for course and cohort

Benefits to instructors
  • Redesigning teaching, learning and assessment

  • Continuous improvement in the course

  • Recommendations in course report

  • Input for program review

Program evaluation and effectiveness
  • Valuable input for program review and improvement plan

  • Comparative analysis of student batches and courses

  • Enhancing program delivery

  • Evidence-based decision-making

  • Effective tool for continuous improvement

  • Corrective actions for academic practices

Employability
  • Employers might not be interested only in grades

  • Readiness to enter the profession

  • CILO achievement of Graduation Project useful for employers

  • Reflection on skill and competence

  • Accountability to employers

Quality assurance requirements
  • Meets quality assurance requirements

  • Aligned with standards of accreditation bodies

  • CILO and PILO achievement towards institutional performance measurement

  • Compatible with the requirements of accreditation bodies

Challenges and suggestions
  • Tracking of students and batches in every semester/year

  • Export option for CILOs to PILOs mapping

  • No technical issue

  • Repetitive mapping exercise

Significance of the plugin
  • Consistent, reliable and objective measurement

  • Evidence-based decision-making

  • Course- and program-level enhancement

  • Time and cost-saving

References

Agarkhed, J. (2017), “Moving towards an outcome based education in engineering”, International Journal of Advanced Research in Computer Science, Vol. 8 No. 8, pp. 85-97, doi: 10.26483/ijarcs.v8i8.4844.

Al Hmaimat, N., Melhem, O., Rosita, A., Devada, B. and Abboud, H. (2021), “Curriculum evaluation: assessing the students’ achievement of a program level learning outcomes in the baccalaureate degree of nursing”, International Journal of Nursing and Health Care Research, Vol. 4, p. 1267.

Alzubaidi, L. (2017), “Program outcomes assessment using key performance indicators”, Proceedings of the 62nd ISERD International Conference, Boston.

Anwar, M.A., Ahmed, N. and al Ameen, A.M. (2012), “An outcome-based assessment and improvement system for measuring student performance and course effectiveness”, Contemporary Issues in Education Research (CIER), Vol. 5 No. 4, pp. 279-294, doi: 10.19030/cier.v5i4.7272.

Brumwell, S., Deller, F. and MacFarlane, A. (2017), “Why measurement matters: the learning outcomes approach – a case study from Canada”, Journal of Higher Education in Africa/Revue de L’enseignement Supérieur En Afrique, Vol. 15 No. 1, pp. 5-22, available at: www.jstor.org/stable/90016697

Caspersen, J., Smeby, J. and Olaf Aamodt, P. (2017), “Measuring learning outcomes”, European Journal of Education, Vol. 52 No. 1, pp. 20-30, doi: 10.1111/ejed.12205.

Croitoru, M. and Dinu, C.-N. (2016), “A critical analysis of learning management systems in higher education”, Academy of Economic Studies. Economy Informatics, Vol. 16 No. 1, pp. 5-18, available at: www.economyinformatics.ase.ro/content/en16/01%20-%20croitoru,%20dima.pdf

Cura, F. and Ahmed Alani, T. (2018), “Accreditation effect on quality of education at business schools”, International Journal of Social Sciences and Educational Studies, Vol. 4 No. 5, pp. 71-82, doi: 10.23918/ijsses.v4i5p71.

Due, T.D., Thorsen, T. and Kousgaard, M.B. (2019), “Understanding accreditation standards in general practice–a qualitative study”, BMC Family Practice, Vol. 20 No. 1, pp. 1-12, doi: 10.1186/s12875-019-0910-2.

El Marsafawy, H., Matarid, N. and Babu, M.R. (2018), “Assessment of students work within outcomes based higher education: effective mapping and closing the gap”, Edulearn 18, 10th International Conference on Education and New Learning Technology: (Palma, 2nd-4th of July, 2018), Conference Proceedings, IATED Academy, pp. 10449-10458.

Gundumogula, M. (2020), “Importance of focus groups in qualitative research”, International Journal of Humanities and Social Science, Vol. 8 No. 11, pp. 299-302.

Holmes, A.G. (2019), “Learning outcomes–a good idea, yet with problems and lost opportunities”, Educational Process: International Journal, Vol. 8 No. 3, pp. 159-170, doi: 10.22521/edupij.2019.83.1.

Howson, C.K. and Buckley, A. (2020), “Quantifying learning: measuring student outcomes in higher education in England”, Politics and Governance, Vol. 8 No. 2, pp. 6-14, doi: 10.17645/pag.v8i2.2564.

James-Okeke, P.A., Scott, C.J., Astatke, Y., Ladeji-Osias, J.O., Partlow, L.E. and Nyarko, K. (2013), “A performance assessment framework for measuring online student learning outcomes”, 2013 ASEE Annual Conference and Exposition, pp. 23-88.

Kant, N., Prasad, K.D. and Anjali, K. (2021), “Selecting an appropriate learning management system in open and distance learning: a strategic approach”, Asian Association of Open Universities Journal, Vol. 16 No. 1, pp. 79-97, doi: 10.1108/AAOUJ-09-2020-0075.

Lile, R. and Bran, C. (2014), “The assessment of learning outcomes”, Procedia – Social and Behavioral Sciences, Vol. 163, pp. 125-131, doi: 10.1016/j.sbspro.2014.12.297.

Martin, L. and Mahat, M. (2017), “The assessment of learning outcomes in Australia: finding the Holy Grail”, AERA Open, Vol. 3 No. 1, p. 2332858416688904, doi: 10.1177/2332858416688904.

Moonsamy, D. and Govender, I. (2018), “Use of blackboard learning management system: an empirical study of staff behavior at a South African university”, EURASIA Journal of Mathematics, Science and Technology Education, Vol. 14 No. 7, pp. 3069-3082, doi: 10.29333/ejmste/91623.

Nusche, D. (2008), Assessment of Learning Outcomes in Higher Education: A Comparative Review of Selected Practices, OECD, doi: 10.1787/19939019.

Patel, D. and Patel, H.I. (2017), “Blended learning in higher education using MOODLE open source learning management tool”, International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 8, pp. 439-442, doi: 10.26483/ijarcs.v8i5.3327.

Rahmat, R. (2011), “Achievement of program outcomes using assessment plan”, Procedia – Social and Behavioral Sciences, Vol. 18, pp. 87-93, doi: 10.1016/j.sbspro.2011.05.013.

Rani, C.N. (2020), “A study on outcome-based education – issues and challenges”, International Review of Business and Economics, Vol. 4, pp. 271-279.

Scholl, K. and Olsen, H.M. (2014), “Measuring student learning outcomes using the SALG instrument”, SCHOLE: A Journal of Leisure Studies and Recreation Education, Vol. 29 No. 1, pp. 37-50, doi: 10.1080/1937156X.2014.11949710.

Shah, A.A., Sarwar, M. and Shah, S.A. (2017), “Assessing generic competence development among higher education students”, Pakistan Journal of Education, Vol. 34 No. 1, doi: 10.30971/pje.v34i1.183.

Tello, S.F. and Motiwalla, L. (2010), “Using a learning management system to facilitate learning outcomes assessment”, Learning Management System Technologies and Software Solutions for Online Teaching: Tools and Applications, IGI Global, pp. 138-156, doi: 10.4018/978-1-61520-853-1.ch008.

Wagenaar, R. (2008), “Learning outcomes a fair way to measure performance in higher education: the TUNING approach”, IMHE, pp. 2-8.

Wicaksono, G.W., Nawisworo, P.B., Wahyuni, E.D. and Cholily, Y.M. (2021), “Canvas learning management system feature analysis using feature-oriented domain analysis (FODA)”, IOP Conference Series: Materials Science and Engineering, IOP Publishing, Vol. 1077, p. 12041.

Yin, R.K. (2017), Case Study Research and Applications: Design and Methods, SAGE Publications.

Acknowledgements

The researchers are highly indebted to Mr Faisal Kaleem, a software architect, for developing the plugin. The researchers would like to express heartfelt thanks to the university management for supporting the research.

Authors’ contribution: The research was a collaborative work of three authors working at GU, Bahrain. The authors have read and unanimously agreed to the published version of the manuscript.

Corresponding author

Rumpa Roy is the corresponding author and can be contacted at: dqa@gulfuniversity.edu.bh

About the authors

Dr Hesham El Marsafawy is the Vice President for Academic Affairs at Gulf University in Bahrain. He obtained his PhD in Health-care Design at Duisburg-Essen University (Germany). Dr El Marsafawy has taught at different universities in Egypt, Turkey and Bahrain; he has professional, teaching and research experience in fields related to the Design of the Built Environment, Human Factors/Ergonomics and Quality Assurance in Higher Education. Dr El Marsafawy has comprehensive experience and research interests in developing universities’ strategic plans, quality assurance systems, policies, procedures and academic programs, as well as initiatives towards the development of university campuses and educational environments.

Dr Rumpa Roy is the Director of the Quality Assurance and Development Center at Gulf University in Bahrain. Dr Roy has valuable experience in developing universities’ strategic plans, quality assurance systems, policies and procedures and in designing academic programs. Her areas of research interest include Economics, Microfinance, Employee Satisfaction, etc. Her role in strategic management and quality assurance has opened a new paradigm in her research area, namely, education, recent trends and innovation in teaching-learning, corporate social responsibility, governance, leadership and sustainability. Dr Roy has attended and presented her research at international conferences in Spain and Hungary.

Ms Fahema Ali is the Performance Measurement Officer at Gulf University in Bahrain. She holds a Bachelor’s degree in Information Technology, with valuable experience in the analysis, design and development of client/server, web-based and n-tier applications, and in developing web applications, Windows services and web services in Microsoft Visual Studio. Ms Fahema has expertise in the full System Development Life Cycle and in non-Microsoft technologies, including jQuery, PHP, Dreamweaver, Android Studio and Moodle. She works closely with academic and administrative departments to provide accurate and up-to-date data and analysis that define actual performance against planned standards and to conduct stakeholder feedback surveys for evidence-driven decision-making.
