Chenicheri Sid Nair and Patricie Mertova
Abstract
Purpose
The purpose of this paper is to present a framework that can be utilized in the design of graduate employer surveys carried out by tertiary institutions as a form of monitoring their graduate attributes. It further aims to identify the potential issues and challenges that may be involved in undertaking such a survey.
Design/methodology/approach
The paper describes an approach to administering a graduate employer survey conducted at Monash University, Australia. The survey utilized a combination of means, involving telephone, e‐mail and mail‐outs. Over a period of approximately four months, 2,753 companies were contacted and responses were obtained from 464 of them. The survey instrument was based on 23 graduate attributes. In the course of the survey, employers were asked to rate graduate attributes in terms of importance and their satisfaction with the extent to which each of these attributes was demonstrated by Monash University graduates employed by the particular company. Open‐ended feedback was also sought from the employers.
Findings
Universities world‐wide have increasingly incorporated the development of so‐called graduate attributes into their quality development mechanisms. One way of monitoring these graduate attributes has been through conducting graduate employer surveys. The paper presents a workable approach to collecting employer feedback, which may offer some guidance to other higher education institutions considering introducing similar employer surveys. It also identifies some of the issues and challenges involved in undertaking such a survey.
Practical implications
The paper discusses a number of practical limitations to administering an employer survey. These include the need for: a well‐sourced database of employers of the institution's graduates; established relations with industry and professional bodies; proper staffing and infrastructure; and awareness of timelines suitable for individual employers to complete such a survey. The implication for the university is that its leadership needs to address these limitations in order to increase the efficiency and effectiveness of future iterations of the graduate employer survey. The limitations may also serve as guidance to other institutions concerning aspects they need to address when planning a similar survey.
Originality/value
Internationally, and certainly in Australia, there are very few higher education institutions that have well‐established graduate employer surveys. The Monash University graduate employer survey outlined here may offer some guidance to tertiary institutions considering conducting similar graduate employer surveys.
Patricie Mertova and Len Webster
Abstract
Purpose
This paper sets out to report on a research project investigating the academic voice in higher education quality in the UK and the Czech Republic. It aims to describe the origins of and reasons for introducing quality monitoring and assurance into higher education; to show the differences in and impacts on higher education quality in England and the Czech Republic, including current practices; and to present the concerns and issues voiced by academics and higher education leaders in both higher education systems.
Design/methodology/approach
The research utilised a critical event narrative inquiry method, which focuses on issues of complexity and human‐centredness in studied phenomena. In this way the method addresses issues that are frequently overlooked by quantitative research methods. It is argued that, by extracting “critical events,” the method is more efficient in dealing with large amounts of data, which often result from the use of qualitative research methods. In the presented research, “critical events” voicing important issues and concerns in higher education quality are extracted from stories of UK and Czech academics and higher education leaders.
Findings
Through extracting “critical events” in the professional practice of academics and higher education leaders, the research uncovered some similar and some culture‐specific issues voiced by Czech and UK academics and higher education leaders. The culture‐specific issues were revealed mainly in the Czech higher education context.
Practical implications
The research uncovered a number of issues and concerns which were overlooked in current higher education quality practices in both higher education systems. The paper does not present all of the recommendations for educational practice and further research; these may be consulted in Mertova's Quality in Higher Education: Stories of English and Czech Academics and Higher Education Leaders.
Originality/value
The research applied a critical event narrative inquiry methodology, which is a novel qualitative research method focusing on extracting “critical events” in the professional practice of individuals, in this case academics and higher education leaders. Even though the methodology was developed by Webster and Mertova, the study has further refined it and applied it in the field of higher education quality.
Chenicheri Sid Nair, Lorraine Bennett and Patricie Mertova
Abstract
Purpose
The purpose of this paper is to highlight the importance of collecting and acting on student feedback as a key component of quality improvement in higher education. The paper seeks to outline a systematic improvement strategy adopted at faculty level within a large university in Australia, which will be of interest to leaders and practitioners of quality assurance programmes across the sector.
Design/methodology/approach
A strategy to achieve quality improvement was designed and carried out jointly by the University Centre for Higher Education Quality (CHEQ) and the Centre for the Advancement of Learning and Teaching (CALT) with staff in one of the smaller faculties at the University. The faculty mean for student satisfaction lagged significantly behind the means of other faculties, and five units (subjects) deemed to be "poorly performing" against the University's agreed target were selected for intensive improvement. The guiding principles which underpinned the adopted strategy included: utilising student feedback data; targeting poorly performing units as a priority; linking staff and student development support; and documenting and demonstrating improvement as a consequence of the actions taken.
Findings
A post‐test evaluation of the five target units showed improvement in the form of increased student satisfaction. The strategy adopted at the University underlined the significance of collecting student feedback in quality enhancement, acting on the feedback and supporting academic staff in implementing improvements. Overall, the strategy signalled the interconnection between student evaluations and the quality of education programmes.
Practical implications
The successful implementation of a unit improvement strategy at faculty level within the University demonstrated the value of the approach and supported its application as an improvement strategy across the whole institution, provided that the internal context of individual faculties is taken into consideration. This case study may also offer some guidance to other tertiary institutions looking to utilise evidence‐based planning and decision making as a way of driving quality improvement.
Originality/value
Many tertiary institutions around the world are currently collecting student feedback. However, the interconnection between the student feedback and actual institutional change is not always evident or addressed. Therefore, this University case study offers some direction towards a more effective utilisation of student evaluation data.
Chenicheri Sid Nair, Nicolene Murdoch and Patricie Mertova
Abstract
Purpose
The purpose of this paper is to look at the role of the student experience questionnaire in collecting students' perceptions of their experiences in studying at an offshore campus of an Australian University, compared with the experiences of the University's students in Australia. In particular, it seeks to highlight differences in students' perceptions resulting, for example, from differences in size between the offshore campus, other campuses and the institution as a whole.
Design/methodology/approach
The case study is based on a student learning experience questionnaire utilised by a large research‐intensive Australian tertiary institution (with two overseas campuses). The results of the questionnaire are compared between one overseas campus and the whole institution. The case study looks at the experiences of the overseas campus compared with the Australian experience. Although the case study is situated within one institution, there are aspects and lessons that are applicable to other institutions internationally, in particular when considering collecting student feedback in relation to multi‐campus or multi‐venue programmes.
Findings
Higher satisfaction rates for different aspects of the student learning experience were identified at the overseas campus in comparison with the whole institution. This was attributed to the smaller size of the overseas campus and thus better engagement of and personal attention to students at that campus. This finding may be particularly relevant to other international institutions in relation to the role of an institution's size in the collection of student feedback.
Practical implications
The case study outlines a number of strategies adopted to enhance student engagement and subsequently to improve the aspects which students indicated as being less satisfactory. The institution serves as an example of how particular strategies may be adopted by other institutions.
Originality/value
Enhancement of the student experience depends on good survey response rates among students, so that an institution can base its actions on the opinions of a sufficiently large number of students. This paper describes a successful strategy for engaging students, adopted by a large research‐intensive Australian university, and thus increasing survey response rates.
Chenicheri Sid Nair, David Pawley and Patricie Mertova
Abstract
Purpose
This paper aims to report on how an Administrative Division at a research‐intensive Australian university utilised feedback data from the Learning and Growth Survey to initiate changes.
Design/methodology/approach
This paper reports on the actions taken by the Administrative Division in response to the results obtained from the Learning and Growth Survey. The questionnaire items are based on the "Balanced Scorecard" system outlined by Kaplan and Norton in 1996. The questionnaire consists of a number of items identified as integral to effective growth and learning strategies for staff development, and it seeks staff perceptions of the individual items and of how their needs are met in current University management practices.
Findings
The results of this survey indicate that employees were willing to provide practical feedback on a range of dimensions which they felt would assist improvement of their development and growth opportunities. Further, this paper demonstrates that, in any exercise which involves collecting information on staff perceptions, staff expect not only that the data will be utilised constructively, but also that the institution will make practical changes based on their feedback and that they will be informed about these changes.
Practical implications
This survey revealed that some of the feedback obtained from participants had limitations as to what actions could be taken within the Administrative Division, because of the ramifications for institutional budgets. However, it was possible to address issues related to improvement of the learning and growth environment through practical changes within the bounds of the Division's budget. Further, when conducting similar surveys among university staff, it is essential that the anonymity of participants is ensured. It is also vital that the purposes, outcomes, proposed actions and progress in implementing these actions are well communicated to all staff.
Originality/value
A growing number of tertiary institutions have recently started conducting surveys among their staff concerning staff satisfaction with their work in the organisation. Despite this growing number of employee surveys, there is a lack of academic literature describing how such surveys are conducted and the issues that institutions face when designing, implementing and evaluating them. From the available information, it was also unclear what aspects of the employee experience these surveys cover and whether they focus on staff learning and development. This paper therefore takes a step in that direction by describing an employee survey regularly conducted among staff within administrative units at a large Australian University.