Development of an instrument for programme evaluation: An illustration from a nursing programme

Linda Yin King Lee, Alisa Ka Po Wong, Joseph Kok Long Lee, Wai Shun Yuen and Joey Che Yan Yip
The Open University of Hong Kong
Hong Kong SAR, China

Yee Shan Wong
Queen Elizabeth Hospital
Hong Kong SAR, China

The Open University of Hong Kong launched the Bachelor of Nursing with Honours (General Health Care) programme in 2005. This professional programme adopts a number of flexible features. For example, it uses mobile devices for clinical teaching, making learning off campus feasible. It also adopts a three-term system that allows flexible scheduling of classroom teaching and practicums, thereby maximizing learning opportunities. Furthermore, it works closely with hospitals to bring clinical expertise into the teaching content.

A comprehensive programme evaluation from the graduates’ perspective was necessary for programme improvement. Owing to the unique nature of the programme, a programme-specific instrument was constructed.

To evaluate the programme, a questionnaire was developed. Donabedian’s triad was adopted as the theoretical framework, with ‘structure,’ ‘process’ and ‘outcomes’ forming the three sub-scales of the instrument. Under this framework, ten evaluation themes were identified: programme goals, curriculum model, contents and delivery, clinical education, resources, technology structure, staff, alumni, policies and student learning outcomes. Seventy-one evaluation items were then developed.

The stability of the instrument was evaluated by testing and re-testing 10 graduates at a two-week interval. Internal consistency was assessed with Cronbach’s alpha. Content validity was evaluated by an expert panel of six nursing academics, and the content validity index was calculated as the percentage of items that the panel members rated as relevant.
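The three indices described above have standard formulas. As an illustration only (the sample data below are invented for demonstration and are not the study’s data), they could be computed as follows:

```python
from statistics import pvariance, mean

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a sub-scale.

    item_scores: one inner list per respondent, one score per item.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    """
    k = len(item_scores[0])                                  # number of items
    item_vars = [pvariance(col) for col in zip(*item_scores)]
    total_var = pvariance([sum(row) for row in item_scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

def pearson_r(test, retest):
    """Pearson correlation between test and retest total scores,
    used as the test-retest (stability) reliability coefficient."""
    mx, my = mean(test), mean(retest)
    cov = sum((a - mx) * (b - my) for a, b in zip(test, retest))
    sx = sum((a - mx) ** 2 for a in test) ** 0.5
    sy = sum((b - my) ** 2 for b in retest) ** 0.5
    return cov / (sx * sy)

def content_validity_index(ratings, relevant_min=3):
    """Scale-level CVI: mean, across items, of the proportion of panel
    members who rate the item as relevant (e.g. 3 or 4 on a 4-point
    relevance scale). ratings: one inner list of panel ratings per item."""
    item_cvis = [sum(r >= relevant_min for r in item) / len(item)
                 for item in ratings]
    return mean(item_cvis)

# Invented example data: 4 respondents x 3 items; 2 items rated by 3 panellists.
alpha = cronbach_alpha([[4, 4, 3], [3, 3, 3], [5, 4, 4], [2, 3, 2]])
stability = pearson_r([11, 9, 13, 7], [10, 9, 12, 8])
cvi = content_validity_index([[4, 4, 3], [4, 2, 4]])
```

This sketch uses the population-variance form of Cronbach’s alpha and the scale-level averaging approach to the CVI; the original study may have used different software or variants of these conventions.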

The instrument has sound psychometric properties. The test-retest reliability coefficient was 0.81, indicating satisfactory stability. Cronbach’s alpha values for the three sub-scales were 0.91, 0.93 and 0.74 respectively, showing satisfactory internal consistency. Finally, the content validity index was 0.90, indicating good content validity. The present experience is relevant to academics who are involved in programme evaluation.