Papers Presented at
the 1st National NADEOSA Conference
Held 11-13 August 1999
Authors: A Bergh & G Kamper
Title: Action Research in Promoting Quality in Text Development: A Case Study from UNISA
Abstract:
In the early nineties, study material of the University of South Africa (Unisa) came under fire from external evaluators (Swift 1993; SAIDE 1995:61-62). These evaluations covered only a sample of study material, and a group of Unisa lecturers in the Faculty of Education responded by embarking on a more comprehensive team research project, also with a view to assisting academic staff to improve existing written texts.
The aim of this paper is to report on the development of this research project from its inception in 1993 until today. As the project was originally conceptualised before new national education policies came into being, the researchers involved had to reflect continually on their own assumptions about text and to problematise and question the purpose and focus of the project. The original conceptualisation, reminiscent of a technicist, linear, factory-model process, was gradually transformed into a non-linear, web-like model, shifting the aim from staff development for writing better texts to staff development for quality promotion, embedded in a more holistic quality assurance framework. This also enabled the project to expand further through the identification of other unexplored areas of importance.
To date, a qualitative pilot study with ten BEd students has been completed (Lemmer et al 1995; Bergh et al 1996). A learner questionnaire was administered to all BEd students in 1997 (Schulze et al 1999), and at the beginning of 1998 an open-ended instrument for evaluating texts was tested by independent evaluators and by course teams for self-evaluation purposes. The paper will describe some of the highlights of the research process and will reflect on the implications of the experience and research findings for promoting quality in distance education course design and text development.
About the Authors:
Anne-Marie Bergh is a senior lecturer at
Unisa, teaching comparative and international education, and didactics. Her other research
interests include educational policy, educational transformation and development, higher
and teacher education, early childhood development, curriculum development, language in
education, examinations and assessment, and health education.
Gerrit Kamper taught German at secondary school level for almost 11 years before joining the HSRC in 1984 and Unisa in 1992 as an educational researcher. His fields of interest are language teaching, community education (including ABET), open and distance learning, and research methodology. He holds an MA (German) from PU for CHE, an MEd (Education Management) from RAU and a DEd (Education Management) from Unisa.
INTRODUCTION
The quality assurance (QA) bug has gripped higher education institutions in South Africa since the early nineties, gaining momentum with different institutional and state versions. The aim of this paper is to describe a research project on the evaluation and improvement of written text in distance education in the Faculty of Education at the University of South Africa (Unisa). This project underwent a variety of mutations in the course of the development of the QA debate. Diagram 1 shows where it fits into the overall QA scene in South Africa. It is merely a project at the programme and course development level, where input from certain actors, namely distance education practitioners and students, would be accounted for in relation to specific written texts.
Viewed in terms of the discussion document, "A Distance Education Quality Standards Framework for South Africa" (Department of Education 1996), the project focused on certain elements of some of these standards. The main focus was on the standard related to course materials and, to a lesser extent, course design. Other standards touched upon in passing (only one or two elements of each) include: learners, learner support, programme development, learner assessment, human resource strategy, collaboration between organisations, and quality assurance. As this project was done within the Faculty of Education at Unisa and deals with texts used for teacher education, training and development, the interpretation of quality, quality assurance and quality promotion in the exposition on "Procedures and Criteria for Quality-Assuring Teacher Education" in the Department of Education's (1998) document "Norms and Standards for Educators" was used as a basis.
The paper has three main parts. Firstly, it describes the conceptualisation of the project and the research process. The other two parts comprise a reflection on the evolution of the character ('soul') of the project over time and on the institutional response to the project. The conclusion reflects on the type of evaluation and the notion of quality assurance underlying the project.
CONCEPTUALISATION OF THE PROJECT
The original research project started during the course of 1993, at a time when Unisa staff members and outsiders were raising questions about the nature and quality of Unisa texts. In March 1993 an appraisal of instructional material used by first-year students at Unisa (inter alia in Education) was carried out by Donald Swift, former principal of the British Open University, who was acting as consultant for the South African Institute for Distance Education (SAIDE) at the time. According to Swift (1993:1-3), the texts were too dense and compact, boring, unattractive, authoritarian and impersonal, with a stilted tone. They also encouraged passive learning without creativity or critical thought and discouraged learner interaction with text and tutor. Swift's report was followed by an evaluation by the International Commission on Distance Education in South Africa under the auspices of SAIDE in 1994. Their report criticised teacher education texts as uninviting, dull and impersonal, often with an authoritarian tone "closer to catechisms than university texts". In addition, tone, language and content were altered in the translation of texts, usually from Afrikaans to English (1995:61-2,67).
It was against this background that the research was conceptualised. There was an urgent need "to write better texts", and the then existing Media Committee in the Faculty was strongly in favour of an action research project. "Interactive text" was the buzzword on everybody's lips. However, this was before new education policies came into being, and the result was a rather technicist, linear, factory-model research design consisting of three components: a qualitative pilot study with students, followed by a student questionnaire and an analysis of texts (mainly study guides) (see Diagram 2). Parallel to the research, a manual for "writing better texts" was also envisaged. Under the auspices of the Institute for Educational Research, a grant was obtained from Unisa's Research and Bursary Committee to carry out the project, with one of the presenters (A-MB) as project coordinator.
The intention was for the project to be as participative and interactive as possible. Team members were active at different levels. Initially there was a 'management committee' of four people, which later expanded to five and eventually to six. We recruited people as researchers, but also needed 'informants' and 'field workers' in the Faculty for certain aspects of the collection of information and the evaluation of texts. For many of these participants it was a 'come and go' situation: they participated in an aspect they were interested in and then left the project again or became 'dormant' until we needed them again. Furthermore, we wanted to accommodate interested people who could not become involved in the actual research. Although it was a Faculty of Education project, we drew 'friends of the project' from three other faculties, as well as from the Editorial Department, Student Counselling Services and the Bureau for University Teaching. Technikon SA also had an official representative. We tried to keep the 'friends of the project' informed of the progress of the research by organising a few report-back meetings in the beginning stages. After the release of the report on the first phase this practice died out, mainly because of the tremendous effort required to keep such a network going and because of a lack of time among the researchers, who did not get any time off from their other duties to work on the project.
During phase two we tried to keep the Faculty informed by way of occasional e-mail messages (also used for recruiting volunteers to participate in a particular part of the project), updates to the Executive Committee of the Faculty and reports at Faculty Board meetings.
THE PROCESS
Phase one: the pilot study
Because few research findings were available on the interaction between Unisa learners and their study texts, a qualitative approach was considered the most suitable for an investigation of such an exploratory and descriptive nature.
Two texts were selected for the purposes of this phase. We decided on two postgraduate texts, as we could not identify two undergraduate Education texts that were sufficiently different at that stage. It was not even possible to find two study guides in the same programme for this purpose. The first text was a BEd tutorial letter, used instead of a study guide, which incorporated several features suggested to be typical of an interactive text. The second text was a traditional study guide used in one of the courses in the Diploma for Tertiary Education (DTE).
For various practical reasons the research was conducted on the main campus of Unisa in February and March 1994, with a volunteer sample of ten postgraduate BEd students living in the Gauteng region. Data pertaining to the participants' background was collected by means of a biographical questionnaire, followed by a preliminary in-depth interview in which participants were asked to elaborate on their background, present situation and study approaches. Thereafter, video recordings were made of two consecutive half-hour 'study sessions' during which first one text and then the second were read. This was followed by a stimulated recall interview in which the actual texts used, the markings in the texts, the notes made during the reading sessions and a replay of the video recording of their actions were used to stimulate the students' recall of their responses to the text. Details of the research design and the results of this phase have been reported extensively elsewhere (Lemmer et al 1995; Bergh et al 1996).
After the publication of the research results there was a period of dormancy before the second phase took off. Again, time constraints of researchers and an increasing teaching workload played a major role in this.
Phase two: from linearity to complexity
During the dormant period the project made a major conceptual shift. The linear conceptualisation started to make way for a more octopus-like model with a number of 'tentacles' (see Diagram 3). The original idea of an analysis of study guides started taking on a new form, namely the evaluation of different aspects of text (open learning principles/instructional design versus subject matter) and evaluation by different groups of people, including students (the pilot study and the envisaged questionnaire), independent evaluators and staff members evaluating their own course texts. A staff development component was indirectly present in the original conceptualisation of writing a staff manual with instructions on "how to write an interactive text". This now changed to the notion of staff development for the sake of staff reflecting on their own texts. Instead of research results informing the construction of a manual, research results were now seen as informing staff members.
As a result of the publication of new policy and discussion documents, the term "quality assurance" was heard more and more often in the corridors of Unisa between 1995 and 1997. And we started to realise that our idea of staff development could be harnessed for quality assurance and quality promotion. The Faculty also established a Quality Promotion Committee in 1997, and we tried to establish some informal links between the project and this institutionalised structure. We also had intensive debates on when and where it would be most appropriate to involve people outside the Faculty or Unisa as a form of triangulation for the research. Money and time constraints were at that stage still major obstacles.
The student questionnaire
In the meantime, during 1996 and 1997, the construction of the student questionnaire was receiving attention. This process was informed by the results of the first phase of the project, as well as by activities in the other leg of this phase, namely the development of an evaluation 'instrument'. It was again decided to administer the questionnaire to the postgraduate BEd students in order to obtain some kind of comparability at the same level of study.
The structured questionnaire focused on students' feelings about studying at Unisa, their reasons for registering for the BEd degree, their study approaches (including communication with staff and time management), and their views on their BEd study material (with regard to appearance, objectives and outcomes, content, learning principles, learner activities, language and evaluation/assessment). We also wanted to establish how the above variables influenced their performance.
The questionnaire was sent out in September and October 1997 and the response rate was nearly 34%. This is higher than the 10% response rate some other evaluations at Unisa have achieved in the past. The results of the survey are reported in an unpublished manuscript (Schulze et al 1999).
The text evaluation 'instrument'
When we tried to implement the part of the original research proposal pertaining to text analysis ("analysis of study guides"), we realised for the first time how vague the original proposal was with regard to this leg of the research. We again decided to focus on BEd study texts (particularly study guides and tutorial letters fulfilling the function of a study guide) to be in line with the rest of the project. And then the realities and constraints of team research really dawned on us. There were 42 BEd courses (modules), each with its own study package. Issues of generalisability, reliability and validity came under scrutiny, and our original idea of an in-depth hermeneutic and content analysis started to seem impossible. We had to devise another strategy to do the analysis as a team.
After much discussion and debate in the project 'management committee', our mindsets gradually shifted from text analysis to text evaluation. This was also facilitated by the general discourse on quality assurance doing the rounds. At the same time, one of the team members expressed an interest in doing more research on the academic content of study material. So we decided that, for promoting quality, we should rather develop a list of criteria with which to evaluate texts, or, as one of the team members expressed it: "We wanted to have an instrument to standardise the evaluation of texts, and we then started searching for something which would encompass all the important aspects of open and distance learning (ODL)." So our search began in the literature, where we found that existing checklists were either incomplete or excessive, too technicist or not structured for our needs.
The construction of the preliminary 'instrument' went through various phases. Initially we referred to it as a "checklist", but we found the checklist we were generating too general. Gradually the "checklist" became an "instrument". In the meantime there was an attempt at a structured, coded, questionnaire-type instrument with open spaces for comments on each item. This also proved to be impractical, and we eventually decided on an open-ended questionnaire, which went through various phases of initial testing. In this process we became so involved with the instrument itself that the exercise nearly became self-defeating, as we had difficulty in maintaining a critical distance.
By the end of November 1997 the instrument was given to 'independent' evaluators in the Faculty. These were volunteers recruited in the Faculty to apply the instrument to the study material of courses they were not involved with. We tried to recruit people outside the Faculty, but were unsuccessful. We also felt that we should perhaps do a first run before involving evaluators outside Unisa, particularly because of the costs involved, for which we had not budgeted. In March 1998, all BEd course coordinators were requested to apply the instrument, together with their course teams, to their own courses.
The data obtained from these two evaluation processes yielded surprising results and brought some of our hidden assumptions to the fore. In the end the data gave more insight into the assumptions and understandings of the text evaluators and text writers at Unisa than into the nature and quality of the study texts. We realised, inter alia, that we had not differentiated with regard to study level and that instruments gleaned from the literature were geared more to the undergraduate level. We had also assumed that people had the same understanding of certain aspects of text (eg outcomes and assessment). By keeping the instrument open, we had been overoptimistic about getting 'good' qualitative responses. In the process another question came to the fore: who 'qualifies' as an evaluator of study texts?
Above all, we had assumed that all aspects of a study text could be evaluated in one instrument. The result is that we are now contemplating a set of evaluation strategies to deal with the problems we had. These may include the following:
- a grid summarising certain textual features (to be designed in such a way that it could give an overview of one specific course, but also an overview of, and specific trends in, textual design in a programme as a whole)
- a coded evaluation instrument concentrating on open learning principles and instructional design
- an instrument for evaluating subject matter (content)
- an essay-type open-ended evaluation by independent evaluators or critical readers (possibly not even suggesting categories or headings)
Phase three: the living web
Once the summative report on phase two of the research project has been completed, the project for which the grant was obtained will officially end. However, while phase two was in progress, new legs were added to the project (Diagrams 4 and 5). It became a living web in which anyone interested in pursuing research on study material could be taken under the wing of the project, but would do the research independently, without being bound by the goals of the original research project. The main function of the 'management committee' will be networking: bringing relevant threads together in overviews, putting researchers in touch with each other, and linking individual research projects to other projects outside the Faculty and Unisa. The project will have a life of its own, no longer steered as it was during the first two phases, when the research team was accountable for the way in which the research grant was spent. In a way, a centralised project has become decentralised, with only loose ties of research interest binding the various research endeavours together.
THE 'SOUL' OF THE PROJECT: EVOLUTION OVER TIME
The evolution of the character of this research project over time cannot be done justice in this paper, as it warrants a separate in-depth study. Such a study would require withdrawing from the living web, viewing it from a distance and trying to make sense of the associations between obvious and not-so-obvious things. In this paper we will only share a few of the issues which came to the fore during the course of the project.
Starting point, outcomes and legitimation. One of the painful lessons we learnt was that we had started the project with the wrong assumptions, thinking we could do a quick-fix job "to make our texts better". Starting out from just the texts, on the one hand, limited the scope of what we could do, but, on the other hand, provided a kind of anchoring point for keeping the project manageable and on track. In the course of the six years that the project has been running, it did, however, evolve into a much more complex system in which more variables could be accounted for against the backdrop of institutional and societal transformation. This course of events obviously had an influence on the outcomes of the project up to the end of phase two and also brought home the realities of being accountable to the hand which gives the money. More than once during the project we felt forced to fit our research into the original research proposal, because "that was what we had promised to do with the money". This led to some gaps in the outcomes at the end of phase two, but this is probably part of the 'give and take' principle of many team research projects. It also forced us continually to rationalise and justify some of our actions in order to give more legitimacy to the project.
Participation, communication, dissemination and ownership. The participative management approach of the project was probably one of the highlights of the process. We managed to involve between 40 and 50 participants actively in aspects of the project. Despite the midway change in communication strategy, we tried to keep members of the Faculty informed of the progress of the project and the preliminary results. Despite the high participation rate, there are still questions about ownership of the project and about the extent to which participants coming and going through the revolving door identified with the project as a whole. More light will be shed on this issue in the near future, when some staff members may have to be persuaded of the value of reflection along the lines of the guidelines encompassed in our text evaluation instrument.
The function of research. As has already been noted, the original view of the use of the research results changed during the course of the project. Initially the results were supposed to inform the construction of a manual on how to write interactive texts. Eventually the results were used to inform staff members with a view to reflection on the improvement of study texts. This demonstrates a change in assumptions as well. Originally it was assumed that anyone could write 'good' texts as long as they had the necessary guidelines, and that if these guidelines were well written everybody would happily 'buy' them. This consumer model changed into a more reflective model after it was realised that writing ODL texts was not everybody's cup of tea (lack of experience, resistance to the effort, the time involved, etc). The notion of staff development acquired an enabling function, but before any development can take place there has to be a desire to become more skilled at writing. This could be achieved by the reflective function of staff development. The role of research is to bring the reflective and enabling functions together (while also acknowledging that the impact of larger institutional and societal changes on staff morale and the like could defeat the purpose).
The tension between evaluation and research. Although it is generally recognised that evaluation per se does not constitute research (Payne 1994:10), evaluation in some cases constitutes "the application of research skills to determine the worth of an educational practice" (McMillan & Schumacher 1993:518). Undertaking research on the scale of our project to evaluate study texts is a tedious process, and participants even complained about the length of the text evaluation instrument we used. This brings to the fore the question of the extent to which thorough evaluation of texts can be done without research, or whether one should use research to establish valid evaluation instruments which would yield reliable results. And then the question remains whether more user-friendly evaluation instruments could do justice to the requirements of reliability and validity. Another angle one could take in this regard is rather to consider our project a research project whose findings could inform other, more complex evaluation activities.
INSTITUTIONAL RESPONSE TO THE PROJECT
An important aspect of a project like ours is to reflect on the institutional response to it. The initial response to our funding proposal was positive, in the sense that a research grant was allocated. There were, however, other aspects of the response in which negative and positive elements constantly interchanged. A few of these are the following:
Execution of the project. The time constraints of the researchers, and the fact that they did not get additional time to work on the project, resulted in some steps being dealt with too superficially and in the original time schedule being prolonged. From time to time agony was also caused by key participants withdrawing unexpectedly during a particular phase of the project.
Perceptions of the usefulness of the project. Here one should distinguish between the expectations of researchers and the perceptions of the 'recipients' of research findings, and consider how particular institutional interactions between structure and agency may sink some initiatives. Apart from experiencing the usual struggles one encounters in the course of action research and team research projects, we still need to assess staff members' reactions to some of the findings. Calder (1994:28) discusses three types of short-term use of evaluation: no utilisation, passive utilisation and active utilisation. In our project planning we made provision for further staff development (see Diagram 3), and one could possibly expect a combination of passive utilisation (with some indirect influence on actions) and active utilisation of some of the recommendations emanating from the research findings.
The legitimacy of the project could be questioned at different levels. One issue is the fact that we used mostly internal staff in the execution of the project, and some people do not believe in self-evaluation. Money and time constraints have up to now prevented us from pursuing the ideal of including an 'outside' component. As this is an action research project, however, the primary focus is not on the evaluation of texts per se but on the improvement of written texts. Legitimacy should possibly be weighed up against the degree of ownership and commitment to transformation that the project could eventually generate.
CONCLUSION
According to James Hartley (1995:285), there are many ways to evaluate distance learning materials. He refers to Schriver's distinction between text-focused, expert-focused and reader-focused studies and contends that these "three kinds of studies complement each other. ... sampling from each of these three approaches to evaluation may be helpful." The project we have described used all three foci to some extent, although the expert focus will only become stronger towards the very end. We have also tried to indicate the ever-changing interaction between these three foci during the development of the project.
In retrospect, our project had the peculiar character of mixing different forms of quality assurance and quality promotion. On the one hand, its ultimate goal was an internal form of QA and QP, namely to improve the quality of existing and future study texts in Education and to be accountable for what was available (cf Department of Education 1998:144). On the other hand, the project tried to do this by imitating an external audit (a once-off evaluation which could be repeated at intervals). The question, which will only be answered in a couple of years' time, is whether this project can have an impact on the daily, cyclical process of self-evaluation.
Acknowledgements
We would like to thank the many participants from Unisa's Faculty of Education for their enthusiasm and willingness to participate in various facets of the project. Without their commitment we would not have been able to progress thus far.
The financial support of Unisa's Research and Bursary Committee is acknowledged. Views expressed in this paper are those of the authors and not of Unisa.
References
Bergh A-M, Lemmer E, Van der Linde N, Van Niekerk P, Van Wyk N. 1996. Distance learners and their experience of text. South African Journal of Higher Education 10(2):169-174.
Calder J. 1994. Programme evaluation and quality. A comprehensive guide to setting up an evaluation system. London: Kogan Page.
Department of Education. 1996. A distance education quality standards framework for South Africa. A discussion document prepared by the Directorate: Distance Education, Media and Technological Services, Department of Education, September - December 1996.
Department of Education. 1998. Norms and standards for educators. Pretoria: Department of Education, Technical Committee on the Revision of Norms and Standards for Educators.
Hartley J. 1995. The layout and design of textual materials for distance education. In Lockwood F (ed). Open and distance learning today. London & New York: Routledge.
Lemmer E, Bergh A-M, Van der Linde N, Van Niekerk P, Van Wyk N. 1995. Distance learners and their experience of text. Report of a research project conducted under the auspices of the Faculty of Education and the Institute for Educational Research of the University of South Africa. Pretoria: Institute for Educational Research, Unisa.
McMillan JH & Schumacher S. 1993. Research in education: a conceptual introduction (3rd edition). New York: Harper Collins College Publishers.
Payne DA. 1994. Designing educational project and program evaluations. A practical overview based on research and experience. Boston: Kluwer Academic Publishers.
SAIDE (South African Institute for Distance Education). 1995. Open learning and distance education in South Africa. Report of an International Commission, January - April 1994. Manzini: SAIDE/Macmillan Boleswa.
Schulze W, Swanepoel CH & Bergh A-M. 1999. Distance learners' approach to their studies and their experience of text: updated report on a questionnaire to BEd students. Unpublished report, Institute for Educational Research, Unisa.
Swift DF. 1993. Impressionistic review of Unisa first-year courses. Unpublished manuscript, March 1993.