Web Usability and Information Design Issues
The following is a critical review of Mouli and Ramakrishna's (1991) report "Readability of Distance Education Course Material." This paper also examines the Flesch Reading Ease score method itself, applying it to the reading materials of MDDE 602, a research methods course offered by Athabasca University.
Raja Mouli and Pushpa Ramakrishna (1991) applied survey research methods to study the readability of course materials developed by the Andhra Pradesh Open University (APOU), using the Flesch Reading Ease score method. In the second part of their study, they examined the relationship between Reading Ease scores and student performance on term-end examinations. Their paper is an interesting example of the survey research technique because they carefully define all terms, clearly report their procedures, and choose methods appropriate to the research task at hand. However, although their readability results can be shown to reflect their sample universe, their performance conclusions are weakly supported by their data.
Mouli and Ramakrishna surveyed 48 books used in distance education courses at APOU. Using random sampling, they extracted five paragraphs of approximately 100 words from each book, for a total of 240 paragraphs. The total number of words, sentences, and monosyllables in each passage was then counted to determine "readability" using a Flesch Reading Ease score table; a high Reading Ease score indicates more readable text than a low score. Mouli and Ramakrishna then compared their readability results with the mean scores of learners on end-of-term exams for each course surveyed.
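The score that such a table encodes comes from Flesch's formula, which combines average sentence length with average syllables per word. The following is a minimal sketch of that calculation; note that the syllable counter here is a crude vowel-group heuristic for illustration only, not the careful manual count the authors' score table would assume.

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Compute the standard Flesch Reading Ease score for a passage.

    Higher scores indicate more readable text (90-100 is very easy,
    0-30 very difficult). The syllable counter is a rough heuristic,
    not a dictionary lookup.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def count_syllables(word: str) -> int:
        # Count groups of consecutive vowels as syllables, stripping a
        # trailing silent 'e'. Crude, but adequate for illustration.
        word = word.lower()
        if word.endswith("e") and len(word) > 2:
            word = word[:-1]
        return max(1, len(re.findall(r"[aeiouy]+", word)))

    total_syllables = sum(count_syllables(w) for w in words)
    words_per_sentence = len(words) / len(sentences)
    syllables_per_word = total_syllables / len(words)
    return 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word
```

As the formula makes clear, short sentences of monosyllables score high, while long sentences of polysyllabic words can drive the score toward (or below) zero, which is relevant to the very low scores reported later for some of the MDDE 602 readings.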
Appropriateness of Method Used
The survey research technique is well suited to answering questions about "the specific characteristics of a large group of persons, objects, or institutions" (Jaeger, 1997a, p. 449). To this end, Mouli and Ramakrishna offered three ways their institution could use the survey technique to assess the effectiveness of their distance education materials:
Mouli and Ramakrishna chose the third technique because they felt that both teachers and students at APOU were:
Part of the strength of Mouli and Ramakrishna's methodology was their careful definition of key terms, description of procedures, and justification of their approach, as evidenced in their write-up. They were also careful to point out that since APOU students are expected to gain a considerable amount of their learning from the printed course materials, it is essential that these materials be more readable than conventional materials: "If the materials are poor, the students won't learn much" (p. 11). This rationale supports the relevance of their study.
However, although Mouli and Ramakrishna were able to accurately survey the Flesch Reading Ease of their course materials, their conclusion that a positive correlation exists between readability scores and end-of-term exam performance is unfounded.
Weaknesses of Method
From the results of student performance on end-of-term exams, Mouli and Ramakrishna reported that the Zoology pass percentage was the highest at 92.32% and the Political Science pass percentage was the lowest at 42.66%. With respective Flesch readability scores of 44 and 42, they then calculated that "a positive correlation (+0.68) was observed between the Reading Ease scores and the pass percentage using the product-moment method of calculations" (p. 13).
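The product-moment (Pearson) method the authors cite is a standard calculation. A minimal sketch, using illustrative numbers rather than the authors' full data set, is:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient between two
    equal-length sequences of numbers (ranges from -1 to +1)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Covariance of the two variables, divided by the product of
    # their standard deviations (the n's cancel, so they are omitted).
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

One property worth noting: any correlation computed from only two data points (such as the Zoology and Political Science pair alone) is necessarily exactly +1 or −1, which is why a defensible coefficient requires the full set of courses and, as argued below, far more data than the authors report.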
This correlation does not hold up. With a negligible difference of 2 points in Flesch Reading Ease scores and a difference of nearly 50 percentage points in exam performance (92.32% versus 42.66%), the assertion of a positive correlation between the two seems more like positive thinking than science. Random error alone could easily account for the differences in scores.
A Graph of Their Findings Tells the Truth
According to Mouli and Ramakrishna's conclusions, a graph of their results should show an upward-sloping line, supporting a positive correlation between Reading Ease and performance. But as the graph in FIG. 1 shows, no such correlation exists. Although it can be weakly argued that line A best represents the data, lines B and C show a distinct negative correlation. It is much easier to argue that the data show no correlation, positive or negative. This lack of correlation could be the result of inaccurate Flesch Reading Ease scores or, more likely, insufficient performance data.
FIG. #1 – Graph of Mouli and Ramakrishna's Readability Findings
Suggestions for Improvement
Mouli and Ramakrishna's performance data are inconclusive. To increase the accuracy of their study, they need more data from more sources. The data outlined in Table #2 were compiled using a similar random sampling survey technique. Five passages were randomly selected from each of the texts and readings used in Athabasca University's MDDE 602 course, "Methods of Inquiry and Decision Making" (a core requirement in the Master of Distance Education program). Readability scores were then calculated using the built-in Flesch readability tool in Microsoft Word 97. The five passage scores for each text were then averaged, as were the 20 passages surveyed for the entire course.
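The averaging step can be sketched as follows; the per-passage scores below are hypothetical placeholders, not the actual Table #2 figures:

```python
from statistics import mean

# Hypothetical per-passage Flesch scores (five randomly sampled
# passages per text) -- placeholders, not the surveyed MDDE 602 data.
passages = {
    "Text A": [38.0, 42.5, 40.1, 39.8, 41.6],
    "Text B": [60.2, 64.8, 67.1, 63.5, 66.0],
}

# Mean readability per text, then one mean over every passage in
# the course (pooling passages, not averaging the per-text means).
per_text = {title: mean(scores) for title, scores in passages.items()}
course_mean = mean(s for scores in passages.values() for s in scores)
```

Pooling all passages weights each text by the number of passages sampled; since five passages were drawn per text here, this coincides with averaging the per-text means, but the two differ when sample sizes are unequal.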
The 602 data reveal the following: the readability score of 40.6 for MDDE 602 course materials is similar to the mean score of 41 for APOU course materials (see Table #1). This suggests that the first part of Mouli and Ramakrishna's survey was accurate. Verifying the second part of their survey, however, would demand more extensive research: other universities would have to conduct similar studies comparing Flesch readability scores with final exam performance, and the results would then have to be pooled to derive a more accurate correlation.
It is interesting to note that the survey of MDDE 602 materials also reveals the following: Huff's (1993) text, with a readability score of 65.3, offers a needed break from both Jaeger's text (1997b) and the MDDE 602 Readings (Coldeway, 1999), with respective scores of 30.2 and 26.4. In particular, administrators of the program should carefully consider the usefulness of the assigned readings by Colaizzi (1978) and Kaestle (1997), with respective scores of 0.8 and 10.2.
What is readability? Can readability actually be determined using the Flesch formula?
Colaizzi, P. F. (1978). Psychological research as the phenomenologist views it. In R. S. Vale & M. King (Eds.), Existential-phenomenological alternatives for psychology (pp. 48-71). New York: Oxford University Press.
Coldeway, D. O. (Ed.). (1999). Methods of inquiry and decision making, MDDE 602, Readings. Athabasca, Canada: Athabasca University.
Garland, M. (1993). Ethnography penetrates the “I didn’t have time” rationale to elucidate higher order reasons for distance education withdrawal. Research in Distance Education, Vol. 5(1&2), 6-10.
Greene, M. (1997). A philosopher looks at qualitative research. In R. M. Jaeger (Ed.), Complementary methods for research in education (pp. 189-206). Washington: American Educational Research Association.
Holdaway, E.A. (1986). Making research matter. The Alberta Journal of Educational Research, Vol. XXXII(3), 249-264.
Huff, D. (1993). How to lie with statistics. New York: W. W. Norton & Company.
Jaeger, R. M. (1997a). Survey research methods in education. In R. M. Jaeger (Ed.), Complementary methods for research in education (pp. 449-476). Washington: American Educational Research Association.
Jaeger, R. M. (Ed.). (1997b). Complementary methods for research in education. Washington: American Educational Research Association.
Kaestle, C. F. (1997). Recent methodological developments in the history of American education. In R. M. Jaeger (Ed.), Complementary methods for research in education (pp. 119-129). Washington: American Educational Research Association.
Keegan, D. (1996). Foundations of distance education (3rd ed.). New York: Routledge.
Mouli, C. R., & Ramakrishna, C. P. (1991). Readability of distance education course material. Research in Distance Education, Vol. 3(4), 11-13.
Patton, M. Q. (1986a). Focusing evaluation questions. In M.Q. Patton (Ed.), Utilization-focused evaluation (2nd edition, pp. 61-82).
Patton, M. Q. (1986b). The methodology dragon: The paradigms debate in perspective. In M. Q. Patton (Ed.), Utilization-focused evaluation (2nd edition, pp. 199-217).
Porter, A. C. (1997). Comparative experiments in educational research. In R. M. Jaeger (Ed.), Complementary methods for research in education (pp. 523–544). Washington: American Educational Research Association.
Potashnik, M., & Capper, J. (1999). Distance education: Growth and diversity. [Online]. Available: http://www.worldbank.org/fandd/english/0398/articles/0110398.htm [July 30, 1999].
Scriven, M. (1981). Product evaluation. In N. L. Smith (Ed.), New techniques for evaluation. New Perspectives in Evaluation, Vol. 2 (pp. 121-166). Beverly Hills, CA: Sage Publications, Inc.
Shulman, L. S. (1997). Disciplines of inquiry in education: A new overview. In R. M. Jaeger (Ed.), Complementary methods for research in education (pp. 3-29). Washington: American Educational Research Association.
Simon, J. L., & Burnstein, P. (1985). Basic research methods in social science (3rd ed.). New York: McGraw-Hill.
Stake, R. E. (1997). Case study method in educational research: Seeking sweet water. In R. M. Jaeger (Ed.), Complementary methods for research in education (pp. 401-414). Washington: American Educational Research Association.
Wolcott, H. F. (1997). Ethnographic research in education. In R. M. Jaeger (Ed.), Complementary methods for research in education (pp. 327-353). Washington: American Educational Research Association.