Gregory S. HADLEY
Department of General Education
In the past few years, several articles have been published in Japan describing techniques that encourage and measure oral communication, referred to in this paper as TEMOC. This paper asks the following questions: Are TEMOC in fact a valid measure of the learners' oral proficiency? Does language learning result from regularly monitoring and rewarding the use of spoken English in the classroom? And in what way does the regular measurement of oral communication complement the established testing norms of many Japanese schools? The results of this three-year study found that TEMOC may have some educational value in Japanese language classes, provided they are used fairly and consistently. This study suggests that TEMOC have the potential to be a valid measure of the learners' oral ability. Under certain circumstances TEMOC may even help learners to acquire the target language better. However, other findings warn against using TEMOC as a measure of grammatical knowledge or written skills.
The Japanese educational system, like other institutions within the country, reflects many aspects of the social welfare establishment from which it was created. One feature that has received much attention in recent years is the disparity between traditional Japanese approaches to teaching foreign languages and Western-based TEFL practices. Typically, it is the use of written grammar-based tests over tests that measure oral fluency, a focus on form as opposed to fluency, and the elevation of classroom attendance over classroom participation that have been criticized by both Western and Japanese observers. Partly because of this criticism, reforms are taking place in some universities across Japan. However, most published reports on curricular reform rarely discuss how oral communication (often called "classroom participation") should be measured or encouraged in the classroom.
As a result, several articles have been published describing techniques that encourage and measure oral communication, called in this paper TEMOC. Typically, TEMOC are used in Japanese colleges or universities where the teacher often has greater freedom in grading. Most of these teaching strategies rely on consistently giving colorful cards, coins or other tokens as classroom credit to students who speak in English during class. This credit is often a significant part of the learners' overall grade, and is frequently used as, or in place of, a formal oral test. The response of learners to TEMOC is reported to be quite enthusiastic, with an increased use of the target language.
Without questioning the potential such techniques have for encouraging class participation (and perhaps for lessening teacher fatigue), this paper asks the following questions: Are TEMOC in fact a valid measure of the learners' oral proficiency? Does language learning result from regularly monitoring and rewarding the use of spoken English in the classroom? Because most schools in Japan still value written tests as a more tangible measure of their students' progress, in what way does the regular measure of oral communication complement the established norms of many Japanese schools? The paper reports on the findings of a three-year research project that has sought answers to these questions, and discusses the perceived strengths and weaknesses that TEMOC may have for foreign language classes in Japan.
The technique chosen and used throughout this study came from a system I first administered in 1995. Despite my familiarity with this system, choosing it was not simply an attempt to showcase my personal teaching strategy. It was chosen mainly because of its similarity in scope and practice to other published reports, and its affinity to TEMOC used by English language teachers throughout Japan.
In this technique, 33% of the total grade is dedicated to oral communication. Students earn points for each communicative act. One point is given to the learners each time they ask questions in English to clarify classroom tasks, speak in English during pairwork or information gap activities without reverting to their mother tongue, or volunteer one-word answers to the teacher's questions, regardless of whether the answer is correct. Learners gain two points each time they participate in more complex oral tasks which require analytical or problem-solving skills, or answer questions that call for greater use of the target language (e.g., "Why did you like your trip?"). Students earn three points when volunteering to be a spokesperson for a group-based task, or attempting to answer questions considered challenging (e.g., "What are the most important things to think about when choosing a job?").
Most often, students who spoke fluently would receive more points per communicative act than students who, for one reason or another, communicated in one-word utterances. The teacher walked among the students throughout the period. While students were on task, he passed out points immediately to each learner as white, blue or red chips. A white chip equaled one point, a blue chip two points, and a red chip three points. The learners received points consistently each time they participated verbally in the class, provided their responses could communicate meaning. Occasionally learners needed to have their spoken language corrected, and were awarded points only after they could make their message clear either to their partner or the teacher. At the end of each class, students would form a line and bring their tokens to the teacher, who would receive the tokens and record them as oral participation points. These points would then be tallied at the end of each semester and applied to their grade.
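The bookkeeping behind this token system is simple enough to sketch in code. The chip colors and point values below follow the scheme described above; the student names and chip counts are invented for illustration only.

```python
# Sketch of the chip-tallying bookkeeping described above.
# Chip values follow the scheme in the text (white = 1, blue = 2, red = 3);
# the student records themselves are invented for illustration.

CHIP_VALUES = {"white": 1, "blue": 2, "red": 3}

def tally_chips(chips):
    """Convert a list of chip colors handed in at the end of class
    into oral participation points."""
    return sum(CHIP_VALUES[color] for color in chips)

def record_class(semester_points, handed_in):
    """Add each student's end-of-class chips to their running semester total."""
    for student, chips in handed_in.items():
        semester_points[student] = semester_points.get(student, 0) + tally_chips(chips)

# Running totals kept across the semester, one entry per student.
semester_points = {}

# Example: chips handed in after one class period.
record_class(semester_points, {
    "Student A": ["white", "white", "blue"],  # 1 + 1 + 2 = 4 points
    "Student B": ["red", "white"],            # 3 + 1 = 4 points
})
print(semester_points)  # {'Student A': 4, 'Student B': 4}
```

At the end of the semester, each student's total would simply be scaled into the 33% of the grade reserved for oral communication.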
This procedure was piloted from 1994 to 1995 with four classes of second-year students at Keiwa College in Niigata Prefecture. At the end of the year a survey was administered to find out what impressions the learners had about the TEMOC (see Table One).
The survey found that most of the sample approved of giving regular credit for consistently speaking English in class. Most also thought that they attempted to speak English in class, and felt this gave them more opportunities to use the target language.
In the following year, another class of thirty students at Keiwa College was chosen for the first study, which investigated whether TEMOC could be a valid measure of oral communicative ability. Validity in this sense was defined as construct validity, meaning that a test or technique actually measures what it claims to measure. Keiwa College proved an excellent venue for this experiment, because learners are placed by ability into different levels of language classes based on the school's battery of written and oral tests. The students participating in this experiment were from the second-year Oral English course. Students met for one semester, three times a week for sixty minutes. All were given oral communication tests twice a year in a format selected by the teacher. While space does not permit a full explanation of the oral test design used for this experiment, the teacher of this course, who was a trained examiner for the Cambridge PET and KET tests, constructed the classroom tests mainly after this model, but also included some of the methodology used at the Kanda University of International Studies.
The course was taught for the entire semester using the TEMOC. The learners also took midterm and final oral tests. At the end of the course, the scores of all the learners were totaled, and the data were analyzed using the VAR Grade for Windows 2.0 software package. The method of analysis was set up as a directional one-tailed test which used the Pearson r correlation coefficient. Correlating the test scores with the TEMOC scores resulted in a correlation coefficient of +0.82 (see Figure One). According to Hatch and Lazaraton, with less than a 1 percent probability that the findings are due to chance (p < 0.01), the critical level of significance for a group of 30 is approximately +0.45. A correlation as high as +0.82 suggests that the formal oral tests and the technique for measuring regular classroom oral communication are measuring the same thing. However, these findings do not tell us whether the subjects learned anything because of the technique.
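For readers without access to a statistics package such as VAR Grade, the Pearson r used here is straightforward to compute directly. The paired scores below are invented for illustration and are not the study's data; Figure One holds the actual results.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient between two
    equal-length lists of scores."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Invented example: formal oral test scores paired with TEMOC points.
oral_test = [62, 70, 75, 80, 88, 91]
temoc_pts = [40, 52, 50, 63, 71, 78]

r = pearson_r(oral_test, temoc_pts)
# In the study, with n = 30, any r above the critical value of +0.45
# is treated as significant at p < 0.01 for a one-tailed test.
print(round(r, 2))
```

A value near +1 indicates that students ranked highly by the oral test also tend to earn the most TEMOC points, which is what the +0.82 reported above shows for the actual class.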
To study whether rewarding in-class oral communication fosters language learning, two groups of first-year students at Keiwa College were selected in the following year for another experiment. Although the emphasis of the course was listening, the text chosen for the course required the learners to take part in many information gap, pairwork and groupwork activities. One group of 23 students was chosen as the experimental group, and another group of 23 learners was designated as the control group. Keiwa College was again chosen as the place to investigate this question because first-year students all attend the same set of language classes. This meant that the only pedagogic difference between the two classes was the TEMOC. A selective deletion cloze test, which had been established to have a high degree of reliability, was administered to both groups. The mean and median for both groups were virtually identical (see Table Two). At the end of the course, the same cloze test was administered to both groups. While the scores of the control group remained almost unchanged, the mean and median of the experimental group were noticeably higher. This may suggest that providing regular incentive for oral communication in the classroom helps to foster language learning, although more research will be needed to verify this assertion.
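The pretest/posttest comparison reduces to computing the mean and median of each group's cloze scores before and after the course. The scores below are invented for illustration; Table Two holds the real figures.

```python
from statistics import mean, median

def summarize(scores):
    """Return (mean, median) of one group's cloze test scores."""
    return round(mean(scores), 1), median(scores)

# Invented cloze scores for illustration only.
control_pre  = [14, 15, 16, 15, 14, 16]
control_post = [14, 16, 15, 15, 15, 16]
exper_pre    = [15, 14, 16, 15, 15, 14]
exper_post   = [18, 17, 19, 18, 17, 18]

for label, pre, post in [("control", control_pre, control_post),
                         ("experimental", exper_pre, exper_post)]:
    print(label, "pre:", summarize(pre), "post:", summarize(post))
```

In the study itself, the pattern was of this shape: the control group's summary statistics barely moved between administrations, while the experimental group's mean and median rose noticeably.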
Because of their long tradition of use in Japan, grammar-based tests remain the standard by which most Japanese EFL learners are evaluated. Putting aside the question of whether or not this is an acceptable state of affairs, how would TEMOC correspond with established testing practices in Japanese schools?
One English for Oral Communication class at Niigata University was selected for this experiment. This venue was chosen because it was felt that both the subjects and the teaching environment typified what is normally found in traditional Japanese universities. The subjects (n = 26) were all first-year students from the university's Science Department. No special criteria were used in selecting or excluding the subjects, nor were they pretested on their English proficiency level before entering this course.
The TEMOC was used in the same manner with these subjects as in the other research tasks. The subjects responded quite enthusiastically to the technique, and class morale was observed to be high. At the end of the first term, the subjects took a prepackaged grammar-based test designed by the company that produced the textbook. The scores of all the learners were totaled, and the data were again analyzed using the VAR Grade for Windows 2.0 software package. As before, the method of analysis was set up as a directional one-tailed test that used the Pearson r correlation coefficient. This time, however, correlating the test scores with the TEMOC scores resulted in a correlation coefficient of -0.20 (see Figure Two). While the negative correlation does not reach the point of significance (-0.37 at p < 0.05), it does at least suggest that TEMOC and grammar tests may be assessing different features of the learners' second language ability.
Although the research tasks and experiments in this study lacked the rigor possible for a team of researchers with greater time and resources, this study nevertheless concludes that techniques that measure and encourage oral communication may have some practical educational value for the language classroom. Provided they are used fairly and consistently, they seem to have the potential to be a valid measure of the learners' oral ability. This research suggests that, with positive correlations as high as +0.82, TEMOC might be used in place of a formal oral test, especially by teachers who may not have the time or resources to do otherwise. Under certain circumstances TEMOC may even help learners to acquire the target language better. However, they might not be a good measure of grammatical knowledge or written skills.
It is suggested that TEMOC scores and grammar-related scores be treated as separate measures of different aspects of the learners' ability in the target language. In this way, giving positive incentives for oral communication can be both fair and motivating for our learners.
1. Hadley, G. (1997). A survey of cultural influences in Japanese ELT. Bulletin of Keiwa College (6), 61-87; and Finkelstein, B., Imamura, A., and Tobin, J. (Eds.). (1991). Transcending Stereotypes: Discovering Japanese Culture and Education. Yarmouth, Maine: Intercultural Press.
2. Sabatini, Y., Matsumura, Y., and Tamura, Y. (1997). Report on a lecture by Dr. Stephen J. Gaies: "Philosophical and historical foundations of ELT in Japan." The Language Teacher 21 (11); and Horio, T. (1988). Educational Thought and Ideology in Modern Japan: State Authority and Intellectual Freedom. Edited and translated by Platzer, S. Tokyo: University of Tokyo Press.
3. Wada, M., and Cominos, A. (Eds.). (1996). Japanese Schools: Reflections and Insights. Kyoto: Shugakusha; Law, G. (1995). Ideologies of English language education in Japan. The JALT Journal 17 (2), 213-224; and Wadden, P. (Ed.). (1993). A Handbook for Teaching English at Japanese Colleges and Universities. New York: Oxford University Press.
4. Poulshock, J.W. (1996). English language and content instruction for Christian academics and Christian language teachers. Christ and The World (6), 1-19; Otsubo, H. (1995). Japan's higher education and Miyazaki International College: Problems and solutions. Comparative Culture: The Journal of Miyazaki International College (1), 1-10; and Fukuda, K., and Sasaki, M. (1995). Immersion program ni kanren suru shisatsu houkoku (Task Group Report on Immersion Programs). Paper presented at the Niigata University General Education and Language Research Annual Meeting, December 16, 1995.
5. Garland, V. (1996). Teaching techniques and learning styles in Japanese universities. Journal of Crosscultural Studies (6), 73-96.
6. Messerklinger, J. (1997). Evaluating oral ability. The Language Teacher 21 (9), 67-68; Mills, S. (1995). Liven up class now! With participation cards. The Language Teacher 19 (7), 30-32; Hadley, G. (1995). Class participation: A solution for Japanese and Korean university English courses. Language Teaching 3 (3), 124-125; and Nelson, W. (1993). Increasing student-initiated communication and responses. The Language Teacher 17 (7), 39-42.
7. Nelson 1993, p. 39.
8. Hadley 1995, pp. 124-125.
9. See especially Mills 1995, Nelson 1993, and Fukuda and Sasaki 1995.
10. Fried-Booth, D., and Hashemi, L. (1992). PET Practice Tests 2. Cambridge: Cambridge University Press; and Delarche, M., and Marshall, N. (1996). Communicative oral testing. In van Troyer, G., Cornwell, S., and Morikawa, H. (Eds.), On JALT 95: Curriculum and Evaluation. Tokyo: The Japan Association for Language Teaching.
11. Revie, D. (1997). VAR Grade for Windows 2.0: Grading Tools for Teachers. Thousand Oaks, CA: VARed Software.
12. Hatch, E., and Lazaraton, A. (1991). The Research Manual: Design and Statistics for Applied Linguistics. Boston: Heinle and Heinle, p. 604.
13. Hadley, G., and Naaykens, J. (In press). Testing the test: Comparing SEMAC and exact word scoring on the selective deletion cloze. Korea TESOL Journal 1 (1).
14. Hadley 1997; Garland 1996.
15. Wadden 1993.
16. Wholey, M.L., and Sklar, A. (1996). Atlas 1 & 2 Testing Package. Boston: Heinle and Heinle Publishers.
17. Revie 1997.