My last article (below) outlined errors made by successive governments in their handling of the teaching of reading. This one will deal with the far greater errors of teacher trainers and academics. These are rooted in good intentions. Across the world, public education does not ensure that everyone learns to read and write fluently and well. With rare exceptions, languages are complicated by issues such as complex grammatical rules (German), large numbers of silent letters (French) and, in English, by the fact that there is no direct and consistent correspondence between every letter and every sound, so that trying to sound out words, one letter at a time, will often fail. As a result, English-speaking countries have always had a long tail of weak readers in international comparisons. Something needed to be done. But what?
The first step was to try to determine what we actually do as we read, and how we think. During the sixties, evidence on eye movements was confined to the large jumps our eyes make as we read down a page, while research on memory showed that we could hold no more than six or seven items in working memory at a time. Taken together, these findings suggested that we could not hold in our minds all of the letters in longer words, and that we were not paying attention to every detail of print as we read. That led to the fallacy of reading as a “psycholinguistic guessing game”, and to Frank Smith’s proposition that “We learn to read by reading”. Smith’s brilliant, hilarious presentations at conferences won wide acceptance for the theory in teacher training colleges and university departments of education. Rebranded, it became the basis of Reading Recovery’s “multicue” theory and of Labour’s Searchlights.
Modern eye-tracking technology, also used by the military to direct smart bombs, shows that we track not only every word but the form of every letter as we read. The second error, on the use of memory, came from failing to take account of the grouping of letters into syllables: many words have more than seven or eight letters, but almost none have more than that number of syllables. The final flaw in the theory concerned the role of context in identifying the meaning of a word. A series of studies in the eighties, and a doctoral thesis by Morag MacMartin, showed that context and pictures were more likely to mislead than to help, as children would misread subsequent words in trying to make them fit what they had gleaned from the context.
Having written about these battles for the Times Educational Supplement throughout the eighties, I had thought the matter was settled. It is not, and it is complex. Some academics and teacher trainers use phonics as an Aunt Sally, imposed by governments, and what do governments know? Others, like Professor Dominic Wyse in his contributions to a recent paper publicised by our friends at The Guardian, say that the neglect of phonics has been a weakness, but that phonics needs to be balanced by other methods. These “other methods”, though, do not involve additional teaching of the use of the alphabet in English, which I’ve described in previous postings and in my books. Instead, they involve elements of guessing, which do not work: there are too many possibilities in a writer’s choice of words, or in the links between texts and pictures, for either to be relied on.
Phonics, though, is only a small part of the argument. Most of the academics involved are more interested in sociology than in reading. Professor Bradbury’s only published paper on phonics does not consider its impact on reading skills, but attacks it as leading teachers to group pupils by ability, an approach that the Director of the Education Endowment Foundation considers “symbolically violent”. Professor Wyse has been a primary teacher. His doctorate, The Teaching of English: Research Evidence and Government Policy, argues that policy has been insufficiently supported by research evidence, but does not involve direct research on the teaching of reading. Dr Mary Bousted, of the National Education Union, wrote her thesis on “The Ideology of English Teaching”. Their knowledge and understanding of reading research is limited, and they apply arbitrary selection criteria. For example, they cite an important paper by the Australian researcher Anne Castles and UK colleagues, but not its key point on the role of phonics in teaching children the principles of alphabetic writing. There is no mention of the late Professor Katharine Perera’s discoveries on the role of accurate reading of words in the development of phrasing, or of the current findings of brain research, notably from Stanislas Dehaene.
Cherry-picking, or a reflection of the authors’ limited knowledge? Perhaps a little of both. In either event, their reliance on a “meta-analysis” of a large number of studies, some of which were not even carried out in English, is grasping at straws. Professor Wyse has long championed randomised controlled trials as the only way of providing reliable evidence, so it is fair to ask why he has chosen to rely on studies from other countries rather than carry out any on his own approach. The fallacy of over-reliance on such studies is demonstrated by this one: carried out by two of their main advocates, it is one of the worst studies ever conducted in the field of literacy. The academic critics of researchers such as Professors Shona Johnstone and Anne Castles need to up their game or find other work to do.
The government’s sharp response to this paper is fully justified. Phonics has improved matters, and there is a second wind that helps children to read the generally regular words derived from Latin and Greek that they will meet in secondary school. My reservations about “whole phonics”, and the neglect of alphabetic knowledge not directly related to current speech, remain. I continue to demonstrate that filling this gap is the key to solving many serious literacy problems. What we do not need is a return of Searchlights, multicue systems, and other guesswork under the banner of balance. We guess when we don’t know. It doesn’t work. To become literate, children need to know.