Concordancing, lexical chunks and the Lexical Syllabus

 


In my “New Year’s Resolutions” I vowed to bash “The Lexical Approach”, and, in reply to some comments, promised to say more soon. There are already two pages on this website devoted to concordancing (see the list on the right), so I want here to just summarise these issues before explaining why I am not a fan of any lexically-driven syllabus, but why I am a fan of Nattinger and DeCarrico.

Given that using concordance programs to examine enormous corpora of English texts has led to more accurate and reliable descriptions of the English language, the question remains: to what extent do these new descriptions imply any particular pedagogical practice? Before trying to answer that question, I want to recall that Nattinger and DeCarrico, drawing on Pawley and Syder and also on more recent research, argue that what they call the “lexical phrase” is at the heart of the English language. Early work done by computational linguists (Hockey 1980, Sinclair 1987, Garside et al. 1987) on collocations uncovered recurring patterns of lexical co-occurrence, and more recent computer analysis has widened the scope of investigation to include the search for patterns among function words as well. As a result of such research, the 1990s saw several papers (see Concordance page) which argued that linguistic knowledge cannot be strictly divided into grammatical rules and lexical items; rather, there is an entire range of items from the very specific (a lexical item) to the very general (a grammar rule), and since elements exist at every level of generality, it is impossible to draw a sharp border between them. There is, in other words, a continuum between these different levels of language.
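To make this concrete, here is a minimal sketch, in Python, of the kind of lexical co-occurrence counting a concordancer performs. It is an illustration only, not any particular researcher’s tool; the toy text, window size and node word are all my own assumptions:

```python
from collections import Counter
import re

def collocates(text, node, window=4, top=10):
    """Count the words that co-occur with `node` within `window` words
    on either side, and return the most frequent ones."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == node:
            lo = max(0, i - window)
            for neighbour in tokens[lo:i] + tokens[i + 1:i + 1 + window]:
                counts[neighbour] += 1
    return counts.most_common(top)

sample = "I bet he's late. I bet you ten pounds. You can bet on it."
print(collocates(sample, "bet", window=3))
```

Real corpus work would add association measures such as mutual information or t-scores to separate genuine collocations from chance co-occurrence, but even raw counts of this kind are enough to surface recurring patterns.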

The suggested application of Nattinger and DeCarrico’s argument to language teaching is that lexis – and in particular the lexical phrase – should be the focus of instruction. This approach is quite different to Willis’ (which takes frequency as the main criterion – see below) and rests on two main arguments. First, some cognitive research (particularly in the area of parallel distributed processing (PDP) and related connectionist models of knowledge) suggests that we store different elements of language many times over in different chunks. This multiple lexical storage is characteristic of recent connectionist models of knowledge, which assume that all knowledge is embedded in a network of processing units joined by complex connections, and accord no privilege to parsimonious, non-redundant systems. “Rather, they assume that redundancy is rampant in a model of language, and that units of description, whether they be specific categories such as “word” or “sentence”, or more general concepts such as “lexicon” or “syntax” are fluid, indistinctly bounded units, separated only as points on a continuum” (Nattinger and DeCarrico, 1992). If this is so, then the role of analysis (grammar) in language learning becomes more limited, and the role of memory (the storage of, among other things, lexical phrases) more important.

The second argument is that language acquisition research suggests that formulaic language is highly significant. Peters (1983) and Atkinson (1989) show that a common pattern in language acquisition is that learners pass through a stage in which they use a large number of unanalyzed chunks of language – prefabricated language. This formulaic speech is seen as being basic to the creative rule-forming processes which follow. Starting with a few basic unvarying phrases, first language speakers subsequently, through analogy with similar phrases, learn to analyze them into smaller patterns, and finally into individual words, thus finding their own way to the regular rules of syntax.

Biber, Sinclair, Willis and Lewis, among others, argue even more forcefully than Nattinger and DeCarrico that teaching practice must fit the new, more accurate descriptions of English revealed by corpus-based research. They go further, and suggest that now that teachers have the data available to them, it should form the basis for instruction. One of the most strident expressions of this view is the following:

“Now that we have the means to observe samples of language which must be fairly close to representative samples, the clear messages are:

a) We are teaching English in ignorance of a vast amount of basic fact. This is not our fault, but it should not inhibit the absorption of the new material.

b) The categories and methods we use to describe English are not appropriate to the new material. We shall need to overhaul our descriptive systems.

c) Since our view of the language will change profoundly, we must expect substantial influence on the specification of syllabuses, design materials, and choice of method.” (Sinclair, 1985)

The last point Sinclair makes is, I think, as hugely important as it is minimally elaborated – for him it seems to follow as the logical consequence of the previous two points.  Sinclair argues that the work of the COBUILD team is the obvious application of the facts uncovered by concordancers: the COBUILD dictionary series draws on corpus-based research in order to better reflect real language use, the COBUILD Grammar “corrects” the previous impressionistic intuitions of pedagogic grammarians, and the COBUILD English coursebooks exemplify the methodology that a lexical syllabus implies.  Biber sees the teaching implications of corpus-based research as similarly obvious, and agrees with Sinclair that both grammar and vocabulary teaching must adjust to the new facts.


Willis (1990), drawing on the work of Sinclair (1987, 1991) and the COBUILD team (led for a while by Sinclair), outlines a lexical syllabus which he claims provides a “new approach to language teaching”.  Willis starts from the “contradiction” between a grammatical syllabus and a communicative methodology.  A grammar syllabus is form-focused and aims at the correct production of target forms, but real communication demands that learners use whatever language best achieves the desired outcome of the communicative activity.  There is, says Willis, a dichotomy in the language classroom between activities which focus on form and activities which focus on the outcome and the exchange of meaning.

Willis argues that the presentation methodology which regards the language learning process as one of “accumulated entities”, where learners gradually amass a sequence of parts, trivialises grammar – learners need insights into the underlying system of language. The method, and the course books which employ it, oversimplify, and make it difficult for learners to move beyond these entities or packages towards important generalisations. Willis cites the typical way in which the present simple tense (which is neither simple nor present) is presented. Even if the issues were dealt with less simplistically, presentation of language forms does not provide enough input for learning a language. A successful methodology must be based on use not usage, yet must also offer a focus on form, rather than be based on form and give some incidental focus on use.

Willis claims that the COBUILD English course embodies this view. The course looks at how words are used in practice, drawing on data produced with a concordancer which examined the COBUILD corpus of more than 20 million words in order to discover the frequency of English words and, as Willis puts it, “to better examine various aspects of English grammar”. Word frequency determines the contents of the courses: the COBUILD English Course Level 1 starts with 700 words, and Levels 2 and 3 extend this to 1,500 and then 2,500. Tasks are designed that allow the learners to use language in communicative activities, but also to examine the language (the corpus) and generalise from it. For Level 1 they created a corpus which contextualised the 700 words and their meanings and uses, and provided a range of activities aimed at using and exploring these words. Willis argues that the lexical syllabus does not simply identify the commonest words; it focuses on the commonest patterns too, and indicates how grammatical structures should be exemplified by emphasising the importance of natural language.
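As a rough illustration of frequency-driven selection of the kind Willis describes (an illustration only: the file name and the simple tokenisation are my assumptions, not COBUILD’s actual procedure), the core word list of such a syllabus could be extracted like this:

```python
from collections import Counter
import re

def core_vocabulary(corpus_text, size=700):
    """Rank word types by raw frequency and keep the top `size` as the core list."""
    tokens = re.findall(r"[a-z']+", corpus_text.lower())
    return [word for word, _ in Counter(tokens).most_common(size)]

# Hypothetical usage: "corpus.txt" stands in for a large corpus such as COBUILD's.
# With real data, the top 700 types would approximate a Level 1 word list.
with open("corpus.txt", encoding="utf-8") as f:
    print(core_vocabulary(f.read(), size=700)[:20])
```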


Then comes Lewis and his The Lexical Approach (1993). Drawing on Nattinger and DeCarrico in particular, Lewis cobbled together a confused jumble of half-digested ideas into what, typically, he saw as an original work of genius representing a giant leap forward for ELT methodology. What the book actually offers is almost nothing original, and just as little in terms of any coherent or cohesive ELT methodology. Unlike Willis, Lewis offers no proper syllabus, nor any principled way of teaching the “chunks” which he claims are the secret to the English language. Nor did Lewis pay heed to the growing body of SLA research indicating that the most promising way to see SLA is as the development of the learner’s “interlanguage”, a construct which was being developed into increasingly powerful cognitive hypotheses and theories, all of which assume that a generative grammar is at work.


Discussion

So, what are we to make of all this? First, we must be clear about the limitations of the kind of descriptions concordancers offer us of the language. It may help us to see the limitations of these descriptions if we take Biber’s claim that computational text analysis has provided better criteria for defining discourse complexity, thus demonstrating that the former “intuitive” criteria for discourse complexity are inadequate. Widdowson points out that the criteria Biber gives all relate to linguistic features and co-textual co-occurrences. “What is analyzed is text, not discourse” (Widdowson, 1993). Biber takes readability to be a matter of the formal complexity in the text itself, without dealing with how, as Widdowson puts it, “an appropriate discourse is realized from the text by reference to schematic knowledge, that is to say to established contextual constructs” (ibid). Adequate guidelines for the construction of reading materials need to take discourse into account, and it is not self-evident that the criteria for textual complexity suggested by Biber are relevant to reading. Moreover, since concordancing is limited to the analysis of text, in which the language is abstracted from the conditions of use, it cannot reveal the discourse functions of textual forms.

Concordancing tells us a lot about text that is new and revealing, but we must not be blinded by it. Although corpus analysis provides a detailed profile of what people do with the language, it does not tell us everything about what people know. Chomsky, Quirk et al. (1972, 1985), and Greenbaum (1988) argue that we need to describe language not just in terms of the performed (as Sinclair, Biber, Willis, and Lewis suggest) but in terms of the possible. The implication of Sinclair and Biber’s argument is that what is not part of the corpus is not part of competence, and this is surely far too narrow a view, one which seems to hark back to the behaviourist approach. Surely Chomsky was right to insist that language is a cognitive process, and surely Hymes, in arguing for the need to broaden our view of competence, was not arguing that we look only at attested behaviour.

Externalised and Internalised Language

Widdowson (1991) uses Chomsky’s distinction between externalized language (E-language) – a description of performance, the actualized instances of attested behaviour – and internalized language (I-language) – competence as abstract knowledge or linguistic cognition – to suggest that we need to group the four aspects of Hymes’ communicative competence (possibility, feasibility, appropriateness and attestedness) into two sets. I-language studies are concerned with the first two of Hymes’ aspects, and E-language studies deal with the other two: discourse analysis deals with the third (appropriateness) and corpus-based linguistics with the fourth (attestedness). The limitations of corpus-based research are immediately evident, and thus we should not restrict ourselves to its findings. As Greenbaum observes: “We cannot expect that a corpus, however large, will always display an adequate number of examples…. We cannot know that our sampling is sufficiently large or sufficiently representative to be confident that the absence or rarity of a feature is significant” (Greenbaum, 1988). Significant, that is, of what users know as opposed to what they do. Widdowson points out that in discourse analysis there is increasing recognition of the importance of the participant rather than the observer perspective. To the extent that those engaged in discourse analysis define observable data in terms of participant experience and recognise the psycho-sociological influences behind the observable behaviour, they too see the actual language as evidence for realities beyond it.

But how do we get at this I-language, this linguistic cognition, without having to depend on the unreliable and unrepresentative intuitions of the analyst? While the description of E-language is based on empirical observation, it is obviously far more difficult to describe I-language, since one is forced to rely on introspection. Conceptual elicitation is one answer. Widdowson cites Rosch (1975), who devised a questionnaire to elicit from subjects the word which first sprang to mind as an example of a particular category. The results of this conceptual elicitation showed that subjects consistently chose the same hyponym for a particular category: given the superordinate “bird”, “robin” was elicited, the word “vegetable” consistently elicited “pea”, and so on. The results did not coincide with frequency profiles, and are evidence of a “mental lexicon” that concordancers cannot reach. In summary, the description of language that emerges from concordance-based text analysis has its limitations, as does the faulty way in which Sinclair, Biber, Lewis and others use the new findings of corpus-based research to argue for certain pedagogical prescriptions. Let’s take a look.


Descriptions and prescriptions

Quite apart from the question of the way in which we choose to describe language, and of the limitations of choosing a narrow view of attested behaviour which can tell us nothing directly about knowledge, there is the wider issue of what kinds of conclusions can be drawn from empirically attested data. The claim made by Biber, Sinclair and others is that, faced with all the new evidence, we must abandon our traditionally-held, intuitive beliefs about language, accept the evidence, and consequently change our description of the language, our language materials, and our language instruction too. Now that we have the facts, the argument goes, we should describe and teach the facts (and only the facts) about English.

But, as Widdowson (1990) points out so succinctly, the relationship between the description of language and the prescription of language for pedagogical purposes “cannot be one of determinacy.”  This strikes me as so obvious that I am surprised that Sinclair, Biber and others seem not to have fully grasped it. No description has any necessary prescriptive implications: one cannot jump from statements about the world to judgements and recommendations for action as if the facts made the recommendations obvious and undeniable. Thus, descriptions of language cannot determine what a teacher does. Descriptions of language tell us about the destinations that language learners are travelling towards, but they do not provide any directions about how to get there.  Only prescriptions can do that.

While Sinclair is justified in expecting corpus-based research to influence syllabus design, there is no justification for the assumption that it must necessarily do so, much less that such research should determine syllabus design. A case must be made for the approach which he seems to regard as somehow self-evident. When Sinclair says that the categories and methods we use to describe English are not appropriate to the new material, we need to know by what criteria appropriateness is being judged. Similarly, when Biber says “Consensus does not mean validity”, and when he claims that corpus-based research offers the possibility of “more effective and appropriate pedagogical applications”, we need to ask by what criteria (pedagogical, presumably) validity, effectiveness and appropriateness are to be judged. When he talks of data from frequency counts “establishing” the “inadequacy” of discourse complexity he is presumably once again appealing to assumptions and criteria which are not made explicit. When he suggests that the evidence of corpus-based research indicates that there is something special about the written mode, in that it enables a kind of linguistic expression not possible in speech, he is once again drawing an inadmissible conclusion.

It is tempting to stop here. Since Biber and Sinclair do not seem to appreciate the need to make a case for their approach, to lay bare the assumptions and beliefs which underlie their work, and which inform the way they both select and examine data, one might think that it is enough to bring this glaring omission to their attention. But, of course, some extremely valuable work has been done, the case for concordancing and corpus-based research does not have to be thrown out simply because it has not been properly argued, and we must look a little more closely at some of the issues raised.

Facts do not “support” prescriptions, but our view of language will influence our prescriptions about how to teach and learn it. If we view language as attested behaviour, we are more likely, as Willis does, to recommend that frequently attested items of lexis form the core vocabulary of a general English course. Willis appreciates that his approach to syllabus design is not in any way “proved” by facts, but he still takes a very narrow view. To return to the discussion above about Rosch’s “prototype words” (the mental lexicon), I do not think that such words should be ignored simply because they are not frequently attested, and it could well be argued that they should be one of the criteria for identifying a core vocabulary. Widdowson takes the case further. He suggests that Chomsky’s idea of “kernel sentences” indicates the possibility that there are also prototype sentences which have an intuitive role. They do not figure as high frequency units in text, but they do figure in descriptive grammars, and their presence there can be said to be justified by their intuitive significance, their psychological reality, as prototypes. Furthermore, they are the stock in trade of language teaching. Teachers may all be wrong about the significance of such kernel sentences, but we cannot simply dismiss the possibility of their prescriptive value on the grounds that they do not occur frequently in electronically-readable corpora.

More evidence of the limitations of sticking to frequently attested language forms comes from the research which led to the specification of core language to be included in Le Français Fondamental (Gougenheim et al. 1956). The research team began with frequency counts of actual language, but they felt that some words were still missing: French people had a knowledge of words which the researchers felt intuitively should be included despite their poor showing in performance. So the researchers carried out an exercise in conceptual elicitation. They identified categories like furniture, clothing and occupations, and asked thousands of school children which nouns they thought it would be most useful to know in these categories. Once again, the lists did not correspond to frequency counts, and gave rise to the idea of “disponibilité”, or availability. As Widdowson says, the difference between the French research and Rosch’s is that availability is a prescriptive criterion: the words are prescribed as useful not because they are frequently used but because they appear to be readily available in the minds of the users.

Valency

Widdowson (1990) suggests that there are more direct pedagogical criteria to consider than those of frequency and range of language use. In terms of the purpose of learning, he cites coverage as a criterion described by Mackay: “The coverage … of an item is the number of things one can say with it. It can be measured by the number of things it can displace” (Mackay 1985). Most obviously, this criterion will prevail where the purpose of learning is to acquire a minimal productive competence across a limited range of predictable situations. The process version of coverage is what Widdowson calls valency – the potential of an item to generate further learning. He gives the example of the lexical item “bet” as described in the COBUILD dictionary (1987). Analysis reveals that the canonical meaning of the word, “to lay a wager”, is not as frequently attested as its informal occurrence as a modal marker, as in “I bet he’s late”. It does not follow, however, that the more frequent usage should be given pedagogical preference. First, the informal meaning tends to occur mainly in the environment of first person singular and present tense, and is idiomatic; it is thus limited in its productive generality. Second, the modal meaning is derivable from the canonical lexical meaning but not the other way round. In this sense the former has a greater valency and so constitutes a better learning investment. Widdowson proposes a general principle: high valency items are to be taught so that high frequency items can be more effectively learned.
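Widdowson’s point about “bet” is easy to check against a corpus. The sketch below is my own rough operationalisation, not his or COBUILD’s: it simply tallies how often “bet” occurs in the first-person modal frame (“I bet …”) as against all other environments:

```python
import re

def bet_environments(text):
    """Tally occurrences of 'bet' in the modal frame 'I bet' versus elsewhere."""
    tokens = re.findall(r"[A-Za-z']+", text)
    modal = other = 0
    for i, tok in enumerate(tokens):
        if tok.lower() == "bet":
            if i > 0 and tokens[i - 1].lower() == "i":
                modal += 1
            else:
                other += 1
    return {"modal 'I bet'": modal, "other 'bet'": other}

sample = "I bet he's late. She bet ten pounds on the race. I bet you forgot."
print(bet_environments(sample))  # {"modal 'I bet'": 2, "other 'bet'": 1}
```

On Widdowson’s argument, a high count in the modal frame would still not settle the pedagogical question; that is precisely the gap between description and prescription.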

Pedagogic prescription should, suggests Widdowson, specify a succession of prototypes – simplified versions of the language, each of which is a basis for later improved models.  The process of authentication through interim versions of the language has to be guided by other factors as well as those of frequency and range of actual use, factors to do with usefulness rather than use.  Words and structures might be identified as pedagogically core because they activate the learning process, even if their actual occurrence in contexts of use is slight.

It would seem then that while concordancer output gives us a clearer understanding of how language is put together in use (although it cannot reveal the discourse functions of any particular piece of text), it does not get us very far in our search for pedagogical prescriptions, and, indeed it can easily lead us astray.  Although I would agree largely with this conclusion, I think the case for using lexical phrases as a key element in language instruction is extremely strong; the work of Nattinger and DeCarrico strikes me as an important development which is both radical and far-reaching.  While Sinclair, Biber, Willis and others take too narrow a view of language competence, lexical phrases (more carefully described and better analysed units than earlier descriptions of formulaic language) occupy a crucial place in the continuum between grammatical rules and lexical items, and can therefore help to re-define language competence, and to identify pedagogically core parts of the language on which to base our instruction.


A new look at Communicative Competence: rescuing Nattinger and DeCarrico 

In Knowledge of Language and Ability for Use (1989) Widdowson, having argued that Chomsky’s and Hymes’ views of competence are not commensurate (since one is interested in an abstract system of rules, and the other in using language), suggests that there are eight, not four, aspects to Hymes’ competence: knowledge of each aspect, and ability in each one. He then reformulates these as grammatical competence (the parameter of possibility) and pragmatic competence (the rest), and characterises knowledge in terms of degrees of analysability, and ability in terms of accessibility. Although both analysability and accessibility are necessary components, analysability has its limits. Nattinger and DeCarrico (after Pawley and Syder) draw attention to lexical phrases which are subject to differing degrees of syntactic variation. It seems that a great deal of knowledge consists of these formulaic chunks, lexical units completely or partially assembled in readiness for use, and if this is true, then not all access is dependent on analysis. Gleason (1982) suggested that the importance of prefabricated routines, or “unopened packages”, in language acquisition and second language learning has yet to be recognised.

If we accept this view then communicative competence can be seen in a fresh way. Widdowson (1989) says this: “Communicative competence is a matter of knowing a stock of partially pre-assembled patterns, formulaic frameworks, and a kit of rules, so to speak, and being able to apply the rules to make whatever adjustments are necessary according to contextual demands. Communicative competence is a matter of adaptation, and rules are not generative but regulative and subservient”. In a different text, Widdowson (1990) says: “Competence consists of knowing how the scale of variability in the legitimate application of generative rules is applied – when analysis is called for and when it is not. Ignorance of the variable application of grammatical rules constitutes incompetence”.

Our criteria for pedagogical prescription do not have to change as a result of this new formulation of competence, but I think we are nearer to identifying pedagogically key units of language – parts of the language that activate the learning process. The suggestion is that grammar’s role is subservient to lexis, and this implies a radical shift in pedagogical focus. If, as Widdowson thinks, we should provide patterns of lexical co-occurrence for rules to operate on so that they are suitably adjusted to the communicative purpose required of the context, then Nattinger and DeCarrico’s work, which identifies lexical phrases and then prescribes exposure to and practice of sequences of such phrases, can surely play a key role. They present a language teaching program based on the lexical phrase which leads students to use prefabricated language in much the same way as first language speakers do, and which they claim avoids the shortcomings of relying too heavily on either theories of linguistic competence on the one hand or theories of communicative competence on the other. “Though the focus is on appropriate language use, the analysis of regular rules of syntax is not neglected” (Nattinger and DeCarrico, 1992).

Despite the criticisms I have made of some of the more strident claims made by researchers using concordancers, and despite the limitations of text analysis and of frequency as a pedagogical criterion, there is no doubt that corpus-based research, as done by the experts, is throwing valuable light on the way English and other languages are actually used. The new information can help build a better, more accurate, description of English, and can help teachers, materials writers, and learners escape from the intuitions and prejudices of previous “authorities”.  Sinclair, Biber, and others are right to challenge traditional descriptions of the language and the current consensus about what weight to give certain structures and certain meanings of lexical items. It is surely positive to see new dictionaries, grammars, and course books appearing which take the new findings into account.

Nor, in my opinion, is there much doubt that the work done by Pawley and Syder and by Nattinger and DeCarrico  is leading to important modifications in present views of the distinction between grammar and lexis.  We are re-appraising the role of formulaic language, and, I think, stumbling towards a view where grammar is seen as a kit of regulative rules which are variably applied to chunks of language in order to make whatever adjustments are necessary according to contextual demands.  That is a very dramatic paradigm shift indeed!

Note: References cited above can be found at the end of the page “* Concordancers” (see list on the right).


14 thoughts on “Concordancing, lexical chunks and the Lexical Syllabus”

  1. hi geoff

    nice article regarding the importance of phraseology, though the bulk of the article is based on Widdowson’s critique of Sinclair, a critique that seems strawmannish? e.g. see this critique of Widdowson’s critique if u have not done so already http://www.beaugrande.com/WiddowSincS.htm

    also the only reference i can find online for the time period 1985 and title Lexicographic Evidence by Sinclair is this ICAME newsletter http://icame.uib.no/archives/No_9_ICAME_News_index.pdf starting on page 11.

    the final sentences by Sinclair are telling; imo Widdowson was constructing a strawman?


    btw will u be taking the #corpusmooc? :)

    ta
    mura

  2. oops here are those final sentences by Sinclair:

    Corpus linguists must beware of mistaking “absence of knowledge” for “knowledge of absence”. Therefore, although the corpus work will change and improve lexicography, the evidence must be interpreted in the light of other things known or felt about the language.

    ta
    mura

  3. Hi Mura,

    Thanks for this. I have read the Beaugrande article which you give the link to, and I’m sorry that I didn’t mention it in my own post. Robert de Beaugrande makes a number of interesting points, and while I think he argues persuasively that Widdowson’s arguments against Sinclair sometimes misrepresent Sinclair, I don’t think he’s right to say that Widdowson is guilty of building a strawman argument against Sinclair’s views.

    I should give the background to my post. It (and the 2 pages on the website on concordancing) is taken from the dissertation I did for my MA 16 years ago, when Henry Widdowson was my tutor. A big part of the dissertation was producing the program Microconcord, published by OUP, and put together by Tim Johns, David Hardisty, Simon Murison-Bowie and me, but I wrote a long text too, and yesterday I’m afraid I just dusted it off, so to speak, and cut and pasted bits into the post. This explains the dated references, why Beaugrande’s text is not included, and also, to some extent, the prominence given to Widdowson’s views.
    But, while Widdowson certainly influenced me, I was not an uncritical fan: I thought then, and continue to think now, that he is right in his main points against Sinclair, Biber and others. The most important two issues are the ones I tried to highlight in the post:

    1. Sinclair says (and it’s either in Sinclair, J. (1987) (ed.) “Looking Up”, London: Collins, or Sinclair, J. (1991) “Corpus, Concordance, Collocation”, Oxford: OUP – not 1985, sorry) “Since our view of the language will change profoundly, we must expect substantial influence on the specification of syllabuses, design materials, and choice of method.” Elsewhere, he repeatedly claims that the COBUILD English coursebooks exemplify the methodology that a lexical syllabus implies. Against this, Widdowson argues that description has no necessary prescriptive implications: one cannot jump from statements about the world to judgements and recommendations for action as if the facts made the recommendations obvious and undeniable. Descriptions of language cannot determine what a teacher does.

    2. There are limitations to the kind of descriptions which concordance-based searches of corpora can offer us. The way we choose to describe language is important, and the limitations of choosing a narrow view of attested behaviour which can tell us nothing directly about knowledge are, I think, convincingly laid out by Widdowson.

    Based on these 2 arguments, I think Widdowson’s re-formulation of competence is very interesting and I like his suggestion that grammar’s role is subservient to lexis. Providing EFL / ESL learners with patterns of lexical co-occurrence for rules to operate on so that they are suitably adjusted to the communicative purpose required of the context seems to me to be a very promising approach: better in all respects than the sketchy and half-baked lexical approach suggested by Michael Lewis. While I’m at it, I’ll take the chance to say that it’s also a lot better than Scott Thornbury’s appalling book, published by OUP in 2004, “Natural grammar: the keywords of English and how they work”. See the post “Crap Books” in this blog for a review.

      • Hi Mura,

        Tim Johns made an original concordancer in the 80s on a 48K machine called New Brain. I bought a New Brain on the strength of it, and I used to talk to Tim a lot about using concordancers for ELT. Tim, Chris Jones, Glyn Jones, David Hardisty and I were part of the CALL special interest group in IATEFL, and Tim was the real force behind what I think he called “data-driven learning” – with students, he used photocopies of KWIC searches he’d done, with the search word, or the word to its left or right, blanked out.
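        For anyone curious, here is a rough sketch of that gap-fill idea (my reconstruction in Python, not Tim’s actual program): pull KWIC lines for a node word and blank the node out, so students have to infer it from the co-text:

```python
import re

def kwic_gapfill(text, node, width=30, blank="_____"):
    """Build KWIC lines for `node` with the node word blanked out,
    in the style of a data-driven learning gap-fill handout."""
    lines = []
    for m in re.finditer(r"\b%s\b" % re.escape(node), text, re.IGNORECASE):
        left = text[max(0, m.start() - width):m.start()]
        right = text[m.end():m.end() + width]
        lines.append(f"{left:>{width}} {blank} {right:<{width}}")
    return lines

sample = ("You can pay me back next week. She went back home after the party. "
          "He hurt his back lifting boxes.")
for line in kwic_gapfill(sample, "back"):
    print(line)
```

        Blanking the word to the left or right of the node instead is just a change to which slice gets replaced.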

        In 1992, I think, Simon Murison-Bowie from OUP asked David Hardisty and me to help him get Tim Johns to design a concordancer for OUP to publish. Tim was almost impossible to work with, but Simon, David, Tim and I all eventually met twice for a few days to hammer out the specs of the program. When we’d more or less agreed, Scott did the actual programming; can’t remember what language he used for the code. David, Simon and I then wrote a manual for the Microconcord program which was published in 1993. Simon claimed authorship of the manual, tho I think David and I got a mention.

        Sorry, but I have no screenshots or links.

        • hi geoff
          this is great info thank you, have updated timeline; i have put in microconcord (original?) from 1986 as there is an attested version for zx spectrum, i wonder if the program for the newbrain that u mention was an early original version of microconcord?

      btw i am running microconcord (oup) on a handheld; it performs pretty well on the 1m brown corpus :)

          ta
          mura

  4. These posts are great and very helpful. It’s logically deducible but not obvious that corpus findings should not prescribe one or the other way of teaching grammar or lexis. Still, I wonder if you could think of an example of a time when a finding from corpus studies influenced your way of approaching a particular grammar point or lexical item.

    • Hi Mark,

      One of the most famous examples is the use of the word “any”. While many teachers explain that “any” is the interrogative and negative form of “some”, a search of “any” in any (!) large corpora reveals that the majority of the occurrences of “any” are in fact examples of its use in the affirmative, in the sense of “it does not matter which”.
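      If you want to check this yourself, a crude tally is easy to run over any text. The sketch below is mine (not from any of the sources discussed), and its polarity test – a question mark, or an overt negator – is a deliberately naive heuristic:

```python
import re

def any_by_sentence_type(text):
    """Tally occurrences of 'any' in interrogative, negative and affirmative
    sentences, using a deliberately crude polarity heuristic."""
    counts = {"interrogative": 0, "negative": 0, "affirmative": 0}
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        hits = len(re.findall(r"\bany\b", sentence, re.IGNORECASE))
        if not hits:
            continue
        if sentence.rstrip().endswith("?"):
            counts["interrogative"] += hits
        elif re.search(r"\b(not|no|never)\b|n't", sentence, re.IGNORECASE):
            counts["negative"] += hits
        else:
            counts["affirmative"] += hits
    return counts

print(any_by_sentence_type(
    "Any student can try. Have you any idea? I haven't any money. Take any seat."
))  # {'interrogative': 1, 'negative': 1, 'affirmative': 2}
```

      Run over a real corpus, the affirmative “it does not matter which” uses should dominate, which is the point made above.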

      Willis (1990) points out that corpus-based research has shown that the passive voice is inadequately treated in most course books, and he is similarly critical of the way in which they insist on the “myth” of the three conditionals. Any search for “if” in a corpus will quickly illustrate that there are in fact far more than three conditionals, and that the three which course books usually focus on are not the commonest.

      The way in which course books and teachers present reported speech has also been influenced by corpus-based research, especially of spoken text, since it indicates that the set of reported speech procedures described in course books are in fact rarely used in natural spoken discourse.

      As for lexis, Sinclair’s (1991) analysis of the word “back” struck me forcefully. While most dictionaries list the human body part as the first meaning, a search of any big corpora will reveal that this meaning is relatively rare, and that the adverbial sense of “in, to or towards the original starting point” (not usually given prominence) is the most common.

      Another example is the word “certain”, which rarely marks “certainty”. More commonly it is used to mark a referent – a certain kind, in certain places.

      • Thanks, I thought maybe there were some that weren’t already mentioned. I suppose those are quite memorable, though. I do agree that descriptions of language don’t equal prescriptions for how to teach it, but I tend to take all of the above examples as support for letting quantities of authentic input and interaction guide the introduction of lexical items and grammar, a la Dogme ELT. After all, if “back” as in “go back home” is so much more common than “my back hurts” then it should be that much more common in conversation and input, right? Are the discoveries of corpus linguistics only a problem when we assume the grammatical and lexical syllabus must be set out in advance?

        • Hi Mark,

          There’s no doubt that corpus research has improved our understanding of English grammar and lexis, and I think teachers do indeed respond to the new information.

          You ask “Are the discoveries of corpus linguistics only a problem when we assume the grammatical and lexical syllabus must be set out in advance?” I agree with your implication that setting out a syllabus in advance is a problem in itself (harking right back to Breen’s criticism of “product syllabuses”) but, apart from that, I think that the attempts so far to set out a lexical syllabus have been quite poor. Many, like Willis, take frequency as the guiding principle for the “What” of the syllabus, and this, while intuitively appealing, suffers from the drawbacks discussed in my post. Additionally, I have to say that the Willis syllabus and the COBUILD materials supporting their lexical syllabus are very dull, and I’m not surprised that few teachers or institutions bought them. The “COBUILD Grammar” and Scott Thornbury’s “Natural Grammar” are, IMHO, even duller and less helpful.

          The other “approach” to using the results of corpus-based research is that proposed by Nattinger and DeCarrico, and, as I’ve argued, I find this much more promising. But, as I’ve said, Lewis doesn’t offer an approach, let alone a coherent syllabus, and Nattinger and DeCarrico limit themselves to some suggestions about how lexical chunks might influence ELT practice.

          Which leads me to (finally!) address questions asked by Rose Bard, who, in an email to me says “I don’t quite know what the Lexical approach is… but I am very interested in how learners actually learn and expand their lexis…… What would an ideal lesson look like for you?”

          As I hope all the above makes clear, I don’t know what the “Lexical Approach” is either, Rose. For me, an ideal lesson would be one which took place in the context of a process syllabus which started with a needs analysis and consisted of a number of tasks agreed on by class members which were cyclically reviewed and updated. During these tasks there would be the kind of focus on form recommended by Long, and the tasks would introduce learners to, and encourage the use of, prefabricated language of the sort discussed by Nattinger and DeCarrico.
