Hi

Harold R. Keables

This website is for those doing a postgraduate course in Applied Linguistics and TESOL. It is completely independent and has no affiliation with, or support from, any university.

Check out the Resources Section, which offers:

* Links to articles on all aspects of the MA.
* A Video section offering lectures by Dörnyei, Crystal, Nunan, Larsen-Freeman, Krashen, Scott Thornbury (who??) and many others.
* Suggested useful blogs and web pages.
* Presentations.

Academics work in universities. Their job is to teach and to do research. Most academics prefer research to teaching and are not taught how to teach. So, if you study in any good university, you’ll be taught by experts who haven’t been taught how to teach. Nevertheless, if you’re a good student, you’ll get an excellent education. This leads to the suggestion that in tertiary education teaching methodology matters little: it’s the student who counts. The students who go to the best universities are carefully selected, and a key criterion in the selection process is the student’s ability to study without spoon-feeding. A good student does her own studying and knows how to draw on the resources offered. When you sign up for a postgraduate course, know that you are in charge and that you, and you alone, will determine the outcome. Your tutor is an expert, not, usually, a teacher. Your job is to use your tutor’s expertise, which means asking the right questions. Don’t ask “What should I do?” or “Please suggest a topic”. Ask for comments on your own drafts, ask for guidance on reading, ask for clarification. Get into a dialogue with your tutor; shoot the breeze; get familiar; build a relationship. But remember: your tutor is your mentor in the Greek sense of the word, not your teacher.

Bullshit and its critics

Last week I sent the following tweet:

V. sad that Russ gets star-struck by spin, doubletalk & bullshit from Scott T. & Harmer http://www.eltjam.com/tales-of-the-undead/

I was referring to comments which Scott Thornbury and Jeremy Harmer had made on Russ Mayne’s guest post on the excellent eltjam blog. Russ has recently sprung to fame by pointing out how some preposterous claims about learning languages are still supported by publishers and “experts”, even in the face of overwhelming evidence that they’re wrong. Three of these are “Multiple Intelligences”, “Learner styles”, and “Neuro-Linguistic Programming”. Having written an article demonstrating that “learner styles” is an empty construct, Russ was surprised to see that the subsequent edition of MET included “a refutation” of his article “based not on research, but on ‘feelings’”. Russ continues: “The author, like Harmer and many others, describes learning styles/MI etc as ‘self-evident truths’, meaning that no evidence will ever be enough to discredit them. Feelings and great stories about how these things ‘really work’ beat smart-arse debunking every time”.


You probably won’t be surprised to hear that Thornbury and Harmer were quick to make their appearance on the new star’s post. In the Comments section, David Warr, in an attempt to get Russ to clarify his criticisms, describes an activity where a student licks a word to confirm that he recognises it. David asks Russ if he’s saying that no learning will take place in such an activity. Enter Scott Thornbury, who tells us:

“There is an ample history of solid research into how memory works (see Baddeley 1997 for example) to suggest that the word retrieval process in your activity will have a positive effect on learning…. I hasten to add that the positive benefits of touching are not validation of a kinaesthetic learning style (since the effects work equally well for all learners) but rather that they confirm the findings of a rapidly-growing research focus on ‘embodied cognition’, that is, the way the mind and the body are components of an intricately integrated system. This is not a style, nor even an intelligence. It is just cognition, and we all have it”.

Notice that Thornbury ignores the point of David’s question (viz: please explain your objections to learner styles, etc., more fully), preferring to showcase his extraordinary grasp of cognitive science. But if you don’t get swept away by admiration of this man’s towering intellect, you’re entitled to ask what positive effects he’s talking about. The positive effects of touching what? How? Where? When? What the hell is he talking about? And how do the positive effects of touching (whatever that means) “validate” and “confirm” the findings of “a research focus” on embodied cognition? Worst of all is the final sentence, where we’re told: “This is not a style, nor even an intelligence. It is just cognition, and we all have it”. “What?”, we gasp, “All of us have embodied cognition? Even normal little people like us who buy your books, O Wise One?” I agree: this isn’t style, and it certainly isn’t intelligence; it’s pedantic, condescending bullshit.

Russ replies:

I once had an article about NLP rejected from a journal. The reviewer said: “A final point: while it’s clear – and reasonably convincingly argued in the paper in question – that the claims advanced for NLP are empirically unfounded, the writer is a little too dismissive of the relationship between perception and language, given recent work in embodied cognition (e.g Johnson, 1987; Clark 2011) and in neuroscience, including the discovery of mirror neurons (e.g. Iacoboni, 2009)…”

Blow us all down if Thornbury then doesn’t reveal, smiley thrown in, that he was the reviewer in question. Instead of taking the opportunity to ask what embodied cognition and mirror neurons have to do with the claim that NLP lacks empirical support (a question which should have occurred to the journal’s editor), Russ just says: “haha! How very odd!”

Harmer then makes his appearance in order to object to Russ’ claim that Harmer, in one edition of his appallingly tedious, poorly-written book on language teaching, had made no serious criticism of MI, etc. In his defence, Harmer asks Russ 3 questions:

1 do you believe that students have different kinds of mental abilities?
2 do you believe that students respond differently to different stimuli (think music for example)?
3 if you were writing a general methodology book and NLP and MI had made inroads in professional practice would you ignore them completely or would you describe them … and then subject them to critical comment?

Once again, Russ caves in. Instead of pointing out that nobody could reasonably object to the “beliefs” contained in the first two ridiculous questions, and that, consequently, they have nothing to say about NLP (or about anything else for that matter), Russ meekly answers: 1: Yes; 2: Yes; and 3: I take your point.


After an exchange of comments, Harmer and Thornbury have Russ eating out of their hands, and another potential threat to the global ELT enterprise is safely dealt with. Harmer has published 16 books (not one of them worth reading) and Thornbury has published 14 (most of them well worth reading). They are ELT establishment figures who publish influential books and blogs, appear as big names at TEFL conferences, organise and advise on teacher training programmes, sit on exam boards, and generally wield influence. Harmer and Thornbury, unless they make a real effort to fight against it, are interested parties in the status quo of the global ELT industry and, as such, should be constantly questioned. Everything they say should be carefully examined and subjected to criticism, and, when necessary, to rebuttal. Harmer’s writings seem to pose no threat to anything, but they do in fact have a big effect: they endorse the pillars that hold up current mainstream teaching practice: the course level structure; the product syllabus; the use of commercial materials, particularly coursebooks; the testing procedures; the training programmes like CELTA; and on and on. Thornbury walks a more sophisticated path and has, I think, a less negative and deadening effect than Harmer on attempts to break the hold big business has on ELT. But he often speaks arrogant bullshit and occasionally publishes crap books, so we need to keep him honest, as they say.

Drew, a blogger commenting on some remark made on the Demand high blog, http://demandhighelt.wordpress.com/, says this:

You come over like a bunch of do-gooder girl guides, overcome with jargonitis (‘lexical inferencing’?) I understand that some of you want to justify what you’re doing and that the longer you do something you need to feel you’re progressing, make a name for yourselves even. But this is another reminder to me of how inward looking the whole thing is, even within the parameters of the industry you work in. If you really want to know what’s wrong with, then log into Tefl.com and read some of the adverts and what they’re offering in re-numeration. Note the McDonaldisation and the soulless maneouvering of private equity funded education. Stop thinking you’re moral crusaders. Unless you’re very fortunate, the business context you’re working within is not nearly as idealistic.

One might object that he loses the plot a bit at the end, but I think he’s making a powerful point.


Continuing with the theme that we must take a critical stance towards what we’re told, and with reference to lexical inferencing, here’s an excerpt from a “Special Report” on lexical inferencing by Margaret Horrigan, available on the IH website here: http://ihjournal.com/special-report-lexical-inferencing%E2%80%A6a-chance-to-demand-high-by-margaret-horrigan

Pros & Cons
The outstanding question about lexical inferencing in the classroom is ‘Is it worth taking a whole lesson segment to deal with lexical inferencing strategies?’ We have the following arguments against lexical inferencing:
• Successful arrival at desired meaning is not guaranteed
• There are cultural issues connected to context
• The language proficiency level of learner and ultimately the difficulty of the text are important variables which contribute to the success of a lexical inferencing lesson segment
However, arguments in favour are:
• It involves both declarative and procedural knowledge to arrive at conclusions
• The more effort we put into arriving at those conclusions, the more likely successful retrieval will be
• It is giving students strategies for use beyond the classroom
• It may push learners towards more complex production
• It may be an opportunity for teachers to demand high
So the pros outweigh the cons and therefore lexical inferencing is certainly worth addressing in class

First, we must question the confident conclusion. What makes Horrigan say that the pros outweigh the cons? Because there are 5 pros against 3 cons? Surely one needs to look at the substance of the pros and cons offered and then see which ones have more weight. In fact, since all the points contain little more than hot air, I’d call it a draw. A close reading of the text shows it to be one more example of bullshit.


To finish on a high note, here’s Harmer again, with a typically insightful, critically acute summary of his views on the still-raging controversy surrounding Mitra’s plenary at the IATEFL conference. I’m sorry, but I fell asleep reading it, so I can’t give the link to the website where it appears, but in any case, I for one won’t criticise the typos in his fifth point, a moving and no doubt heartfelt tribute to teachers around the world.

To sum up:
I admire Sugata Mitra’s work.
I congratulate him for investigating solutions to educational disadvantage.
I do not think technology and the internet are the only answers (though they may be an answer, temporarily)
I thoroughly approve of enquiry-based learning, but believe there are other modes too.
I am profoundly grateful to teachers around the world working often in difficult circumstances but, nevertheless doing their best to make children’s live setter.
Not all schools are good.
Not all schools are bad.

Preparing for Krashen Bashin’: Scientific Method and Theory Assessment


Krashen has always insisted that his hypotheses are constructed in line with scientific method. In his comment on my post criticising his theory, he mentions the Higgs boson and Newton’s hypothesis about the existence of gravity, and says that his hypotheses are so easy to test that “one counterexample is enough to destroy them”. So before I look at criticisms made of Krashen’s theory of SLA, I’d like to look at some background issues concerning scientific method and theory assessment. The text is largely taken from my book Theory Construction in SLA.

We should resist attempts to formalise the scientific method. Reality cannot be fully apprehended; empirical data are often very hard to interpret; any individual instance of falsification can be challenged (and thousands of such instances are in fact ignored); there is no algorithm for hypothesis testing, and there is no hard and fast demarcation line between science and non-science. Science is conducted by people who can be, and indeed have been, prejudiced, corrupt, misguided, dishonest, ambitious, etc., and science takes place in history and within a community, so the development of a theory is neither steady nor linear. Nevertheless, while knowledge of the world is gained in all sorts of ways, I suggest that the most reliable knowledge comes from engaging in scientific research which leads to the development of theories which attempt to explain phenomena. These theories are developed with various rules of logic and language to guide the process and are scrutinised so as to discover flaws in terminology or reasoning, and to build the clearest, simplest version of the theory. Such theories should then lay themselves open to empirical tests.

Can researchers in SLA construct a scientific theory? Many philosophers of science, Popper, Lakatos and Feyerabend among them, deny scientific status to the areas involved in SLA research (psychology, cognitive psychology, sociology, anthropology, social psychology, linguistics, applied linguistics), and there are also numerous academics working in the field of SLA who think that the so-called scientific method is inappropriate for their work. If science is defined as the study of natural phenomena, then obviously SLA is not part of science, and neither for that matter is mathematics. There is an obvious difference between the natural sciences and those disciplines which study human behaviour, but science in general can be characterised by its insistence on the twin criteria of rational argument and empirical testing.


Scientists articulate problems, make observations, perform experiments, propose hypotheses, build theories and test them, all the while communicating their results to colleagues. The content of the messages that accumulate and that are available in the public domain (rather than the personal knowledge of individual scientists, their memories and thoughts) is what we can call scientific knowledge. Popper (1972) refers to it as “World 3”, and distinguishes it from subjective knowledge by saying that it is “knowledge without a knowing subject”. The objectivity of scientific knowledge stems from its being a social construct, not owing its origin to any particular individual but created communally. Einstein’s relativity theory established him as a great scientist, but the final product, the established theory and the body of evidence it has accumulated, belongs to humanity.

Scientific knowledge is gained primarily through observation, and the crucial principle here is that all observers are equivalent: anyone observing the event would agree on the report one person made of it. This fundamental principle of objective knowledge needs a lot of interpretation and qualification, but the criterion that all human beings are interchangeable as observers remains one of the most important pillars of science. This is why experiments are such an important tool for scientists. Experiments are observations carried out under controlled, reproducible conditions, and one of their chief functions is to allow others to carry out similar experiments in different places at different times. Mistakes and misunderstandings are cleared up, and promising hypotheses are tested through replication studies and theoretical criticism. The coherent and consistent set of beliefs generated by all this activity is the paradigm: the generally-accepted theory in any given field which allows scientists to pursue their work in a systematic way. Of course, the paradigm is not necessarily close to any absolute truth; paradigms often contain fallacies and need unexpected discoveries or massive falsification of predictions to dislodge them.

A basic characteristic of science is the way its theories are scrutinised. Testing hypotheses is not a mechanical process whose outcome can be determined by simple logic. There are always questions about the reliability of the data, and about how the data should be interpreted, and in the end it is the expert judgement of the community that must decide if there is a good enough fit between the theory and the data. And different standards will be applied to different kinds of theories at different moments in their development. At the beginning of the development of a new theory, when there is little common ground and there are few accepted findings, a relatively unsubstantiated theory might be encouraged, despite flimsy empirical support or a lack of rigorous conceptualisation, because it is seen as a useful guide to future research. Sometimes, scientists may even choose to work with two contradictory models of the same system. Ziman (1978: 67) points out that in the 1950s there were two theories of the atomic nucleus, the “liquid drop model” and the “shell model”, which contradicted each other in terms of the behaviour of protons and neutrons. Both models earned Nobel prizes for their authors, and both are now part of a more complex but unified theory which deals with all the phenomena involved. It was therefore a wise decision for scientists working in this area in the 1950s to look for evidence that showed how the two theories could be reconciled, rather than assume that one of the two must be false.


By what criteria do we judge theories? What makes one theory “better” than a rival? First, like any text, a theory needs to be coherent and cohesive, and expressed in the clearest possible terms. It should also be consistent – there should be no internal contradictions. Theories can be compared by these initial criteria, which may help to expose fatal weaknesses or simply invite a better formulation. In the discussions among philosophers of science about the natural sciences, the big questions concern empirical adequacy, predictive ability, and so on. But in the field of SLA there is a great deal of muddled thinking, there are poorly-argued assertions, and there are badly-defined terms. Consequently, discussions among researchers and academics in SLA often deal with conceptual issues. Similarly, research methodology is less of a problem in the natural sciences than it is in SLA, partly because in the former experiments are often easier to control, variables are easier to operationalise, and so on. Whatever the reasons, it is certainly the case that when judging theories of SLA, we should favour those that are most rigorously formulated.

Once a theory passes the test of coherence, cohesiveness, consistency, and clarity, we may pass on to questions of falsifiability and empirical adequacy. Here, I need to give a very quick summary of Karl Popper’s approach to scientific method. Popper (1959; 1963) insists that in scientific investigation we start with problems, not with empirical observations, and that we then leap to a solution of the problem we have identified – in any way we like. This second, anarchic stage is crucial to an understanding of Popper’s epistemology: when we are at the stage of coming up with explanations, with theories or hypotheses, then, in a very real sense, anything goes. Inspiration can come from lowering yourself into a bath of water, from being hit on the head by an apple, or from imbibing narcotics. It is at the next stage of the theory-building process that empirical observation comes in, and, according to Popper, its role is not to provide data that confirm the theory, but rather to find data that test it. Empirical observations should be carried out in attempts to falsify the theory: we should search high and low for a non-white swan, for an example of the sun rising in the West, and so on. The implication is that, at this crucial stage in theory construction, the theory has to be formulated in such a way as to allow for empirical tests to be carried out: there must be, at least in principle, some empirical observation that could clash with the explanations and predictions that the theory offers. If the theory survives repeated attempts to falsify it, then we can hold on to it tentatively, but we will never know for certain that it is true. The bolder the theory (i.e. the more it exposes itself to testing, the more wide-ranging its consequences, the riskier it is), the better. If the theory does not stand up to the tests, if it is falsified, then we need to re-define the problem, come up with an improved solution, a better theory, and then test it again to see if it stands up to empirical tests more successfully. These successive cycles are an indication of the growth of knowledge.

Popper (1959) gives the following diagram to explain his view:

P1 -> TT -> EE -> P2

P = problem; TT = tentative theory; EE = error elimination (empirical experiments to test the theory)


We begin with a problem (P1), which we should articulate as well as possible. We then propose a tentative theory (TT) which tries to explain the problem. We can arrive at this theory in any way we choose, but we must formulate it in such a way that it leaves itself open to empirical tests. The empirical tests and experiments (EE) that we devise for the theory have the aim of trying to falsify it. These experiments usually generate further problems (P2) because they contradict other experimental findings, or they clash with the theory’s predictions, or they cause us to widen our questions. The new problems give rise to a new tentative theory, the need for more empirical testing, and round we go again. Popper thus gives empirical experiments and observation a completely different role: their job now is to test a theory, not to prove it, and since this is a deductive approach it escapes the problem of induction. Popper takes advantage of the asymmetry between verification and falsification: while no number of empirical observations can ever prove a theory is true, just one such observation can prove that it is false. All you need is to find one black swan and the theory “All swans are white” is disproved. Falsifiability, said Popper, is the hallmark of a scientific theory, and allows us to draw a demarcation line between science and non-science: if a theory does not make predictions that can be falsified, it is not scientific. According to such a demarcation, astronomy is scientific and astrology is not, since although there are millions of examples of true predictions made by astrologers, astrologers do not allow that false predictions constitute a challenge to their theory.
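To make the asymmetry between verification and falsification concrete, here is a minimal sketch in Python. It is my own illustration, not anything Popper wrote: a universal hypothesis is never proved by any list of confirming observations, but a single counterexample refutes it.

```python
# Illustrative sketch of Popper's asymmetry between verification and falsification.
# The universal hypothesis "all swans are white" cannot be verified by any finite
# list of observations, but one counter-observation is enough to refute it.

def hypothesis_holds_for(swan):
    """The tentative theory (TT): every swan is white."""
    return swan["colour"] == "white"

def test_hypothesis(observations):
    """Error elimination (EE): search for a falsifying instance.

    Returns (False, counterexample) as soon as one is found; otherwise
    returns (True, None), which only means 'not yet falsified', never 'proved'.
    """
    for swan in observations:
        if not hypothesis_holds_for(swan):
            return False, swan
    return True, None

observations = [
    {"id": 1, "colour": "white"},
    {"id": 2, "colour": "white"},
    {"id": 3, "colour": "black"},  # one black swan is enough
]

survives, counterexample = test_hypothesis(observations)
print(survives)        # False
print(counterexample)  # {'id': 3, 'colour': 'black'}
```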

So that’s the bit about questions of falsifiability and empirical adequacy. Theories should lay themselves open to empirical testing: there must be a way that a theory can in principle be challenged by empirical observations, and ad hoc hypotheses that attempt to rescue a theory from “unwanted” findings are to be frowned on. The more a theory lays itself open to tests, the riskier it is, the stronger it is. Risky theories tend to be the ones that make the most daring and surprising predictions, which is perhaps the most valued criterion of them all, and they are often also the ones that solve persistent problems in their domain. Generally speaking, the wider the scope of a theory, the better it is, although in practice many broad theories have little empirical content. There are often “depth versus breadth” issues, and yet again, how these two factors are weighted will depend on other factors in the particular situation where the theory finds itself. Simplicity, often referred to as Occam’s Razor, is another criterion for judging rival theories: ceteris paribus, the one with the simplest formulation, and the fewest basic types of entity postulated, is to be preferred for reasons of economy.

Now we’re ready to examine criticisms which have been made of Krashen’s theory.

Note: When constructing a theory, researchers distinguish between phenomena and data, they use theoretical constructs, and they attempt to give causal explanations. I’ve dealt with these issues in the pages “Science and SLA” and “Theoretical Constructs”. You can also find a summary of the criteria by which I think theories should be assessed in the page called “General Rational Requirements for a Theory of SLA”. For all these pages, scroll up and see the list of Pages in red on the right of the screen. 

Popper, K. R. (1972) Objective Knowledge. Oxford: Oxford University Press.

Popper, K. R. (1963) Conjectures and Refutations. London: Hutchinson.

Popper, K. R. (1959) The Logic of Scientific Discovery. London: Hutchinson.

Ziman, J. (1978) Reliable Knowledge. Cambridge: Cambridge University Press.

Newsflash: Krashen Well; Monitor Theory Refuses to Lie Down

An open letter to Stephen Krashen

Dear Stephen,

When I got your comments on the post I’d written about Michael Hoey’s defence of your theory, I was almost as excited as I was when, in 1983, I read The Natural Approach: Language Acquisition in the Classroom (let’s not forget your marvellous co-writer Tracy Terrell, sadly no longer with us). When I’d finished it, I went out and bought your two previous volumes (Second Language Acquisition and Second Language Learning (1981) and Principles and Practice in Second Language Acquisition (1982)), after which I felt qualified to join in the enormously animated discussions that were going on in teachers’ rooms and bars all over Barcelona. I’m sure you’re aware of the huge impact your theory of SLA had on the ELT world; I can honestly say that for me, as, I suspect, for hundreds of thousands of teachers, your work affected my teaching and thinking more than that of any other writer before or since. Whatever its shortcomings, your theory of SLA is surely one of the most influential works published in the field of applied linguistics in the last 60 years.

To the issue, then. In your reply to the criticisms I made of your theory, you urged me to read a list of replies you’ve made over the years to a number of critics. I’m in the process of gathering the texts and once I’ve read them, I’ll write my comments. But to pave the way for a critical discussion, I’d like to briefly summarise what I think are the main points of your theory and invite you to correct any mistakes I might make. This summary is taken from Jordan, 2004.

Krashen (1985) re-formulated what Corder (1967), in relation to SLA, had called a “built-in syllabus” into the Natural Order Hypothesis.

To my knowledge, this hypothesis was first proposed by Corder (1967). It states that we acquire the rules of language in a predictable way, some rules tending to come early and others late. The order does not appear to be determined solely by formal simplicity, and there is evidence that it is independent of the order in which rules are taught in language classes (Krashen, 1985: 14).

Krashen (1977a, 1977b, 1978, 1981, 1982, 1985) developed this hypothesis into the Monitor Model, which contains the following five hypotheses:

A. The Acquisition-Learning Hypothesis.

According to Krashen, adults have two ways of developing competence in second languages. The first way is via acquisition, that is, by using language for communication. This is a subconscious process and the resulting acquired competence is also subconscious. The second way to develop second language competence is by language learning, which is a conscious process and results in formal knowledge of the language. For Krashen, acquisition, picking up a language naturally as children do their L1, is a process still available to adults, and is far more important than language learning. Furthermore, knowledge gained through one means (e.g., learning) cannot be internalised as knowledge of the other kind (e.g., acquisition), and only the acquisition system produces language, the learned system serving only as a monitor of the acquired system, checking the correctness of utterances against the formal knowledge stored therein.

B. The Natural Order Hypothesis

The rules of language are acquired in a predictable way, some rules coming early and others late. The order is not determined solely by formal simplicity, and it is independent of the order in which rules are taught in language classes.

C. The Monitor Hypothesis  

The learned system has only one, limited, function: to act as a Monitor. Further, the Monitor cannot be used unless three conditions are met:

  1. Time. “In order to think about and use conscious rules effectively, a second language performer needs to have sufficient time” (Krashen, 1982:12).
  2. Focus on form. “The performer must also be focused on form, or thinking about correctness” (Krashen, 1982: 12).
  3. Knowledge of the rule.

D. The Input Hypothesis

If there is a Natural Order, how do learners move from one point to another, from one stage of competence to the next? The Input Hypothesis explains the learner’s progress. Second languages are acquired by understanding language that contains structure “a bit beyond our current level of competence (i + 1)”, that is, by receiving “comprehensible input”. “When the input is understood and there is enough of it, i + 1 will be provided automatically. Production ability emerges. It is not taught directly” (Krashen, 1982: 21-22).

E. The Affective Filter Hypothesis

The Affective Filter is “that part of the internal processing system that subconsciously screens incoming language based on … the learner’s motives, needs, attitudes, and emotional states” (Dulay, Burt, and Krashen, 1982: 46). If the Affective Filter is high (because of lack of motivation, or dislike of the L2 culture, or feelings of inadequacy, for example), input is prevented from passing through and hence there is no acquisition. The Affective Filter is responsible for individual variation in SLA (it is not something children use) and explains why some learners never acquire full competence.

In my book, I go on to suggest weaknesses in these hypotheses, some of which I referred to in my recent post on Hoey. For the moment, let’s concentrate on the theory itself. I wonder if you, Stephen, see the above as a fair summary of it? In particular, could you answer these questions:

  1. Is it right to say that the Monitor hypothesis claims that learning is available for use in production, but not in comprehension?
  2. You say in your comment on my recent post that your hypotheses “make correct predictions, predictions that are confirmed by many studies. The hypotheses are thus easy to test – one counterexample is enough to destroy them”. Could you give me an example of a counterexample which would destroy your hypotheses?
  3. You also say in your comment that we don’t need to know whether i+1 is present in input or in output, saying that “When the existence of electrons was hypothesized, nobody had seen one. The existence of the Higgs-Boson particle was hypothesized before it was observed”. I don’t get the connection between hypothesising the existence of so far unobserved things and not needing to know whether i+1 is present in input or in output. Could you say a bit more about this, please?
  4. This comment prompts me to mention Nicola (1991), who defends aspects of your theory. Below I summarise Nicola’s argument (again, taken from my book) and I hope you’ll give your reaction.

Nicola (1991) reminds us that in order to explain why the moon moved around the earth (instead of travelling in a straight line, which, according to Newton, is what all bodies naturally do) Newton hypothesised that a body can exert a force on another body at a distance, and called this force “gravity”. Nicola says that Newton was subjected to the same main criticisms as Krashen – first that the onus was on him to prove his counter-intuitive hypothesis (about motion), which he did not do, and second that he gave no explanation for gravity any more than Krashen gives an explanation for how comprehensible input results in acquisition. Nicola continues the analogy by reminding us that, as Mach demonstrated, Newton’s laws were riddled with logical problems, such as the famous first law which states that every body perseveres in its state of rest or uniform motion, except when a force is impressed on it, which allows for a new “force” to be invented to explain any counter-observation. Mach re-formulated Newton’s theory and then Einstein took it an important step further. Nicola argues that while Gregg’s and McLaughlin’s critiques of Krashen are important, they are not necessarily fatal to his theory and that “by wholesale rejection of the theory the critics are passing up a valuable opportunity to accomplish for SLA theory what Mach and Einstein accomplished for physics.” (Nicola, 1991: 23)

Nicola suggests that in order to make the input hypothesis less than vacuous, i.e. to give it empirical content, we need to operationalise “comprehensible input”. While Nicola agrees with McLaughlin that comprehension is an introspective act that is “woefully inadequate” for empirical research, she argues that nonetheless “a workable operational definition for classroom purposes is not difficult to attain.” She suggests that classroom teachers can develop a faculty for “reading” student comprehension of input

in somewhat the same way as an experimental physicist develops a faculty for quick and accurate reading of laboratory instruments from extended work with them. The teacher can thus help the researcher in the quest for precise operational definitions of concepts (Nicola, 1991: 25).

Most of Nicola’s argument deals with what in the philosophy of science is known as the context of discovery. It is certainly true that many extremely important theories in the history of science, Newton’s and Darwin’s among them, started off with badly-defined terms and a poor track record in terms of empirical testability, and I agree that an awareness of the history of science should make us tolerant in our assessment of young theories. In order to give the hypotheses in Krashen’s model more empirical content, a good start would be, as Nicola suggests, to operationalise the concepts, starting with comprehensible input. The most important claim that Krashen makes is that no consciously-learned linguistic information can become part of one’s unconscious linguistic knowledge, and it seems that, unless we stick to circular arguments that make it necessarily so, this claim is contradicted by the evidence. But certainly it is true that, as Nicola says, the hypotheses together have clear pedagogical implications, and so, in principle, any teacher interested in testing them could arrive at a good enough working definition of comprehensible input to begin the task of exploring them.

OK, that’s the first part concluded. I hope very much that you’ll respond, Stephen. For my part, I’ll write Part 2, replying to your list of responses to various critics, ASAP.

Best,

Geoff

Krashen, S. (1977a) The monitor model of adult second language performance. In Burt, M., Dulay, H. and Finocchiaro, M. (eds.), Viewpoints on English as a second language. New York: Regents, 152-61.

Krashen, S. (1977b) Some issues relating to the monitor model. In Brown, H., Yorio,C. and Crymes, R. (eds.). Teaching and learning English as a second language: some trends in research and practice. Washington, DC: TESOL, 144-48.

Krashen, S. (1978) Individual variation in the use of the monitor. In Ritchie, W. (ed.) Second language acquisition research: issues and implications. New York: Academic Press, 175-83.

Krashen, S. (1981) Second language acquisition and second language learning. Oxford: Pergamon.

Krashen, S. (1982) Principles and practice in second language acquisition. Oxford: Pergamon.

Krashen, S. (1985) The Input Hypothesis: Issues and Implications. New York: Longman.

Krashen, S. and Scarcella, R. (1978) On routines and patterns in second language acquisition and performance. Language Learning 28, 283-300.

Krashen, S. and Terrell, T. (1983) The natural approach: language acquisition in the classroom. Hayward, CA: Alemany Press.

Nicola, M. (1991) Theories of Second Language Acquisition and of Physics: Pedagogical Implications. Dialog on Language Instruction Vol. 7, No.1, 17-27.

Newsflash: Hoey Well; Monitor Theory and Lexical Approach Still Dead


On April 4th, 2014, Michael Hoey, in his plenary address to the IATEFL conference, made the following claims:

  • Michael Lewis’ Lexical Approach and Krashen’s Monitor Model are true.
  • Krashen’s & Lewis’ models are supported by the Lexical Priming theory.

I would like to make these counter-claims:

  • Michael Lewis’ Lexical Approach and Krashen’s Monitor Model are not true.
  • Krashen’s & Lewis’ models do not receive support from Hoey’s theory.
  • Hoey’s theory offends basic considerations of rational theory construction.

Summary of Hoey’s plenary address (You can watch a video of the address by clicking on this link: http://iatefl.britishcouncil.org/2014/sessions/2014-04-04/plenary-session-michael-hoey )


According to Michael Lewis, the successful language learner is someone who can recognise, understand and produce lexical phrases as ready-made chunks. So in teaching, the emphasis needs to be on vocabulary in context and particularly on fixed expressions in speech. When someone learns vocabulary in context, they pick up grammar naturally.


According to Krashen, the crucial requirement for successful language learning is comprehensible input. The only way to acquire a language is by reading and listening to naturally occurring spoken and written language input that is very slightly above the current level of the learner. This is a subconscious process, and conscious learning does not result in knowledge of the language, only knowledge about the language.

Hoey’s paper makes 3 main claims:

  1. Lewis’ Lexical Approach and Krashen’s Monitor Model are entirely compatible with (and supported by) reliable psycholinguistic evidence
  2. The Lexical Approach and the Monitor Model are supported by at least one worked-out linguistic theory
  3. The characteristics of language that the Lexical Approach and the Monitor Model treat as central are not limited to English.

In answer to the question “How do we learn language?” Hoey points to research done “in the psycholinguistic tradition”, namely: semantic priming and repetition priming. In semantic priming experiments, informants are shown a word or image (referred to as the prime) and then shown a second word or image (known as the target word). The speed with which the target word is recognized is measured. Some primes appear to slow up informants’ recognition of the target and others appear to accelerate informants’ recognition of the target. For example, the prime word MILK will have no effect on the recognition of the word AVAILABLE, will typically inhibit the recognition of the word HORSE, but will speed up the recognition of the word COW. Hoey claims that there is “ample proof” that words are closely linked to each other in the listener’s mind, and that words that are closely linked can be recognised more quickly.

In repetition priming, the prime and the target are identical. Experiments with repetition priming expose informants to word combinations and then, sometimes after a considerable amount of time and after they’ve seen or heard lots of other material, measure how quickly or accurately the informants recognize the combination when they finally see/hear it again. For example, a listener may be shown the word SCARLET followed by the word ONION. A day later, if s/he is shown the word SCARLET again, s/he will recognise ONION more quickly than other words. The assumption must be, says Hoey, that s/he remembers the combination from the first time, since the words SCARLET ONION will only rarely have occurred before (if ever). Repetition priming thus “provides an explanation”, in Hoey’s view, of both semantic priming and collocation. If a listener or reader encounters two words in combination, and stores them as a combination, then the ability of one of the words to accelerate recognition of the other is explained. If the listener or reader then draws upon this combination in his or her own utterance, then the reproduction of collocation is also explained. This provides “proof” that a listener’s encounters with words in combination may result (sic) in their being closely linked to each other in the listener’s mind, without there being any conscious learning.
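For readers unfamiliar with how such experiments are scored, here is a toy sketch in Python of the basic logic: compare mean recognition times for targets preceded by a related prime with those preceded by an unrelated prime. The trials and the numbers are invented purely for illustration; this is not a reconstruction of any study Hoey cites.

```python
# Toy illustration of the logic of a semantic priming experiment: targets
# preceded by a related prime should, on average, be recognised faster
# (shorter reaction times, in milliseconds). All figures are invented.

from statistics import mean

trials = [
    {"prime": "MILK",  "target": "COW",       "related": True,  "rt_ms": 480},
    {"prime": "MILK",  "target": "HORSE",     "related": False, "rt_ms": 560},
    {"prime": "MILK",  "target": "AVAILABLE", "related": False, "rt_ms": 545},
    {"prime": "BREAD", "target": "BUTTER",    "related": True,  "rt_ms": 470},
    {"prime": "BREAD", "target": "ENGINE",    "related": False, "rt_ms": 555},
]

related_rts = [t["rt_ms"] for t in trials if t["related"]]
unrelated_rts = [t["rt_ms"] for t in trials if not t["related"]]

print("mean RT after related primes:  ", mean(related_rts), "ms")
print("mean RT after unrelated primes:", mean(unrelated_rts), "ms")
```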

At this point, Hoey says that he has “proved” that Lewis’ and Krashen’s models are supported by “reliable psycholinguistic evidence” and moves to his linguistic theory. Hoey’s account of his theory amounts to “The Lexical Priming Claim” that: “Whenever we encounter a word (or syllable or combination of words), we note subconsciously

  • the words it occurs with (its collocations),
  • the meanings with which it is associated (its semantic associations),
  • the pragmatics it is associated with (its pragmatic associations),
  • the grammatical patterns it is associated with (its colligations),
  • the genre and/or style and/or social situation it is used in,
  • whether it is typically cohesive (its textual collocations),
  • whether the word is associated with a particular textual relation (its textual semantic associations)
  • the positions in a text that it occurs in, e.g. does it like to begin sentences? Does it like to start paragraphs? (its textual colligations)”.

Hoey says that when we know a word we subconsciously know all the above about it. Hoey claims that the existence of collocation, semantic association, pragmatic association and colligation “wholly supports Michael Lewis’s view of the centrality of lexis”, and that the existence of textual collocation, textual semantic association, and textual colligation “wholly supports Stephen Krashen’s view that learners need to be exposed to naturally occurring data that interests them and slightly extends them. How else could the textual features of lexis be acquired?”

The rest of Hoey’s address is devoted to showing that languages as apparently different as English and Chinese operate according to the same lexical principles, an issue I don’t want to pursue. So let me now reply to Hoey’s address.

 


The Monitor Model and the Lexical Approach are not true

Even supposing that Hoey’s view of lexis and of how it’s acquired is right, this does precisely NOTHING to address the weaknesses of Krashen’s theory pointed out by scholars such as Gregg and McLaughlin. As I have said elsewhere on this blog, the biggest problem with Krashen’s account is that there is no way of testing its claims. There is no way of testing the Acquisition-Learning hypothesis: we are given no evidence to support the claim that two distinct systems exist, nor any means of determining whether they are, or are not, separate. Similarly, there is no way of testing the Monitor hypothesis because we have no way to determine whether the Monitor is in operation or not. The Input Hypothesis is equally mysterious and incapable of being tested: the levels of knowledge are nowhere defined and so it is impossible to know whether i + 1 is present in input, and, if it is, whether or not the learner moves on to the next level as a result. Thus, the hypotheses make up a circular and vacuous argument. Nor does Krashen’s account offer any causal explanation of what is described. The Acquisition-Learning Hypothesis simply states that L2 competence is picked up through comprehensible input in a staged, systematic way, without giving any explanation of the process by which comprehensible input leads to acquisition. Similarly, we are given no account of how the Affective Filter works, of how input is filtered out by an unmotivated learner. In summary, Krashen’s key terms are ill-defined and circular, so that the set of hypotheses is incoherent. The lack of empirical content in the five hypotheses means that there is no means of testing them. As a theory it has such serious faults that it is not really a theory at all.

As for Lewis’ Lexical Approach, no attempt to provide a theory of SLA, psycholinguistic or otherwise, is made, so there is no theory for Hoey to support. All Lewis offers is the rather tired claim that “language consists of grammaticalized lexis, not lexicalized grammar”, and that one of the central organizing principles of any meaning-centered syllabus should be lexis. This was hardly new when he wrote it in 1993, and Hoey should know better than most how much Lewis’ work owes to Nattinger and DeCarrico, Pawley and Syder, Peters, Sinclair, and the Willis team. The book was rightly criticised for its almost hysterical evangelistic tone and its lack of any coherent or cohesive ELT methodology. In stark contrast to Willis (who gives a rationale and design for lexically based language teaching, and offers a detailed lexical syllabus with a coherent instructional methodology), Lewis offers no proper syllabus, or any principled way of teaching the types of lexis and collocates he describes. At one point Lewis proposes an “Observe-Hypothesize-Experiment” model, which I think he got from Tim Johns, but, typically, Lewis fails to provide guidance for implementing the Lexical Approach: he offers no teaching sequences which might demonstrate how the model would be used in the language classroom. Again, Hoey has not one word to say in answer to these criticisms.


Hoey’s Lexical Priming Theory

Language is often seen as having a grammar and a vocabulary, and it is common to think that we produce sentences by putting words from the vocabulary into appropriate grammatical structures. While this view of words being slotted into grammatical frames might explain creativity, it does not explain fluency very satisfactorily. How can native speakers be more fluent than non-native speakers when they have so many more words to choose from? And how do we explain that some sentences typically produced by non-native speakers sound unnatural even though they are perfectly grammatical? Hoey suggests that the reason why native speakers are fluent and natural is that they do not construct sentences out of single words, but rather from words which work together in predictable combinations, the general term for this being collocation.

Hoey argues that we store the words we know in the contexts in which they were heard or read. Every time we encounter a word or phrase, we store it along with all the words that accompanied it and with a note of the kind of context it was found in – spoken or written, colloquial or formal, friendly or hostile, etc. Through repetition, we build up a collection of examples of the word or phrase in its contexts, and notice patterns in the contexts. Hoey gives a complete list of the things we subconsciously notice in his address, under the Lexical Priming Claim; see above, where he says “Whenever we encounter a word (or syllable or combination of words), we note subconsciously the words it occurs with (its collocations)”, etc.

To quote from MED Magazine, Issue 52, January 2009: “This process of subconsciously noticing is referred to as lexical priming. Noticing all these things is what makes it possible for a speaker to use the right phrase in the right context at the right time. Without realizing what we are doing, we all reproduce in our own speech and writing the language we have heard or read before. We use the words and phrases in the contexts in which we have heard them used, with the meanings we have subconsciously identified as belonging to them and employing the same grammar. This is how native speakers are able to be fluent and because the things they say are subconsciously influenced by what everyone has previously said to them, it also explains why they almost always sound natural. Our ability to be fluent and natural is, however, limited to the situations we are familiar with. If we have heard a word used repeatedly in particular ways in casual conversation with friends, we will be able to use it confidently in the same situation. It does not follow that we will feel confident about using it in academic writing or talking to strangers. So learning a word means learning it in many different contexts.

Knowing all this is what it means to know a word. Native speakers have acquired a large corpus of examples of the words of English in their typical contexts, and from this they learn how the words are used. By contrast, non-native speakers have typically heard (or read) relatively few examples of even the more common words in natural use and have therefore had less opportunity to learn the way these words typically occur. The differences in practice between a native speaker and a non-native speaker are twofold. Firstly, a non-native speaker is typically exposed to less language and to a narrower range of language, and, secondly, the non-native speaker has previously been primed for another language, which initially affects the way he or she is primed in English”.
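To picture the “store each word with the company it keeps” idea in the crudest possible terms, here is a toy co-occurrence counter in Python. It is my own illustration of collocation counting over a tiny invented corpus, not Hoey’s model, and it says nothing about how priming might actually be represented in the mind.

```python
# A toy co-occurrence counter: for each word, count the words that appear
# within a small window around it. A crude stand-in for the idea that a word
# is stored together with its typical co-texts (collocations).

from collections import Counter, defaultdict

corpus = [
    "she drinks strong coffee every morning",
    "he made strong coffee for the guests",
    "a strong argument needs strong evidence",
]

window = 2  # words to the left and right of the node word
cooccurrences = defaultdict(Counter)

for sentence in corpus:
    words = sentence.split()
    for i, word in enumerate(words):
        neighbours = words[max(0, i - window):i] + words[i + 1:i + 1 + window]
        cooccurrences[word].update(neighbours)

# 'strong' keeps company with 'coffee' more often than with most other words
print(cooccurrences["strong"].most_common(3))
```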


Discussion

What are we to make of this? Obviously, the aim is to connect corpus linguistics (the lexical aspect) with psycholinguistics (the priming aspect). Hoey’s address at IATEFL said almost nothing about the psycholinguistic background to the theory, and in his 2005 book that background, in Michael Pace-Sigge’s words, “is only thinly represented and can be seen as insufficient to protect the theory submitted from charges of circularity in its argumentation” (Pace-Sigge, 2013). Hoey’s account of lexical priming theory certainly does lay itself wide open to such a charge, and we must ask for a proper theory of psycholinguistics which explains how the huge amount of dynamic lexical information is stored and processed, how SLA differs from first language acquisition, and a number of related questions. Furthermore, we must ask for a linguistic explanation. Hoey shows us occurrence patterns in language but he doesn’t explain why they occur. Why do words (or parts or clusters of words) collocate? Why do they have certain semantic associations?

As I suggested at the beginning of this piece, neither Krashen’s nor Lewis’ models receive support from Hoey’s theory, and that’s because Hoey’s theory explains nothing in any satisfactory way and generally offends basic considerations of rational theory construction. Explanation is the purpose of a theory, and one of the most important criteria for judging theories is their explanatory power. An explanation is generally taken to be an answer to a “Why” or “How” question about phenomena; it involves causation or a causal mechanism. Why do most L2 learners not achieve native-like competence? How do L2 learners go through stages of development? In the case of putative lexical priming, how does what we know about words get stored and retrieved? Hoey’s answer to this question is so far completely circular. The best theories are the ones that provide the most generally applicable explanations and which conform to criteria that I have discussed elsewhere (Jordan, 2004). Very briefly, theories should be coherent, cohesive, expressed in the clearest possible terms, and consistent: there should be no internal contradictions in theories, and no circularity due to badly-defined terms. Badly-defined terms and unwarranted conclusions must be uncovered and the clearest, simplest expression of the theory must be sought. Theories should also have empirical content: propositions should be capable of being subjected to an empirical test. This implies that hypotheses should be capable of being supported or refuted, that hypotheses should not fly in the face of well-established empirical findings, and that research should be done in such a way that it can be observed, evaluated and replicated by others. The operational definition of variables is an extremely important way of ensuring that hypotheses and theories have empirical content. A further part of this set of criteria is that theories should avoid ad hoc hypotheses. Finally, theories should be fruitful (they should make daring and surprising predictions, and solve persistent problems in their domain); theories should be broad in scope (ceteris paribus, the wider the scope of a theory, the better it is); and theories should be simple: following the Occam’s Razor principle, ceteris paribus, the theory with the simplest formulation, and the fewest basic types of entity postulated, is to be preferred for reasons of economy.

Judged by most of the criteria stated above, Hoey’s theory is very bad indeed, which is why I claim that it lends no support to Krashen’s or Lewis’ models. Despite this, I find Hoey’s description of language extremely interesting and challenging. I’m personally very well-disposed to the suggestion that lexis, not grammar, underpins language; that, as he says, “lexis is complexly and systematically structured and that grammar is an outcome of this lexical structure” (Hoey, 2005). I’m also intrigued by the suggestion that priming explains how collocation happens. We can, as Hoey says, only “account for collocation if we assume that every word is primed for collocational use.” But the theory is, I suggest, very young. Priming amounts to this: “every time we use a word, and every time we encounter it anew, the experience either reinforces the priming by confirming an existing association between the word and its co-texts and contexts, or it weakens the priming, if we encounter a word in unfamiliar contexts” (Hoey, 2005). Until the construct “priming” is operationally defined in such a way that statements about it are open to empirical refutation, it remains as mysterious as Krashen’s construct of comprehensible input.

Hoey, M. (2005) Lexical Priming: A New Theory of Words and Language. London: Routledge.

Jordan, G. (2004) Theory Construction in SLA. Amsterdam: Benjamins.

Pace-Sigge, M. (2013) The concept of Lexical Priming in the context of language use. ICAME Journal, 37.

Starting an MA in TESOL / Applied Linguistics

 


A new term is starting at universities offering Masters in TESOL or AL, so I’ve moved this (edited) post to the front.

One of the most important aims of this website is to offer Distance Learning students support by giving them clear, practical advice about how to manage their studies and how to make maximum use of their tutors and of the on-line facilities, especially the forums and the access provided to their university’s library. The menus at the top (in the black header) and at the side (on the right in red) have a “Doing an MA in TESOL” section, and I hope you’ll take a look at these pages.

My experience working with students on MA Applied Linguistics courses tells me that the biggest problems students face are: too much information; choosing appropriate topics; and getting the hang of academic writing. Let’s briefly look at these 3 points.

1. Too much Information.

An MA TESOL curriculum looks daunting, the reading lists look daunting, and the books themselves often look daunting. Many students spend far too long reading and taking notes in a non-focused way: they waste time by not thinking right from the start about the topics that they will eventually choose to base their assignments on. Just about the first thing you should do when you start each module is think about what assignments you’ll do. Having got a quick overview of the content of the module, make a tentative decision about what parts of it to concentrate on and about your assignment topics. This will help you to choose reading material, and will give focus to your studies.

Similarly, you have to learn what to read, and how to read. First, when you start each module, read the course material and don’t go out and buy a load of books. The pages list on the right includes this one: * Xtra: Suggested Reading and References, where I’ve tried to limit the number of books, and I hope you’ll find it useful. But even that list is too long! My advice is don’t buy anything until you’ve decided on your topic, and don’t read in any depth until then either. And keep in mind that you can download at least 50% of the material you need from library and other websites, and that more and more books can now be bought in digital format.

To sum up: to do well in this MA, you have to learn to read selectively. Don’t just read. Read for a purpose: read with a particular topic (better still, with a well-formulated question) in mind. Don’t buy any books before you’re absolutely sure you’ll make good use of them.

2. Choosing an appropriate topic.

The trick here is to narrow down the topic so that it becomes possible to discuss it in detail, while still remaining central to the general area of study. So, for example, if you are asked to do a paper on language learning, “How do people learn a second language?” is not a good topic: it’s far too general. “What role does instrumental motivation play in SLA?” is a much better topic.

The best way to find a topic is to frame your topic as a question. Well-formulated questions are the key to all good research, and they are one of the keys to success in doing an MA. A few examples of well-formulated questions for an MA TESL are these:

• What’s the difference between the present perfect and the simple past tense?
• Why is “stress” so important to English pronunciation?
• How can I motivate my students to do extensive reading?
• When’s the best time to offer correction in class?
• What are the roles of “input” and “output” in SLA?
• How does the feeling of “belonging” influence motivation?
• What are the limitations of a Task-Based Syllabus?
• What is the wash-back effect of the Cambridge FCE exam?
• What is politeness?
• How are blogs being used in EFL teaching?

To sum up: Choose a manageable topic for each written assignment. Narrow down the topic so that it becomes possible to discuss it in detail. Frame your topic as a well-defined question that your paper will address.

3. Academic Writing.

Writing a paper at Masters level demands a good understanding of all the various elements of academic writing. First, there’s the question of genre. In academic writing, you must express yourself as clearly and succinctly as possible: in academic writing “Less is more”! Examiners mark down “waffle”, “padding”, and generally loose expression of ideas. I can’t remember who, but somebody famous once said at the end of a letter: “I’m sorry this letter is so long, but I didn’t have time to write a short one”. There is, of course, scope for you to express yourself in your own way (indeed, examiners look for signs of enthusiasm and real engagement with the topic under discussion) and one of the things you have to do, like any writer, is to find your own, distinctive voice. But you have to stay faithful to the academic style.

While the content of your paper is, of course, the most important thing, the way you write and the way you present the paper have a big impact on your final grade. For example, many examiners, when marking an MA paper, go straight to the Reference section and check whether it’s properly formatted and contains all and only the references mentioned in the text. The way you present your paper (double-spaced, proper indentations, and all that stuff); the way you write it (so as to make it cohesive); the way you organise it (so as to make it coherent); the way you give in-text citations; the way you give references; the way you organise appendices: all of these are crucial.


Making the Course Manageable

1. Essential steps in working through a module.

Focus: that’s the key. Here are the key steps:

Step 1: Ask yourself: What is this module about? Just as important: What is it NOT about? The point is to quickly identify the core content of the module. Read the Course Notes and the Course Handbook, and DON’T READ ANYTHING ELSE, YET.

Step 2: Identify the components of the module. If, for example, the module is concerned with grammar, then clearly identify the various parts that you’re expected to study. Again, don’t get lost in detail: you’re still just trying to get the overall picture. See the chapters on each module below for more help with this.

Step 3: Do the small assignments that are required. Study the requirements of the MA TESL programme closely to identify which written assignments count towards your formal assessment and which do not. Some small assignments are required (you MUST submit them) but do not influence your mark or grade: do them in order to prepare yourself for the assignments that do count, but don’t spend too much time on them.

Step 4: Identify the topic that you will choose for the written assignment that will determine your grade. THIS IS THE CRUCIAL STEP! Reach this point as fast as you can in each module: the sooner you decide what you’re going to focus on, the better your reading, studying, writing and results will be. Once you have identified your topic, then you can start reading for a purpose, and start marshalling your ideas. Again, we will look at each module below, to help you find good, well-defined, manageable topics for your main written assignments.

Step 5: Write an outline of your paper. The outline is for your tutor, and should sketch briefly the question your paper will address and how you plan to deal with it. Make sure that your tutor reviews the outline and approves it.

Step 6: Write the First Draft of the paper. Write this draft as if it were the final version: don’t say “I’ll deal with the details (references, appendices, formatting) later”. Make it as good as you can.

Step 7: If you are allowed to do so, submit the first draft to your tutor. Some universities don’t approve of this, so check with your tutor. If your tutor allows such a step, try to get detailed feedback on it. Don’t be content with any general “Well, that looks OK” stuff. Ask “How can I improve it?” and get the fullest feedback possible. Take note of ALL suggestions, and make sure you incorporate ALL of them in the final version.

Step 8: Write the final version of the paper.

Step 9: Carefully proofread the final version. Use a spell-checker. Check all the details of formatting, citations, the Reference section, and appendices. Ask a friend or colleague to check it. If allowed, ask your tutor to check it.

Step 10: Submit the paper: you’re done!

2. Using Resources

Your first resource is your tutor. You’ve paid lots of money for this MA, so make sure you get all the support you need from him or her! Most importantly: don’t be afraid to ask for help whenever you need it. Ask any question you like (while it’s obviously not quite true that “there’s no such thing as a stupid question”, don’t feel intimidated or afraid to ask very basic questions), and ask as many as you like. Ask your tutor for suggestions on reading, on suitable topics for the written assignments, on where to find materials, on anything at all that you have doubts about. Never submit any written work for assessment until your tutor has said it’s the best you can do. If you think your tutor is not doing a good job, say so, and if necessary, ask for a change.

Your second resource is your fellow students. When I did my MA, I learned a lot in the students’ bar! Whatever means you have of talking to your fellow-students, use them to the full. Ask them what they’re reading, what they’re having trouble with, and share not only your thoughts but your feelings about the course with them.

Your third resource is the library. It is ABSOLUTELY ESSENTIAL to teach yourself, if you don’t already know, how to use a university library. Again, don’t be afraid to ask for help: most library staff are wonderful: the unsung heroes of the academic world. At Leicester University, where I work as an associate tutor on the Distance Learning MA in Applied Linguistics and TESOL course, the library staff exemplify good library practice. They can be contacted by phone and by email, and they have always, without fail, solved the problems I’ve asked them for help with. Whatever university you are studying at, the library staff are probably your most important resource, so be nice to them, and use them to the max. If you’re doing a face-to-face course, the most important thing is to learn how the journals and books that the library holds are organised. Since most of you have already studied at university, I suppose you’ve got a good handle on this, but if you haven’t, well, do something! Just as important as the physical library at your university are the internet resources it offers. This is so important that I have dedicated Chapter 10 to it.

Your fourth resource is the internet. Apart from the resources offered by the university library, there is an enormous amount of valuable material available on the internet. See the “RESOURCES” section of this website for a collection of videos and other material.

I can’t resist mentioning David Crystal’s Encyclopedia of The English Language as a constant resource. A friend of mine claimed that she got through her MA TESL by using this book most of the time, and, while I only bought it recently, I wish I’d had it to refer to when I was doing my MA. Lexis, grammar, pronunciation, discourse, learning English – it’s all there.

Please use this website to ask questions and to discuss any issues related to your course. You might like to subscribe to it: see the box on the right.

Good luck!

Emergentism: The Truth Revealed


You’ll doubtless know that “emergence” is one of the key principles of Dogme, and you might well have noticed that more and more people are banging on about emergence these days. I did a Google search the other day on “emergence and language learning” and among the results I noticed an article by Scott Thornbury which he’d written in 2009 for English Teaching Professional called “Slow Release Grammar”. The article is remarkable for its tone; it makes a number of sweeping assertions with breathtaking assurance. If you didn’t know better (didn’t know, that is, that there is no generally accepted explanation of SLA), you’d be tempted to think that you were reading a new book of revelations. Scott writes as if he’s finally cracked it, as if he were in possession of the truth. According to this article, emergence improves on Darwin as an explanation of natural development, and it explains language, language learning, and the failure of classroom-based adult ELT. Just to top it off, emergence is also the key to successful syllabus design. Why, one wonders, does such a seemingly transcendental work remain tucked away in the middle of a lack-lustre journal? Why isn’t it as well-known as the vaunted Dogme tracts themselves? I’ll briefly summarise it below, using mostly Scott’s own words.

First, emergence is everywhere in nature, where a system is said to have emergent properties when it displays complexity at a global level that is not specified at a local level. There are millions of such systems; the capacity of an ant colony to react in unison to a threat is an example. Because there is no “central executive” determining the emergent organisation of the system, the patterns and regularities which result have been characterised as “order for free”.


Next, language.  Language exhibits emergent properties. There are 2 processes by which language “grows and organises itself”. The first is our capacity to detect and remember frequently-occurring sequences in the sensory data we are exposed to. In language terms, these sequences typically take the form of chunks (AKA formulaic expressions or lexical phrases). The second is our capacity to unpack the regularities within these chunks, and to use these patterns as templates for the later development of a more systematic grammar. It is as if the chunks – memorised initially as unanalysed wholes – slowly release their internal structure like slow-release pain-killers release aspirin. Language emerges as “grammar for free”.

Thirdly, there is emergence in learning. Hoey notes how particular words and chunks re-occur in the same patterns. These can be seen in collocations, such as good morning; good clean fun; on a good day …; fixed phrases, such as one good turn deserves another, the good, the bad and the ugly; and colligations, as in it’s no good + -ing. Hoey argues that, through repeated use and association, words are ‘primed’ to occur in predictable combinations and contexts. The accumulation of lexical priming creates semantic associations and colligations which, in Hoey’s words, “nest and combine and give rise to an incomplete, inconsistent and leaky, but nevertheless workable, grammatical system”.  But note that adults learning a second language  are less successful in their capacity both to take formulaic chunks on board, and to re-analyse them for the grammatical information that they encapsulate.

Fourthly, the problems which adults have remembering and unpacking formulaic chunks don’t find their solution in most ELT classrooms where few opportunities for real communication are offered. Wray says: “Classroom learners are rarely aiming to communicate a genuine message…, so there is no drive to use formulaic sequences for manipulative  purposes”. Even when adult learners do internalise formulaic chunks, they are often incapable of unpacking the grammar, perhaps because many chunks are not really grammatical (expressions like if I were you; you’d better not; by and large; come what may, etc, yield little or no generalisable grammar) and perhaps because they fail to notice the form.

Finally, we can put emergence into the classroom through the syllabus. If the productive potential of formulaic language is to be optimised, then at least four conditions need to prevail:

  • Exposure – to a rich diet of formulaic language
  • Focus on form – to promote noticing and pattern extraction
  • A positive social dynamic – to encourage pragmatic and interpersonal language use
  • Opportunities for use – to increase automaticity, and to stimulate storage in long-term memory, and recall.

Well, there you have it: all is revealed.  And, as I suggested above, revealed as the unequivocal-no-ifs-or-buts-not-a-hint-of-a-doubt, truth.  So, to return to the question, why hasn’t the ELT world “taken on board” (to air one of the many awful clichés which Scott is not afraid of using) the full import of this article? Why haven’t we all enthusiastically clambered aboard the good ship Emergence and set sail to the happy land of “grammar for free” language learning?  Maybe because the good ship Emergence is an old tub which is as leaky as Hoey’s grammar.


Scott starts with Stuart Kauffman’s claim that the phenomenon whereby certain natural systems display complexity at a global level that is not specified at a local level is evidence of emergence and “order for free”. This highly controversial view is then used in an attempt to add credibility to the suggestion that lexical chunks provide “grammar for free”. We may begin by noting that Scott tells us that many formulaic chunks “yield little or no generalisable grammar”, which surely must impede their wondrous ability to “slowly release their internal structure like slow-release pain-killers release aspirin”. Or does their magic extend to releasing qualities which they don’t possess? Scott gives an inadequate and mangled account of emergentism which, according to him, says that lexical phrases explain English grammar, how children learn English, and why adults have difficulties learning English as a foreign language. Using Michael Hoey as the spokesman for emergentism, while avoiding any mention of William O’Grady’s “Syntactic Carpentry: An Emergentist Approach to Syntax” or of the works of Bates and MacWhinney, is another indication of the skewed account on offer here.


I discuss emergentism, including work by Bates, MacWhinney, O’Grady and Ellis, in a page you can find in the menu on the right. Suffice it to say here that Scott’s unqualified assertion that language learning can be explained as the detection and memorisation of “frequently-occurring sequences in the sensory data we are exposed to” is probably wrong and certainly not the whole story. At the very least, Scott should give a more measured description and discussion of emergentist views of language learning and acknowledge that it faces severe challenges as a theory. How can general conceptual representations acting on stimuli from the environment explain the representational system of language that children demonstrate? As Eubank and Gregg ask: “How come children know which form-function pairings are possible in human-language grammars and which are not, regardless of exposure?” How can emergentists deal with cases of instantaneous learning, or knowledge that comes about in the absence of exposure, including knowledge of what is not possible?  Scott’s suggestion that we have an innate capacity to “unpack the regularities within lexical chunks, and to use these patterns as templates for the later development of a more systematic grammar” begs more questions than it answers and, anyway, contradicts the empiricist epistemology adopted by most emergentists who say that there aren’t, indeed can’t be, any such things as innate capacities.

Finally, we get Scott’s depressing picture of the arid desert which is the standard adult EFL classroom, followed by the triumphant portrayal of an emergentist syllabus, where the “productive potential” of formulaic language is unleashed. The elusive, definitive recipe for language learning has been revealed: lashings of formulaic language, sprinkled with a little focus on form, served on a bed of positive social dynamic, with the chance of asking for more. In the likely event that the positive social dynamic gets out of hand in these joyous classrooms, and the adult students start running amok, babbling formulaic chunks of colloquial language at each other, I recommend that the teacher gives out copies of that most calming, not to say soporific, textbook “Natural Grammar”.

Eubank, L. and Gregg, K. R. (2002) News Flash – Hume Still Dead. Studies in Second Language Acquisition, 24, 2, 237-248.

Hoey, M. (2005) Lexical Priming: A New Theory of Words and Language. London: Routledge.

Wray, A. (2002) Formulaic Language and the Lexicon. Cambridge: CUP.

Concordancing, lexical chunks and the Lexical Syllabus

 


In my “New Year’s Resolutions” I vowed to bash “The Lexical Approach”, and, in reply to some comments, promised to say more soon. There are already two pages on this website devoted to concordancing (see the list on the right), so I want here to just summarise these issues before explaining why I am not a fan of any lexically-driven syllabus, but why I am a fan of Nattinger and DeCarrico.

Given that using concordance programs to examine enormous corpora of English texts has led to more accurate and reliable descriptions of the English language, the question remains: To what extent do these new descriptions imply any particular pedagogical practice?  Before trying to answer that question, I want to recall that Nattinger and DeCarrico, drawing on Pawley and Syder and also on more recent research,  argue that what they call the “lexical phrase” is at the heart of the English language. Early work done by computational linguists (Hockey 1980, Sinclair 1987, Garside et al. 1987) on collocations uncovered recurring patterns of lexical co-occurrence, and more recent computer analysis has widened the scope of investigation to include the search for patterns among function words as well. As a result of such research, the 1990s saw several papers (see Concordance page) which argued that linguistic knowledge cannot be strictly divided into grammatical rules and lexical items, that rather, there is an entire range of items from the very specific (a lexical item) to the very general (a grammar rule), and since elements exist at every level of generality, it is impossible to draw a sharp border between them.  There is, in other words, a continuum between these different levels of language.
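For readers who have never played with a concordancer, here is a toy sketch, in Python and purely for illustration, of the two basic operations involved: pulling out KWIC (key word in context) lines for a node word, and counting its collocates within a small window. The file name “corpus.txt” and the node word “good” are my own placeholders, not anything from the research cited above, and real corpus tools work at a vastly greater scale, with lemmatisation, tagging and proper association statistics rather than raw co-occurrence counts.

```python
import re
from collections import Counter

def tokenize(text):
    # Lower-case word tokens only; a real concordancer does far more than this.
    return re.findall(r"[a-z']+", text.lower())

def concordance(tokens, node, window=4):
    # Yield (left context, node word, right context) for every occurrence of the node word.
    for i, tok in enumerate(tokens):
        if tok == node:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            yield left, tok, right

def collocates(tokens, node, window=4):
    # Count the words that co-occur with the node word within +/- `window` tokens.
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == node:
            span = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
            counts.update(w for w in span if w != node)
    return counts

if __name__ == "__main__":
    # "corpus.txt" and the node word "good" are illustrative placeholders.
    with open("corpus.txt", encoding="utf-8") as f:
        tokens = tokenize(f.read())
    for left, node, right in concordance(tokens, "good"):
        print(f"{left:>35} | {node} | {right}")
    print(collocates(tokens, "good").most_common(10))
```

Even a crude script like this makes the point that what comes out of a concordancer is a description of attested co-occurrence, nothing more; the pedagogical argument has to be made separately.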

The suggested application of Nattinger and DeCarrico’s argument to language teaching is that lexis – and in particular the lexical phrase – should be the focus of instruction. This approach is quite different to Willis’ (which takes frequency as the main criterion – see below) and  rests on two main arguments.  First, some cognitive research (particularly in the area of PDP and related connectionist models of knowledge) suggests that we store different elements of language many times over in different chunks.  This multiple lexical storage is characteristic of recent connectionist models of knowledge, which assume that all knowledge is embedded in a network of processing units joined by complex connections, and accord no privilege to parsimonious, non-redundant systems. “Rather, they assume that redundancy is rampant in a model of language, and that units of description, whether they be specific categories such as “word” or “sentence”, or more general concepts such as “lexicon” or “syntax” are fluid, indistinctly bounded units, separated only as points on a continuum” (Nattinger and DeCarrico, 1992).  If this is so, then the role of analysis (grammar) in language learning becomes more limited, and the role of memory (the storage of, among other things, lexical phrases) more important.

The second argument is that language acquisition research suggests that formulaic language is highly significant. Peters (1983) and Atkinson (1989) show that a common pattern in language acquisition is that learners pass through a stage in which they use a large number of unanalyzed chunks of language – prefabricated language. This formulaic speech is seen as being basic to the creative rule-forming processes which follow. Starting with a few basic unvarying phrases, first language speakers subsequently, through analogy with similar phrases, learn to analyze them into smaller patterns, and finally into individual words, thus finding their own way to the regular rules of syntax.

Biber, Sinclair, Willis and Lewis, among others, argue even more forcefully than Nattinger and DeCarrico that teaching practice must fit the new, more accurate, descriptions of English revealed by corpus-based research. They go further, and suggest that now teachers have the data available to them, it should form the basis for instruction. One of the most strident expressions of this view is the following:

“Now that we have the means to observe samples of language which must be fairly close to representative samples, the clear messages are:

a) We are teaching English in ignorance of a vast amount of basic fact. This is not our fault, but it should not inhibit the absorption of the new material.

b) The categories and methods we use to describe English are not appropriate to the new material. We shall need to overhaul our descriptive systems.

c) Since our view of the language will change profoundly, we must expect substantial influence on the specification of syllabuses, design materials, and choice of method.” (Sinclair, 1985)

The last point Sinclair makes is, I think, as hugely important as it is minimally elaborated – for him it seems to follow as the logical consequence of the previous two points.  Sinclair argues that the work of the COBUILD team is the obvious application of the facts uncovered by concordancers: the COBUILD dictionary series draws on corpus-based research in order to better reflect real language use, the COBUILD Grammar “corrects” the previous impressionistic intuitions of pedagogic grammarians, and the COBUILD English coursebooks exemplify the methodology that a lexical syllabus implies.  Biber sees the teaching implications of corpus-based research as similarly obvious, and agrees with Sinclair that both grammar and vocabulary teaching must adjust to the new facts.


Willis (1990), drawing on the work of Sinclair (1987, 1991) and the COBUILD team (led for a while by Sinclair), outlines a lexical syllabus which he claims provides a “new approach to language teaching”.  Willis starts from the “contradiction” between a grammatical syllabus and a communicative methodology.  A grammar syllabus is form-focused and aims at the correct production of target forms, but real communication demands that learners use whatever language best achieves the desired outcome of the communicative activity.  There is, says Willis, a dichotomy in the language classroom between activities which focus on form and activities which focus on the outcome and the exchange of meaning.

Willis argues that the presentation methodology which regards the language learning process as one of “accumulated entities”, where learners gradually amass a sequence of parts, trivialises grammar – learners need insights into the underlying system of language.  The method (and the course books employed) oversimplify, and make it difficult for learners to move beyond these entities or packages towards important generalisations.  Willis cites the typical way in which the present simple tense (which is neither simple nor present) is presented.  Even if the issues were dealt with less simplistically, presentation of language forms does not provide enough input for learning a language.  A successful methodology must be based on use not usage, yet must also offer a focus on form, rather than be based on form and give some incidental focus on use.

Willis claims that the COBUILD English course embodies this view.  The course looks at how words are used in practice by using data produced with a concordancer which examined the COBUILD corpus of more than 20 million words in order to discover the frequency of English words and, as Willis puts it “to better examine various aspects of English grammar”.  Word frequency determines the contents of the courses.  The COBUILD English Course Level 1 starts with 700 words and Levels 2 and 3 go out to 1,500 then 2,500.  Tasks are designed that allow the learners to use language in communicative activities, but also to examine the language (the corpus) and generalise from it.  For Level 1 they created a corpus which contextualised the 700 words and their meanings and uses, and provided a range of activities aimed at using and exploring these words.  Willis argues that the lexical syllabus does not simply identify the commonest words, it focuses on commonest patterns too, and indicates how grammatical structures should be exemplified by emphasising the importance of natural language.
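To make the frequency criterion concrete, here is a minimal sketch of the first step a syllabus designer working in this spirit might take: rank the word forms in a corpus and keep the commonest N as a candidate core vocabulary. The file name is a placeholder and the cut-off of 700 simply echoes the figure Willis gives for Level 1; the actual COBUILD work involved lemmatisation, sense analysis and a great deal of editorial judgement, none of which is captured here.

```python
import re
from collections import Counter

def frequency_list(path, top_n=700):
    # Rank the word forms in a plain-text corpus by raw frequency and keep the top N.
    # This is only the crudest first step towards a frequency-based core vocabulary.
    with open(path, encoding="utf-8") as f:
        tokens = re.findall(r"[a-z']+", f.read().lower())
    return Counter(tokens).most_common(top_n)

if __name__ == "__main__":
    # "corpus.txt" is an illustrative placeholder for whatever corpus is being analysed.
    for rank, (word, freq) in enumerate(frequency_list("corpus.txt"), start=1):
        print(f"{rank:4d}  {word:<15} {freq}")
```

The interesting pedagogical questions, of course, begin where this list ends: what to do with the words, which of their patterns to teach, and whether frequency alone is a sound criterion in the first place.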


Then comes Lewis and his The Lexical Approach (1993).  Taking advantage of Nattinger and DeCarrico in particular, Lewis cobbled together a confused jumble of half-digested ideas into what, typically, he saw as an original work of genius which represented a giant leap forward for ELT methodology.  What the book actually offers is almost nothing original and just as little in terms of any coherent or cohesive ELT methodology. Unlike Willis, Lewis offers no proper syllabus, or any principled way of teaching the “chunks” which he claims are the secret to the English language.  Nor did Lewis pay heed to the growing research being done in SLA which was indicating that the most promising way to see SLA is as the development of the learner’s “interlanguage”, a term which was being increasingly developed into more and more powerful cognitive-based hypotheses and theories, all of which assume that a generative grammar is at work.


Discussion

So, what are we to make of all this?  First, we must be clear about the limitations of the kind of descriptions concordancers offer us of the language. It may help us to see these limitations if we take Biber’s claim that computational text analysis has provided better criteria for defining discourse complexity, thus demonstrating that the former “intuitive” criteria for discourse complexity are inadequate. Widdowson points out that the criteria Biber gives all relate to linguistic features and co-textual co-occurrences. “What is analyzed is text, not discourse” (Widdowson, 1993). Biber takes readability to be a matter of the formal complexity in the text itself, without dealing with how, as Widdowson puts it, “an appropriate discourse is realized from the text by reference to schematic knowledge, that is to say to established contextual constructs” (ibid). Adequate guidelines for the construction of reading materials need to take discourse into account, and it is not self-evident that the criteria for textual complexity suggested by Biber are relevant to reading. Moreover, since concordancing is limited to the analysis of text, since the language is abstracted from the conditions of use, it cannot reveal the discourse functions of textual forms.

Concordancing tells us a lot about text that is new and revealing, but we must not be blinded by it. Although corpus analysis provides a detailed profile of what people do with the language, it does not tell us everything about what people know. Chomsky, Quirk et al. (1972, 1985), and Greenbaum (1988) argue that we need to describe language not just in terms of the performed (as Sinclair, Biber, Willis, and Lewis suggest) but in terms of the possible. The implication of Sinclair and Biber’s argument is that what is not part of the corpus is not part of competence, and this is surely far too narrow a view, one which seems to hark back to the behaviourist approach. Surely Chomsky was right to insist that language is a cognitive process, and surely Hymes, in arguing for the need to broaden our view of competence, was not arguing that we look only at attested behaviour.

Externalised and Internalised Language

Widdowson (1991) uses Chomsky’s distinction between Externalized language (E-Language): a description of performance, the actualized instances of attested behaviour, and Internalized language (I-Language): competence as abstract knowledge or linguistic cognition, to suggest that we need to group the four aspects of Hymes’ communicative competence (possibility, feasibility, appropriateness and attestedness) into two sets. I-language studies are concerned with the first two of Hymes’ aspects, and E-language studies deal with the other two.  Discourse analysis deals with one E-linguistic aspect and corpus-based linguistics with the fourth.  The limitations of corpus-based research are immediately evident, and thus we should not restrict ourselves to its findings. As Greenbaum observes: “We cannot expect that a corpus, however large, will always display an adequate number of examples…. We cannot know that our sampling is sufficiently large or sufficiently representative to be confident that the absence or rarity of a feature is significant” (Greenbaum, 1988).  Significant, that is, of what users know as opposed to what they do.  Widdowson points out that in discourse analysis there is increasing recognition of the importance of participant rather than observer perspective. To the extent that those engaged in discourse analysis define observable data in terms of participant experience and recognise the psycho-sociological influences behind the observable behaviour, they too see the actual language as evidence for realities beyond it.

But how do we get at this I-Language, this linguistic cognition, without having to depend on the unreliable and unrepresentative intuitions of the analyst? While the description of E-language is based on empirical observation, it is obviously far more difficult to describe I-language, since one is forced to rely on introspection. Conceptual elicitation is one answer. Widdowson cites Rosch (1975), who devised a questionnaire to elicit from subjects the word which first sprang to mind as an example of a particular category. The results of this conceptual elicitation showed that subjects consistently chose the same hyponym for a particular category: given the superordinate “bird”, “robin” was elicited; the word “vegetable” consistently elicited “pea”, and so on. The results did not coincide with frequency profiles, and are evidence of a “mental lexicon” that concordancers cannot reach. In summary, the description of language that emerges from concordance-based text analysis has its limitations, as does the faulty way in which Sinclair, Biber, Lewis and others use the new findings of corpus-based research to argue for certain pedagogical prescriptions. Let’s take a look.


Descriptions and prescriptions

Quite apart from the question of the way in which we choose to describe language, and of the limitations of choosing a narrow view of attested behaviour which can tell us nothing directly about knowledge, there is the wider issue of what kinds of conclusions can be drawn from empirically attested data. The claim made by Biber, Sinclair and others is that, faced with all the new evidence, we must abandon our traditionally-held, intuitive beliefs about language, accept the evidence, and consequently change our description of the language, our language materials, and our language instruction too. Now, the argument goes, that we have the facts, we should describe and teach the facts (and only the facts) about English.

But, as Widdowson (1990) points out so succinctly, the relationship between the description of language and the prescription of language for pedagogical purposes “cannot be one of determinacy.”  This strikes me as so obvious that I am surprised that Sinclair, Biber and others seem not to have fully grasped it. No description has any necessary prescriptive implications: one cannot jump from statements about the world to judgements and recommendations for action as if the facts made the recommendations obvious and undeniable. Thus, descriptions of language cannot determine what a teacher does. Descriptions of language tell us about the destinations that language learners are travelling towards, but they do not provide any directions about how to get there.  Only prescriptions can do that.

While Sinclair is justified in expecting corpus-based research to influence syllabus design, there is no justification for the assumption that it must necessarily do so, and much less that such research should determine syllabus design. A case must be made for the approach which he seems to regard as somehow self-evident.  When Sinclair says that the categories and methods we use to describe English are not appropriate to the new material, we need to know by what criteria appropriateness is being judged.  Similarly, when Biber says “Consensus does not mean validity”, and when he claims that corpus-based research offers the possibility of “more effective and appropriate pedagogical applications”, we need to ask by what criteria (pedagogical presumably) validity, effectiveness and appropriateness are to be judged.  When he talks of data from frequency counts “establishing” the “inadequacy” of discourse complexity he is presumably once again referring to assumptions, criteria which are not made explicit.  When he suggests that the evidence of corpus-based research indicates that there is something special about the written mode, in that it enables a kind of linguistic expression not possible in speech, he is once again making an inadmissible conclusion.

It is tempting to stop here. Since Biber and Sinclair do not seem to appreciate the need to make a case for their approach, to lay bare the assumptions and beliefs which underlie their work, and which inform the way they both select and examine data, one might think that it is enough to bring this glaring omission to their attention. But, of course, some extremely valuable work has been done, the case for concordancing and corpus-based research does not have to be thrown out simply because it has not been properly argued, and we must look a little more closely at some of the issues raised.

Facts do not “support” prescriptions, but our view of language will influence our prescriptions about how to teach and learn it. If we view language as attested behaviour, we are more likely, as Willis does, to recommend that frequently attested items of lexis form the core vocabulary of a general English course. Willis does not claim that his approach to syllabus design is in any way “proved” by the facts, but he still takes a very narrow view. To return to the discussion above about Rosch’s “prototype words” (the mental lexicon), I do not think that such words should be ignored simply because they are not frequently attested, and it could well be argued that they should be one of the criteria for identifying a core vocabulary. Widdowson takes the case further. He suggests that Chomsky’s idea of “kernel sentences” indicates the possibility that there are also prototype sentences which have an intuitive role. They do not figure as high-frequency units in text, but they do figure in descriptive grammars, and their presence there can be said to be justified by their intuitive significance, their psychological reality, as prototypes. Furthermore, they are the stock in trade of language teaching. Teachers may all be wrong about the significance of such kernel sentences, but we cannot simply dismiss the possibility of their prescriptive value on the grounds that they do not occur frequently in electronically-readable corpora.

More evidence of the limitations of sticking to frequently attested language forms comes from the research which led to the specification of the core language to be included in Le Français Fondamental (Gougenheim et al. 1956). The research team began with frequency counts of actual language, but they felt that some words were still missing: French people had a knowledge of words which the researchers felt intuitively should be included despite their poor showing in performance. So the researchers carried out an exercise in conceptual elicitation. They identified categories like furniture, clothing and occupations, and asked thousands of school children which nouns they thought it would be most useful to know in these categories. Once again, the lists did not correspond to frequency counts, and gave rise to the idea of “disponibilité”, or availability. As Widdowson says, the difference between the French research and Rosch’s is that availability is a prescriptive criterion: the words are prescribed as useful not because they are frequently used but because they appear to be readily available in the minds of the users.

Valency

Widdowson (1990) suggests that there are more direct pedagogical criteria to consider than those of frequency and range of language use. In terms of the purpose of learning, he cites coverage as a criterion described by Mackay: “The coverage ... of an item is the number of things one can say with it. It can be measured by the number of things it can displace” (Mackay 1985). Most obviously, this criterion will prevail where the purpose of learning is to acquire a minimal productive competence across a limited range of predictable situations. The process version of coverage is what Widdowson calls valency – the potential of an item to generate further learning. He gives the example of the lexical item “bet” as described in the COBUILD dictionary (1987). Analysis reveals that the canonical meaning of the word, “to lay a wager”, is not as frequently attested as its informal occurrence as a modal marker, as in “I bet he’s late”. It does not follow, however, that the more frequent usage should be given pedagogical preference. First, the informal meaning tends to occur mainly in the environment of the first person singular and the present tense, and is idiomatic, so it is limited in its productive generality. Second, the modal meaning is derivable from the canonical lexical meaning but not the other way round. In this sense the canonical meaning has a greater valency and so constitutes a better learning investment. Widdowson proposes a general principle: high-valency items are to be taught so that high-frequency items can be more effectively learned.

Pedagogic prescription should, suggests Widdowson, specify a succession of prototypes – simplified versions of the language, each of which is a basis for later improved models.  The process of authentication through interim versions of the language has to be guided by other factors as well as those of frequency and range of actual use, factors to do with usefulness rather than use.  Words and structures might be identified as pedagogically core because they activate the learning process, even if their actual occurrence in contexts of use is slight.

It would seem then that while concordancer output gives us a clearer understanding of how language is put together in use (although it cannot reveal the discourse functions of any particular piece of text), it does not get us very far in our search for pedagogical prescriptions, and, indeed it can easily lead us astray.  Although I would agree largely with this conclusion, I think the case for using lexical phrases as a key element in language instruction is extremely strong; the work of Nattinger and DeCarrico strikes me as an important development which is both radical and far-reaching.  While Sinclair, Biber, Willis and others take too narrow a view of language competence, lexical phrases (more carefully described and better analysed units than earlier descriptions of formulaic language) occupy a crucial place in the continuum between grammatical rules and lexical items, and can therefore help to re-define language competence, and to identify pedagogically core parts of the language on which to base our instruction.


A new look at Communicative Competence: rescuing Nattinger and DeCarrico 

In Knowledge of Language and Ability for Use (1989), Widdowson, having argued that Chomsky’s and Hymes’ views of competence are not commensurate (since one is interested in an abstract system of rules, and the other in using language), suggests that there are eight, not four, aspects to Hymes’ competence: knowledge of each aspect, and ability in each one. He then reformulates these as grammatical competence (the parameter of possibility) and pragmatic competence (the rest), and characterises knowledge in terms of degrees of analysability, and ability in terms of accessibility. Although both analysability and accessibility are necessary components, analysability has its limits. Nattinger and DeCarrico (after Pawley and Syder) draw attention to lexical phrases which are subject to differing degrees of syntactic variation. It seems that a great deal of knowledge consists of these formulaic chunks, lexical units completely or partially assembled in readiness for use, and if this is true, then not all access is dependent on analysis. Gleason (1982) suggested that the importance of prefabricated routines, or “unopened packages”, in language acquisition and second language learning has yet to be recognised.

If we accept this view then communicative competence can be seen in a fresh way.  Widdowson (1989) says this: “Communicative competence is a matter of knowing a stock of partially pre-assembled patterns, formulaic frameworks, and a kit of rules, so to speak, and being able to apply the rules to make whatever adjustments are necessary according to contextual demands. Communicative competence is a matter of adaption, and rules are not generative but regulative and subservient”.  In a different text, Widdowson (1990) says “Competence consists of knowing how the scale of variability in the legitimate application of generative rules is applied – when analysis is called for and when it is not. Ignorance of the variable application of grammatical rules constitutes incompetence”.

Our criteria for pedagogical prescription do not have to change as a result of this new formulation of competence, but I think we are nearer to identifying pedagogically key units of language – parts of the language that activate the learning process. The suggestion is that grammar’s role is subservient to lexis, and this implies a radical shift in pedagogical focus.  If, as Widdowson thinks, we should provide patterns of lexical co-occurrence for rules to operate on so that they are suitably adjusted to the communicative purpose required of the context, then Nattinger and DeCarrico’s work, which identifies lexical phrases and then prescribes exposure to and practice of sequences of such phrases, can surely play a key role. They present a language teaching program based on the lexical phrase which leads students to use prefabricated language in a similar way as first language speakers do, and which they claim avoids the shortcomings of relying too heavily on either theories of linguistic competence on the one hand or theories of communicative competence on the other. “Though the focus is on appropriate language use, the analysis of regular rules of syntax is not neglected”  (Nattinger and DeCarrico, 1992).

Despite the criticisms I have made of some of the more strident claims made by researchers using concordancers, and despite the limitations of text analysis and of frequency as a pedagogical criterion, there is no doubt that corpus-based research, as done by the experts, is throwing valuable light on the way English and other languages are actually used. The new information can help build a better, more accurate, description of English, and can help teachers, materials writers, and learners escape from the intuitions and prejudices of previous “authorities”.  Sinclair, Biber, and others are right to challenge traditional descriptions of the language and the current consensus about what weight to give certain structures and certain meanings of lexical items. It is surely positive to see new dictionaries, grammars, and course books appearing which take the new findings into account.

Nor, in my opinion, is there much doubt that the work done by Pawley and Syder and by Nattinger and DeCarrico  is leading to important modifications in present views of the distinction between grammar and lexis.  We are re-appraising the role of formulaic language, and, I think, stumbling towards a view where grammar is seen as a kit of regulative rules which are variably applied to chunks of language in order to make whatever adjustments are necessary according to contextual demands.  That is a very dramatic paradigm shift indeed!

Note: References cited above can be found at the end of the page “* Concordancers” (see list on the right).