Hi

Harold R. Keables

This blog has two aims.

1. To provide those doing a postgraduate course in Applied Linguistics and TESOL with a forum where issues related to their studies are discussed and some extra materials are provided. It is completely independent and has no support from, or connection with, any university. Let me make these preliminary remarks:

Academics teach and do research. Most of them prefer research to teaching, and they haven’t been taught how to teach. So in tertiary education, teaching methodology matters little: it’s the student who counts. The students who go to the best universities are carefully selected, and a key criterion in the selection process is the student’s ability to study without spoon-feeding. A good student does her own studying and knows how to draw on the resources offered. When you sign up for a postgraduate course, know that you are in charge and that you, and you alone, will determine the outcome. Your tutor is an expert, not, usually, a teacher. Your job is to use your tutor’s expertise, which means asking the right questions. Don’t ask “What should I do?” or “Please suggest a topic”. Ask for comments on your own drafts; ask for guidance on reading; ask for clarification. Get into a dialogue with your tutor; shoot the breeze; get familiar; build a relationship. But remember: your tutor is your mentor in the Greek sense of the word, not your teacher.

2. To question the ELT Establishment

The increasing commercialisation of ELT and the corresponding weakening of genuinely educational concerns has resulted in most teachers being forced to teach in a way that shows scant regard for their worth, their training, their opinions, their job satisfaction, or the use of appropriate methods and materials. This is, in my opinion, a disgraceful state of affairs, and one which teachers need to become more aware of.

The biggest single obstacle to good ELT is the coursebook, which forces teachers to work within a framework where students are led through successive units of the book, spending too much time working on isolated linguistic structures and carefully-controlled vocabulary in a sequence which is externally predetermined and imposed on them by the textbook writer. These best-selling, globally-marketed coursebooks (and their attendant teacher books, workbooks, audio, video, multimedia and web-based material) have huge promotional budgets aimed at persuading stakeholders in the ELT business that they represent the best practical way to teach English as a second or foreign language. Part of this budget is spent on sponsoring teaching conferences like TESOL International, IATEFL and all the national conferences, where the stars of the ELT world strut their stuff and, loath to bite the hand that feeds them, refrain from any serious criticism of the current teaching orthodoxy neatly packaged into shiny coursebooks.

In the last 50 years, studies of SLA have provided supporting evidence for the theory that SLA is a process whereby the learner’s interlanguage (a dynamic, idiosyncratic, evolving linguistic system approximating to the target language) develops as a result of attempts to communicate in the target language. The research suggests that interlanguage development progresses in stages, and that it’s impossible to alter the order of those stages or to make learners skip them. Thus, teachability is constrained by learnability, and any coursebook-driven course which attempts to impose an external linguistic syllabus on learners is futile: learning happens in spite of, not because of, the course design.

So this blog sets out to question the establishment and the status quo by challenging the role of coursebooks, by being critical of the so-called experts and leaders of the ELT industry – the textbook writers, teacher trainers and examiners – and by promoting the ideas of all those who are trying to buck the trend.

Are we on the brink of a paradigm shift in ELT?


Kuhn used the term “paradigm shift” to challenge the account given by philosophers of science such as Popper of how scientific theories evolve and progress. Popper said that scientific progress was gradual and cumulative; Kuhn said it was sudden and revolutionary, involving paradigm shifts where one way of thinking was suddenly swept away and replaced by another. A paradigm shift involves a revolution, a transformation, a metamorphosis in the way we see something, and it has profound practical implications. Change begins with a change in awareness and perception. Our perception is heavily influenced by our past and by social conditioning, and most of the time we go along with the paradigm view / normal science / the status quo / the theory taught at MIT / the prevalent narrative. But there are revolutionary moments in history when we prove ourselves capable of transforming and transcending the prevailing paradigms which so affect our lives, and I wonder whether we are currently approaching a paradigm shift in ELT.

[New Yorker cartoon by Robert Leighton: “When I was your age, things were exactly the way they are now.”]

The present ELT paradigm has these characteristics:

  • Standard English is the subject taught.
  • Vocabulary and grammar are the subject matter of EFL / ESL.
  • SLA involves learning the grammar and lexicon of the language and practicing the 4 skills.
  • A product syllabus is used. This focuses on what is to be taught, and, to make the “what” manageable, chops language into discrete linguistic items which are presented and practiced separately, step by step, in a cumulative way.
  • A coursebook is used. The coursebook is the most important element determining the course. It’s usually grammar-based and presents the chopped up bits of language progressively. Other material and activities aim at practicing the 4 skills.
  • The teacher implements the syllabus, using the coursebook. The teacher makes all day-to-day decisions affecting its implementation.
  • The students are not consulted about the syllabus and have only a small say in its implementation.
  • Assessment is in terms of achievement or mastery, using external tests and exams.


The rival view of ELT has very different characteristics:

  • Standard English is one variety of English; it is not the subject taught.
  • Texts (discourse) are the subject matter of EFL / ESL.
  • SLA involves the socially-mediated development of interlanguage.
  • A process syllabus is used. This focuses on how the language is to be learned. There’s no pre-selection or arrangement of items; objectives are determined by a process of negotiation between teacher and learners as a course evolves. The syllabus is thus internal to the learner, negotiated between learners and teacher as joint decision makers, and emphasises the process of learning rather than the subject matter.
  • No coursebook is used.
  • The teacher implements the evolving syllabus in consultation with the students.
  • The students participate in decision-making about course objectives, content, activities and assessment.
  • Assessment is in terms of low-stakes formative assessment whose purpose is “to act as a way of providing individual learners with feedback that helps them to improve in an ongoing cycle of teaching and learning” (Rea-Dickens, 2001).

If this rival view were to be widely adopted in ELT it would certainly constitute a revolution, a complete paradigm shift. But will it happen? When one looks at the arguments for and against the two views of ELT sketched above, it’s difficult to escape the feeling that the current paradigm is becoming less and less defensible, in the light of increasing knowledge of the SLA process; the poor results of classroom-based ELT courses; poor morale among teachers (apart from suffering from bad working conditions and pay, most teachers are denied the freedom to teach as they’d like to); and the increasing viability of alternatives.

Doesn’t the alternative seem so much more appealing? What’s better, that course content grows out of the experiences of the learners and is based on topics which reflect their reality, or that it derives from a coursebook made in London or New York? What’s better, that conversational dialogue is the essential component of the course, or that the teacher talks most of the time, gives presentations about English and leads the learners through prefabricated activities? What’s better, that the teacher follows orders and carries out a plan made by somebody in London or New York, or that the teacher is given permission to build the course as it goes along, involving learners in all the important decisions concerning objectives, content, activities and assessment? From both the learners’ and the teachers’ point of view, which approach is likely to lead to higher levels of interest, motivation, energy, engagement and satisfaction? Which approach is likely to lead to better results?

And don’t the replies that promoters of the current paradigm make to their critics add further weight to the alternative argument? I’ve discussed elsewhere how some of the leading lights in ELT respond to criticisms of the current paradigm, and I think it’s fair to say that none of them has offered any proper defence of it. The gist of their argument is that alternatives are “unrealistic” and that ELT practice under the present paradigm is slowly but surely improving. As Harmer puts it, unafraid as always of using a handy cliché, “tests are getting better all the time”.


Another supporter of the present paradigm, Jim Scrivener, shows how little importance he gives to any real examination of alternatives. Scrivener simply assumes that teachers must run the show and that “Made in the UK (or USA)” coursebooks and test materials should determine course objectives and content. Rather than question these two fundamental assumptions, Scrivener takes them as given and thinks exclusively in terms of doing the same thing in a more carefully-considered way. In Scrivener’s scheme of things, everything in the ELT world stays the same, but the cobwebs of complacency are swept away and everybody demands high (whatever that means). So teachers are exhorted to up their game: to use coursebooks more cleverly, to check comprehension more comprehensively, to practice grammar more perspicaciously, to recycle vocabulary more robustly, and so on, but never to think outside the long-established framework of a teacher-led, coursebook-driven course of English instruction. Recently Scrivener commented that a good coursebook is “a brilliant exploitable all-bound-up-in-one-package resource.” No attempt is made to argue for the place of coursebooks in ELT, but Scrivener does take the opportunity to insist that teachers be trained in how to use coursebooks. Some teachers find reading pages of coursebooks (in the sense of appreciating the links between different parts of the page and pages) “baffling”, and so they need to be shown how to “swim” in the coursebook, how to take advantage of all that it has to offer. Apart from giving the impression that he thinks he’s very smart and that most teachers are very dumb, Scrivener gives more evidence of the limits of his vision: nowhere does he discuss training teachers how to do without a coursebook, for example. After all, why on earth would anybody want to do that?

[New Yorker cartoon by Frank Modell: “I don’t get it.”]

In the same discussion of coursebooks on Steve Brown’s blog, Scott Thornbury eloquently summarized the case against them. I cut and pasted his summary on this blog, leading Hugh Dellar to tweet “Shocking disdain for the craft of writers & editors, as well as the vast majority of teachers from @thornburyscott.” This is typical of Dellar’s response to criticism of coursebooks in two respects: first, it is badly written, and second, it takes offence rather than offering any evidence or arguments to the contrary. Dellar has made a number of comments on my criticisms of the dominant role of coursebooks in current ELT, but none of them offers any argument to refute the claim that coursebooks are based on false assumptions and that a process syllabus better respects research findings in SLA and represents a better model of education. In all the recent discussions of teaching methodology, the use of coursebooks, the design and use of tests, teacher training, and so on, both in the big conferences and in blogs, nobody who defends the current paradigm of ELT has properly addressed the arguments above, or the arguments for an alternative offered by Michael Breen, Chris Candlin, John Fanselow, Mike Long, Rose Bard, Graham Crookes, Scott Thornbury, Luke Meddings, and many others. These arguments are met with a barrage of fallacious arguments and very little else.

[Cartoon by Robert Crumb]

While I believe that those who fight against the current paradigm have the more persuasive arguments, not to mention the more exciting agenda, I unfortunately don’t believe that we’re on the brink of a paradigm shift in ELT. The status quo is too strong, and the business interests that support and sustain this status quo and its institutions are too powerful. The alternative view of ELT described here is essentially a left-wing view which is just too democratic to stand a chance in today’s world. I suppose the best that those of us who believe in an alternative can do is to argue our case and make our voices heard. Whether or not to compromise is another important issue. I was interested to see Luke Meddings propose a 50-50 deal recently: “OK”, he suggested, “just put the book and the tests away for 50% of the time!” I don’t feel comfortable with that, but he might well be on the right track. So anyway, “Keep on truckin’!” as the great Robert Crumb (see graphic) advised.

Test Validity


Lado (1961) succinctly summarises validity in this way: “Does a test measure what it is supposed to measure? If it does, it is valid.” Six years earlier, Cronbach and Meehl had introduced the ‘trinitarian’ view of validity which was dominant until the 1990s, and which Harmer tried, unsuccessfully, to count off on three fingers in his 2015 TOBELTA Online Conference talk. Validity was seen as comprising content validity, criterion-related validity (the one which eluded Harmer), and construct validity. Messick (1989) challenged this view by drawing attention to the importance of HOW a test is used, thus shifting the perspective on validity from a property of a test to a property of test score interpretation. If we follow Messick, we see validity as a judgement on the adequacy and appropriateness of inferences and actions based on test scores, and this leads to more attention being given to the social consequences of a test. Washback, ethics, administration procedures, the test environment, test-taker characteristics (emotional state, concentration, familiarity with the test task) and, perhaps most importantly, the sorting and gate-keeping roles of a test are all aspects of validity. Furthermore, score interpretation involves questions of values, and thus the assumption that a test elicits the communicative ability of the test-taker and then arrives at a “true”, objective assessment of that ability ignores the fact that all assessment is value-laden and ‘truth’ is a relative concept.

In light of all this, we must take a critical look at the use – and misuse – of tests. Large-scale tests are often used by the state or other authorities to ration limited resources and opportunities, and such tests are currently being used all over the world to achieve a wide range of political goals, including curbing immigration and promoting private education. Shohamy (2001) argues that “centralized systems” use externally imposed, standardized, one-shot, high-stakes tests to control educational systems by defining what kind of knowledge is prestigious. Fulcher goes further and suggests that we need to understand the political philosophies which lead to centralised or decentralised types of government, and their associated ways of using tests as policy tools. Since political philosophy is concerned with the balance between the state and the individual, Fulcher argues that “depending on where a political philosophy stands on the cline between the two, we can identify the kind of government likely to be favored, and the kind of society valued. It is my contention that it also explains (and predicts) the uses of tests that we are likely to find” (Fulcher, 2009, p. 5).


Fulcher defines “collectivist societies” as “those in which the identity, life, and value of the individual is determined by membership of the state and its institutions. Decisions are made to benefit the collective and its survival rather than its individual members.” In contrast, “modern individualism” starts from the claim that, as Locke put it, “men are by nature all free, equal, and independent”, that “no one can be subjected to the political power of another without his own consent”, and that there are limits upon the authority of the state, such that laws apply to all equally, that they protect the rights of individuals, and that laws can only be made by the legislative, which must be democratically elected. I’m not entirely happy with Fulcher’s use of these two “isms”, but it’s clear that they don’t equate with left- and right-wing politics and, anyway, they can certainly be used to examine test use.

Collectivism and Testing

Fulcher argues that in societies that tend towards collectivism, the centralization of both educational systems and testing is a priority. Modern collectives use testing to control the educational system, to select and allocate individuals to roles or tasks that benefit the collective, and to ensure uniformity and standardization. While we might think immediately of countries like North Korea or China in this regard, Fulcher argues that established democracies are not immune from “neocollectivism”: we need look no further than the UK.

Examples of centrally controlled standards-based education systems, with a high level of control over teacher training and school learning, are not hard to find (Brindley, 2008). The clearest example is that of the United Kingdom, which has systematically introduced standards-based testing in an accountability framework that ensures total state control over the national curriculum and national tests, as well as teacher training; even educational staff are rewarded or disciplined based on national league tables (Mansell, 2007). (Fulcher, 2009, p. 7)

Fulcher argues that these hyperaccountability policies are pursued by the state in an attempt to improve performance in the global marketplace: “the educational system is reengineered to deliver the kinds of people who will serve the perceived needs of the economy” (Fulcher, 2009, p. 7).

Fulcher goes on to give the Common European Framework of Reference (CEFR) as an example of neocollectivism at the supranational level, claiming that the system is used to control language learning so as to deal with Europe’s weakened position in global markets. Fulcher claims that the CEFR is being used “as a tool for designing curricula, reporting both standards and outcomes on its scales, and for the recognition of language qualifications through linking test scores to levels on the CEFR scales.” He goes on:

We now see stronger evidence for more intrusive collectivist policy emerging in calls for claims of linkage to the CEFR to be approved by a central body (Alderson, 2007), and the removal of the principle of subsidiarity from language education in Europe (Bonnet, 2007). If realized, these changes would lead to unaccountable centralized control of education and qualification recognition across the continent. (Fulcher, 2009, p. 8)


Individualism and Testing

Enlightenment individualism claims “the right of each person to be free from control or oppression from a state that acquires too much power and begins to control the lives of citizens” (Fulcher, 2009, p. 9). Fulcher is quick to point out that “this is not a right-wing position” and that “attempts to summarily dismiss individualistic critiques of test use as right-wing reactionism by labeling them “Eurosceptic” (Alderson, 2007, p. 660) … fail to engage with the social consequences of test use and misuse” (p. 10).

In societies that lean towards Fulcher’s individualistic political philosophy, the state has little say in what is taught, or how it’s taught, and the role of tests is to promote personal growth, or to provide individuals with new learning opportunities. Fulcher gives these examples of the uses of tests which are in keeping with individualism:

  • The original Binet tests, designed for the sole purpose of identifying children in need of additional help.
  • Diagnostic and classroom testing, loosely defined as “low-stakes formative assessment”. “Its purpose is to act as a way of providing individual learners with feedback that helps them to improve in an ongoing cycle of teaching and learning (Rea-Dickens, 2001). In such a context Dewey’s notion of personal growth as a validity criterion is echoed by current researchers, such as Moss (2003)” (Fulcher, 2009, p. 11).
  • Dynamic assessment. “In DA [dynamic assessment], assessment and instruction are a single activity that seeks to simultaneously diagnose and promote learner development by offering learners mediation, a qualitatively different form of support from feedback” (Lantolf & Poehner, 2008a, p. 273).

According to Fulcher, the general characteristics of this “individualistic paradigm” are:

  • Classroom assessment is used to help individuals to develop their own potential.
  • Large-scale, high-stakes tests are used to ensure that individuals acquire the key knowledge and skills they need to innovate in their own lives and participate in democratic societies.
  • Large-scale, high-stakes tests can also provide access to employment through the assessment of critical skills, where practicing without those skills would be detrimental to others.
  • Validity is assessed in terms of the success in helping individuals to achieve their goals and develop necessary skills.
  • External systems are never imposed upon teachers.
  • Teachers are involved in defining the knowledge and skills to be taught and assessed, or design their own assessments as part of the learning process.
  • One of the criteria for success is the empowerment of professional educators to make their own judgments and decisions in their own contexts of work.


Large-scale Testing versus Classroom Assessment

In my post about Harmer’s talks on testing, I said that classroom teaching should be 100% test-free, but that there was, surely, some place for testing. When I said that, I had in mind Fulcher’s distinction between the “collectivist” uses of standardised large-scale tests and “individualist” classroom assessment. I think there is a place for standardised large-scale tests within the constraints of Fulcher’s individualistic paradigm, when they’re used as an index of proficiency and are intended to give test takers the opportunity to demonstrate their mastery of a range of skills and abilities so as to gain access to further education, jobs and other opportunities. Likewise, from the same perspective, I think classroom assessment is fine when it is used to make decisions about learning and teaching which result in further language proficiency. Standardised large-scale tests should not be used by the state or other authorities to carry out political objectives, and should not influence normal language classroom practice, although, in my opinion, there’s a legitimate place for well-defined exam preparation courses.

The fundamental distinction I want to draw between standardised tests and classroom assessment is the one Fulcher makes between the uses to which the two are put. As a result of these different uses, while standardised tests must be fair to all who take them, classroom assessment need not concern itself with fairness, but can instead concentrate on further growth. While collaboration in a standardised test is labelled ‘cheating’, in the classroom it is valued and praised. In standardised tests the score users are concerned with how meaningful the score is beyond the specific context that generated it. Thus, score reliability (dependent on consistency of measurement, discrimination between test takers, the length of the test, and the homogeneity of what is tested) is of prime importance. But in a learning environment like the language classroom, we value divergent and conflicting opinion, and we often encourage it through dialogue and debate. “The only meaning we could ascribe to ‘reliability’ would be the extent to which the decisions we make for future growth are more appropriate than inappropriate” (Fulcher and Davidson, 2007, p. 7).
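
To give one concrete illustration of these dependencies (the gloss is mine, not Fulcher’s): the length dependence of reliability is classically captured by the Spearman-Brown formula,

$$\rho_k = \frac{k\,\rho_1}{1 + (k-1)\,\rho_1}$$

where $\rho_1$ is the reliability of the original test and $k$ is the factor by which its length is increased. So a test with reliability 0.60, doubled in length with comparable items ($k = 2$), is predicted to reach $1.2/1.6 = 0.75$. Nothing in this arithmetic, note, says anything about growth, which is precisely the point of the contrast being drawn here.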


Conclusion

In the 2015 TOBELTA Online Conference, Luke Meddings reiterated his call to give tests a rest, supporting his argument with a few not particularly well-articulated, but nevertheless powerful, objections to the over-dominant role that tests play in so many ELT environments. Jeremy Harmer’s response to Meddings was so poor that it provoked me to write a review of it, which in turn provoked Scott Thornbury to say that I should explain my own view. By attempting a brief summary of Fulcher’s views on these two issues, views with which I completely agree, I hope I’ve complied.

One question remains, and that’s the one Rose Bard raised concerning the Pearson Education company. In his IATEFL talk at Harrogate, Harmer said that tests were getting much better and he called the Pearson test of Academic English “bloody wonderful”, citing the “massive research” they’d done in support of this view. Rose begs to differ, and explains why in comments you can find under the now “stripped” post on Harmer. I think this deserves separate treatment, and I invite everybody to help me build a file on Pearson Education, prior to discussing their contribution to language testing and to ELT.

References

See Fulcher, G. (2009) Test Use and Political Philosophy. Annual Review of Applied Linguistics 29, 3–20 for all references except:

Fulcher, G. and Davidson, F. (2007) Tests in Life and Learning: A deathly dialogue. Educational Philosophy and Theory, 40, 3, 407-417.

Harmer on Testing (A PC Version of 2 Previous Posts)

I’ve revised the 2 posts on Harmer’s presentations, trying to avoid anything that can be seen as a personal attack.

In this post, I’d like to suggest that Jeremy Harmer’s public pronouncements on testing fail to say anything new or interesting and demonstrate a regrettable lack of knowledge of the matters discussed.


First Presentation

At the 2015 IATEFL conference, Harmer gave a talk which you can see by clicking its title here: An uncertain and approximate business? Why teachers should love testing. Harmer’s basic thesis is that teachers should love testing because it’s a necessary part of their job, but it’s an opinion that’s brashly asserted rather than a proposition that’s reasonably argued. Harmer begins by listing objections to testing:

  • Tests don’t measure creativity.
  • Chomsky says “testing is an anathema.”
  • Some people on Facebook don’t like testing.
  • Testing 4-year-olds is weird.
  • Testing is only a snapshot.
  • Some people are good at testing, some aren’t.

Note that none of these points is developed and that no coherent argument is attempted.

Harmer then gives reasons why teachers should love testing:

  • He got a Grade 1 in playing the tuba because there was a test, and he performed badly in a concert because there wasn’t a test. Testing is thus a powerful motivator.
  • Neurosurgeons and pilots must be tested. So we need tests.
  • Tests tell us where students are. “A test, if it’s well done, will tell you how well your students have done.”
  • Tests are getting better. “The Pearson test of academic English is bloody wonderful. I’m saying that because I believe it, not just because they pay me.” The designers claim that their speech-recognition software evaluates speech “as reliably and accurately as any human being can. And I have no reason to doubt that, because the research behind it is er.., er.., massive.”
  • Lots of tests are bad. If you want to change testing you can moan or you can do something; so learn about tests and do something.

I suggest that this talk makes no worthwhile contribution to our understanding of language testing and that we should expect more than this from a presentation held at prime time in the biggest room in the entire conference centre, and streamed live on the conference website.


Second Presentation

Harmer’s second presentation was a videoconference given as part of the 2015 TOBELTA Online Conference. This is something of a volte-face, since here Harmer asks “Should teachers love tests or hate them?” and begins by confiding that the question is so knotty that it drives him “to schizophrenia”. Harmer devotes the first 20 minutes of his talk to saying that while he agrees with Luke Meddings that testing is badly affected by big business, and that the commodification of language is a bad thing, he still thinks that neurosurgeons and pilots should be properly assessed. Harmer spends the rest of the hour variously stating the view that teachers need to become “test literate” experts in the field of testing. At one point Harmer says that teachers need to know about concepts of validity, reliability, and test item types, and at another point he says that knowledge of the two “profound concepts” of content validity and construct validity is vital if teachers are to “get inside the test.” One other point that can be identified in the talk is that teachers and students should explore testing together. Students should be asked what questions they would include in a test, and students and teachers should “discuss together what it is they need to do and want to do with the full understanding of how a test works.”

Harmer concludes:

How do you stop a huge corporation dominating the testing world? How do you stop tests being designed that are absurd and ridiculous? And, guess what? I have no easy answer to that… but I know perfectly well that there’s no merit in, or virtue in complaining about this in private, and, by the way, I say this absolutely genuinely, the reason why listening to Luke and others is so important is that it was not a private event, it was a public event and the more of us who are public about what we think, the greater the opportunity is that, er, things might change.

I suggest that we should expect more than these empty words from an invited speaker at an international conference. The presentation is poorly-structured, seriously lacking in coherence and cohesion, and very low in substantial content. Watching the video, it becomes clear that Harmer doesn’t have a good grasp of even the basic vocabulary of testing, and that he’s unable to offer anything informative or well-considered to a discussion of the uses and abuses of language testing.


Harmer’s Website

Finally, Harmer’s website offers some thoughts on testing in the post Testophile or Testophobe? I leave it to the reader to decide on its merits.


Conclusion

To seriously address the question of the pros and cons of language testing, we have to look not just at how tests are designed, but at how they’re used, an area which Harmer hardly mentions. I would argue that classroom teaching should be 100% test-free, while the use of large-scale tests should be carefully restricted. As Fulcher (2009) says, large-scale testing is a social tool used to ration limited resources and opportunities, and it’s currently being used “to carry a larger social burden than it can reasonably bear.” It should not, for example, as Fulcher (2011) argues, be used to implement immigration policies, to evaluate teachers, or to rank order schools. A place remains for testing, of course, but, on the basis of his latest offerings, Harmer is unlikely to be of much help in deciding what that place might be.

References

The two references to Fulcher can be found on Glenn Fulcher’s excellent website here: http://languagetesting.info/gf/glennfulcher.php Scroll down 4 pages till you come to “Selected Papers”. I particularly recommend his 2009 article “Test use and political philosophy”, which you can download from his website. At the end of the article, Fulcher proposes an “effect-driven test architecture” which he hopes can serve “as a method for testers to proscribe unintended uses of their tests.”

Scott Thornbury’s Definitive (200-Word) Dismissal of Coursebooks

I can’t resist cutting and pasting Scott’s comment below, which I’ve just read on Steve Brown’s blog. It’s brilliantly concise and “bang on”, as I think they say in Oz land.


“If it’s syllabuses that teachers want, these can be fabricated out of existing coursebook syllabuses and printed on a sheet of A4. No violation of copyright is involved since all coursebook syllabuses are clones of one another anyway (and will be even more so once the English Profile checklist is mandated). And if it’s a semantic or functional or task-based syllabus they want, they will have to design it themselves anyway (but the exercise could do wonders for in-service development and staff morale).

If it’s texts that teachers want, they need only do what coursebook writers do anyway: trawl the internet. At least the texts that they plunder themselves are likely to be more up-to-date than those in even a recently-published coursebook, and can be selected to match their learners’ needs and interests.

If it’s activities the teachers want, there are any number of excellent resource books available, and a school’s materials budget might be better spent on the complete Cambridge Handbooks series (I declare an interest) than on a truckload of Headway.

Syllabus. Texts. Activities. Is there anything else a coursebook offers? Comfort. Complacency. Conformity. Professional atrophy. Institutional malaise. Student boredom. Slow death by mcnuggets.”

Scott Thornbury, July 2015, commenting on Steve Brown’s post “Concerning Coursebooks” 

Chomsky’s Critics 2: Elizabeth Bates


Elizabeth Bates (1947–2003) was a brilliant scholar, perhaps best known for her work with Brian MacWhinney on the Competition Model and connectionism. In her often outspoken work, Bates challenges the modular theory of mind and, more specifically, criticises the nativists’ use of accounts of “language savants” and of people with cognitive or language impairments to support their theory.  Specifically, in her review of Smith and Tsimpli’s The mind of a savant, Bates (2000) challenges the authors’ conclusions about Christopher, the savant in question, and, along the way, challenges the two main arguments supporting the UG “ideology”, as she calls it: the existence of universal properties of language, and the poverty of the stimulus.

First, the existence of language universals does not provide compelling evidence for the innateness of language, because such universals could arise for a variety of reasons that are not specific to language itself (e.g., universal properties of cognition, memory, perception, and attention).  (Bates, 2000: 5)

Bates, following Halliday, gives the analogy of eating food with one’s hands (with or without tools like a fork or chopsticks), which can be said to be universal. Rather than posit “an innate hand-feeding module, subserved by a hand-feeding gene”, a simpler explanation is that, given the structure of the human hand, the position of the mouth, and the nature of the food we eat, this is the best solution to the problem.

In the same vein, we may view language as the solution (or class of solutions) to a difficult and idiosyncratic problem: how to map a rich high-dimensional meaning space onto a low-dimensional channel under heavy information-processing constraints, guaranteeing that the sender and the receiver of the message will end up with approximately the same high-dimensional meaning state.  Given the size and complexity of this constraint satisfaction problem, the class of solutions may be very small, and (unlike the hand-feeding example) not at all transparent from an a priori examination of the problem itself  (Bates, 2000: 5).

Bates gives other examples to support her argument that solutions to particular problems of perception and cognition often evolve in an ad hoc way, and that there is no need to jump to the convenient conclusion that the problem was solved by nature.  As she says, “That which is inevitable does not have to be innate!” (Bates, 2000: 6)

Bates sees language as consisting of a network, or set of networks, and she was one of the first to begin work on a connectionist model, known now as the Competition Model. She’s refreshingly frank in recognising that neural network simulations of learning are still in their infancy, and that it’s still not clear how much of human language learning such systems will be able to capture. Nevertheless, she says, the neural network systems which have already been constructed are able to generalise beyond the data and recover from error. “The point is, simply,” says Bates, “that the case for the unlearnability of language has not been settled one way or the other” (Bates, 2000: 6).

Bates goes on to say that when the nativists point to the “long list of detailed and idiosyncratic properties” described by UG, and ask how these could possibly have been learned, this begs the question of whether UG is a correct description of the human language faculty.  Bates paraphrases their argument as follows:

  1. English has property P.
  2. UG describes this property of English with Construct P’.
  3. Children who are exposed to English eventually display the ability to comprehend and produce English sentences containing property P.
  4. Therefore English children can be said to know Construct P’.

Bates comments:

There is, of course, another possibility: Children derive Property P from the input, and Construct P’ has nothing to do with it. (Bates, 2000: 6)

An important criticism raised by many, and taken up by Bates, against Chomsky’s theory is that it is difficult to test. In principle, one of the strong points of UG is precisely its empirical testability – find a natural language where the description does not fit, or find a mature user of a natural language who judges an ill-formed sentence to be grammatical, and you have counter-evidence. However, Bates argues that the introduction of parameters and parameter settings “serve to insulate UG from a rigorous empirical test.” In the case of binary universals (e.g., the Null Subject Parameter), any language either will or will not display them; they “exhaust the set of logical possibilities and cannot be disproven.” Other universals are allowed to be silent or unexpressed if a language does not offer the features to which these universals apply. For example, universal constraints on inflectional morphology cannot be applied in Chinese, since Chinese has no inflectional morphology. Rather than allow Chinese to serve as a counter-example to the universal, the apparent anomaly is resolved by saying that the universal is present but silent. Bates comments: “It is difficult to disprove a theory that permits invisible entities with no causal consequences.”


Discussion

1. Poverty of the Stimulus

Many of the criticisms made by Sampson and Bates do not seem to me to be well-founded.  While Bates is obviously correct to say that language universals could arise for a variety of reasons that are not specific to language itself, she provides no evidence against Chomsky’s claims. To say that “the case for the unlearnability of language has not been settled” amounts to the admission that no damning evidence has yet been found against the poverty of the stimulus argument, and, of course, such an argument can never be “proved”.

In general, to suggest that learning a language is just one more problem-solving task that the general learning machinery of the brain takes care of ignores all the empirical evidence of those adults who attempt and fail to learn a second language, and the evidence of atypical populations who successfully learn their L1.  Despite Bates’ careful and convincing unpicking of the more strident claims made by nativists in their accounts of atypical populations, it’s hard to explain the cases of those with impaired general intelligence who have exceptional linguistic ability (see Smith, 1999: 24), or the cases of those with normal intelligence who, after a stroke, lose their language ability while retaining other intellectual functions (see Smith 1999: 24-29), if language learning is not in fact localised.

Turning to Sampson, when he challenges Chomsky’s poverty of the stimulus argument by saying that many children have in fact been subjected to input like Blake’s Tyger poem, he ignores the obvious fact that many children have not, and when he says that children need input of yes/no questions in order to learn how to form them, nobody would disagree; the question remains of how the child also learns about aspects of the grammar that are not present in the input. In my recent discussion with Scott about the poverty of the stimulus argument, he claimed, as does Sampson, that “everything the child needs” is, in fact, present in the input, and thus no resort to nativist arguments of modular mind, innate knowledge, the LAD, or any of that, is necessary. While Sampson attempts, bizarrely and without success, to use Popper’s arguments for progress in science through conjectures and refutations as a model for language acquisition, I think Scott was relying more on the kind of emergentist theory of learning that Bates has promoted. But, in my opinion, only Bates shows any appreciation for just how hard it is to do without any appeal to innateness. Let’s take a quick look.


Nativism vs. Emergentism

Gregg (2003) highlights the differences between the two approaches. On the one hand, he says, we have Chomsky’s theory, which posits a rich, innate representational system specific to the language faculty, and non-associative mechanisms, as well as associative ones, for bringing that system to bear on input to create a grammar. On the other hand, we have the emergentist position, which denies both the innateness of linguistic representations and the domain-specificity of language learning mechanisms.

Starting from the premise that items in the mind get there through experience, emergentists adopt a form of associationism and argue that items that go together in experience will go together in thought. If two items are paired with sufficient frequency in the environment, they will go together in the mind.  In this way we learn that milk is white, that -ed is the past tense marker for English verbs, and so on. Associationism shares the general empiricist view that complex ideas are constructed from simple “ideas”, which in turn are derived from sensations caused by interaction with the outside world. Gregg (2003) acknowledges that these days one certainly can model associative learning processes with connectionist networks, but he highlights the severe limitations of connectionist models by examining the Ellis and Schmidt model (see Gregg, 2003: 58-66) in order to emphasise just how little the model has learned and how much is left unexplained.  Re-reading the 2003 article makes me wonder if Scott and others who dismiss innateness as an explanation appreciate the sheer implausibility of a project which does without it. How can emergentists seriously propose that the complexity of language emerges from simple cognitive processes being exposed to frequently co-occurring items in the environment?
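
To make the associationist idea concrete, here is a deliberately minimal toy learner, written by me purely for illustration; it is not the Ellis and Schmidt model, just a sketch of frequency-driven learning with no innate linguistic representations:

```python
# A toy associative learner: it induces past-tense behaviour purely from
# the frequency of surface patterns in the (stem, past form) pairs it hears.
from collections import Counter

def surface_pattern(stem: str, past: str) -> str:
    """Describe the stem -> past mapping as a crude surface pattern."""
    if past == stem + "ed":
        return "+ed"
    if past == stem + "d":
        return "+d"
    return "irregular:" + past

def train(pairs):
    """Just count the patterns -- frequency is the only 'knowledge' here."""
    return Counter(surface_pattern(s, p) for s, p in pairs)

def predict(stem: str, counts: Counter) -> str:
    """Apply whichever pattern has been heard most often."""
    best = counts.most_common(1)[0][0]
    return stem + best[1:] if best.startswith("+") else stem

heard = [("walk", "walked"), ("jump", "jumped"), ("play", "played"),
         ("love", "loved"), ("go", "went")]
counts = train(heard)
print(predict("talk", counts))  # 'talked' -- a sensible generalisation
print(predict("sing", counts))  # 'singed' -- frequency-driven overgeneralisation
```

The sketch does capture frequency effects and overgeneralisation, but notice what it leaves untouched: it says nothing about knowledge with zero frequency in the input, or about which form-function pairings are impossible, which is exactly the gap Gregg identifies.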

wellsartout

And so we return to the root of the problem of any empiricist account: the poverty of the stimulus argument.  Emergentists, by adopting an associative learning model and an empiricist epistemology, where some kind of innate architecture is allowed, but not innate knowledge, and certainly not innate linguistic representations, have a very difficult job explaining how children come to have the linguistic knowledge they do. They haven’t managed to explain how general conceptual representations acting on stimuli from the environment produce the representational system of language that children demonstrate, or to explain how, as Eubank and Gregg put it, “children know which form-function pairings are possible in human-language grammars and which are not, regardless of exposure” (Eubank and Gregg, 2002: 238). Neither have emergentists so far dealt with “knowledge that comes about in the absence of exposure (i.e., a frequency of zero) including knowledge of what is not possible” (Eubank and Gregg, 2002: 238).

I gave Vivian Cook’s version of the PoS argument in Part 1, but let me here give Gregg’s summary of Laurence and Margolis’ (2001: 221) “lucid formulation”, whose bare logical skeleton I set out after the list:

  1. An indefinite number of alternative sets of principles are consistent with the regularities found in the primary linguistic data.
  2. The correct set of principles need not be (and typically is not) in any pretheoretic sense simpler or more natural than the alternatives.
  3. The data that would be needed for choosing among those sets of principles are in many cases not the sort of data that are available to an empiricist learner.
  4. So if children were empiricist learners they could not reliably arrive at the correct grammar for their language.
  5. Children do reliably arrive at the correct grammar for their language.
  6. Therefore children are not empiricist learners (Gregg, 2003: 48).
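
Stripped of detail (this rendering is mine, not Gregg’s), steps 4-6 form a straightforward modus tollens. Writing $E$ for “children are empiricist learners” and $R$ for “children reliably arrive at the correct grammar”, the skeleton is:

$$E \rightarrow \neg R, \qquad R, \qquad \therefore\ \neg E$$

The deductive step is trivial; the real work is done by premises 1-3, which is exactly where critics like Sampson direct their fire.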

To the extent that the emergentists insist on a strict empiricist epistemology, they’ll find it extremely difficult to provide any causal explanation of language acquisition, or, more relevant to us, of SLA. Combining observed frequency effects with the power law of practice, for example, and thus explaining acquisition order by appealing to frequency in the input, doesn’t go far in explaining the acquisition process itself.  What role do frequency effects play, and how do they interact with other aspects of the SLA process?  In other words, we need to know how frequency effects fit into a theory of SLA, because frequency and the power law of practice don’t provide a sufficient theoretical framework in themselves. Neither does connectionism; as Gregg points out, “connectionism itself is not a theory… It is a method, and one that in principle is neutral as to the kind of theory to which it is applied” (Gregg, 2003: 55).
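
For readers who haven’t met it (the gloss is mine, not Gregg’s), the power law of practice says that performance time falls as a power function of the amount of practice:

$$T_N = B\,N^{-\alpha}$$

where $T_N$ is the time taken on the $N$th trial, $B$ the time on the first trial, and $\alpha > 0$ a learning-rate parameter. The law describes how performance speeds up with repetition; it says nothing about what is being acquired or why, which is exactly why it cannot carry a theory of SLA on its own.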


2. Idealisation

There is also the question of idealisation, stressed by Sampson in his criticisms, and probably the most frequently-expressed objection made to UG. The assumption Chomsky makes of instantaneous acquisition, like the idealisation of the “ideal speaker-listener in a completely homogeneous speech-community”, is a perfectly respectable tool used in theory construction: it amounts to no more than the “ceteris paribus” argument that allows “all other things to be equal” so that we can isolate and thus better examine the phenomenon in question. Idealisations are warranted because they help focus on the important issues and get rid of distractions, which does not mean that this step is immune to criticism, of course. It’s up to Chomsky to make sure that any theories based on idealisations are open to empirical tests, and it is then up to those who disagree with Chomsky to come up with some counter-evidence and/or to show that the idealisation in question has protected the theory from the influence of an important factor.  Thus, if Sampson wants to challenge Chomsky’s instantaneous acquisition assumption, he will have to show that there are differences in the stages of people’s language acquisition which result in significant differences in the end state of their linguistic knowledge.

While on the subject of idealisations, we may deal with the criticism of sociolinguists who challenge Chomsky’s idealisation to a homogeneous speech community by saying that Chomsky is ruling out of court any discussion of variations within a community.  Chomsky would reply that he’s doing no such thing, and that anybody interested in studying such variations is welcome to do so.  Chomsky’s opinion of the scant possibility of progress in such an investigation is well-known, but he of course admits that it’s only an opinion. What Chomsky is interested in, however, is the language faculty, and the acquisition of a certain type of well-defined knowledge. In order to better investigate this domain, Chomsky idealises the speech community.  Sociolinguists can either produce arguments and data which show that such an idealisation is illegitimate (i.e. that it isolates part of the theory from the influence of a significant factor), or say that they are interested in a completely different domain.  It often seems to be the case that criticisms of Chomsky arise from misunderstandings about the role of idealisations in theory construction, or about the domain of a theory.


Weaknesses of UG theory

Chomsky’s theory runs into difficulties in confronting the question of how UG evolved, and how the principles and parameters arrive at a stable state in a normal child’s development.  Furthermore, there’s no doubt that the constant re-formulation of UG results in “moving the goalposts” and protects the theory from unwelcome empirical evidence by the use of ad hoc hypotheses.

And we shouldn’t forget that when we discuss UG we have the “principles and parameters” theory in mind, not the “Minimalist” programme, let alone internalism. Internalism sees Chomsky insisting that the domain of his theory is not grammar but “I-language”, where “I” stands for “internal” and “internal” means in the mind. While exposure to external stimuli is necessary for language acquisition, Chomsky maintains that, as Smith puts it, “the resulting system is one which has no direct connection with the external world” (Smith, 1999: 138). This highly counter-intuitive claim takes us into the technicalities of a philosophical debate about semantics in general and “reference” in particular, where Chomsky holds the controversial view that semantic relations “are nothing to do with things in the world, but are relations between mental representations: they are entirely inside the head” (Smith, 1999: 167).  Perhaps the best-known example of this view is Chomsky’s assertion that while we may use the word “London” to refer to the capital city of the UK, it’s unjustified to claim that the word itself refers to some real entity in the world.  Go figure, as they say.

But the most important criticism I personally have of UG is that it is too strict and too narrow to be of much use to those trying to build a theory of SLA. I think it’s important to challenge Chomsky’s claim that questions about language use “lie beyond the reach of our minds” and that they “will never be incorporated within explanatory theories intelligible to humans” (Chomsky, 1978).  Despite Chomsky’s assertion, I think we may assume that the L2 acquisition process is capable of being rationally and thoroughly examined.  Further, I suggest that it need not be, indeed should not be, idealised as an instantaneous event. That is to say, I assume that we can ask rational questions about the stages of development of interlanguages, that we can study the real-time processing required to understand and produce utterances in the L2, that we can talk not just about the acquisition of abstract principles but about skills, and even that we can study how different social environments affect SLA.

By insisting on a “scientific” status for his theory, Chomsky severely limits its domain, and to appreciate just how limited the domain of UG is, let us remind ourselves of Chomsky’s position on modularity.  Chomsky argues that in the human mind there is a language faculty, or grammar module, which is responsible for grammatical knowledge, and that other modules handle other kinds of knowledge. Not all of what is commonly referred to as “language” is the domain of the language module; certain parts of peripheral grammatical knowledge, and all pragmatic knowledge, are excluded. To put it another way, the domain of Chomsky’s theory is restricted by his distinction between I-language and E-language; Chomsky is concerned with the individual human capacity for language, and with the universal similarities between languages – his domain deliberately excludes the community. No justification needs to be offered for deciding to focus on a particular phenomenon or a particular hypothesis, but it is essential to grasp the domain of Chomsky’s theory.  Cook (1994) puts it this way:

Chomskian theory claims that, strictly speaking, the mind does not know languages but grammars; ‘the notion “language” itself is derivative and relatively unimportant’ (Chomsky, 1980, p. 126).  “The English Language” or “the French language” means language as a social phenomenon – a collection of utterances.  What the individual mind knows is not a language in this sense, but a grammar with the parameters set to particular values.  Language is another epiphenomenon: the psychological reality is the grammar that a speaker knows, not a language (Cook, 1994: 480).

Gregg (1996) has this to say:

… “language” does not refer to a natural kind, and hence does not constitute an object for scientific investigation.  The scientific study of language or language acquisition requires the narrowing down of the domain of investigation, a carving of nature at its joints, as Plato put it. From such a perspective, modularity makes eminent sense (Gregg, 1996: 1).

Chomsky himself says that what he seeks to describe and explain is

The cognitive state that encompasses all those aspects of form and meaning and their relation, including underlying structures that enter into that relation, which are properly assigned to the specific subsystem of the human mind that relates representations of form and meaning. A bit misleadingly perhaps, I will continue to call this subsystem ‘the language faculty’ (Chomsky 1980).

Pragmatic competence, on the other hand, is left out because

there is no promising approach to the normal creative use of language, or to other rule-governed acts that are freely undertaken… the creative use of language is a mystery that eludes our intellectual grasp (Chomsky, 1980).

Chomsky would obviously agree that syntax provides no more than clues about the content of any particular message that someone might try to communicate, and that pragmatics takes these clues and interprets them according to their context.  If one is interested in communication, then pragmatics is vital, but if one is interested in language as a code linking representations of sound and meaning, then it is not.  Chomsky’s strict demarcation between science and non-science effectively rules out the study of E-language, and consequently his theory neither describes nor explains many of the phenomena that interest linguists. Far less does UG describe or explain the phenomena of SLA. By denying the usefulness of attempts to explain aspects of language use and usage that fall outside the domain of I-language, UG can’t be taken as the only valid frame of reference for SLA research and theory construction, or even as a good model.


References

Bates, E. (2000) Language Savants and the Structure of the Mind. International Journal of Bilingualism.

Bates, E.; Elman, J.; Johnson, M.; Karmiloff-Smith, A.; Parisi, D.; and Plunkett, K. (1998) Innateness and Emergentism.  In Bechtel, W., and Graham, G., (eds) A Companion to Cognitive Science. 590-601. Oxford: Basil Blackwell.

Bates, E. and Goodman, J. (1997) On the inseparability of grammar and the lexicon: evidence from aphasia, acquisition and real-time processing.  Language and Cognitive Processes, 12, 507-584.

Chomsky, N. (1980) Rules and representations. Oxford: Basil Blackwell.

Cook, V. J. (1994) The Metaphor of Access to Universal Grammar in L2 Learning.  In Ellis, N. (ed.)  Implicit and Explicit Learning of Languages.  London: Academic Press.

Gregg, K. R. (1996) The logical and developmental problems of second language acquisition.  In Ritchie, W.C. and Bhatia, T.K. (eds.) Handbook of second language acquisition.  San Diego: Academic Press.

Gregg, K. R. (2000) A theory for every occasion: postmodernism and SLA.  Second Language Research 16, 4, 34-59.

Gregg, K. R. (2003) The state of emergentism in second language acquisition.  Second Language Research, 19, 2, 42-75.

Laurence, S. and Margolis, E. (2001) The poverty of the stimulus argument.  British Journal for the Philosophy of Science, 52, 2, 217-276.

Smith, N. (1999) Chomsky: Ideas and Ideals.  Cambridge: Cambridge University Press.

Smith, N. and Tsimpli, I-M. (1995) The mind of a savant: language learning and modularity.  Oxford: Basil Blackwell.


Chomsky’s Critics 1. Sampson


Scott Thornbury’s latest Sunday post gave what I thought was a very poor account of the poverty of the stimulus argument and of objections to it.  While Scott’s tone was quite measured, his post showed a spectacular disregard for logic, and the wave of enthusiastic messages of support which flooded in from a frightening array of dimwits and cranks seemed to unhinge our normally restrained hero, provoking him to ever more outrageous and fanciful claims. I and a couple of other sensitive souls did our modest best to keep him on the rails, but we failed, the wheels came off, and last time I looked, the whole crazy bunch of them were swapping quotes from Derrida, counting backwards from 666, trying to communicate with each other without switching their brains on, and using impoverished input devices like the Microsoft keyboard. Since they’ve all shown themselves to be useless at marshalling a case against Chomsky, I thought I’d offer a helping hand. I’m all heart, really.  So here’s the case against Chomsky as argued by two of his leading critics: Geoffrey Sampson and Elizabeth Bates.

Before we start on Sampson, let’s quickly state the poverty of the stimulus argument. It says: since children know things about language that they could not have learned from the input they were exposed to, that knowledge must be innate. Vivian Cook puts it like this:

Step A. A native speaker of a particular language knows a particular aspect of syntax.

Step B. This aspect of syntax could not have been acquired from language input. This involves considering all possible sources of evidence in the language the child hears and in the processes of interaction with parents.

Step C. This aspect of syntax is not learnt from outside. If all the types of evidence considered in Step B can be eliminated, the logical inference is that the source of this knowledge is not outside the child’s mind.

Step D. This aspect of syntax is built-in to the mind (Cook, 1991).

The UG argument is that all natural languages share the same underlying structure, and that knowledge of this structure is innate.

Sampson says that Chomsky’s claims about the linguistic data available to the child  are “untrue”, and he takes Chomsky’s example (used at the famous 1975 conference at Royaumont, where Piaget, Chomsky, Fodor, and others gathered to discuss the limitations of the genetic contribution to culture) of two different hypotheses about the grammar of yes/no questions in English. Turning an English statement into the corresponding yes/no question involves operating on a finite verb in the statement. Either the verb itself is moved to the left (if the verb is a form of be, do, have, or a modal verb such as will) – thus ‘The man is tall’ becomes ‘Is the man tall?’; or, in all other cases the verb is put into the infinitive and an inflected form of do is placed to the left – thus ‘The man swims well’ becomes ‘Does the man swim well?’  (Sampson, 1997: 40).

Chomsky says there are two hypotheses that the child learning English might try:  1. operate on the first finite verb;  2. operate on the finite verb of the main clause.  Hypothesis 1 violates the structure-dependence universal and is false (applied to the sentence “The man who is tall is sad”, it would give: “Is the man who tall is sad?”).  Hypothesis 2 is correct. Yet both hypotheses work for all questions except those formed from statements in which a subordinate clause precedes the main verb.  The child cannot decide by observation which hypothesis is true, because statements containing a subordinate clause which precedes the main verb are extremely rare in the input. Therefore, says Chomsky, the child decides on the basis of innate knowledge. In reply, Sampson says that many such examples actually exist, including the well-known line from Blake’s The Tyger: “Did he who made the Lamb make thee?”  Sampson goes on to give a number of other examples from a children’s corpus, and concludes:

Since Chomsky has never backed up his arguments from poverty of the child’s data with detailed empirical studies, we are entitled to reject them on the ground that the data available to the child are far richer than Chomsky supposes.  (Sampson, 1997: 42)
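
The structure-dependence point is easy to see if the two hypotheses are spelled out mechanically. Here is a minimal sketch (my own toy illustration, not anything from Chomsky or Sampson) in which the position of the main-clause verb is hand-annotated, standing in for the structural analysis a real grammar would supply:

# A toy illustration of the two question-formation hypotheses. The index of
# the main-clause auxiliary is hand-annotated, standing in for the structural
# analysis that a real grammar would have to supply.

def front_auxiliary(tokens, aux_index):
    """Move the auxiliary at aux_index to the front of the sentence."""
    aux = tokens[aux_index]
    rest = tokens[:aux_index] + tokens[aux_index + 1:]
    return " ".join([aux.capitalize()] + rest) + "?"

statement = ["the", "man", "who", "is", "tall", "is", "sad"]

# Hypothesis 1: operate on the first finite verb (linear, structure-blind).
print(front_auxiliary(statement, statement.index("is")))
# -> "Is the man who tall is sad?"  (ungrammatical)

# Hypothesis 2: operate on the finite verb of the main clause (structure-dependent).
print(front_auxiliary(statement, 5))  # index 5 hand-annotated as the main-clause "is"
# -> "Is the man who is tall sad?"  (grammatical)

Note that Hypothesis 2 cannot even be stated without reference to constituent structure, which is just what the structure-dependence universal claims; the dispute is over whether the child could learn this from the input.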


Sampson then attacks Chomsky’s “question-begging idealizations”.  Chomsky distinguishes between competence (a certain type of knowledge which is the phenomenon that he wants to explain), and performance (data, much of which he judges to be irrelevant). To examine competence, Chomsky argues that it’s necessary to make various simplifying assumptions, but Sampson claims that Chomsky’s use of simplifications distorts the substantial point at issue.  Each of the counterfactual simplifying assumptions about human language which Chomsky makes “eliminates a plausible alternative from consideration through what is presented as a harmless, uncontroversial assumption” (Sampson, 1997: 51).  Sampson gives the example of the assumption that language acquisition is an instantaneous process. This, says Chomsky, is “a harmless assumption, for if it mattered then we would expect to find substantial differences in the result of language learning depending on such factors as order of presentation of data, time of presentation, and so on.  But we do not find this” (Chomsky, cited in Sampson, 1997: 51-52). Sampson replies that language acquisition is not an instantaneous process (as Chomsky elsewhere admits), and it is not a harmless simplification to say that it is. As Sampson says:

To claim that it is harmless to pretend that language acquisition is instantaneous is, in effect, to assume that language acquisition does not work in a Popperian fashion, without going to the trouble of arguing the point.  (Sampson, 1997: 52)

Chomsky acknowledges that children do not move from ignorance to mastery of language instantaneously, but he insists that “fairly early in life” a child’s linguistic competence reaches a “steady state”, after which there are no significant changes.  Sampson points out, however, that this “steady state” idea was contested by Bloomfield and Whitney (both of whom saw language learning as a lifelong process), and is also completely at odds with the Popperian approach to learning, which brings us to Sampson’s alternative explanation of language acquisition.


Sampson argues that the essential feature of languages is their hierarchical structure.  Children start with relatively crude systems of verbal communication and gradually extend their syntactic structures in a pragmatic way, allowing them to express more ideas in more sophisticated ways.  They build up the syntax piecemeal: they concentrate on assembling a particular part of the system from individual components, and then put together the sub-assemblies. This gives them low-level structures which are then combined, with modifications on the basis of input, into higher-level structures, and so on.

Sampson uses the Watchmaker parable, first told by Herbert Simon (see Sampson, 1997: 111-113), to explain linguistic development.  I won’t go into it here, but Sampson says that Simon’s parable shows that “complex entities produced by any process of unplanned evolution, such as the Darwinian process of biological evolution, will have tree-structuring as a matter of statistical necessity” (Sampson, 1997: 113). Furthermore, in Sampson’s view, “the development of knowledge, as Popper describes it, is a clear case of the type of evolutionary process to which Simon’s argument applies, and can be applied to syntactic structures”.  Sampson describes how the communication system of our ancestors gradually became more complex as language learners made longer sentences, which would enter the language if they made a significant enough contribution to transmitting information more economically, or if they were semantically innovative.  Similarly, a child acquires language by composing sub-assemblies from individual components, and then putting the sub-assemblies together.
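
As a concrete picture of what piecemeal, hierarchical assembly means, here is a toy sketch (my own illustration of Simon’s sub-assembly idea, not code from Sampson): small stable units are built first and then combined, so the finished object is tree-structured simply by virtue of how it was put together:

from dataclasses import dataclass

# Toy model of sub-assembly: each Node is a stable unit built from
# previously completed units, so the result is a tree by construction.

@dataclass(frozen=True)
class Node:
    label: str
    children: tuple = ()

# Stage 1: build small, stable sub-assemblies independently of one another.
np = Node("NP", (Node("the"), Node("man")))
vp = Node("VP", (Node("swims"), Node("well")))

# Stage 2: combine completed sub-assemblies into a higher-level structure.
sentence = Node("S", (np, vp))

def show(node, depth=0):
    """Print the tree, with indentation reflecting the assembly hierarchy."""
    print("  " * depth + node.label)
    for child in node.children:
        show(child, depth + 1)

show(sentence)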


Discussion

Only a general learning theory is involved in Sampson’s explanation, which adopts a decidedly Popperian approach. The child tests various hypotheses about grammaticality against input, and slowly builds up the right hierarchically structured language by following a Popperian programme of conjectures and refutations. This supposes, of course, that the child is exposed to adequate input.  Sampson’s argument has two main strands: first, following Simon, gradual evolutionary processes have a strong tendency to produce tree structures; and second, following Popper, knowledge develops in a conjectures-and-refutations evolutionary way.  Sampson claims that these two strands are enough to explain language acquisition.
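
The bare logic of a conjectures-and-refutations step can be put in a few lines. This is my own toy sketch, not an algorithm Sampson provides, and it presupposes, as Sampson’s account does, that the crucial refuting datum actually turns up in the input:

# A toy sketch of the refutation step. Each conjecture is paired with the
# question it predicts for one crucial statement ("The man who is tall is
# sad"), and any conjecture whose prediction conflicts with the observed
# question is discarded. Predictions are hand-computed from the two
# hypotheses in the earlier sketch.

observed_question = "is the man who is tall sad"  # what the child actually hears

predictions = {
    "H1: front the first finite verb":       "is the man who tall is sad",
    "H2: front the main-clause finite verb": "is the man who is tall sad",
}

# Keep only the conjectures that survive the observed datum.
surviving = {h for h, p in predictions.items() if p == observed_question}
print(surviving)  # {'H2: front the main-clause finite verb'}

On this picture, no innate constraint is needed to eliminate the structure-blind conjecture; the question, as Chomsky would press it, is whether such refuting data arrive often enough, which is exactly the empirical issue Sampson’s corpus examples address.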

Perhaps Sampson’s criticism of one of Chomsky’s most central assumptions can serve to highlight the differences between them.  Chomsky says that

Linguistic theory is concerned primarily with an ideal speaker-listener, in a completely homogenous speech community, who knows its language perfectly (Chomsky, cited in Sampson, 1997: 53).

This assumption, which Chomsky describes as being of “critical importance” for his theory, excludes Sampson’s Popperian approach without even considering it.  For Sampson, learning is a “non-terminating process”, and language has no independent existence over and above the representations of the language in the minds of the various individuals belonging to the speech community that uses it.

What the language learner is trying to bring his tacit linguistic theory into correspondence with is not some simple, consistent grammar inhering in a collective national psyche…. Rather, he is trying to reconstruct a system underlying the usage of the various speakers to whom he is exposed; and these speakers will almost certainly be working at any given time with non-identical tacit theories of their own – so that there will not be any wholly coherent and unrefutable grammar available to be formulated.  The notion of a speaker-listener knowing the language of his community “perfectly” is doubly inapplicable – both because there is no particular grammar, achievement of which would count as “perfect” mastery of the language, and because even if there were such a grammar, there is no procedure by which a learner could discover it.  (Sampson, 1997: 53-54)

From Sampson’s Popperian perspective, even if language learners were “ideal” they would not attain “perfect” mastery of the language of the community.  As Sampson says:

Popperian learning is not an algorithm which, if followed without deviation, leads to a successful conclusion.  Therefore, to assume that it makes sense to describe an “ideal” speaker-listener as inhabiting a perfectly homogenous speech community and as knowing its language perfectly amounts, once again, to surreptitiously ruling the Popperian view of acquisition out of consideration. (Sampson, 1997: 55)

I personally don’t find Sampson’s arguments persuasive, and I’ll explain why after I’ve presented Bates’s case against Chomsky in the next post.

 

Cook, V. J. (1991) The poverty-of-the-stimulus argument and multi-competence.  Second Language Research, 7, 2, 103-117.

Sampson, G. (1997) Educating Eve: the ‘language instinct’ debate.  London: Cassell.