Why aviation English testing is in such a poor state
Post 5: Poor construct definition
Why is language testing for the ICAO LPRs in such a poor state? One reason is that test service providers fail to adequately conceive of the construct of professional language use, which results in weak aviation English test instruments.
To test well, a test instrument must be built on a detailed description of the characteristics of the target language use domain. This description - the ‘construct’ - is fundamental because language use is the product of underlying skills which are not directly observable. To make inferences about these skills in any meaningful way, we need a construct definition that is both theoretically sound and as comprehensive and precise as possible. ICAO Document 9835 and the Rating Scale are an essential part of the test developer’s toolkit, but they are not in themselves the test construct. Construct definition is the job of the test service provider. Without a well-defined construct, a test is just a stab in the dark.
Let’s briefly explore one aspect of the construct: professional language use.
If our agreed mission is to measure the professional language use of pilots and controllers, then tests must be firmly rooted in the operational domain. This means we need to clearly describe, with reference to established theory, how tests treat the relationship between operational knowledge and language use.
You cannot speak “aviation English” without operational knowledge. If you present any test prompt to a pilot or a controller that is even remotely connected with their work, what you get back is inevitably a blend of the two. Separating them is not only impossible (unless we have a chat about sport or cake), it is also undesirable: a basic principle of specific purpose language testing holds that tasks “should allow for an interaction between language knowledge and specific purpose content knowledge” [1]. Therefore, in aviation language testing, operational knowledge is an implicit part of the construct. This should be evident in the way test service providers communicate about their tests and how they realise the construct in test tasks.
This from a range of European aviation English test service providers:
- “[TEST] is not a test of operational knowledge”
- “[TEST] focus ... is not on operational procedures”
And this last one is the best:
- "Don't be afraid to say something with incorrect content, as long as your ICAO English is perfect, there's nothing to worry!" [sic]
To be clear, language testers have no business directly assessing the accuracy of technical subject matter presented by test takers in language tests. But operational knowledge will always be present in a test taker’s language performance in any test of “aviation English”, even in tests which are weak representations of the construct of aeronautical communication.
The mission of aviation language testing is not to assess ‘language proficiency’. It is to assess ‘operational language proficiency’, i.e. the language pilots and controllers need to do their jobs safely. The closer we get to authentic interaction between operational knowledge and language proficiency in test tasks, the better our tests will be. Undermining the central role of operational knowledge indicates a failure to adequately conceive of the professional language use construct, and a corresponding failure to operationalise it in test tasks.
Consider the difference between:
- “[TEST] is not a test of operational knowledge”; and
- “[TEST] assumes that candidates have professional knowledge of aeronautical operations and procedures and standard radiotelephony phraseology. [TEST] engages but does not directly assess this knowledge.”
There is a subtle yet powerful distinction between these two descriptions. The first (real-world) description is misleading. The second (hypothetical) description positions operational knowledge as a central (if not explicitly assessed) part of the construct, which opens the door to the design of test tasks that directly address operational language use.
Until test service providers better understand the professional language use construct and develop more theoretically sound test instruments that reflect the operational language use domain, pilots and controllers will continue to take poorly constructed language tests that fail to address aeronautical radiotelephony communication.
[1] Douglas, D. (2000). Assessing Languages for Specific Purposes. Cambridge University Press.