Why aviation English testing is in such a poor state
POST 10: The ICAO Rated Speech Samples Training Aid
Why is language testing for the ICAO LPRs in such a poor state? One reason is a lack of appropriate models for aviation English test and task design.
After Doc. 9835, the most authoritative reference for aviation language assessment is the ICAO Rated Speech Samples Training Aid (RSSTA). The RSSTA has played an important role in helping the international aeronautical community understand what spoken performances at various levels sound like, but it has had an undesirable side-effect: it has presented models for aviation English testing that have led to confusion around what we aim to measure and how to measure it.
A key objective of the RSSTA is to present samples of speech that are as representative as possible of test takers' first languages, professional backgrounds and language levels. To achieve this objective, samples were gathered from far and wide with the caveat that ‘inclusion of a speech sample should in no way be interpreted as a judgement of the quality of the test tasks’. As with any such project, success depends on the quality of the raw material available, and with the RSSTA the result is a mixed bag. Let’s briefly explore three ways the RSSTA has influenced aviation English test design.
To make valid inferences about the ability of a test taker to communicate effectively on the radio:
1. We need test tasks that elicit performances which are:
Representative of aeronautical radiotelephony communication; and
Specific to the test-taker role.
Generic interview, picture description and discussion tasks may add value, but alone they are insufficient. Such tasks feature widely in the RSSTA, and this may explain, in part, the widespread misunderstanding of the construct we aim to measure and the corresponding failure of many tests to address the target of the ICAO LPRs.
2. We need tasks which specifically address listening comprehension in the context of aeronautical radiotelephony communication. Because:
The RSSTA is a training aid for the assessment of speech (the clue is in the name!); and
Assessment of listening through performance in speaking tasks is problematic;
The RSSTA does not provide appropriate models for listening comprehension tasks. This may explain, in part, why listening comprehension is under-represented in so many aviation English tests.
3. We need well-designed tasks administered by competent interlocutors. While the RSSTA features some good task design and delivery models, there are some poorer ones too. This may explain, in part, the poor standard of test design and administration which is so common in the field today.
Until we have clear, well-defined and appropriate models for aviation English test tasks alongside clear ICAO guidance on test design, pilots and controllers will continue to take poorly constructed language tests that fail to address aeronautical radiotelephony communication.
For excellent guidance on aviation English test design, see the ICAEA Test Design Guidelines.
Download this collection of blog posts here: