Developing a validation process for an adaptive computer-based spoken English language test.

UNDERHILL, Nic (2000). Developing a validation process for an adaptive computer-based spoken English language test. Doctoral thesis, Sheffield Hallam University (United Kingdom).


Abstract

This thesis explores the implications for language test validation of developments in language teaching and testing methodology, test validity and computer-based delivery. It identifies a range of features that tests may now exhibit in novel combinations, and concludes that these combinations of factors favour a continuing process of validation for such tests. It proposes a validation model of this kind, designed around a series of cycles drawing on diverse sources of data. The research uses the Five Star test, a private commercial test designed for use in a specific cultural context, as an exemplar of a larger class of tests exhibiting some or all of these features. A range of validation activities on the Five Star test is reported and analysed from two quite different sources: an independent expert panel that scrutinised the test task by task, and an analysis of 460 test results using item-response theory (IRT). The validation activities are critically evaluated for the purposes of the model, which is then applied to the Five Star test. A historical overview of language teaching and testing methodology reveals the communicative approach to be the dominant paradigm, but suggests that there is no clear consensus about the key features of this approach or how they combine. It has been applied incompletely to language testing, and important aspects of the approach are identified which remain problematic, especially for the assessment of spoken language. These include the constructs of authenticity, interaction and topicality, whose status in the literature is reviewed and whose determinability in test events is discussed. The evolution of validity in the broader field of educational and psychological testing informs the development of validation in language testing, and a transition is identified away from validity as a one-time activity attached to the test instrument towards validation as a continuing process that informs the interpretation of test results. In test delivery, this research reports on the validation issues raised by computer-based adaptive testing, particularly with respect to test instruments such as the Five Star test that combine direct face-to-face interaction with computer-based delivery. In the light of the theoretical issues raised and the application of the model to the Five Star test, some implications of the model for use in other test environments are presented critically, and recommendations are made for its development.
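As an illustrative aside, not drawn from the thesis itself, the short Python sketch below shows the kind of computation that underlies an IRT analysis and an adaptive test step as mentioned in the abstract: a one-parameter (Rasch) response model, a grid-search ability estimate from a set of scored responses, and selection of the next item whose difficulty is closest to the current ability estimate. All function names, difficulty values and the grid are assumptions for illustration only and do not describe the Five Star test or the analysis reported in the thesis.

    import math

    def rasch_probability(theta, difficulty):
        # Probability of a correct response under the one-parameter (Rasch) IRT model.
        return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

    def estimate_ability(responses, difficulties):
        # Maximum-likelihood ability estimate over a coarse grid of candidate theta values.
        grid = [x / 10.0 for x in range(-40, 41)]  # theta from -4.0 to +4.0

        def log_likelihood(theta):
            ll = 0.0
            for correct, b in zip(responses, difficulties):
                p = rasch_probability(theta, b)
                ll += math.log(p) if correct else math.log(1.0 - p)
            return ll

        return max(grid, key=log_likelihood)

    def next_item(ability, remaining_difficulties):
        # Adaptive step: pick the unadministered item whose difficulty is closest to the
        # current ability estimate, where a Rasch item is most informative.
        return min(remaining_difficulties, key=lambda b: abs(b - ability))

    # Hypothetical example: three items already answered (1 = correct, 0 = incorrect).
    responses = [1, 1, 0]
    administered = [-1.0, 0.0, 1.5]
    theta_hat = estimate_ability(responses, administered)
    print("Estimated ability:", theta_hat)
    print("Next item difficulty:", next_item(theta_hat, [-0.5, 0.5, 2.0]))

In an operational adaptive test this estimate-then-select loop would repeat after each response until a stopping rule is met; the grid search here stands in for the more precise estimation methods normally used.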

Item Type: Thesis (Doctoral)
Additional Information: Thesis (Ph.D.)--Sheffield Hallam University (United Kingdom), 2000.
Research Institute, Centre or Group - Does NOT include content added after October 2018: Sheffield Hallam Doctoral Theses
Depositing User: EPrints Services
Date Deposited: 10 Apr 2018 17:22
Last Modified: 26 Apr 2021 12:32
URI: https://shura.shu.ac.uk/id/eprint/20468

