Development of a New Rubric for Assessing Interactional Competence in Group Discussions #2795
In recent years, a growing number of researchers have been assessing speakers' interactional competence (IC), a construct that builds on earlier models of communicative competence. In contrast to those earlier models, IC more explicitly accounts for the co-constructed nature of talk. IC encompasses interactive listening, topic development, dealing with communication breakdowns, inviting contributions, and responding accordingly (May et al., 2020). The roots of IC lie in conversation analysis (CA), a fine-grained and powerful form of analysis, but one that requires training and experience. As a result, few widely used rubrics for assessing IC currently exist. In this presentation, I will describe how I created a new rubric for assessing IC in group discussions. As this research project is ongoing, I will examine some initial data in the form of audio recordings and transcripts, discuss preliminary findings, and suggest possible improvements for future versions of the rubric.
Improving Employability Through Innovating the English Assessment Experience with Mobile Technology #3109
British Council EnglishScore’s innovative use of mobile technology is helping thousands of students around the world prove their English language skills through a convenient, affordable, and secure mobile test and certificate.
Discover how a mobile English test powered by AI technology allows for greater accessibility and fairness, and offers cost savings to both institutions and students.
EnglishScore aims to improve young people’s employability and career prospects by providing a quick and affordable way to prove their English language proficiency to future employers regardless of their location or background.
Note: This session includes a video introduction to EnglishScore followed by a live Q&A with representatives of the British Council.
Developing Equity-Driven Assessment In-Class #2993
Designing and administering effective and equitable in-class assessments is one of the most important tasks that language teachers engage in. However, making the assessment process equitable is rarely discussed in the field of TESOL. This session will address six principles that make classroom-level language assessments equitable and effective: validity, reliability, practicality, authenticity, washback, and equity. The session hosts will first describe specific ways to make assessment equitable by discussing examples of assessments designed with these principles in mind. The hosts will then ask participants to critique assessments based on these principles. Building on this model, participants will also be asked to propose effective and equitable assessment examples from their own experience.
Revisiting Language Assessment Literacy: Past, Present and Future #2901
Graduate Student Showcase
Language assessment literacy (LAL), a term developed from assessment literacy (AL), has become a crucial agenda in the language testing field. The last decade, indeed, has witnessed numerous studies exploring LAL from different perspectives. That said, a detailed and comprehensive review of this work is still lacking. In response, the current paper used thematic review to categorize and synthesize existing LAL studies (N = 38), published in major language testing journals (e.g., Language Testing) and other relevant sources (e.g., book chapters), from the aspects of a) origin, b) definition, c) frameworks, d) stakeholders, and e) instruments. Results reveal that other stakeholders (e.g., parents) find it difficult, to some extent, to understand and use existing LAL definitions and frameworks effectively, as these have largely been prescribed by language testing researchers. Moreover, teachers' LAL is overrepresented in the present literature, while fewer studies have examined other stakeholders (e.g., test-takers). To conclude, prospective areas are suggested for further research and discussion.
Test Validation Issues in Remote L2 Assessment #2675
The coronavirus pandemic that began in early 2020 has heightened the need for remote, “at-home” assessment of L2 ability. Yet for online L2 test methods to succeed in terms of validity (Messick, 1989; Kane, 2006; Chapelle, 2012), it is necessary to review and modify the theoretical constructs undergirding these emergent assessments. Aiming to offer workable test-development guidelines during a pandemic that has challenged the well-being of L2 learners and TESOL professionals worldwide, this paper will analyze a number of models and frameworks of L2 ability (e.g., Canale and Swain, 1980; Bachman and Palmer, 1996; Celce-Murcia, Dörnyei, and Thurrell, 1995; Hall et al., 2011; Pekarek Doehler, 2018), offering critical appraisals of their usefulness vis-à-vis both validation principles and the practical constraints of online assessment. These analyses will be complemented by a qualitative examination of interactive data from an online version of a test of L2 oral pragmatics currently under development.