Predicting Cloze Task Quality for Vocabulary Training

From HLT@INESC-ID

Latest revision as of 12:19, 28 June 2010

Adam Skory

Adam Skory graduated from UCLA with a BA in Linguistics and has worked as an English teacher in India and South Korea. He is currently a Master's student at the Carnegie Mellon University Language Technologies Institute, where he works on language-learning games.

Date

  • 15:00, Friday, July 2nd, 2010
  • Room 336

Speaker

  • Adam Skory, Language Technologies Institute, Carnegie Mellon University

Abstract

Computer generation of cloze tasks still falls short of full automation; most current systems are used by teachers as authoring aids. Improved methods of estimating cloze quality are needed for full automation. We investigated lexical reading difficulty as a novel automatic estimator of cloze quality, and compared word co-occurrence frequency as an alternate estimator. Rather than relying on expert evaluation of cloze quality, we submitted open cloze tasks to workers on Amazon Mechanical Turk (AMT) and discuss ways to measure the results of these tasks. Results show one statistically significant correlation between the above measures and estimators: that between lexical co-occurrence and Cloze Easiness. Reading difficulty was not found to correlate significantly. We gave subsets of cloze sentences to an English teacher as a gold standard; sentences selected by co-occurrence and Cloze Easiness were ranked most highly, corroborating the evidence from AMT.
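The two quantities named in the abstract can be illustrated with a small sketch. This is not the authors' implementation: it assumes (plausibly, but hypothetically) that "Cloze Easiness" is the fraction of AMT workers whose fill-in answer matches the word originally removed from the sentence, and that the co-occurrence estimator averages corpus co-occurrence counts between the target word and the words of its context. The function names and the toy `cooc_counts` table are illustrative only.

```python
def cloze_easiness(worker_answers, target_word):
    """Hypothetical Cloze Easiness: the fraction of AMT workers whose
    answer (case-insensitive) matches the word removed from the sentence."""
    answers = [a.strip().lower() for a in worker_answers]
    if not answers:
        return 0.0
    return answers.count(target_word.lower()) / len(answers)


def cooccurrence_score(context_words, target_word, cooc_counts):
    """Hypothetical co-occurrence estimator: the mean co-occurrence count
    between the target word and each word of the cloze sentence's context.
    `cooc_counts` maps (context_word, target_word) pairs to corpus counts."""
    counts = [cooc_counts.get((w, target_word), 0) for w in context_words]
    return sum(counts) / len(counts) if counts else 0.0
```

Under these assumptions, a sentence where most workers restore the original word gets a Cloze Easiness near 1.0, and the study's reported correlation would mean such sentences also tend to score highly on `cooccurrence_score`.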