Selected Publications

BibTeX entries for all papers: settles.bib
More publications, citations, etc. on Google Scholar

  • Jump-Starting Item Parameters for Adaptive Language Tests
    A.D. McCarthy, K.P. Yancey, G.T. LaFlair, J. Egbert, M. Liao, and B. Settles
    Empirical Methods in Natural Language Processing (EMNLP), 2021
    Building on our TACL 2020 paper on ML-driven test development, we introduce a novel multi-task framework for combining human difficulty judgments with empirical item response data to rapidly develop language assessments. We use transformer text representations for items, and can accurately calibrate item parameters in as few as 6 exposures, with estimates that closely align with lexico-grammatical features known to correlate with reading difficulty.
    pdf · Duolingo English Test
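    The item response framing above can be illustrated with a simple one-parameter (Rasch-style) model, in which the probability of a correct response depends only on the gap between the test taker's ability and the item's difficulty. This is a generic IRT sketch, not the paper's calibration pipeline; all names are illustrative.

```python
import math

def p_correct(ability: float, difficulty: float) -> float:
    """Rasch-style item response function: the probability of a
    correct answer is a sigmoid of (ability - difficulty)."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))
```

    When ability equals difficulty the model predicts a 50% chance of success; a harder item shifts the curve down for every test taker.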
  • Machine Learning Driven Language Assessment
    B. Settles, G.T. LaFlair, and M. Hagiwara
    Transactions of the Association for Computational Linguistics, 8:247-263, 2020
    We describe a method for rapidly creating language proficiency assessments, end-to-end, using machine learning for item generation, calibration, administration, and scoring. We used these methods to develop an online proficiency exam called the Duolingo English Test, and demonstrate that its scores align significantly with other high-stakes English assessments while satisfying reliability and security requirements.
    pdf · Duolingo English Test
  • A Sleeping, Recovering Bandit Algorithm for Optimizing Recurring Notifications
    K.P. Yancey and B. Settles
    ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD). 2020
    We present a novel bandit algorithm that exploits novelty effects and arm (in)eligibility, two characteristics that are absent from typical bandit formulations but important for recurring notifications, such as practice reminders in educational applications.
    pdf · data
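    The two characteristics can be sketched as modifications to a standard bandit loop: some arms may be ineligible ("sleeping") in a given round, and an arm's payoff is discounted the more recently it was played, recovering toward full value as time passes (a novelty effect). This is an illustrative epsilon-greedy sketch under those assumptions, not the paper's algorithm; all names and the recovery form are invented for illustration.

```python
import random

def choose_arm(values, eligible, last_played, t, recovery=0.5, epsilon=0.1):
    """Pick an arm among the currently eligible ones. An arm played
    `gap` rounds ago has effective value values[a] * (1 - recovery**gap),
    so recently played arms are penalized and gradually recover."""
    candidates = [a for a in range(len(values)) if eligible[a]]
    if not candidates:
        return None  # every arm is "sleeping" this round
    if random.random() < epsilon:
        return random.choice(candidates)  # occasional exploration

    def effective(a):
        gap = t - last_played.get(a, -10**9)  # never played: fully recovered
        return values[a] * (1.0 - recovery ** gap)

    return max(candidates, key=effective)
```

    With `epsilon=0`, an arm played last round is worth only half its base value (at `recovery=0.5`), so a slightly weaker but fresher arm can win the round.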
  • Simultaneous Translation and Paraphrase for Language Education
    S. Mayhew, K. Bicknell, C. Brust, B. McDowell, W. Monroe, and B. Settles
    ACL Workshop on Neural Generation and Translation (WNGT). 2020
    We present the task of simultaneous translation and paraphrasing for language education (STAPLE). Given a prompt in one language, the goal is to generate a diverse set of correct translations that language learners are likely to produce. We describe a novel corpus for studying this task, and report on the results of a shared task challenge which attracted dozens of research teams worldwide, synthesizing work in machine translation, MT evaluation, and automatic paraphrasing.
    pdf · code+data · website
  • Second Language Acquisition Modeling
    B. Settles, C. Brust, E. Gustafson, M. Hagiwara, and N. Madnani
    NAACL-HLT Workshop on Innovative Use of NLP for Building Educational Applications (BEA), pages 56-65. 2018
    We present second language acquisition (SLA) modeling, the task of predicting learner errors from a trace of their learning history. We introduce a corpus of 7M+ words produced by 6k+ learners of English, Spanish, and French using Duolingo. We also report on the results of a shared task challenge aimed at studying the SLA task using this corpus: 15 teams from the fields of cognitive science, linguistics, and machine learning participated.
    pdf · code+data · website
  • Never-Ending Learning
    T. Mitchell, W. Cohen, E. Hruschka, P. Talukdar, B. Yang, J. Betteridge, A. Carlson, B. Dalvi, M. Gardner, B. Kisiel, J. Krishnamurthy, N. Lao, K. Mazaitis, T. Mohamed, N. Nakashole, E. Platanios, A. Ritter, M. Samadi, B. Settles, R. Wang, D. Wijaya, A. Gupta, X. Chen, A. Saparov, M. Greaves, and J. Welling
    Communications of the ACM, 61(5):103-115, 2018
    Expanded "journal version" of our AAAI 2015 paper below.
    pdf · website
  • A Trainable Spaced Repetition Model for Language Learning
    B. Settles and B. Meeder
    Association for Computational Linguistics (ACL), pages 1848-1858. 2016
    We present a novel half-life regression model for spaced repetition practice, with applications to second language acquisition. The model marries psycholinguistic theory with modern machine learning techniques, indirectly estimating the "half-life" of words and concepts in a student's long-term memory. We use log data from Duolingo (a popular online language-learning app) to fit such a model, which improves both learning predictions and real student engagement.
    pdf · code+data · Duolingo
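    The half-life idea can be sketched numerically: predicted recall decays exponentially in the time since last practice, relative to an estimated half-life, which the model derives from a weighted feature vector summarizing practice history. The feature choices and names below are illustrative, not the paper's implementation.

```python
def recall_probability(delta_days: float, half_life_days: float) -> float:
    """Predicted recall p = 2^(-delta/h): the chance of remembering
    halves with each half-life that elapses since last practice."""
    return 2.0 ** (-delta_days / half_life_days)

def estimated_half_life(weights, features) -> float:
    """HLR-style estimate h = 2^(theta . x), where the feature vector
    x might encode practice history (e.g., counts of past correct and
    incorrect recalls). Fitting theta from log data is what makes the
    model "trainable"."""
    return 2.0 ** sum(w * x for w, x in zip(weights, features))
```

    A word last practiced exactly one half-life ago gets a 50% predicted recall, which is the natural threshold for scheduling the next practice session.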
  • Self-directed Learning Favors Local, Rather Than Global, Uncertainty
    D.B. Markant, B. Settles, and T.M. Gureckis
    Cognitive Science, 40(1):100-120, 2016
    Drawing on active machine learning heuristics as models, we study active human learning in a multi-class categorization task. Heuristics that maximize local information gain (i.e., between two classes, rather than among all classes) are better predictors of human information-seeking behavior. Furthermore, people who use these local "query strategies" also perform better on the learning task.
    pdf · publisher's link
  • Never-Ending Learning
    T. Mitchell, W. Cohen, E. Hruschka, P. Talukdar, J. Betteridge, A. Carlson, B. Dalvi, M. Gardner, B. Kisiel, J. Krishnamurthy, N. Lao, K. Mazaitis, T. Mohamed, N. Nakashole, E. Platanios, A. Ritter, M. Samadi, B. Settles, R. Wang, D. Wijaya, A. Gupta, X. Chen, A. Saparov, M. Greaves, and J. Welling
    AAAI Conference on Artificial Intelligence. 2015
    Position paper arguing for ongoing, long-term, self-supervised, cross-modal machine learning systems. As a case study, we describe lessons learned after 5 years of guiding a natural language learning system called NELL (see AAAI 2010 paper below).
    pdf · website
  • Learning from Human-Generated Lists
    K.S. Jun, X. Zhu, B. Settles, and T.T. Rogers
    International Conference on Machine Learning (ICML), pages 181-189. 2013
    We propose sampling with reduced replacement (SWIRL), a new computational cognitive model of how humans generate lists from memory. SWIRL captures the importance of both order and repetition in such lists, which we further exploit as priors and constraints in machine learning algorithms (e.g., text classifiers trained from human-generated word lists), and use for classification directly (e.g., to predict cognitive dysfunction).
    pdf · video · blog post
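    The sampling-with-reduced-replacement idea can be sketched as follows: list items are drawn proportionally to a weight, and each time an item is drawn its weight is multiplied by a discount factor, so repeats remain possible but become progressively less likely. This is an illustrative sketch of the sampling scheme only (the parameter names are invented), not the fitted cognitive model from the paper.

```python
import random

def swirl_sample(weights, length, alpha=0.5, rng=random):
    """Generate a list by repeatedly sampling items in proportion to
    their current weight, then discounting the drawn item's weight by
    alpha. alpha=0 reduces to sampling without replacement; alpha
    near 1 approaches ordinary sampling with replacement."""
    weights = dict(weights)  # copy, so the caller's weights are untouched
    out = []
    for _ in range(length):
        positive = [(k, w) for k, w in weights.items() if w > 0]
        if not positive:
            break  # nothing left to draw (possible when alpha == 0)
        total = sum(w for _, w in positive)
        r = rng.random() * total
        chosen = positive[-1][0]  # fallback guards float rounding
        for item, w in positive:
            r -= w
            if r <= 0:
                chosen = item
                break
        out.append(chosen)
        weights[chosen] *= alpha
    return out
```

    Setting `alpha=0` yields a weighted random permutation (no repeats), while intermediate values let salient items recur early and often, mirroring the order and repetition structure of human-generated lists.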
  • Let's Get Together: The Formation and Success of Online Creative Collaborations
    B. Settles and S. Dow
    Human Factors in Computing Systems (CHI), pages 2009-2018. ACM, 2013
    We study collaboration in an online music community, combining member surveys with a novel path-based regression analysis of the social network. We find that communication, compatible but complementary interests, and slight differences in status are key factors in collab formation; and that balanced efforts from both parties contribute to collab success. Our model also predicts new collaborations much more accurately than standard link-prediction methods.
    pdf · video · blog post
  • Modeling Online Creative Collaborations   Feature Article
    S. Dow and B. Settles
    XRDS: The ACM Magazine for Students, 19(4):21-25, 2013
    Shorter, prettier, less formal "magazine version" of the CHI 2013 paper above.
    pdf · publisher's link
  • Active Learning   Book
    B. Settles
    Morgan & Claypool, 2012
    A short intermediate text on active learning, a subfield of machine learning and artificial intelligence. For researchers, graduate students, and engineers working in computer and information sciences, statistics, psychology, and related areas.
    publisher's link
  • Behavioral Factors in Interactive Training of Text Classifiers
    B. Settles and X. Zhu
    North American Chapter of the Association for Computational Linguistics - Human Language Technologies (NAACL HLT), pages 563-567. ACL, 2012
    As interactive annotation interfaces offer humans more expressive ways of "teaching" machine learning systems, what impact do these varied annotation choices have? This paper examines the effects of actions taken by human annotators on interactively trained text classifiers.
  • Closing the Loop: Fast, Interactive Semi-Supervised Annotation With Queries on Features and Instances
    B. Settles
    Empirical Methods in Natural Language Processing (EMNLP), pages 1467-1478. ACL, 2011
    DUALIST is a novel active learning paradigm which solicits and learns from labels on both features (e.g., words) and instances (e.g., documents). This setting motivates a new, fast, and flexible semi-supervised training algorithm for such dual supervision. Human annotators in user studies were able to produce near-state-of-the-art results with only a few minutes of effort.
    pdf · software
  • SIRT3 Substrate Specificity Determined by Peptide Arrays and Machine Learning   Cover Article
    B.C. Smith, B. Settles, W.C. Hallows, M.W. Craven, and J.M. Denu
    ACS Chemical Biology, 6(2):146-157, 2011
    SIRT3 is an important mitochondrial enzyme, linked to survivorship in diabetes and various age-related diseases. Using high-throughput peptide screens as training data, we use machine learning to induce a model which (1) accurately predicts SIRT3 binding specificity, which we apply to the entire mitochondrial proteome to identify potential new binding targets, and (2) is highly interpretable, advancing understanding of the structure and function of SIRT3 and its interactions.
    pdf · supporting info
  • Plugged in to the Community: Social Motivators in Online Goal-Setting Groups
    M. Burke and B. Settles
    International Conference on Communities & Technologies (C&T), pages 1-10. ACM, 2011
    We use computational models to examine two social factors in an online songwriting community challenge: (1) early feedback evoking a shared social identity, and (2) one-on-one collaborations with other members. We find that users who engage in these social features perform better at their goals than those who are non-social. We also begin to characterize the properties of "successful" collaborative interactions.
  • Toward an Architecture for Never-Ending Language Learning
    A. Carlson, J. Betteridge, B. Kisiel, B. Settles, E.R. Hruschka Jr., and T.M. Mitchell
    Conference on Artificial Intelligence (AAAI), pages 1306-1313. AAAI Press, 2010
    The architecture for NELL (never-ending language learner), a large-scale natural language processing system that runs continuously, 24x7, using multi-task semi-supervised learning methods to extract structured information from the World Wide Web.
    pdf · website · supporting info
  • Learning to Tag from Open Vocabulary Labels
    E. Law, B. Settles, and T.M. Mitchell
    European Conference on Machine Learning & Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), pages 211-226. Springer, 2010
    A new approach to classifying and retrieving media content, using "tags" from social websites and human computation systems as training data. Such labels are open-vocabulary and thus noisy and sparse, but we organize them into well-behaved semantic classes via topic modeling, and learn to predict these class distributions from media features. We demonstrate the scalability and accuracy of this approach on data collected from an online music annotation game, and also show the need for human evaluations in such open-vocabulary tasks.
  • Computational Creativity Tools for Songwriters
    B. Settles
    NAACL-HLT Workshop on Computational Approaches to Linguistic Creativity, pages 49-57. ACL, 2010
    This paper describes two NLP systems designed as lyric-writing aids for musicians. Titular automatically generates song titles, and LyriCloud is a word-level language "browser" that lets users select words and receive lyrical suggestions in return. We also suggest performance criteria for such creativity tools, and present case studies from use in an international songwriting contest.
    pdf · demos
  • Active Learning by Labeling Features
    G. Druck, B. Settles, and A. McCallum
    Empirical Methods in Natural Language Processing (EMNLP), pages 81-90. ACL, 2009
    In natural language tasks, features can often be intuitively labeled (e.g., in extracting information from apartment classifieds, "WORD=deposit" might indicate the label "lease," or "WORD=pets" indicate "restrictions"). We introduce novel query algorithms and user labeling interfaces for feature-based active learning in such domains.
  • Active Learning Literature Survey
    B. Settles
    Computer Sciences Technical Report 1648, University of Wisconsin-Madison, 2009
    A survey of the active learning literature. See the book Active Learning above for an updated and more comprehensive treatment of this topic.
  • Curious Machines: Active Learning with Structured Instances
    B. Settles
    PhD thesis, University of Wisconsin-Madison, 2008
    My PhD thesis on active learning for structured input representations (e.g., sequence labeling and multiple-instance learning tasks) and queries with potentially varying annotation costs. Also introduces the information density (ID) and expected gradient length (EGL) active learning frameworks.
  • An Analysis of Active Learning Strategies for Sequence Labeling Tasks
    B. Settles and M. Craven
    Empirical Methods in Natural Language Processing (EMNLP), pages 1069-1078. ACL, 2008
    Active learning has not been well-studied for structured prediction tasks such as information extraction. This paper expands the frontier of query strategies for sequence models (CRFs, HMMs, PCFGs, etc.) into several new query frameworks, and presents a large-scale empirical evaluation of these algorithms on eight benchmark data sets.
    pdf · code
  • Active Learning with Real Annotation Costs
    B. Settles, M. Craven, and L. Friedland
    NeurIPS Workshop on Cost-Sensitive Learning, 2008
    Do annotation costs vary across instances? Among annotators? Can these costs be accurately predicted? What impact might this have on active learning in practice? This paper addresses these questions with a detailed empirical study of real-world annotation costs, and presents a novel approach to cost-sensitive active learning by modeling unknown annotation costs directly.
    pdf · data
  • Multiple-Instance Active Learning
    B. Settles, M. Craven, and S. Ray
    Advances in Neural Information Processing Systems (NeurIPS), volume 20, pages 1289-1296. MIT Press, 2008
    In multiple-instance (MI) learning, instances are organized into bags, which can be labeled inexpensively but ambiguously. In some MI problems, finer-granularity instance labels can be obtained, which are less ambiguous but more costly. This paper motivates a novel active learning framework that allows MI learners to query and learn from labels at mixed levels of granularity.
    pdf · code · data
  • Ranking Biomedical Passages for Relevance and Diversity
    A. Goldberg, D. Andrzejewski, J. Van Gael, B. Settles, X. Zhu, and M. Craven
    Text Retrieval Conference (TREC), 2007
    An information retrieval system for biomedical text, focused on query generation and result ranking using a PageRank-style algorithm. The proposed ranker encourages both relevance and diversity in top ranked items, by turning retrieved items into absorbing states on a graph.
    pdf · code
  • Classifying Biomedical Articles by Making Localized Decisions
    T. Brow, B. Settles, and M. Craven
    Text Retrieval Conference (TREC), 2006
    This paper presents a variety of machine learning approaches that exploit document-passage relationships both in classification and in learning. Results support our hypothesis that, for some text classification tasks, only certain passages of text are relevant to the task at hand.
  • ABNER: An Open Source Tool for Automatically Tagging Genes, Proteins, and Other Entity Names in Text
    B. Settles
    Bioinformatics, 21(14):3191-3192. 2005
    An introduction to ABNER, a state-of-the-art, open-source, biomedical information extraction tool written in Java. It works stand-alone or as an API for inclusion in more sophisticated information management systems.
    pdf · software
  • Biomedical Named Entity Recognition Using Conditional Random Fields and Rich Feature Sets
    B. Settles
    International Joint Workshop on Natural Language Processing in Biomedicine and Its Applications (NLPBA), pages 104-107. 2004
    This paper motivates biomedical named entity recognition using conditional random fields (CRFs) with a variety of orthographic and automatically induced semantic features. It was one of the top performing approaches in the NLPBA shared task evaluation.

© 2024. All rights reserved.