Turning Tables: Generating Examples from Semi-structured Tables for Endowing Language Models with Reasoning Skills
Abstract
Models pre-trained with a language modeling objective possess ample world knowledge and language skills, but are known to struggle in tasks that require reasoning. In this work, we propose to leverage semi-structured tables, and automatically generate at scale question-paragraph pairs, where answering the question requires reasoning over multiple facts in the paragraph. We add a pre-training step over this synthetic data, which includes examples that require 16 different reasoning skills such as number comparison, conjunction, and fact composition. To improve data efficiency, we sample examples from reasoning skills where the model currently errs. We evaluate our approach on three reasoning-focused reading comprehension datasets, and show that our model, PReasM, substantially outperforms T5, a popular pre-trained encoder-decoder model. Moreover, sampling examples based on model errors leads to faster training and higher performance.
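The abstract's error-driven sampling idea can be sketched concretely: pick a reasoning skill with probability proportional to the model's current error rate on that skill, then draw a synthetic example from that skill's pool. The following Python sketch is illustrative only; the function and variable names (`sample_training_example`, `example_pools`, `error_rates`) are hypothetical, and the paper's exact sampling scheme may differ in how error rates are estimated and smoothed.

```python
import random

def sample_training_example(example_pools, error_rates):
    """Error-driven sampling sketch: choose a reasoning skill with
    probability proportional to the model's current error rate on it,
    then draw one synthetic example from that skill's pool.

    example_pools: dict mapping skill name -> list of (question, paragraph, answer)
    error_rates:   dict mapping skill name -> float in [0, 1], e.g. measured
                   on a held-out set of synthetic examples per skill
    """
    skills = list(example_pools)
    weights = [error_rates[s] for s in skills]
    # If the model is (nearly) perfect on every skill, fall back to uniform.
    if sum(weights) == 0:
        weights = [1.0] * len(skills)
    skill = random.choices(skills, weights=weights, k=1)[0]
    return skill, random.choice(example_pools[skill])


# Hypothetical usage with two of the 16 skills named in the abstract:
pools = {
    "number comparison": [("Who scored more, A or B?", "paragraph...", "A")],
    "conjunction": [("Which team won in 1998 and 2002?", "paragraph...", "X")],
}
errors = {"number comparison": 0.35, "conjunction": 0.10}
skill, example = sample_training_example(pools, errors)
```

Under this scheme, skills the model has already mastered are sampled rarely, concentrating pre-training compute on the skills where it still errs, which is consistent with the faster training and higher performance the abstract reports.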