Scientific and Creative Analogies in Pretrained Language Models
Abstract
This paper examines how analogy is encoded in large-scale pretrained language models (LMs) such as BERT and GPT-2. Existing analogy datasets typically focus on a limited set of analogical relations and on analogies between highly similar domains. As a more realistic setup, we introduce the Scientific and Creative Analogy dataset (SCAN), a novel analogy dataset containing systematic mappings of multiple attributes and relational structures across dissimilar domains. Using this dataset, we test the analogical reasoning capabilities of several widely used pretrained LMs. We find that state-of-the-art LMs achieve low performance on these complex analogy tasks, highlighting the challenges still posed by analogy understanding.
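To make the evaluation setup concrete, the sketch below shows one common way to probe a pretrained LM for analogy completion: score each candidate completion of a cross-domain mapping prompt by its sequence log-probability and pick the highest-scoring one. This is a minimal illustration, not the authors' exact protocol; the prompt template, candidate list, and helper function are assumptions for demonstration.

```python
# Minimal sketch of zero-shot analogy probing with GPT-2 via HuggingFace
# transformers. Prompt wording and candidates are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sequence_log_prob(text: str) -> float:
    """Summed log-probability GPT-2 assigns to a text sequence."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=input_ids returns the mean cross-entropy
        # over the (len - 1) predicted tokens.
        loss = model(ids, labels=ids).loss
    # Negate and rescale the mean loss to recover the summed log-prob.
    return -loss.item() * (ids.size(1) - 1)

# A canonical cross-domain mapping (solar system -> atom), SCAN-style.
prompt = "If the sun is like the nucleus, then a planet is like {}."
candidates = ["an electron", "a proton", "a molecule", "a star"]

scores = {c: sequence_log_prob(prompt.format(c)) for c in candidates}
print(max(scores, key=scores.get))  # highest-scoring candidate under the LM
```

Scoring full sequences rather than single next tokens keeps the comparison fair when candidates differ in token length, which is typical for cross-domain analogy targets.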
Related Papers
- Partially isomorphic generalization and analogical reasoning (1994)
- Leading by Analogy (2004)
- Analogical Reasoning in Physics Education (2003)
- On Reasoning Forms of the Investigation of Combining Cases (2005)
- Analogy-Based Legal Reasoning (2022)