Improving Domain-Specific Retrieval by NLI Fine-Tuning
Abstract
The aim of this article is to investigate the fine-tuning potential of natural language inference (NLI) data to improve information retrieval and ranking. We demonstrate this for both English and Polish, using data from one of the largest Polish e-commerce sites and selected open-domain datasets. We employ both monolingual and multilingual sentence encoders fine-tuned with a supervised method utilizing contrastive loss and NLI data. Our results show that NLI fine-tuning improves model performance in both tasks and both languages, with the potential to improve mono- and multilingual models. Finally, we investigate uniformity and alignment of the embeddings to explain the effect of NLI-based fine-tuning in an out-of-domain use-case.
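The supervised contrastive objective and the alignment/uniformity diagnostics mentioned in the abstract can be sketched in a few lines. Below is a minimal PyTorch sketch, assuming a SimCSE-style setup in which each NLI premise is paired with an entailed hypothesis (positive) and a contradicted hypothesis (hard negative); the function names, batch layout, and temperature of 0.05 are illustrative assumptions, not the paper's exact configuration. The alignment and uniformity metrics follow the standard definitions of Wang & Isola (2020).

```python
# Hypothetical sketch of NLI-based contrastive fine-tuning losses and
# embedding diagnostics; not the paper's exact implementation.
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(anchor, positive, negative, temperature=0.05):
    """InfoNCE-style loss over NLI triplets.

    anchor/positive/negative: [batch, dim] embeddings of premises,
    entailed hypotheses, and contradicted hypotheses (hard negatives).
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negative = F.normalize(negative, dim=-1)
    # Cosine similarities of each anchor to every positive and hard negative.
    sim_pos = anchor @ positive.T / temperature   # [batch, batch]
    sim_neg = anchor @ negative.T / temperature   # [batch, batch]
    logits = torch.cat([sim_pos, sim_neg], dim=1)  # [batch, 2*batch]
    # The matching positive sits on the diagonal of the first block.
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)

def alignment(x, y, alpha=2):
    """Alignment: mean distance between embeddings of positive pairs."""
    x, y = F.normalize(x, dim=-1), F.normalize(y, dim=-1)
    return (x - y).norm(p=2, dim=1).pow(alpha).mean()

def uniformity(x, t=2):
    """Uniformity: log of the mean pairwise Gaussian potential."""
    x = F.normalize(x, dim=-1)
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()
```

Lower alignment means positive pairs stay close after fine-tuning, while lower (more negative) uniformity means embeddings spread more evenly over the unit hypersphere; tracking both on out-of-domain data is one way to interpret why NLI fine-tuning transfers.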