Evaluating learned feature aggregators for writer retrieval
Abstract
Transformers have emerged as the leading models in natural language processing, computer vision, and multi-modal applications due to their ability to capture complex relationships and dependencies in data. In this study, we explore transformers as feature aggregators in patch-based writer retrieval, aiming to improve retrieval quality by effectively summarizing relevant features from image patches. Our investigation underscores the difficulty of this task: despite experiments with various model configurations, augmentations, and learning objectives, the performance of transformers leaves room for improvement. This finding highlights the challenges in this domain and the need for further research to enhance their effectiveness. By documenting the limitations of transformers in this context, our study contributes to the growing body of knowledge on writer retrieval and offers insights for future research in this area.
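To make the aggregation step concrete, the sketch below shows single-query attention pooling over a set of patch descriptors, a minimal stand-in for the transformer-style feature aggregators studied in the paper. The dimensions, the function name `attention_pool`, and the random "learned" query vector are illustrative assumptions, not details from the paper.

```python
import numpy as np

def attention_pool(patches, query):
    """Aggregate N patch descriptors (N x d) into one global
    descriptor (d,) via attention weights from a single query."""
    d = patches.shape[1]
    scores = patches @ query / np.sqrt(d)   # scaled dot-product scores, (N,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                # softmax over patches
    return weights @ patches                # weighted sum -> global descriptor

rng = np.random.default_rng(0)
patches = rng.normal(size=(16, 64))  # 16 patch descriptors, 64-dim each
query = rng.normal(size=64)          # hypothetical learned query vector
global_desc = attention_pool(patches, query)
print(global_desc.shape)  # (64,)
```

In a full retrieval pipeline, the resulting global descriptor would be compared across documents (e.g. by cosine similarity) to rank writers; a real transformer aggregator would stack several such attention layers with learned projections.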