User-LLM: Efficient LLM Contextualization with User Embeddings
Top 1% of 2025 papers
Abstract
Large language models (LLMs) hold immense potential for personalized AI, but effectively incorporating user history for personalized responses remains challenging. Existing methods often convert user timelines into lengthy text descriptions, leading to high computational cost and potential loss of nuanced information. Inspired by the successful integration of LLMs with other modalities, such as images, we introduce User-LLM, a novel framework that treats user timelines as a distinct modality and leverages user embeddings for efficient LLM contextualization. User embeddings, generated by a pretrained user encoder, capture latent user behaviors and interests from diverse interaction data. By integrating these embeddings with LLMs through cross-attention, User-LLM enables LLMs to dynamically adapt their responses to individual user history. Our evaluation on three diverse datasets (MovieLens, Amazon Review, and Google Local Review) demonstrates that User-LLM achieves substantial computation reduction (up to 78.1X) compared to text-prompt-based methods, without sacrificing performance. Importantly, User-LLM maintains or even improves performance on tasks requiring deep user understanding, particularly with long user histories, highlighting its effectiveness in efficiently capturing and leveraging user information for personalized responses.
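The core mechanism the abstract describes — LLM token states attending over user-timeline embeddings via cross-attention — can be illustrated with a minimal NumPy sketch. This is a hypothetical single-head toy, not the paper's implementation: the shapes, weight names (`Wq`, `Wk`, `Wv`), and the `cross_attend` helper are all illustrative assumptions; in User-LLM the user embeddings would come from a pretrained user encoder and the attention would sit inside the LLM's transformer layers.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(hidden, user_emb, Wq, Wk, Wv):
    """Toy single-head cross-attention: LLM token states act as queries,
    user-timeline embeddings act as keys and values (illustrative only)."""
    q = hidden @ Wq                              # (T, d) token queries
    k = user_emb @ Wk                            # (U, d) user-history keys
    v = user_emb @ Wv                            # (U, d) user-history values
    scores = q @ k.T / np.sqrt(q.shape[-1])      # (T, U) scaled dot products
    return softmax(scores, axis=-1) @ v          # (T, d) user-contextualized states

rng = np.random.default_rng(0)
T, U, d = 4, 8, 16                 # 4 tokens, 8 history embeddings, width 16
hidden = rng.normal(size=(T, d))   # stand-in for LLM hidden states
user_emb = rng.normal(size=(U, d)) # stand-in for user-encoder outputs
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

ctx = cross_attend(hidden, user_emb, Wq, Wk, Wv)
print(ctx.shape)  # → (4, 16)
```

Because the user history is attended over as a fixed set of `U` embeddings rather than serialized into the prompt, sequence length (and hence attention cost) no longer grows with the length of the raw timeline — the source of the computation reduction the abstract reports.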