[PromptEng] First International Workshop on Prompt Engineering for Pre-Trained Language Models
Abstract
The recent achievements and availability of Large Language Models have paved the road to a new range of applications and use-cases. Pre-trained language models are now being deployed at scale in many fields from which they were previously absent. More specifically, the progress made by causal generative models has opened the door to using them through textual instructions, a.k.a. prompts. Unfortunately, the performance of these prompts is highly dependent on the exact phrasing used, and practitioners therefore need to adopt fail-retry strategies. This first international workshop on prompt engineering aims at gathering practitioners (from both academia and industry) to exchange good practices, optimizations, results, and novel paradigms for the design of efficient prompts that make use of LLMs.