diff --git a/data/xml/2024.teachingnlp.xml b/data/xml/2024.teachingnlp.xml
index 5064ea76c2..0b9c2ee8d9 100644
--- a/data/xml/2024.teachingnlp.xml
+++ b/data/xml/2024.teachingnlp.xml
@@ -151,7 +151,7 @@
       A Prompting Assignment for Exploring Pretrained <fixed-case>LLM</fixed-case>s
-      CarolynAndersonWellesley College
+      Carolyn JaneAndersonWellesley College
       81-84
       As the scale of publicly-available large language models (LLMs) has increased, so has interest in few-shot prompting methods. This paper presents an assignment that asks students to explore three aspects of large language model capabilities (commonsense reasoning, factuality, and wordplay) with a prompt engineering focus. The assignment consists of three tasks designed to share a common programming framework, so that students can reuse and adapt code from earlier tasks. Two of the tasks also involve dataset construction: students are asked to construct a simple dataset for the wordplay task, and a more challenging dataset for the factuality task. In addition, the assignment includes reflection questions that ask students to think critically about what they observe.
       2024.teachingnlp-1.12