Eliciting Symbol Binding in Language Models Without Model Tuning
January 27, 2022
Wednesday, February 2nd at 2pm, Summit Room 3346 TMCB
Advisor: David Wingate
MS Thesis Proposal for Josh Robinson
Abstract:
Pre-trained language models have achieved state-of-the-art performance on a variety of natural language processing tasks. Researchers have thus taken interest in (1) probing the linguistic and world knowledge these models contain and (2) improving their performance and ease of use. Recent methods often require task-specific prompts or fine-tuning to achieve these goals. Task-specific prompt engineering can be avoided by converting tasks into multiple choice question answering, but the symbol binding problem (getting the model to associate each answer option with the symbol that labels it) makes this surprisingly difficult. While recent work like UniFew has successfully elicited symbol binding behavior from a T5 model, doing so required additional training of model parameters on labeled data. I plan to focus my master's thesis research on prompt-based symbol binding: eliciting symbol binding behavior from a language model without the need to fine-tune. In addition to determining whether such a prompt exists, I would like to investigate whether one can be found algorithmically, without labeled data and/or direct access to model weights.
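To make the setup concrete, here is a minimal sketch of prompt-based symbol binding with a frozen, off-the-shelf causal language model via Hugging Face transformers. The model choice ("gpt2") and the prompt format are illustrative assumptions, not the proposal's actual method: a task is recast as a multiple choice question, and only the answer symbols are scored, with no parameter updates.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Off-the-shelf model, no fine-tuning; "gpt2" is an illustrative choice.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# A sentiment task recast as multiple choice question answering.
# (This prompt format is hypothetical, for illustration only.)
prompt = (
    "Question: What is the sentiment of the review 'An utter delight'?\n"
    "A. Positive\n"
    "B. Negative\n"
    "Answer:"
)

with torch.no_grad():
    logits = model(**tokenizer(prompt, return_tensors="pt")).logits[0, -1]
probs = torch.softmax(logits, dim=-1)

# Symbol binding succeeds when the model concentrates probability mass on
# the *symbol* labeling the correct option, not on the option text itself.
for symbol in (" A", " B"):
    token_id = tokenizer.encode(symbol)[0]
    print(symbol.strip(), probs[token_id].item())

Because scoring reduces to a single forward pass and a comparison of symbol probabilities, this framing needs no task-specific prompt engineering beyond the shared multiple choice template, which is what makes the symbol binding behavior itself the object of study.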