Weekly Seminar: Graduate Paper Presentations

March 06, 2024

Flyer

Where: TMCB 1170

When: February 29th @11am

Come watch three graduate students present on their papers!

Garrett Smith

Talk title: “I Know I'm Being Observed”: Video Interventions to Educate Users about Targeted Advertising on Facebook

Abstract: Recent work explores how to educate and encourage users to protect their online privacy. We tested the efficacy of short videos for educating users about targeted advertising on Facebook. We designed a video that used an emotional appeal to explain the risks associated with targeted advertising (fear appeal) and demonstrated how to use the associated ad privacy settings (digital literacy). We also designed a version of this video that additionally showed viewers their personal Facebook ad profile, facilitating personal reflection on how they are currently being profiled (reflective learning). We conducted an experiment (n = 127) in which participants watched a randomly assigned video, and we measured the impact over the following 10 weeks. We found that these videos significantly increased user engagement with Facebook advertising preferences, especially for those who viewed the reflective learning content. However, those who watched only the fear appeal content were more likely to disengage from Facebook as a whole.

Lawry Sorenson

Talk Title: Pretraining in Video-Guided Machine Translation

Abstract: Video-Guided Machine Translation (VMT) is a newer subfield of multimodal machine translation in which translation models are given video context to help disambiguate input text. The field suffers from a lack of relevant data due to the cost of aligning video with correlated text. In our project, we experiment with the new monolingual Movie Audio Description (MAD) dataset as a pretraining task to improve downstream translation. We evaluate performance on the existing baseline for VMT, the VaTeX dataset. We find that pretraining on lexically rich text improves model performance more than training on additional videos.

Hao Yu

Talk Title: Asynchronous Signalling in Spike Neural Networks: Enabling On-Chip Training with Intrinsic Temporal Learning Capacity

Abstract: High-fidelity modeling of complex neural processes requires algorithms that mimic brain properties, including spike-timing-dependent plasticity, temporal attributes of the synaptic cleft, and the influence of modulatory neurotransmitters such as dopamine and GABA. In support of this, we propose a novel machine learning algorithm, the Asynchronous Spike Neural Network (A-SNN), with temporal learning capacities and a biologically inspired weight update algorithm. The algorithm is well suited to on-chip training, with low latency and space consumption. We validate its effectiveness on two small-scale learning tasks and show that it outperforms the well-known LSTM temporal learning algorithm, making it one of the first SNN implementations to outperform traditional machine learning algorithms. Our algorithm shows promise on small-scale tasks; however, additional development is needed before it can scale to more complex data.