CS Weekly Seminar Series
September 20, 2021

Jason Wiese is an Assistant Professor in the School of Computing at the University of Utah, where he leads the Personal Data and Empowerment Lab (PeDEL). His research takes a user-centric perspective on personal data, everyday computing experiences, and end-user empowerment. His work spans personal informatics, accessibility, privacy, user-centered design, and real-world deployments. Dr. Wiese’s research excellence has been recognized by paper awards at DIS, CHI, and EICS, and through individual awards, including recognition as a Yahoo Fellow in 2014, the Stu Card Fellowship in 2012, and the Yahoo! Key Scientific Challenges Award in 2011. He publishes in top Computer Science and HCI venues, including CHI, DIS, CSCW, and UbiComp/IMWUT. He received his Ph.D. in Human-Computer Interaction from Carnegie Mellon University in 2015.

This coming Thursday, September 23rd at 11am in 1170 TMCB, Jason Wiese will be speaking on: “Not *just* another user study: Uncovering the systematic shortcomings of familiar research methods.”

Whom does computing serve, whom does it underserve, and do we even know whom we’re missing? Human-computer interaction has matured as a research community over the last two decades, with the goal of understanding the effects of technology on people. Through that maturity, the research methods we use in the field have mostly stabilized around a familiar and reliable set of qualitative and quantitative methods that help us take a broad, human-centered perspective. But these methods also have limits on what they can tell us about how people might engage with technology, and if we as a field fail to inspect those limits, we run the risk of systematically ignoring the needs of end users.

In this talk I explore methodological limitations we have encountered in my research group’s recent projects, including work with individuals who have had a spinal cord injury and a project examining air quality data with parents of asthmatic children. In both cases, there were relatively obvious accommodations we needed to make to conduct research with these participants. However, something more subtle was lurking underneath: in both cases there were also deeper methodological challenges that would have led to an incomplete picture of those user populations. I argue that researchers and practitioners in human-computer interaction, and more broadly across computing, have a responsibility to interrogate ourselves: to ask in earnest, “How do our methods fall short, and whom do we harm in those shortcomings?”