Collaboration with Microsoft Research
Paper writing phase
Study type: interview and design
Key words: ML, AI, research ethics, limitations, reflection, critical HCI
Methods: thematic analysis, critical design, interviews
Interviewed 31 ML researchers and experts to prototype and iterate on an "ML Research Limitations Tool" that helps authors identify and appropriately express limitations in peer-reviewed papers.
Collaboration with CU Boulder
Data analysis phase
Study type: systematic lit review
Key words: ethics, computer science, education
Methods: grounded theory, thematic analysis
Summative study exploring how ethics pedagogy has evolved in computer science classrooms across the SIGCHI, CS, and ACM communities.
Collaboration with CU Boulder
Study design phase
Study type: survey study
Key words: ethics, computer science, education, attitudes
Methods: survey, quantitative analysis, thematic analysis
Formative study exploring the attitudes that computing educators hold toward incorporating ethics into their classrooms, including roadblocks and challenges.
Collaboration with CU Boulder and Kiva
Study design phase
Study type: semi-structured interviews
Methods: interviews, thematic analysis, algorithmic design
Assessing algorithmic design approaches that promote provider fairness in a multistakeholder, high-stakes ML recommendation system, and designing and evaluating transparency interventions for users.
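One common family of provider-fairness approaches is re-ranking: trading off item relevance against how concentrated the recommendation list is among a few providers. The sketch below is purely illustrative, with hypothetical names and a hypothetical penalty scheme; it is not the algorithm under study in this project.

```python
# Illustrative sketch of greedy re-ranking for provider fairness.
# All names and the alpha penalty are hypothetical, not this study's design.

def rerank_for_provider_fairness(candidates, k, alpha=0.5):
    """candidates: list of (item_id, provider_id, relevance_score).
    Returns the top-k items, penalizing each candidate's score by how
    many items from its provider are already in the list."""
    selected = []
    provider_counts = {}
    remaining = list(candidates)
    while remaining and len(selected) < k:
        # Score each remaining candidate with a provider-repetition penalty.
        def adjusted(c):
            _, provider, score = c
            return score - alpha * provider_counts.get(provider, 0)
        best = max(remaining, key=adjusted)
        remaining.remove(best)
        selected.append(best)
        provider_counts[best[1]] = provider_counts.get(best[1], 0) + 1
    return selected
```

With alpha = 0, this reduces to ranking by relevance alone; larger alpha values spread exposure across more providers at some cost to relevance, which is the core tension such designs navigate.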
Collaboration with IEEE Standards Association
Data collection phase
Study type: semi-structured interviews and webinars
Methods: interviews, thematic analysis, group discussions
Analyzing current approaches to measuring success in AI and discussing alternative metrics that incorporate the triple bottom line (people, planet, profit).
Study Completed: Spring 2021
Study type: semi-structured interview
Key words: folk theories, fairness, recommender systems, algorithms, transparency, explanations
Methods: interviews, thematic analysis, design
Exploratory, formative study of what users need from explanations in a fairness-aware recommender system.
Study Completed: Spring 2021
Study type: semi-structured interview
Key words: XAI, transparency, explanations, predictive resource allocation
Methods: interviews, thematic analysis, design
Exploratory, formative study of what users of a predictive maintenance system need from XAI while preserving institutional logic.
Study Completed: Fall 2020
Study type: curriculum development and evaluation
Key words: Computer science, curriculum, pedagogy, ethics, evaluation
Methods: curriculum development, survey design, teaching and evaluation
Formative study to incorporate ethics curricula and assignments into introductory CS classes. Future studies are being planned.
Study Paused: Spring 2021
Study type: Participatory, Community-Based Design Workshops
Key words: participatory design, explanation, transparency, education
Methods: design workshops, survey, community based design, interviews
Working with high school students to explore folk theories of algorithmic literacy and speculative futures for transparency/explainability design.
Study Paused: Fall 2020
Study type: structured interview
Key words: radical, AI, resistance, power, ethics, responsible technology
Methods: interviews, thematic analysis, systematic literature review
Exploratory, summative study seeking to co-define "Radical AI" with the responsible technology community, drawing on interviews from the Radical AI podcast.
Study Completed: Fall 2019
Study type: empirical machine learning experiments and statistical analysis
Methods: recommender system algorithmic design and deployment, statistical testing, coding
Exploratory study testing various metrics for auditing a recommender system for sexist discrimination and exploring potential causes and solutions.
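Audits like this typically quantify how recommendation exposure is distributed across groups of content creators. The sketch below shows one common, rank-discounted exposure metric; the function name, grouping scheme, and discount choice are assumptions for illustration, not the metrics actually tested in this study.

```python
# Illustrative exposure metric for a recommender audit. The discount
# (1 / log2(rank + 1)) and all names are hypothetical choices, not
# this study's actual metrics.
import math
from collections import Counter

def exposure_by_group(recommendations, author_group):
    """recommendations: one ranked item list per user.
    author_group: maps item -> group label (e.g., author gender).
    Returns each group's share of total rank-discounted exposure."""
    exposure = Counter()
    for rec_list in recommendations:
        for rank, item in enumerate(rec_list, start=1):
            # Higher-ranked items contribute more exposure.
            exposure[author_group[item]] += 1.0 / math.log2(rank + 1)
    total = sum(exposure.values())
    return {g: e / total for g, e in exposure.items()}
```

A statistical test (e.g., a permutation test on the exposure gap between groups) would then assess whether an observed disparity is larger than expected by chance.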
Study Completed: Fall 2018
Study type: curriculum development
Methods: algorithmic development, curriculum development, design
Creating lesson plans that teach novice data scientists about fairness concerns, fairness metrics, and auditing ML classification systems for fairness.
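Two fairness metrics commonly covered in introductory lessons like these are demographic parity and equal opportunity. The minimal sketch below computes both from lists of labels, predictions, and group memberships; the exact metrics, names, and encodings here are illustrative assumptions, not the curriculum's actual materials.

```python
# Teaching sketch of two classification fairness metrics. Names and
# encodings (1 = positive prediction/label) are illustrative assumptions.

def demographic_parity_difference(y_pred, groups):
    """Largest gap in positive-prediction rates between any two groups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(preds) / len(preds)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

def true_positive_rate_gap(y_true, y_pred, groups):
    """Largest gap in true-positive rates across groups
    (an equal-opportunity check)."""
    tprs = {}
    for g in set(groups):
        pairs = [(t, p) for t, p, gg in zip(y_true, y_pred, groups) if gg == g]
        positives = [p for t, p in pairs if t == 1]
        tprs[g] = sum(positives) / len(positives) if positives else 0.0
    vals = sorted(tprs.values())
    return vals[-1] - vals[0]
```

A gap of 0 indicates parity on that metric; an auditing exercise would compute these for a trained classifier and discuss which notion of fairness is appropriate for the application.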