We investigated how to harness users’ everyday habits to create the YouTube Bias Tracker, a tool that encourages users to flag problematic content on YouTube to improve the platform’s algorithm.
This puts users in control of what they see on social media and online streaming platforms, and consequently reduces the occurrence of bias.
Algorithmic bias has been cropping up all over the news recently, from the racist Twitter cropping algorithm to gender discrimination in Apple credit cards. The problem with bias in AI is that it is often emergent, meaning that it can surface unexpectedly in the system’s context of use or as the algorithms learn from data over time (prime example: Microsoft’s Tay Twitter bot). Emergent bias is notoriously hard to anticipate, so researchers at Carnegie Mellon University, led by my professor Motahhare Eslami, have been investigating the power of everyday algorithm auditing to mitigate the cultural blind spots of machine learning development teams.
My class in User Centered Research and Evaluation at Carnegie Mellon University was tasked with the open-ended problem of using user research methods to identify stakeholder needs and propose a solution to detect and mitigate harmful algorithmic behaviors.
To understand how to audit algorithmic bias on YouTube and social media, we conducted a range of generative and evaluative research methods, including user interviews, to design a solution that fits our users’ needs.
Our user interviews revealed three key insights that informed our solution.
1/ Users have different reasons for disliking videos. Therefore, there is no consistent metric for gauging if others believe a video is problematic.
“I only dislike videos that are spreading misinformation or low-quality.”
“I only use dislikes if I’m not interested or think it’s harmful.”
2/ Because of the nature of social media, users often just scroll and don’t take action. Any reporting functionality needs to be fast, convenient, and prominently integrated into the interface users are familiar with.
“I ignore a lot of content since a lot of posts on social media are rough”
“If I see problematic video from channels on my recommended feed, I just think the channel is stupid and won’t do anything about it”
3/ People expect reporting to have an effect. Users need to be assured of the efficacy of their actions — whether that means informing the algorithm, or simply flagging errors and expressing their opinions on the video. The effect of their actions, however, is often unclear to them due to a lack of documentation and feedback.
“I prefer to report directly to the platform as a way to inform the algorithm”
“I just hope enough people report it to get it taken down”
With these insights, we came up with a new YouTube metric: a Bias Indicator. Users can click this button to quickly flag bias they see in a video. If enough users indicate that a video is biased, the video is taken down. The button not only helps the platform track problematic videos and update the recommendation algorithm, but also gives users transparency into other viewers’ opinions through color-coded indicators (red, orange, green) and a report count. The Bias Indicator also comes with clear, concise documentation that explains the appropriate context for using it and reassures users about the efficacy and impact of their reports.
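To make the mechanics concrete, here is a minimal sketch of how the indicator’s color coding and takedown escalation could work. The thresholds, names, and takedown rule below are illustrative assumptions of ours, not an actual YouTube implementation.

```typescript
// Hypothetical sketch of the Bias Indicator's status logic.
// Thresholds and names are illustrative assumptions, not YouTube's implementation.

type BiasStatus = "green" | "orange" | "red";

interface BiasIndicatorState {
  status: BiasStatus;        // color shown next to the indicator button
  reportCount: number;       // number of bias reports, visible to users
  flaggedForReview: boolean; // true once reports cross the escalation threshold
}

// Assumed thresholds: share of viewers who marked the video as biased.
const ORANGE_THRESHOLD = 0.05; // 5% of viewers
const RED_THRESHOLD = 0.15;    // 15% of viewers

function computeBiasIndicator(reportCount: number, viewCount: number): BiasIndicatorState {
  const ratio = viewCount > 0 ? reportCount / viewCount : 0;

  let status: BiasStatus = "green";
  if (ratio >= RED_THRESHOLD) {
    status = "red";
  } else if (ratio >= ORANGE_THRESHOLD) {
    status = "orange";
  }

  return {
    status,
    reportCount,
    // A "red" video is escalated for review and potential takedown, and could
    // also feed back into the recommendation algorithm as a negative signal.
    flaggedForReview: status === "red",
  };
}

// Example: 1,200 bias reports on a video with 10,000 views -> "red", flagged.
console.log(computeBiasIndicator(1200, 10_000));
```

The aggregate ratio (rather than a raw count) is one way to keep the indicator meaningful across videos of very different sizes; the exact thresholds would need to be tuned and guarded against brigading in any real deployment.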
At a high level, our intervention aims to empower users to audit bias by providing accessible auditing actions and transparency about the impact of their actions. Thank you for tuning in and please feel free to contact me with any questions at naraya@andrew.cmu.edu or on LinkedIn.