Addressing Bias in our Products
This page represents the one-pager developed prior to the formation of the Inclusive Product Development Working Group.
Why are we building?
In the early stages of determining which features to build, we need to be explicit and transparent about why we are making changes or creating something new. Are we enhancing or creating tools for our existing users? If so, which existing users are we addressing, and in what context? Are we trying to create tools to engage voices that are not currently represented? Do we have data indicating which voices are overrepresented or underrepresented? If not, why not, and are we pursuing conversations and processes that will change that?
We must start by understanding our why, and use that as a lens to determine how equity will be measured.
Our current process across multiple feature teams does not begin with a thorough baseline analysis that incorporates qualitative information about who is being left out of the narrative and about the ways our tools may currently inhibit readers and contributors. In addition, when engaging in community consultation, we don't ensure feedback is equitable by moving beyond cohorts of the loudest and most established voices in the movement. We do not prioritize engaging voices outside of this small echo chamber, or beyond one-off editors on usertesting.com.
Who is building?
The word "diversity" is routinely used to describe the makeup of a team, but diversity alone isn't enough to ensure equitable product and feature design. The goal should be to be representative of the world at all levels, from individual contributor to upper management. When a team represents the world it is building for, it helps identify blind spots we wouldn't otherwise know existed. There are short- and long-term solutions to rectify the lack of representation in our department.
The initial step is to perform a comprehensive audit of team composition to establish our current baseline. We are a global organization and must identify how our teams measure up against the composition of the world. Using the audit data on team composition and function, it is critical to follow up with a gap analysis.
The gap analysis becomes a framework that can be applied to multiple areas of the work. In the case of recruitment, data from the analysis will allow hiring managers to identify where their teams have deficits in representation. With this information, a more intentional recruitment process can be launched, including specialized recruitment from job boards in areas where we lack representation. A thorough gap analysis will also help identify opportunities to partner with organizations that support diverse youth in securing internship positions. However, active recruitment is only the first step toward increased representation on teams. We must also ensure that hiring panels are not homogeneous (while avoiding tokenism) and that accountability processes are in place for hiring managers.
How are we building?
When entering the building and testing phase, are we engaging diverse Wikipedian user groups (e.g., Women in Red, AfroCrowd) to ensure equitable impact? When recruiting ambassadors, are we ensuring they aren't engaged solely in European countries? If we are trying to engage an editor from French Wikipedia, are we considering French-speaking African countries, or are we relying on defaults that prioritize European insights? Is QA testing conducted on devices slower than iPhones or modern Androids, to see how our features fare in low-bandwidth environments? Outside of the product lens, when evaluating the support we provide to the movement around convenings, is the support provided to WikiIndaba comparable to that of other Wiki events? Are we using regional events like WikiIndaba as opportunities to get input from the community as we plan our projects?
There are still many questions that continually need to be addressed:
- When designing products/features, are we putting early stage designs in front of representative audiences and creating spaces and processes that would allow them the time to weigh in meaningfully?
- Are we translating our tests into underserved/underrepresented languages and actively addressing accessibility gaps?
- Do we have a formal way of assessing risks raised by community members so that we can account for "edge cases" and have a mitigation and contingency plan?
- Are the personas missing important identity details and is the connection being made to our existing and aspirational users?
- Are the personas being introduced, coupled with adequate quantitative research, early enough in the software development lifecycle?
- Are we watching how users behave with our tools and are we willing to prioritize intervening when it is harmful?
- Are we ready to regularly incorporate these topics into team ceremonies through discussion in meetings, attending training, and holding ourselves and our peers accountable?
Will leadership and management allow teams to prioritize Equity and Fairness over speed of development? Will leadership support and provide resources for a team-based metric that actively checks for bias in our projects?
How do we move forward?
- Leadership will need to make the explicit decision that they are ready to prioritize and provide resources for combating bias in our projects and products
- Distribute a questionnaire that incorporates the above questions and have each team collectively and honestly answer the questions with the understanding there will be no backlash for responses provided.
- Get support on the questionnaire's creation from the Equity Council and Design Research
- Leadership publicly shares insights from the questionnaire and supports teams as they create at minimum one metric to improve an issue area exposed in the process of completing their questionnaire
- Teams provide routine updates on their progress towards their metric in each Quarter in Review
- Iterate