A Critical Look at the Practitioners’ Discourse and the Way Forward
In September 2019 Mike Monteiro, director of Mule Design and an authority in his field, gave a provocative keynote at EuroIA, the international conference for information architects and UX designers, held in Riga, Latvia. The title was “Let’s destroy Silicon Valley” and his message was relatively simple: everything that is going wrong in tech and the world was designed that way. His criticism was broad in scope and connected to several current challenges, ranging from hate speech on social media, cyberbullying, and algorithmic manipulation to racism, sexism, and privacy invasion. His not-so-subtle claim was that all these nasty things are outcomes of conscious design decisions.
“Designers are gatekeepers and must seek maximum positive impact; they can’t do so without taking a stance and having a voice in the debate”
Monteiro took no prisoners in his vocal critique as he directly accused the audience, i.e., professionals from the field of digital design, of being enablers of these harmful impacts of tech. In harsh but clear words he tried to get one important point across: that designers carry great responsibility and should no longer hide behind the “narrow” focus of their daily work, which usually revolves around quite practical design questions. According to Monteiro, who wrote a book about the issue, designers are as culpable for the harms of technology as their managers and CEOs. He postulated that saying “I am just a designer” may be not only naïve but downright irresponsible. At the end of what some might call an epic rant, Monteiro made a plea for designers to be aware of the power they wield, to unite, and to resist unethical practices that have become so prevalent in the tech sector. Designers are gatekeepers and must seek maximum positive impact; they can’t do so without taking a stance and having a voice in the debate. Many in the audience agreed, while others felt unduly attacked and some even left, visibly upset.
“The same people felt a bit surprised, if not overwhelmed, that all these “wicked problems” suddenly fell into their responsibilities”
I was one of the very few attendees at the event who had no connection to the field of UX design. As an outsider, it was interesting to observe the reactions and to talk to a few attendees afterwards. Some felt Monteiro spelled out how they felt. Some thought his talk was out of place and that the conference was not the right forum for that kind of discussion. What stood out were the different degrees to which design practitioners paid attention to the critical issues that I often summarise as data risks and/or data malpractices. Several of my conversation partners had quite a profound understanding of what was going wrong in tech. Others didn’t seem to have paid much attention to these challenges or were only as aware of the problems as anybody else who follows the news. Often, the same people felt a bit surprised, if not overwhelmed, that all these “wicked problems” suddenly fell within their responsibilities. Monteiro’s talk stood in contrast to the vibe of the conference until then: positive, creative, and tech- and innovation-focused. Most were there to exchange experiences and ideas and to talk about trends, just as one would expect from a conference for designers. Many did not see this fundamental criticism of their profession coming.
What surprised me a bit was that I considered none of the things Monteiro talked about to be really new. He certainly amped up the visual and rhetorical delivery, but at its core he summarised issues that had been subject to critical discussion in academia at least since the early 2010s. One could say that discussing and analysing social impact and formulating criticism is the main job for many “academic types” like myself. However, I had assumed that practitioners in tech and design paid as much attention to the downsides of the digital transformation as their colleagues at university. Now I was no longer sure of that. It felt a bit as if he had burst my “intellectual bubble”. Since then, I have tried to better understand how much design practitioners talk about ethical challenges, how they make sense of them, and what solutions they propose. To narrow down the scope, I focus primarily on data bias as one concrete data risk. Below I summarise what I could gather about the UX and tech debate on data bias and discrimination in the digital society. It is not a representative quantitative study but rather an exploration of anecdotal findings from my very limited investigation so far. Still, this includes an overview of current themes and topics and a preliminary categorisation of bias-related challenges in UX.
Data Bias as a Challenge for UX
Looking at current discussions about biases and discrimination, especially in “big tech”, one can safely say that little has changed for the better in the past two years. However, it would be wrong to assume that designers were ignorant, lazy, or simply still unaware of the problem. The reality looks much more complicated. It has a lot to do with the complexity of the problem, its roots in culture and society, the way the digital economy works, and how long it simply takes to initiate tangible change.
“Data bias challenges often derive from a complex interplay between cultural biases, a lack of diversity in design teams, practical limitations, and economic pressures”
Data bias has always been an important topic for UX professionals. For starters, designing UX solutions that offer value takes thorough research on the target audience. Your observations are only useful if what you observe is actually representative of the people for whom you intend to devise a digital solution. UX researchers test a lot, and the quality of the results depends heavily on the chosen research design and a critical look at all the factors that may play a role, especially on the part of “the user” (e.g., their expectations, social environment, or technological ecosystem). Research is also resource-intensive, and you want to make sure not to waste time and money on results that are potentially misleading. Hence, a quick Google search will yield quite a few hits for websites that offer advice on how to deal with different forms of researcher bias: confirmation bias, availability bias, wording bias, Hawthorne effects, social desirability, etc. These are indeed research biases that may yield data unfit for developing a truly effective design. However, these are more technical types of biases that are not (always) directly connected to forms of exclusion and (involuntary) discrimination. Such more profound challenges often derive from a complex interplay between cultural biases, a lack of diversity in design teams, practical limitations, and economic pressures. Admittedly, many UX sites also list “cultural bias” as a trap to be avoided, but there it may concern a lack of cultural sensitivity when designing for foreign markets (e.g., from the USA to China), or it is described rather superficially as a problem that can hamper UX research; these sites do not really explain what the deeper roots are and how to lastingly change the status quo in professional practice. Taken together, in the professional discourse data bias is treated primarily as a challenge for UX research design.
Questions of racism, sexism, ableism, and other exclusions based on demographic factors are a bit more complicated to address. They often connect to the above-mentioned challenges but also ask UX designers to critically review their own assumptions about society and how they shape culture in their companies, i.e., how they define values, foster diversity, and engage in critical thinking on broader issues than “just” sampling. Addressing these broader issues asks designers to also acknowledge the responsibilities to society that Monteiro pointed to without mincing words.
The Underlying Problems
Over the past few years, many UX professionals have indeed taken a more critical view of the state of their profession and of how they can address the graver challenges related to exclusion and discrimination. On platforms such as Medium, thought leaders share their views and analyses and offer departure points for finding ways to rectify what’s going wrong, though developing solutions appears very difficult due to the complexity of the underlying problems. Looking at what has been put forward as the main causes of data bias beyond the individual choices of UX researchers, several issues stand out. First, there can be a mix of naivety, misunderstanding, and ignorance among tech creators and designers that prevents them from foreseeing how a digital solution may cause harm to specific groups of people. Racist or sexist algorithms are partially the outcome of a lack of empathy (if you’ve never experienced racism, it is difficult to envision racist uses of a technology without somebody telling you about it). Next, a deficit in critical thinking is equally impactful and directly relates to this. For example, designer Amrutha Palaniyappan rightly points out that the creators of Microsoft’s TAY chatbot grossly underestimated how toxic social media debates are. Other current examples are so-called neighbourhood apps such as Nextdoor, Amazon’s Neighbors, or Citizen. These locally oriented social media platforms allow communities to keep each other up to date, share local news and resources, and advertise local businesses, but also to report suspicious behaviour to the police. In several instances in the U.S.A., these apps have had a mixed impact: while they are very useful for some communities in increasing overall quality of life, in others they have become tools for racial profiling and exclusion. In some neighbourhoods, residents use the app to determine who is welcome to live next to them and who is not, often in clearly racist language.
In response, Nextdoor removed certain functions and added a reminder to be polite before a user posts a message to their local community. It is unlikely that this last intervention will help much.
“We need a broader public debate on the role of tech in our daily lives”
While UX designers cannot be singled out and held accountable for prevailing forms of racism in society with deep historical roots, they should take a more critical look at how the affordances of their designs could be abused and think of effective countermeasures. The lack of understanding, empathy, and a “vision for what could go wrong” is often explained by a lack of demographic diversity in technology companies and in the design profession, which can give rise to such unconscious biases. The more diverse the backgrounds in a team, the higher the chance of spotting biases and flagging potentially very consequential one-sided assumptions. A quite pragmatic recommendation for smaller organisations is to network and collaborate with others from within and outside the field. This can reduce blindness to harm, which can also result from a lack of (interdisciplinary) cooperation with experts and with people affected by data biases. However, while diversity in the workforce is indeed an important issue to discuss (all over society), it won’t be an effective antidote all by itself. Some critical factors that give rise to data bias and its negative effects are rather pragmatic in nature; they derive from a prioritisation of business considerations: tight delivery deadlines, high work pressure, building on legacy code, and a prevailing “profit-first” philosophy in the tech business. These causes are as fundamental as existing cultural biases in society and cannot be fully tackled by design professionals alone; they also need a broader public debate on the role of tech in our daily lives. Consumers and (democratically elected) regulators need to take a stance here as well, for example, by punishing companies that rush out evidently flawed digital solutions built with inherent data biases.
The Way Forward
To sum up, we can identify two broader categories of data biases in the professional discourses: 1) biases in practical, day-to-day UX research and 2) more profound biases in UX and tech design as professional cultures, which directly derive from societal biases and their deep histories. Both are closely connected, as especially the latter can give rise to the former. Taking a cursory look at the discussion among UX professionals, one can see that there is awareness and willingness to solve the problem. The field is responding to the challenges raised by Monteiro and is critically reflecting on its responsibilities. However, it is important to acknowledge that not every problem that comes with technology and design can be effectively approached with a tech-design mindset.
“With more diverse backgrounds in a team, the higher the chance to spot biases and to flag potentially very consequential one-sided assumptions”
Especially when it comes to the bigger underlying problems, there won’t be a quick and elegant fix. Data bias is a complex and difficult issue that cannot be permanently solved. Instead, it takes constant vigilance and dialogue with diverse stakeholders from different domains of society, e.g., government, research, and citizens/users. When even the very words we use in tech can reflect biases internalised in society, it would be unfair to let UX designers think about how to make technology more inclusive all by themselves. At the same time, UX professionals should increase their understanding of how technology impacts diverse people and their relationships and seek alliances with other experts to minimise harm and maximise benefits for their target groups. Research in the field of Human-Computer Interaction (HCI) and related disciplines that focus on UX offers useful insights and guidelines for ethical professional practice.
The productions for this mission are supported by editor Aaron Golub.