As a design and user experience practitioner, I’ve long been interested in ethics as it applies to design and the professional practice of design. At the conferences I’ve attended over the last two years—chiefly the IA Summit (IAS)—this topic has been getting more and more attention. In his IAS 2017 keynote talk, Alan Cooper gave this the most powerful treatment I’ve seen to date. (The videos aren’t online yet, but here’s a video of his talk from a previous conference. Also, Mike Monteiro’s talks are another favorite of mine in the ‘powerful’ category.) Thomas Wendt argued that many of the prevailing assumptions about “human-centered design” in business are, in fact, opposed to the interests of humans. With those examples, I don’t mean to exclude the many other IA Summit talks that addressed ethics directly or indirectly, nor do I want to gloss over the many years of work on ethics and responsibility in the design, architecture, and engineering disciplines, in the business world, and in scholarship (science and technology studies, philosophy of technology, and so forth).
Ethics were central to the way I was taught to think about technology. My high school Technology Education teacher taught that the purpose of technology—by definition—is to meet human needs. To use technology in a frivolous, excessive, or self-serving way is undesirable. As an undergraduate, I attended a design program taught by Science and Technology Studies (STS) faculty. In the social studies of technology, it is a fundamental principle that the practice of design always has cultural and political implications. The products of design have vast unintended consequences for society, and they are put to unintended uses across multiple contexts and cultures. Human wellbeing, health, knowledge, freedom, politics, economics, and community are tangled up with the intractable complexity of sociotechnical systems that have the power to harm people. Most of us are aware of the many biological, economic, political, and environmental issues that make it critically important for designers and technologists to consider this now, in the twenty-first century. My fundamental belief is that a designer is responsible for understanding the systemic context of his or her work and making design choices that serve the interests of people, communities, society, and the natural environment.
But not all designers and technologists have these fundamental beliefs. Or, perhaps they share these beliefs, but they lack the opportunities, incentives, tools, and know-how to incorporate the larger social context into their work. Or, they are working in circumstances with inescapable constraints that prevent them from doing so. Today, there are many advocates of socially oriented, culturally sensitive, and sustainable design—both individuals and entire firms—but they are minorities in the tide of a global economy that rewards businesses that build private, short-term value over those that create shared, lasting value. (Hence Alan Cooper tells us to get wise to “extractive” industry.)
The notion of user-centered design (UCD) or human-centered design already implies an ethic. In user-centered design, both the design product and the design process aim to satisfy the needs and desires of the “user”. Typically, light research is used to discern those needs and desires. Then we design a product that is usable, useful, and desirable based on our understanding of the user. But what constitutes usable, useful, and desirable is narrow. We tend to emphasize usability, and within usability, principles such as efficiency, comprehensibility, learnability, ergonomics, and in some contexts, education. These are necessary and valuable, but they are not sufficient to guarantee that a solution is culturally appropriate or beneficial in the long run. Years of advocacy have made accessibility a higher priority, but ‘accessibility’ is incorrectly reduced to access by people with specific impairments rather than access by a broad variety of people. Our “user” is usually a single, individual person, and sometimes a couple or family, but communities as a whole are not directly considered unless explicitly warranted by the project, as in community health or civic projects. We observe user behavior, uncover mental models, and analyze culture, but mainly to ensure we design a product that is likely to be used—that is, adopted. We are taught to articulate and design for KPIs (Key Performance Indicators) and map user goals to business goals in order to make ourselves successful in a business context. Perhaps this is important and necessary in order to sell user-centered design to our clients and employers, but it results in the deprioritization of any user need that does not translate to a visible return on investment, and it does not guarantee that the business goals correspond to outcomes that are ‘good’ for human beings.
And finally, in the attempt to sell to our clients and employers, we have proliferated a bastardized version of “design thinking” as a self-congratulatory practice of “empathy” that is free to disregard the real benefits or harms we offer to our communities (and ourselves).
In short, I believe that design and technology communities need a more expansive ethic. To say that “design is the rendering of intent” is true as a descriptive statement but inadequate as a definition: It elides the facts that our intent may be problematic in the first place; that our intent is often rendered unsuccessfully; that the rendering will be subjected to unintended uses; and that the uses (including the intended ones) will have unintended consequences. If we believe the wellbeing of our world is important, then the idea that every technology has systemic and moral implications should and must be a fundamental assumption for designers and technologists.
But an ethic is not enough, either. So many of us struggle to implement even the most basic design methods within our organizations. Either design is not well understood in the organization, or the organization’s structure, culture, business model, and management approach get in the way. We make compromises, and then we wash our hands of responsibility for the bad decisions made by others. Whether due to our work situations or our own shortcomings as user advocates, our organizations are not even producing usable and accessible products today. Even if we could develop an ethical framework for our disciplines, how can we expect it to come to fruition if we cannot successfully practice ‘user-centered design’ as it is?
To build a better world, we need to change the way we work. To change the way we work, we need to shape the organizations and institutions in which we work. To shape our organizations, we may even need to shape our political and economic institutions. How will we facilitate ethical decision-making in our organizations? How will we persuade our colleagues to approach technology in new ways? Our professional community needs to develop practical frameworks that enable designers and technologists to conduct ethical practice and to situate that practice within and throughout institutions.
So an agenda for a system of applied design ethics would include at least the following questions:
- How can design and technical practices account for a more expansive view of the benefits and consequences of technology in our world?
- How can practitioners act as change agents in their own organizations, communities, and nations?
- How can whole institutions engage in ethical technical practice?
- How can our professional communities work together to enact these approaches?
Before exploring this further, I want to provide some additional clarifications and caveats:
- Design. When I use the word design, I mean it broadly. To me, any deliberate, strategic attempt to effect systemic change can potentially be included under the umbrella of “design”. The products of design may be artifacts, services, organizational models, policies, and more. There may be other qualifications to be made, but I won’t get into them here. For now, I’m only making the distinction between the generalized concept of design as a type of human activity and the specific disciplines in which design activities take place, such as industrial design and graphic design. (I find Richard Buchanan’s concept of “placements” helpful in teasing these apart; see “Wicked Problems in Design Thinking” in Design Studies, 1992.)
- Design is an activity and product of an entire organization and its stakeholders. Consciously or unconsciously, systematically or arbitrarily, the design process is shaped by all those individuals who are directly involved in the process as well as those who indirectly affect it through policies, budgets, communications, management decisions, and so forth. This extends to other actors such as clients, vendors, regulators, and other stakeholders. (For those working independently or in small companies, this point still applies because those practitioners are usually part of a larger network of people working together.) Although I argue that design practitioners need techniques as change agents in the organizations or networks around them, the framing of “designer as change agent” can be problematic if taken to mean that the designer is the lone architect of change. The designer is not working alone, cannot in good faith make determinations for the rest of the organization, and does not have the expertise or wisdom to take responsibility for the success of the entire organization, ethically or otherwise. Instead, the entire organization must work together to improve itself. But designers can help make the organization more conscious of its process, help to systematize design practice, and, more broadly, become important influencers and facilitators of organizational change.
- We will never completely “fix” our organizations and institutions. Again, I argue that we must pursue organizational change to the extent that ethical practice depends on it; we must recognize when organizational structure and culture present ethical conflicts. However, we must also recognize that social structures, organizational structures, and culture are so deeply rooted and slow to change that we cannot expect to remove all conflicts within a practical time horizon.
- It is impossible to fully understand complex sociotechnical systems. We must attempt to understand these systems as well as possible in order to improve them, but they are too complex to understand fully. As it applies to our project of design ethics, we must have the humility to realize that a system of ethics will never be quite sufficient to clarify the problems we want to solve.
- It is impossible to completely solve systemic problems. Furthermore, all solutions introduce new problems. These problems are indeterminate and are rooted in complex systems that we cannot fully understand. Moreover, the scope of a problem, its causes, and its effects is too large for any comprehensive solution to be achievable, even if one could be conceived. This forces designers to make choices and tradeoffs that contain ethical dilemmas. Even with a mature ethical framework, we will be unable to resolve some of these dilemmas, nor will we have the right to make the decisions we are making.
- Designers will not stop designing. Sometimes, it would be better if a certain invention had not been invented at all. But as long as designers, engineers, programmers, entrepreneurs, and makers exist in the world, it is unrealistic to persuade them to give up their livelihoods or to stop doing what they love to do.
- There are many ethics. The variety of values and principles that may apply to technology across places and cultures is uncountable. There may be a handful of identifiable “universal” human values, but most values are held by cultures, communities, and individuals. I will revisit this in Part 2.
- Nothing can be purely ‘good’. For all the reasons above, no technology or design product can be said to have a purely ‘good’ impact on human society. This gives us a paradox: We must be leaders in the design and use of technology for ‘good’, yet we cannot authoritatively describe what ‘good’ is. So we can only hope to advocate for ‘good’ based on what we know to be good in a specific, local, relative situation. And again, we must have the humility to recognize our own insufficiency, which we can only mitigate by having communities of stakeholders participate in the process as thoroughly as possible, allowing them to define what is ‘good’ for them.
All being said, I do believe that:
- Meaningful improvement is possible.
- Future disasters can be prevented.
And although there are many ethics, I believe we can have these guiding (if tautological) principles in design:
- People should be able to live well.
- People should be able to live in the kinds of environments that human beings would want to live in.
- Our institutions should produce the kinds of products, services, and policies that human beings would want to live with.
- Our communities and institutions should be the kinds of communities and institutions that human beings would want to live and work in.
- Any design ethic demands inclusivity. Solutions that serve a limited population carry an opportunity cost and make the situation worse for everyone else. Inclusion means accounting for the greatest possible variations in people’s values and beliefs, physical capabilities, cognitive capabilities, knowledge and skill, geographic location, and economic situation. This is not to say that every product must serve every member of the human population, but one must acknowledge the entire population that is potentially affected by a particular product, and one must also recognize that technology can be copied or transported to unexpected places.
- A design ethic demands long-term thinking. Short-term solutions carry an opportunity cost and make the situation worse in the future—for a longer period of time and for a larger number of people—negating the benefits of the short-term solution.
The first four of these statements might seem tautological, but one can easily see that there are many environments today where people are not living well.
With that, I think I’ve stated my position as well as I could within a few hours of writing. Going forward, the question I am considering is: In order to develop a system of applied ethics in design, what do we need to talk about?
In Part 2, I will discuss what this agenda might look like. My conceptual map is insufficient and inaccurate, but I hope it will start a discussion about the scope and framing of this topic.
Finally, I want to acknowledge the limitations of this piece. First, I am not saying that applied ethics does not exist—there is a great deal of scholarship on this topic, and many practitioners are dealing with these issues already. More of this knowledge and experience now needs to be gathered and synthesized in a way that is more accessible, digestible, and actionable for practitioners.
Second, as an information architect, my experience with the “design and technology communities” is limited mainly to the user experience design (UX) and information architecture (IA) communities in the United States. My critique emerges from that point of view and the particular changes I want to see in the UX and IA communities. That said, this discussion does not “belong” to UX. This discussion is meant to include all sorts of designers and technologists. These issues need to be addressed across all of our disciplines, with shared knowledge and shared action.