“Make sense” is a mantra of the information architecture community. What if we don’t make sense? What if we make nonsense?
This poster is an inquiry into what we mean by “nonsense”, what it is to make or not to make sense, and the many ways in which meaning can fail or be failed. There is a long history of the intentional use of nonsense in art, literature, music, and politics as a means of expression, exertion of power, or resistance to power. Sense and nonsense lie in the interpreter and the interpreter’s relationship to the idea or material. Nonsense, even the most earnest attempt at the production of nonsense, implies the possibility of meaning, or its loss. Nonsense betrays meaning.
The poster imitates a Dadaist poster and assemblage. It was created by Dan Zollman and Phoebe Meskill.
This poster was presented at the 2018 Information Architecture Summit in Chicago on March 23, 2018.
Earlier this year, I promised a number of people that I’d be sharing updates on my summer research. As you can see, I haven’t posted much, but I’ve been hard at work nevertheless. One of the outcomes of this work was a proposal for a session at next year’s IA Summit. Now that the blind review phase has ended, I think I can share this publicly. Crossing my fingers as I wait to hear whether or not I’ve been accepted.
Here is the short description from my proposal. Continue below for some elaboration on the concept.
Designing, building, architecting, growing: Rethinking IA through the lens of developmental systems.
Problem/solution, form/function, user/object, designer/client: Our traditional models of design are showing signs of strain. We face increasing complexity and volatility in our work. We struggle to reconcile timelines and budgets with reality. We try to effect change, but it doesn’t stick. And as Thomas Wendt has shared in his critique of human-centered design, we end up with unsustainable solutions that repackage and perpetuate systemic problems. But what would it mean to “decenter” design?
In the quest for alternatives, this session will explore the concept of developmental systems, in which contemporary IA and UX practices converge with systems theory. We’ll take a tour through several kinds of developmental systems in biology, psychology, and social theory. Developmental systems give us another way to see the world. Instead of objects and categories, we find processes, relationships, scaffolds, and networks. The paradoxes of problem/solution and form/function disappear. Instead, we designers and architects grow our products, much like farmers grow crops.
Through this lens, we will de- and re-construct our model of design and architecture, treating users, organizations, and technologies as developmental systems. We will see how designers do not merely specify solutions, but participate in the growth of these systems. We will also see how information architecture—the coordination of meaningful relationships—is fundamental to these processes of change. We will discuss how IA/UX practices can coordinate the relationships needed for healthy growth, promoting ethical, just, and sustainable structures.
This talk responds to Wendt, Peter Morville, and other previous speakers who have encouraged us to think systemically, ecologically, and politically. A focus on individual users and problems tends to limit the potential for change. Instead, a relational and systemic approach opens up new possibilities for designers and architects to improve the world around us.
I’ll say a little more about the thinking behind this work, although the actual presentation will be a bit more focused and concrete:
At the conceptual level (bear with me, all you “practical takeaways” people) this talk is an attempt to reexamine the ontology of “design” and “architecture” in light of systems theory and philosophy. This stems from the recognition that larger systems are always implicated in the practice of design: design always has social, political, and ethical dimensions that extend beyond the product being designed. Furthermore, professional designers must take responsibility for their role in the creation of technologies that serve either to mitigate or to exacerbate social and ecological injustices. Much work has been done in areas like systemic design—the application of design methods and systems thinking to large-scale social systems—but what does this mean for user experience design, information architecture, and product design? More pointedly, what does it mean to use “systemic design” if you are designing an object or a product?
The current sketch for this talk begins by discussing two ontologies or lenses for design. (It would be a cliche to invoke Kuhn here, but what I’m discussing are paradigms in the truest sense.) The first lens is that of mainstream design, which is predominantly rationalistic design within the paradigm of the scientific revolution. This is the paradigm of subjectivity and objectivity, causes and effects, measurement, quantification, and the Cartesian dualisms of mind/body, human/environment, man/nature. The lens I’d like to introduce is a systemic, relational, and processual lens. In this paradigm, humans and their environments, designers and users, problems and solutions are interrelated. Rather than dualisms, where one thing causes an effect on another, they are all part of a continuously developing system that cannot be “solved” by a designer. Users, organizations, technologies, and societies are always developing, and they continue to develop long after your work as a designer is done. Now, in many ways, mainstream design has adopted elements of this, e.g. in design thinking, in “lean”, in discussions of service design, organizational design, and design governance—but practitioners have mostly appropriated these elements while remaining in the rationalistic paradigm. Why does this matter? I will argue that mainstream design, with all its ROI, its KPIs, its leanness and agility, is incapable of resolving the ecological, ethical, and organizational challenges faced by designers, while systemic and relational thinking may help us with these challenges.
The theoretical component of this project—of which the 45-minute session will be just a subset—is an assemblage of perspectives in systems theory, theoretical biology, anthropology, ecological psychology, social and political theory, design and technology studies, management research, and even metaphysics. Once you start looking, it’s exciting and overwhelming to discover that every one of these fields has so much to say about the topic. We can talk about: How human beings and other organisms develop and grow through participation in a network of relationships within their physical and social environments. How objects and technologies also grow within a network of relationships, and how humans experience that network. How social “structures” can be understood as relationships that are consistently reproduced by people and objects over time. How all of these things can be understood as processes. It’s process, not form; process, not product. This challenges designers to rethink our attempts to “solve problems” and to be “change agents”. Only in very limited ways are we solvers of problems or agents of change. Instead, we must commit more deeply to understanding the world and its ongoing processes of change if we want to play a positive, meaningful role in that change. People (“users”) grow. Technologies grow. Organizations grow. Events and appearances are temporary: a product launch, a redesign. The question is whether our work has contributed to the development of a healthy system. As designers, our great opportunity is to pursue and advocate for the health of the world around us, to scaffold its development, to coordinate the network of relationships in which human lives and good technologies grow. And I’d also like to suggest that information architecture is a very special part of this relational practice.
As for practice, my goal is not to provide tactical exercises and how-tos for your next design meeting, but I do want to show how relational thinking translates to very real, practical, and viable ways of working. To appropriate Tim Ingold, it’s not theory versus practice, but “practical knowledge and knowledgeable practice”. I’m mainly looking at this from two angles: The first is that this philosophy is well-aligned to a design practice that prioritizes the wellbeing of “users” and their communities. This means treating them as long-living people and communities who are continually growing, learning, developing skills, constructing their identities, and providing for their economic and political health. Physical artifacts, communications, and technologies are always enmeshed in these processes, and we can aim to design artifacts that play an enabling role in the right processes. Second, as professional designers, we are encouraged to examine the growth and health of the organizations in which we work, or of our clients. Because objects and technologies are grown and reproduced over time in a network of relationships, the way the organization collectively grows, learns, and develops skills has a much greater impact on the nature and the quality of products in the long run than the impact that a designer has on any single product. Looking to both systemic design and to management studies, such as Peter Senge’s and others’ work on learning organizations, we can find some fascinating principles and techniques that will resonate with contemporary designers and offer powerful tools.
The ideas discussed here will be new to some designers, while much of this may come across as obvious if you’re well-read in any one of these topics. However, I am suggesting more than this—a relational approach brings a very different meaning to our work. We need to go deeper than methods. It’s not enough to adopt new techniques and to be more human-centered. We need to revisit what we think design is and what role it plays in the world—its ontology—because our mental models shape how we practice design, lead it, and sell it to the business. As long as designers are problem-solvers and returns on investment, we pit ourselves against a system that cannot be changed from the outside. We need to rethink our relationships.
This summer, I’ve set aside time to investigate a topic that has long interested me: the politics of design.
What do I mean by ‘the politics of design’?
If you are a designer, developer, or engineer who has worked in an organization of any size, you’ve experienced “politics”. First, there are workplace politics: the dynamics of power, relationships, goals, incentives, alliances, trust, knowledge, emotions, and personalities that affect everyday work. Second, the design process itself is uniquely political. Design and engineering involve many stakeholders with different interests and different forms of influence, sometimes agreeing with each other and sometimes conflicting, sometimes engaging in productive debate and sometimes pulling rank or exercising stubbornness.
You’ve also likely discovered that unless you have a competent and savvy manager who is insulating you from these politics, you are forced to engage in politics in some way. Depending on your personality, you might enjoy it. Or perhaps you avoid it; your ideal job is the one where you don’t have to do politics at all. But no matter who you are, it is helpful to learn a little about working the politics of your organization. You’ll get your way more often, and you’ll be (or at least, be perceived as being) more successful in your job.
One of my initial research questions was: Do traditional politics—that is, government and the political process—offer lessons and techniques that we can apply in the practice of design?
Stepping back, though, this question is part of a broader set of themes in which “design” and “politics” come together:
Do the products and systems we create have political implications?
Do politics within the design process affect the products we develop?
Does our political system (government, laws, regulation, etc.) affect design and technology?
Can we apply design to our politics and political systems?
Each of these questions is a substantive topic in its own right, but they are also interrelated. As it turns out, the politics of the world we live in have fundamental implications for how we work as design and technology professionals.
I will unpack this claim in future articles, but for now, I’ll keep moving.
Why should we care?
Because we, as design and technology professionals, are so much more involved in this world than we tend to realize. We live and work in a time characterized by incomprehensible degrees of inequality and unsustainability, reinforced by entrenched political and economic structures. It is a world where the poor and disadvantaged have less access to the information, tools, and institutions that would otherwise give them a say in the issues that affect their lives. The products and systems we create—the ones you create—are the ones that establish this environment, from city plans to smartphone apps. We create the world that we, and others, live in.
This too must be unpacked, but for now, let me assert that the small things matter as well as the big ones. Even if you think the work you do now is so small or neutral that it is inconsequential, you as an individual professional are more important than you realize. Your work does affect people, and it plays a role in a larger system. And even if you think it’s the business that owns the product and makes the decisions you only implement, it is worth considering how your work serves the economic and political interests of that business and how you might safeguard the interests of the populations who might be affected, directly or indirectly.
Even if your main concern is that you simply need to make a living, and you don’t have energy for much else, it behooves you to understand the politics of design as it relates to your work and the people you work with. You’ll be more successful in your office, and you’ll be that much closer to the salary, rate, or position you want.
But if you practice user-centered design, or if you believe that the purpose of your work in technology is to solve problems, or if you care about designing solutions that improve people’s lives, it is imperative to consider politics at three levels:
How do the products and systems we design exert power in people’s lives, often in spite of our best intentions?
How do the politics in a business or organization shape the way we practice design and, in turn, shape the products?
How do the structural conditions of our work (economic, political, cultural, and institutional) shape the organizations we work in, in turn shaping the way we practice design?
Of course, the ultimate question at each level is: How can we influence these circumstances in order to achieve better outcomes? The proposition is that by understanding these questions, you (and I) will be better able to:
Work effectively and successfully in your organization.
Practice ethical and socially responsible design.
Effect change to improve your organization and, more importantly, the world.
Making it actionable
These topics are broad and daunting. However, they can be broken down, explained, and made actionable for practitioners like you and me.
These are not new questions, either. The social sciences have been studying these issues for a long time, while the design and technology professions have already developed strategies we can use in many areas. This is enough to give us traction. From these areas, we can draw concepts and tools that we, as designers, developers, and engineers, can use to shape our everyday politics of design.
I see an opportunity to pull this knowledge together and organize it for a practitioner audience. I think that many people in the design and technology communities are ready for these conversations—alongside the ethics of technology and design—but we need channels for productive conversation and action on the political challenges in our work. This is one motivation for my research.
In future articles, I’ll go deeper into the statements made above. Meanwhile, I invite all those who are interested in this topic to join me in discussion and experimentation. If you have ideas, feedback, suggestions, or additional resources to share, please comment on this article or email me at dan [at] danzollman.com.
At World IA Day 2017 in Boston, Dan Klyn handed out these great little card decks representing Christopher Alexander’s Fifteen Fundamental Properties of Wholeness. Recommended use: Carry these cards around with you to help you notice the properties of wholeness wherever you go.
Yesterday, looking through my box of Oblique Strategies made me wonder: Are the Oblique Strategies related to the Fifteen Fundamental Properties?
So I gave it a try. I found that over half of the Oblique Strategies could be read as metaphors for physical structure or space. Some can be read as statements about the properties of wholeness. Others can be read as operations to achieve one or more of the properties. For example:
“Define an area as ‘safe’ and use it as an anchor” achieves a strong center.
“Emphasize differences” achieves contrast.
“Disciplined self-indulgence” might achieve good shape.
“Be dirty” achieves roughness.
“Simple subtraction” achieves positive space (or good shape).
“Repetition is a form of change” describes echoes.
“Make a blank valuable by putting it in an exquisite frame” is the void.
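To make the grouping concrete, the mapping above can be captured as a simple data structure. This is only a sketch of my own tentative reading—the strategy texts and property names are as quoted above, but the correspondence itself is not an established one:

```python
# Tentative mapping of selected Oblique Strategies to the Fifteen
# Fundamental Properties they might achieve or describe.
# The groupings are my own reading, not an official correspondence.
strategy_to_property = {
    "Define an area as 'safe' and use it as an anchor": "strong centers",
    "Emphasize differences": "contrast",
    "Disciplined self-indulgence": "good shape",
    "Be dirty": "roughness",
    "Simple subtraction": "positive space",
    "Repetition is a form of change": "echoes",
    "Make a blank valuable by putting it in an exquisite frame": "the void",
}

# Invert the mapping to see which properties attracted at least one strategy.
property_to_strategies: dict[str, list[str]] = {}
for strategy, prop in strategy_to_property.items():
    property_to_strategies.setdefault(prop, []).append(strategy)

print(sorted(property_to_strategies))
```

Inverting the dictionary makes the gaps visible: only seven of the fifteen properties appear here, which matches my sense that some properties were much easier to match than others.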
The remaining half of the Strategies, which I left in the box, were those that described ways of working rather than properties of the solution:
“Don’t be afraid of things because they’re easy to do”.
“Discover the recipes you are using and abandon them.”
“Use an old idea.”
“Use ‘unqualified’ people.” (Which, incidentally, is also pertinent to Dan Klyn’s IA Day talk, where he suggested “how wonderful it is to be dumb”.)
And finally, there were two Strategies that did not map to a specific Property, but seemed appropriate to place in the middle:
“Trust in the you of now.”
“Gardening, not architecture.”
I don’t think I necessarily grouped them “correctly” or well. I’m still learning about the Fifteen Properties, and I understand some better than others. One thing I found interesting is that many of the Oblique Strategies are expressed as negations—which is appropriate, because they are meant to break assumptions and force change—but it also made it more difficult to associate them with positive properties (e.g. good shape).
If nothing else, the fact that I affinitized them instead of using them for their intended purpose is proof that I’m an information architect.
I subscribe to this statement completely. Those piles of unread books remind me how much I don’t know. It’s good not to read! Yes, that’s what I tell myself.
But in choosing not to read books, there is a bounded gain in intellectual humility and an unbounded loss in potential learning. In other words, I couldn’t be more painfully aware of the mountains of books I wish I’d read. Reading a few of them would teach me a lot but do nothing for the anxiety.
I still have books I purchased as an undergrad that I’ve been saying I “have to read” for the last seven years. I still haven’t read all of them. I’ve finally realized that the pile will never be empty: Books accumulate as I hear about them, as I inquire about new subjects, as I enter a used bookstore (big mistake), as I find them on the sidewalk (which, by the way, is one of the things I love about Cambridge, MA). There’s only one outflow from the stock of books I want to read, which is reading.
But I’ve also realized that the pile will never stop growing, either. The more I read, the more I discover, and the more I want to know. It’s a runaway loop. The implication is important: Even if I spent every waking hour reading for the rest of my life, I will never read all the things I want to read.
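The stock-and-flow picture can be sketched numerically. Here is a minimal toy model with invented rates; the point is only that when discovery is driven by reading, the inflow stays ahead of the single outflow, so the pile grows without bound:

```python
# Toy stock-and-flow model of the reading pile. All numbers are invented
# for illustration. Each year I read a fixed number of books, but every
# book I read leads me to discover more books I want to read.
pile = 50                        # books currently in the pile
read_per_year = 20               # the one outflow: books read per year
base_inflow = 10                 # books discovered independently of reading
discovered_per_book_read = 1.5   # the runaway loop: reading drives discovery

for year in range(10):
    inflow = base_inflow + discovered_per_book_read * read_per_year
    # inflow = 10 + 1.5 * 20 = 40, which exceeds the outflow of 20,
    # so the pile grows by 20 books every year despite constant reading
    pile = pile - read_per_year + inflow

print(pile)  # 250.0 after ten years of steady reading
```

With these made-up parameters, the pile grows from 50 to 250 books over a decade even though reading never stops—which is exactly the implication above: no amount of reading empties the stock while reading itself feeds the inflow.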
There are only two things I can do about this situation: Be selective about what I choose to read, and increase my amount of reading.
Be selective about what I choose to read.
I already do this, of course, but it’s important to make it deliberate. In the long run, I can’t read all the books I want, and I can’t even read all the books that will be “good for me”.
I learn more by reading books outside my own disciplines; books that challenge my beliefs; books that stretch my abilities. There is only marginal value in reading books that reiterate my existing knowledge or confirm my beliefs.
So, I’m sorry, but I won’t be reading the next UX book out on O’Reilly. (I mean, depending on what it is.)
Spend more of my free time reading.
One lever to increase reading is to find more time in my personal life.
There are many competing priorities, but my habits also have a role.
I could also make more free time by sleeping less, dropping commitments, or working less. I’ll come back to that in a minute.
Spend less time on client work in order to spend more time reading.
The other lever is to find a way to make a sufficient living in fewer work hours. This is an instance of a more general problem: how can I make enough money to support the passion that doesn’t make money, be it comic-writing or mountain climbing?
For those working in freelance, agency, and consulting contexts, the answer is partly in pricing strategy—namely, the shift from hourly pricing to some form of value pricing.
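As a back-of-the-envelope illustration (every number here is invented), the appeal of value pricing is that income decouples from hours, so any efficiency in delivery becomes free time rather than lost revenue:

```python
# Hypothetical comparison of hourly vs. value pricing. All figures are
# made up for illustration, not advice on actual rates.
target_income = 60_000   # income needed for the year

# Hourly pricing: income is locked to hours worked.
hourly_rate = 100
hours_hourly = target_income / hourly_rate           # 600 billable hours

# Value pricing: a fixed price per project, independent of hours spent.
price_per_project = 12_000
projects_needed = target_income / price_per_project  # 5 projects
hours_per_project = 80   # hours actually spent delivering each project
hours_value = projects_needed * hours_per_project    # 400 hours

hours_freed = hours_hourly - hours_value             # 200 hours for reading
print(hours_hourly, hours_value, hours_freed)
```

Under hourly billing, working 200 fewer hours means earning 20,000 less; under value pricing (in this contrived scenario), the same income arrives in 400 hours, and the freed time is mine.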
This line of thought led me to a long brainstorm on weird strategies for value pricing, which I’ll post in a separate article at some future date.
Starting June 1, I’ll be taking some time off from working (ideally, one to three months) to do some full-time reading, research, and possibly some writing as a result.
I have a topic I’ve been wanting to research in depth, but I’d also be happy if I could simply catch up on some of those “have to read” books. A summer isn’t quite enough time for a thesis, so I’ve decided it’s okay if I don’t come out of it with a complete product. But where I can, I’ll write about the research I’m doing.
As a byproduct, I’m also hoping that this will help me establish a more consistent reading and writing habit that will continue after I go back to work.
We’ll see what happens.
I’ll be working on some sort of “statement of purpose” for my main topic, which I will discuss in a follow-up.
Meanwhile, here are just a few of the items in my long-standing reading pile:
Michel de Certeau, The Practice of Everyday Life
Paulo Freire, Pedagogy of the Oppressed
Peter Senge, The Fifth Discipline
Herbert Simon, The Sciences of the Artificial
Bruno Latour, Reassembling the Social and We Have Never Been Modern
Kevin Lynch, The Image of the City
Donald Schön, The Reflective Practitioner or something else
Emerson, Fretz, and Shaw, Writing Ethnographic Fieldnotes
Lucy Suchman, Human-Machine Reconfigurations: Plans and Situated Actions
We (me and a group of fellow IA Summit attendees) have started an online discussion group on ethical design and technology. It’s a topic I’ve been thinking about for a long time, and I’m excited that there’s so much interest in talking about it.
(I apologize if Slack is not your style; I’m going out of my comfort zone, too. I’d encourage you to give it a try, adjust the notification settings to make it work for you, or give slackdigest.com a try [and let me know how it goes].)
In Part 1, I explained some of what I mean by “ethics” in design practice and why I think our communities of practice—communities of designers, technologists, and particularly my own communities, the UX and IA communities—need a more expansive, practical framework for applied ethics. You can skip that part if this makes sense to you already.
My next question is: In order to build systems of applied ethics in design and technology, what do we need to talk about?
To restate some ideas from Part 1, I think an agenda for applied design ethics would include at least these questions:
How can design and technical practices account for a more expansive view of the benefits and consequences of technology in our world?
How can practitioners act as change agents in their own organizations, communities, and nations?
How can whole institutions engage in ethical technical practice?
How can our professional communities (or communities of practice) work together to enact these approaches?
This quick & dirty concept diagram (below) outlines possible topics in a discussion of design ethics. It’s incomplete—so I’m looking for your feedback on this. What’s missing? What’s wrong? Do you think about this differently?
I intended to write a longer post with an elaboration of each box below, but since I’m strapped for time this week, I’ll have to save that for a future version. This is also a good time to mention the Slack group on technology and design ethics I’ve started with a group of fellow IA Summit attendees. While you’re welcome to comment directly on this article, I’d love to continue the conversation over there.
As a design and user experience practitioner, I’ve long been interested in ethics as it applies to design and the professional practice of design. At the conferences I’ve attended over the last two years—namely, the IA Summit (IAS)—this topic has been getting more and more attention. In his IAS 2017 keynote talk, Alan Cooper gave this the most powerful treatment I’ve seen to date. (The videos aren’t online yet, but here’s a video of his talk from a previous conference. Also, Mike Monteiro’s talks are another favorite of mine in the ‘powerful’ category.) Thomas Wendt argued that many of the prevailing assumptions about “human-centered design” in business are, in fact, opposed to the interests of humans. With those examples, I don’t mean to exclude the many other IA Summit talks that addressed ethics directly or indirectly, nor do I want to gloss over the many years of work on ethics and responsibility in the design, architecture, and engineering disciplines, in the business world, and in scholarship (science and technology studies, philosophy of technology, and so forth).
Ethics were central to the way I was taught to think about technology. My high school Technology Education teacher taught that the purpose of technology—by definition—is to meet human needs. To use technology in a frivolous, excessive, or self-serving way is undesirable. As an undergraduate, I attended a design program taught by Science and Technology Studies (STS) faculty. In the social studies of technology, it is a fundamental principle that the practice of design always has cultural and political implications. The products of design have vast unintended consequences for society, and they are put to unintended uses across multiple contexts and cultures. Human wellbeing, health, knowledge, freedom, politics, economics, and community are tangled up with the intractable complexity of sociotechnical systems that have the power to harm people. Most of us are aware of the many biological, economic, political, and environmental issues that make it critically important for designers and technologists to consider this now, in the twenty-first century. My fundamental belief is that a designer is responsible for understanding the systemic context of his or her work and making design choices that serve the interests of people, communities, society, and the natural environment.
But not all designers and technologists have these fundamental beliefs. Or, perhaps they share these beliefs, but they lack the opportunities, incentives, tools, and know-how to incorporate the larger social context into their work. Or, they are working in circumstances with inescapable constraints that prevent them from doing so. Today, there are many advocates of socially oriented, culturally sensitive, and sustainable design—both individuals and entire firms—but they are minorities in the tide of a global economy that rewards businesses that build private, short-term value over those that create shared, lasting value. (Hence Alan Cooper tells us to get wise to “extractive” industry.)
The notion of user-centered design (UCD) or human-centered design already implies an ethic. In user-centered design, both the design product and the design process aim to satisfy the needs and desires of the “user”. Typically, light research is used to discern those needs and wants. Then we design a product that is usable, useful, and desirable based on our understanding of the user. But what constitutes usable, useful, and desirable is narrow. We tend to emphasize usability, and within usability, principles such as efficiency, comprehensibility, learnability, ergonomics, and in some contexts, education. These are necessary and valuable, but they are not sufficient to guarantee that a solution is culturally appropriate or beneficial in the long run. Years of advocacy have made accessibility a higher priority, but ‘accessibility’ is incorrectly reduced to access by people with specific impairments rather than access by a broad variety of people. Our “user” is usually a single, individual person, and sometimes a couple or family, but communities as a whole are not directly considered unless explicitly warranted by the project, as in community health or civic projects. We observe user behavior, uncover mental models, and analyze culture, but mainly to ensure we design a product that is likely to be used—that is, adopted. We are taught to articulate and design for KPIs (Key Performance Indicators) and map user goals to business goals in order to make ourselves successful in a business context. Perhaps this is important and necessary in order to sell user-centered design to our clients and employers, but it results in the deprioritization of any user need that does not translate to a visible return on investment, and it also does not guarantee that the business goals correspond to outcomes that are ‘good’ for human beings. 
And finally, in the attempt to sell to our clients and employers, we have proliferated a bastardized version of “design thinking” as a self-congratulatory practice of “empathy” that is free to disregard the real benefits or harms we offer to our communities (and ourselves).
In short, I believe that design and technology communities need a more expansive ethic. To say that “design is the rendering of intent” is true as a descriptive statement but inadequate as a definition: It elides the facts that our intent may be problematic in the first place; that our intent is often rendered unsuccessfully; that the rendering will be subjected to unintended uses; and that the uses (including the intended ones) will have unintended consequences. If we believe the wellbeing of our world is important, then the idea that every technology has systemic and moral implications should and must be a fundamental assumption for designers and technologists.
But an ethic is not enough, either. So many of us struggle to implement even the most basic design methods within our organizations. Either design is not well understood in the organization, or the organization’s structure, culture, business model, and management approach get in the way. We make compromises, and then we wash our hands of responsibility for the bad decisions made by others. Whether due to our work situations or our own shortcomings as user advocates, our organizations are not even producing usable and accessible products today. Even if we could develop an ethical framework for our disciplines, how can we expect it to come to fruition if we cannot successfully practice ‘user-centered design’ as it is?
To build a better world, we need to change the way we work. To change the way we work, we need to shape the organizations and institutions in which we work. To shape our organizations, we may even need to shape our political and economic institutions. How will we facilitate ethical decision-making in our organizations? How will we persuade our colleagues to approach technology in new ways? Our professional community needs to develop practical frameworks that enable designers and technologists to conduct ethical practice and to situate that practice within and throughout institutions.
So an agenda for a system of applied design ethics would include at least the following questions:
How can design and technical practices account for a more expansive view of the benefits and consequences of technology in our world?
How can practitioners act as change agents in their own organizations, communities, and nations?
How can whole institutions engage in ethical technical practice?
How can our professional communities work together to enact these approaches?
Before exploring this further, I want to provide some additional clarifications and caveats:
Design. When I use the word design, I mean it broadly. To me, any deliberate, strategic attempt to effect systemic change can potentially be included under the umbrella of “design”. The products of design may be artifacts, services, organizational models, policies, and more. There may be other qualifications to be made, but I won’t get into them here. For now, I’m only making the distinction between the generalized concept of design as a type of human activity and the specific disciplines in which design activities take place, such as industrial design and graphic design. (I find Richard Buchanan’s concept of “placements” helpful in teasing these apart; see “Wicked Problems in Design Thinking” in Design Studies, 1992.)
Design is an activity and product of an entire organization and its stakeholders. Consciously or unconsciously, systematically or arbitrarily, the design process is shaped by all those individuals who are directly involved in the process as well as those who indirectly affect it through policies, budgets, communications, management decisions, and so forth. This extends to other actors such as clients, vendors, regulators, and other stakeholders. (For those working independently or in small companies, this point still applies because those practitioners are usually part of a larger network of people working together.) Although I argue that design practitioners need techniques for acting as change agents in the organizations or networks around them, the framing of "designer as change agent" can be problematic if taken to mean that the designer is the lone architect of change. The designer is not working alone, cannot in good faith make determinations for the rest of the organization, and does not have the expertise or wisdom to take responsibility for the success of the entire organization, ethically or otherwise. Instead, the entire organization must work together to improve itself. But designers can help make the organization more conscious of its process, help to systematize design practice, and, more broadly, become important influencers and facilitators of organizational change.
We will never completely “fix” our organizations and institutions. Again, I argue that we must pursue organizational change to the extent that ethical practice depends on it; we must recognize when organizational structure and culture present ethical conflicts. However, we must also recognize that social structures, organizational structures, and culture are so deeply rooted and slow to change that we cannot expect to remove all conflicts within a practical time horizon.
It is impossible to fully understand complex sociotechnical systems. We must attempt to understand these systems as well as possible in order to improve them, but they are too complex to understand fully. As it applies to our project of design ethics, we must have the humility to realize that a system of ethics will never be quite sufficient to clarify the problems we want to solve.
It is impossible to completely solve systemic problems. Furthermore, all solutions introduce new problems. These problems are indeterminate and are rooted in complex systems that we cannot fully understand. Moreover, the scope of a problem, its causes, and its effects is too large for any comprehensive solution to be achievable, even if one could be conceived. This forces designers to make choices and tradeoffs that contain ethical dilemmas. Even with a mature ethical framework, we will be unable to resolve some of these dilemmas, nor will we always have the right to make the decisions we are making.
Designers will not stop designing. Sometimes, it would be better if a certain invention had never been invented at all. But as long as designers, engineers, programmers, entrepreneurs, and makers exist in the world, it is unrealistic to expect them to give up their livelihoods or to stop doing what they love to do.
There are many ethics. The variety of values and principles that may apply to technology across places and cultures is uncountable. There may be a handful of identifiable “universal” human values, but most values are held by cultures, communities, and individuals. I will revisit this in Part 2.
Nothing can be purely ‘good’. For all the reasons above, no technology or design product can be said to have a purely ‘good’ impact on human society. This gives us a paradox: We must be leaders in the design and use of technology for ‘good’, yet we cannot authoritatively describe what ‘good’ is. So we can only hope to advocate for ‘good’ based on what we know to be good in a specific, local, relative situation. And again, we must have the humility to recognize our own insufficiency, which we can only mitigate by having communities of stakeholders participate in the process as thoroughly as possible, allowing them to define what is ‘good’ for them.
All that being said, I do believe that:
Meaningful improvement is possible.
Future disasters can be prevented.
And although there are many ethics, I believe we can have these guiding (if tautological) principles in design:
People should be able to live well.
People should be able to live in the kinds of environments that human beings would want to live in.
Our institutions should produce the kinds of products, services, and policies that human beings would want to live with.
Our communities and institutions should be the kinds of communities and institutions that human beings would want to live and work in.
Any design ethic demands inclusivity. Solutions that serve a limited population incur an opportunity cost and make the situation worse for everyone else. Inclusion means accounting for the greatest possible variations in people’s values and beliefs, physical capabilities, cognitive capabilities, knowledge and skill, geographic location, and economic situation. This is not to say that every product must serve every member of the human population, but one must acknowledge the entire population that is potentially affected by a particular product, and one must also recognize that technology can be copied or transported to unexpected places.
A design ethic demands long-term thinking. Short-term solutions incur an opportunity cost and make the situation worse in the future—for a longer period of time and for a larger number of people—negating the benefits of the short-term solution.
The first four of these statements might seem tautological, but one can easily see that there are many environments today where people are not living well.
With that, I think I’ve stated my position as well as I could within a few hours of writing. Going forward, the question I am considering is: In order to develop a system of applied ethics in design, what do we need to talk about?
In Part 2, I will discuss what this agenda might look like. My conceptual map is insufficient and inaccurate, but I hope it will start a discussion about the scope and framing of this topic.
Finally, I want to acknowledge the limitations of this piece. First, I am not claiming that applied design ethics does not already exist—there is a great deal of scholarship on this topic, and many practitioners are dealing with these issues already. This knowledge and experience now needs to be gathered and synthesized.
Second, as an information architect, my experience with the “design and technology communities” is limited mainly to the user experience design (UX) and information architecture (IA) communities in the United States. My critique emerges from that point of view and the particular changes I want to see in the UX and IA communities. That said, this discussion does not “belong” to UX. This discussion is meant to include all sorts of designers and technologists. These issues need to be addressed across all of our disciplines, with shared knowledge and shared action.