A Process for Responsible Design: Embedding Ethics in Innovation
By Cennydd Bowles, Instructor of ProThink Learning online course The Impact of Ethics in Innovation and Technology
The most common question I get on responsible design is “How do I actually embed ethical considerations into our innovation process?” (They don’t actually phrase it like that, but you know . . . trying to be concise.)
Although I don’t love cramming a multifaceted field like ethics into a linear diagram, it’s helpful to show a simple process map. So here’s my attempt.
Step One: Anticipate
In my opinion, the first step is to spend time anticipating the potential moral consequences of your decisions. Ignore those who tell you this is impossible; most technology firms have no muscle for it because they've never seriously tried.
What we’re attempting to do here is stretch our moral imaginations. This can involve uncovering hidden stakeholders and externalities that might befall them — to do that, we almost always have to think beyond narrow user-centricity and embrace wider inclusion.
Every business textbook offers a step-by-step guide to stakeholder analysis, but most only cover teammates or suspiciously homogeneous groups like "users" or "residents." This perspective, reinforced by the individualist focus of user-centered design, means we often overlook important groups. Stakeholders aren't just the people who can affect a project; they're also the people the project might affect. To force ourselves to consider the right people, try using a prompt list to capture a wider range of potential stakeholders, and use this as an input to futuring exercises and the design process. Such a list may include:
- Companies and professional organizations
- Governments and militaries
- Negative stakeholders:
  - Terrorists
  - Criminals
  - Hackers
- Workers and unions
- Children and future generations
- The environment
Anticipating moral implications also becomes easier when we borrow techniques from the futures toolkit, e.g., horizon scanning, scenario development, or speculative design.
Step Two: Evaluate
Then we need a way to evaluate these impacts. Complex systems create competing consequences: how do we decide whether a benefit outweighs a harm? Again, the tech industry is inexperienced at this, so an evaluation of tradeoffs usually devolves into an opinion-driven debate won by the most senior voice.
But, of course, philosophers have been doing the hard work for us these last couple of millennia. We can draw on the ethical theory they developed to break past the flawed belief that ethics is merely subjective. Structured, robust methods let us examine consequences and evaluate decisions rationally but compassionately, grounded in well-founded contemporary thinking, with the aim of fostering the wellbeing of all.
Step Three: Take Action
Having identified and evaluated potential consequences, we can now take action. If there’s potential harm, we can try to minimize it or design it out of the system before it happens.
But ethics isn't just about stopping bad things from happening; there's a growing realization that responsible design is also a seed of innovation, a competitive differentiator. So we can use this evaluation to create new products and features with positive ethical impacts, too.
Beneath It All: Infrastructure
Underpinning all of this, we need ethical infrastructure. This is the lever most newcomers immediately reach for: a C-level appointment, a documented code of ethics, and so on. These things can have their place, for sure, but they exist only to support an ongoing process and culture of making good ethical decisions; they achieve virtually nothing on their own. Building this capacity matters, but it's effective only when deployed alongside proper decision-making processes like the one outlined here.