A Future Owners Test: Ensuring Our Technologies Today Are Ethical Tomorrow

By Cennydd Bowles, instructor of the ProThink Learning online course The Impact of Ethics in Innovation and Technology

ProThink Learning
4 min read · Dec 21, 2021

When it comes to ethics in technology, there is a big question we should be asking more often: “Is this technology safe in the hands of plausible future owners?” Naturally, we assess ethical risk mostly in the present-day world, looking at today’s norms, laws, and policies. But things change.

A Question of Tomorrow

Say you’re building a public-sector app or algorithm. The department you’re working for may have good protocols in place — regulations, perhaps, or internal processes — that give you confidence they’ll oversee this technology properly. But what about future governments? You’re hopefully building something that will stick around, but if the past couple of years have taught us anything, it’s that nothing is guaranteed. Would this system still be safe in the hands of a government pursuing different politics? A police state? An ethnonationalist government? An autocracy? Policies and laws can be overturned; you might be relying on protections a future authority could easily revoke.


These same concerns apply to commercial work too. Suppose there’s a hostile takeover of your company, or it fails and its digital assets are snapped up in a fire sale; suddenly your system belongs to someone else. Would it still be safe in the hands of a defense contractor? A data broker? Palantir?

Or perhaps the nature of your company itself shifts, and an initially benign use case takes on a different color when the company moves into a new sector . . .

Repurposing Surveillance

The modern reality of data gathering creates a danger: systems built to consolidate information about specific targets can be repurposed for mass surveillance. Targeted surveillance is easier to justify; it has historically required authorities to show probable cause and to seek warrants. Mass surveillance, however, circumvents many of these protections. A system that scoops up everything it can on an entire population, and can change its targets seamlessly, is a system to be feared. It’s important to consider how easily your technology’s purpose could shift from a targeted population to a mass population, and what safeguards you can put in place to prevent that shift.

The old bottleneck, in which entities had no efficient means to analyze the vast amounts of data they collected, has now been shattered. Once sensors and analysis tools are in place, changing the target of surveillance is just an algorithmic tweak. In the twentieth century, switching from tracking communists to tracking the Mafia would have required hundreds of new bugs and wiretaps, a slew of new vehicles to tail, and piles of new records to sift through. Governments today can, more or less, look at a different row in the database. The original intent of the surveillance no longer matters. Any surveillance can become any surveillance: a system built to listen out for gunfire can be repurposed to detect Urdu.
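To make the point concrete, here is a minimal, entirely hypothetical sketch (every name is invented for illustration; it describes no real system): once collection infrastructure covers a whole population, switching what the surveillance looks for can amount to changing a single filter.

```python
# Purely illustrative sketch (all names and data invented): when a population-scale
# data store already exists, "retargeting" surveillance is just a new query filter.

from dataclasses import dataclass


@dataclass
class Record:
    person_id: str
    attribute: str  # e.g. "audio_event", "language", "location"
    value: str


def find_targets(records: list[Record], attribute: str, value: str) -> set[str]:
    """Return everyone whose collected data matches one attribute/value filter."""
    return {r.person_id for r in records
            if r.attribute == attribute and r.value == value}


# Imagine this holds everything already collected about an entire population.
records: list[Record] = []

# Yesterday's programme: flag gunfire-related audio events.
suspects = find_targets(records, attribute="audio_event", value="gunfire")

# Tomorrow's programme: the same infrastructure, one argument changed.
suspects = find_targets(records, attribute="language", value="urdu")
```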


The US census removed questions on religion in 1957 on grounds of religious liberty, but say a new administration decided to create a Muslim database. The core systems, although embryonic, are already in place to automate this work. Outside the scope of governmental protections, data brokers offer detailed files. It took Amnesty International just five clicks to get a quote for data on 1.8 million US Muslims. Combine this with neo-physiognomic facial recognition, messaging app backdoors, and GPS tracking, and it’s clear a first stab at a Muslim database — including thousands of false positives — could be assembled in short order.

Prevention Before Reaction

Ideally, abuse is better prevented than treated; better to stop harm from even occurring than handle it after the event. In reality, the problem needs both prophylaxis and remedy. We need to be sure that our preventative design tactics, such as designated dissenters and persona non grata, are underpinned by a deep understanding of the contexts and impacts of abuse. Research teams should talk with victims (past, current, and even potential) of abuse on their platforms; as with offline abuse, these abuses will probably fall disproportionately on women and people of color.

This research will doubtless prove more effective if teams take the (sadly too rare) step of sharing their findings publicly; although abuse may manifest differently on each platform, the underlying attack patterns and human impacts certainly overlap. Silicon Valley will best solve its harassment problem through collaboration.


A Future Owners Test

The word “plausible” is, of course, doing a lot of work here. The depth of your questioning should be proportionate to the risk; many eventualities can be safely ignored in many contexts. I’m not worried about, say, the far right getting their hands on Candy Crush data, but I sure would be if they inherited a national carbon-surveillance program.

I’d like to see more of this thinking — maybe we could call it the future owners test — in contemporary responsible tech work. We mustn’t get so wrapped up in today that we overlook tomorrow.

