Students confront the unethical side of tech in ‘Designing for Evil’ course

Whether it's surveilling or deceiving users, mishandling or selling their data, or engendering unhealthy habits or thoughts, tech these days is not short on unethical behavior.
But it isn't enough to just say "that's creepy." Fortunately, a course at the University of Washington is equipping its students with the philosophical insights to better identify and fix tech's pernicious lack of ethics.

"Designing for Evil" just concluded its first quarter at the UW Information School, where prospective creators of the apps and services we all rely on daily learn the tools of the trade.
But thanks to Alexis Hiniker, who teaches the class, they are also learning the critical skill of inquiring into the moral and ethical implications of those apps and services.

What, for example, is a good way of going about making a dating app that is inclusive and promotes healthy relationships? How can an AI imitating a human avoid unnecessary deception? How can something as invasive as China's proposed citizen scoring system be made as user-friendly as possible?

I talked to all the student teams at a poster session held on the UW campus, and also chatted with Hiniker, who designed the course and seemed pleased with how it turned out.

The premise is that the students are given a crash course in ethical philosophy that acquaints them with influential ideas such as utilitarianism and deontology.

"It's designed to be as accessible to laypeople as possible," Hiniker told me.
"These aren't philosophy students; this is a design class. But I wanted to see what I could get away with."

The primary text is Harvard philosophy professor Michael Sandel's popular book Justice, which Hiniker felt combines the various philosophies into a readable, integrated format.
After ingesting this, the students grouped up and picked an app or technology that they would evaluate using the principles described, and then prescribed ethical remedies.

As it turned out, finding ethical problems in tech was the easy part; fixes for them ranged from the trivial to the impossible. Their insights were interesting, but I got the feeling from many of them that there was a sort of disappointment at the fact that so much of what tech offers, or how it offers it, is inescapably and fundamentally unethical.

I found that the apps and services they examined fell into one of three categories.

Not fundamentally unethical (but could use an ethical tune-up)

WebMD is of course a very useful site, but it was plain to the students that it lacked inclusivity: its symptom checker is stacked against non-English speakers and those who might not know the names of symptoms.
The team suggested a more visual symptom reporter, with a basic body map and non-written symptom and pain indicators.

Hello Barbie, the doll that chats back to kids, is certainly a minefield of potential legal and ethical violations, but there's no reason it can't be done right. With parental consent and careful engineering, it can be kept in line with privacy laws, but the team said it still failed some tests of keeping the dialogue with kids healthy and keeping parents informed. The scripts for interaction, they said, should be public (which is obvious in retrospect), and audio should be analyzed on the device rather than in the cloud.
Lastly, a set of warning words or phrases indicating unhealthy behaviors could alert parents to things like self-harm while keeping the rest of the conversation private.
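To make the idea concrete, here is a minimal sketch of how such on-device flagging might work. This is my own illustration, not the students' design; the phrase list and function name are hypothetical placeholders, and a real deployment would use a vetted, clinician-reviewed list.

```python
import re

# Hypothetical watch list; illustrative only, not a clinical resource.
WARNING_PHRASES = ["hurt myself", "hate myself", "want to disappear"]

def flag_for_parents(transcript: str) -> list[str]:
    """Return only the matched warning phrases, never the transcript itself,
    so parents can be alerted without exposing the child's conversation."""
    text = transcript.lower()
    return [p for p in WARNING_PHRASES
            if re.search(r"\b" + re.escape(p) + r"\b", text)]

# The alert carries the category of concern, not the conversation.
if flag_for_parents("sometimes i want to hurt myself"):
    print("Parent alert: conversation matched a self-harm phrase.")
```

The key design choice is that the check runs locally and surfaces only a category of concern, which preserves the privacy split the team asked for.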
WeChat's Discover feature allows users to find others around them and see recent photos they've taken. It's opt-in, which is good, but it can be filtered by gender, promoting a hookup culture that the team said is frowned upon in China. It also obscures many user controls behind multiple layers of menus, which may cause people to share their location when they don't intend to. The students proposed some basic UI fixes, as well as a few ideas on how to combat the possibility of unwanted advances from strangers.

Netflix isn't evil, but its tendency to promote binge-watching has robbed its users of many an hour.
This team felt that some basic user-set limits, like two episodes per day or delaying the next episode by a certain amount of time, could interrupt the habit and encourage people to take back control of their time.
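As a sketch of the kind of limit they describe (my own illustration, not anything Netflix ships; the class name and default numbers are assumptions), a client could track plays per day and gate autoplay once a user-set cap is reached:

```python
from datetime import date

class EpisodeLimiter:
    """Hypothetical user-set binge guard: cap episodes per day and
    impose a cooling-off delay before autoplay continues."""

    def __init__(self, max_per_day: int = 2, delay_minutes: int = 30):
        self.max_per_day = max_per_day
        self.delay_minutes = delay_minutes
        self.today = date.today()
        self.watched = 0

    def record_episode(self) -> None:
        if date.today() != self.today:  # reset the count each day
            self.today, self.watched = date.today(), 0
        self.watched += 1

    def autoplay_delay(self) -> int | None:
        """Minutes to wait before the next episode, or None to block autoplay."""
        if self.watched >= self.max_per_day:
            return None  # daily cap reached: no autoplay at all
        return self.delay_minutes if self.watched > 0 else 0
```

The point is that the user, not the service, sets the cap, which is what distinguishes this from the engagement-maximizing defaults the team objected to.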
Fundamentally unethical (fixes are still worth making)

FakeApp is a way to face-swap in video, producing convincing fakes in which a politician or friend appears to be saying something they didn't. It's fundamentally deceptive, of course, in a broad sense, but really only if the clips are passed off as genuine. Watermarks, visible and invisible, as well as controlled cropping of source videos, were this team's suggestions, though ultimately the technology won't yield to these voluntary mitigations. So really, an informed populace is the only answer.
Good luck with that!

China's "social credit" system is not actually, the students argued, absolutely unethical; that judgment involves a certain amount of cultural bias. But I'm comfortable putting it here because of the massive ethical questions it has sidestepped and dismissed on the road to deployment. Their highly practical suggestions, however, focused on making the system more accountable and transparent: contest reports of behavior, see what types of things have contributed to your own score, see how it has changed over time, and so on.

Tinder's unethical nature, according to the team, stems from the fact that it is ostensibly about forming human connections but is very plainly designed to be a meat market.
Forcing people to think of themselves as physical objects first and foremost in the pursuit of romance is not healthy, they argued, and causes people to devalue themselves. As a countermeasure, they suggested having responses to questions or prompts be the first thing you see about a person; you'd have to swipe based on that before seeing any pictures. I suggested adding some deal-breaker questions you'd have to agree on as well. It's not a bad idea, though open to gaming (like the rest of online dating).

Fundamentally unethical (fixes are essentially impossible)

The League, on the other hand, was a dating app that proved intractable to ethical guidelines.
Not only was it a meat market, but it was a meat market where people paid to be among the self-selected "elite" and could filter by ethnicity and other troubling categories. Their suggestions of removing the fee and these filters, among other things, essentially destroyed the product. Unfortunately, The League is an unethical product for unethical people. No amount of tweaking will change that.

Duplex was taken on by a smart team that nevertheless clearly only started its project after Google I/O.
Unfortunately, they found that the fundamental deception intrinsic to an AI posing as a human is ethically impermissible. It could, of course, identify itself, but that would spoil the entire value proposition. They also asked a question I didn't think to ask in my own coverage: why isn't this AI exhausting all other options before calling a human? It could visit the site, send a text, use other apps and so on.
AIs in general should default to interacting with websites and apps first, then with other AIs, and only then with people, at which point they should identify themselves as AI.
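As a sketch of that escalation ladder (my own illustration of the students' suggestion; every function here is a hypothetical placeholder, not a real API), an agent might try each channel in order and disclose its identity only when it finally reaches a person:

```python
from typing import Callable

# Placeholder channel handlers; each would return True on success.
# They fail here so the full ladder is visible end to end.
def book_via_website() -> bool:
    return False  # hypothetical: drive the site's booking form

def book_via_text() -> bool:
    return False  # hypothetical: send an SMS request

def book_via_agent_api() -> bool:
    return False  # hypothetical: talk to the business's own AI

def call_human() -> bool:
    # Last resort, and the AI discloses itself before proceeding.
    print("This is an automated assistant calling on behalf of a customer.")
    return True

CHANNELS: list[Callable[[], bool]] = [
    book_via_website,    # 1. structured web flow
    book_via_text,       # 2. messaging
    book_via_agent_api,  # 3. the business's own AI agent
    call_human,          # 4. a person, with disclosure
]

def make_booking() -> bool:
    """Try each channel in order; only the human channel requires disclosure."""
    return any(channel() for channel in CHANNELS)
```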
To me, the most valuable part of all these inquiries was learning what hopefully becomes a habit: looking at the fundamental ethical soundness of a business or technology and being able to articulate it.

That may be the difference in a meeting between saying something vague and easily blown off, like "I don't think that's a good idea," and describing a specific harm and the reason that harm is important, and perhaps how it can be avoided.

As for Hiniker, she has some ideas for improving the course, should it be approved for a repeat next year.
A broader set of texts, for one: "More diverse writers, more diverse voices," she said. And ideally it could even be expanded to a multi-quarter course so that the students get more than a light dusting of ethics.

With any luck, the kids in this course (and any in the future) will be able to help make those choices, leading to fewer Leagues and Duplexes and more COPPA-compliant smart toys and dating apps that don't sabotage self-esteem.