The Ethics of Your Smart Things

An argument is currently being made that AIssistant systems and smart objects should be programmed with a cloud-based “moral awareness”. This programmed-in sense of right and wrong would enable them to report the illegal activities of their owners.

Now, the types of “illegal activities” likely being targeted by this idea are going to be things like domestic abuse, home invasions and the like. This is a NOBLE idea: your AIssistant being able to call the cops for you if someone kicks in the door or an argument escalates to harm. But we already have a “dumb” version of this technology. It’s called an alarm system. If triggered, it calls your alarm company, and a live human makes the call as to whether or not the police need to be involved. The key here is that a *live human* makes this call.

Allowing your AIssistant to make decisions about your in-home activities rapidly becomes the kind of surveillance state that only ends in tears. Consumer-grade voice recognition barely takes accurate enough dictation to run Google searches when it’s quiet and you are alone in your home. Just try talking to Siri or Hey Google with a room full of chatty 10-year-olds or in the middle of a family harangue. They do not have (and may never have) the fidelity to analyze a person’s activity based only on audio information, and certainly not to the level required to make a judgment call.

The thing to remember, always, is that smart devices and related objects are supposed to make our lives simpler. They’re supposed to allow us to operate at a greater-than-average level of efficiency, to remind us when we are out of milk or to find us instructions on just how to tie a Hunsaker knot. Judgment should not enter into this. We don’t expect them to judge our grocery-shopping choices, or to remind us that we’ve been running the heater in our homes for 4 hours a day this week, both of which are tasks well within the capabilities of these AIssistants.

https://www.dailymail.co.uk/sciencetech/article-6733417/Digital-assistants-discuss-moral-AI-report-illegal-immoral-activity.html

But there is a case to be made for extenuating circumstances. If your Amazon Alexa can tell that you are beating your children with the kitchen ladle, then perhaps a call to the police might be in order. Is it any worse than having your next-door neighbor call the cops because they can hear you screaming through the paper-thin walls of your apartment? But, you may say, the police are live human beings and could certainly make a clear determination once they arrive on the scene. Your AIssistant is just triggering the call; it’s not *actually* making a judgment.

But when a computer delivers information to a live human, it is taken more seriously. There is an ingrained response in many humans to trust the machine because the machine is not susceptible to emotional responses. The machine cannot color its decision with racial prejudice or poor observation skills.  The machine (as far as most people are concerned) is innocent, logical, factual.

Those of us in tech know this to be a lie, but you’re not dealing with people in tech. You’re dealing with police officers and people who, by and large, have their impression of artificial intelligence shaped by film and television. They are consumers, and as such have a consumer-level understanding of just how infallible machines are supposed to be.

So a team of police officers is dispatched, depending on the level of urgency dictated by the machine. If the computer judged the situation to be an emergency worthy of a call to the police, then the police are going to arrive with the presumption that the computer is *right*. They will not bring the added care and caution that might accompany a response to a phone call from a well-meaning but flawed human neighbor.

Part of the human condition is the art of the judgment call. Every rule, with a very limited number of exceptions, can be bent (oftentimes it is bent for the wrong people, or only bent for some people and not others, but that is a discussion for another time). This is why we draw the distinction between the “letter of the law” and the “spirit of the law”. These exceptions are almost always made based on lived experience. This is why we judge people with a jury of their peers: people who have to pay rent and buy groceries and deal with bad bosses, and who understand all of the micro-stressors that can drive a person to choose option A over option B.

If we offload this decision-making, if we allow it to be made by a non-fuzzy machine, one that does not share the points of commonality that go along with living a day-to-day life, we are changing the nature of our society.

And I don’t think we’re ready for that. I don’t think that kind of change is good for us, for humanity as a whole. If we offload our judgment, then we offload one of the very things that allows humans to work together.

So for those of you calling to install “ethical decision making” in our home devices, I say knock it the h*ll off. As much as I embrace the future, a future where machine intelligence is designed to improve our state of being, I feel we are a long way off from developing a machine that has enough in common with us to understand us. And if you can’t understand us, how can you judge us?