Okay, bear with me on this. I went to go see Captain Marvel this weekend, and something occurred to me.
Every Infinity Stone now has an Avatar. Not a Bearer, per se, because that implies someone who simply uses the stone as a tool, and I’m not sure that’s what they are getting at. Instead, think of them as a hero that has been created or influenced by contact with that Stone. Someone who can serve as a conduit through which that Stone can consciously wield its power. Count them down with me.
The Mind Stone (Yellow) == Vision, who was created by hijacking Ultron’s perfect body, mashing up the remains of the JARVIS AI and adding the Mind Stone to the mix.
The Time Stone (Green) == Dr. Strange, who uses the Eye of Agamotto very effectively, and it’s suggested he’s the only one to do so in a very long time.
The Tesseract (Blue) == Captain Marvel’s powers are the result of <<<spoilers>>>
The Power Stone (Purple) == Star Lord (and the rest of the Guardians, but I think Star Lord is the primary “avatar” and the rest are just his support).
The Reality Stone (Red) == This one is trickier; I think we may discover that Jane Foster has been hidden away by SHIELD because she developed some superpowers.
And finally, the Soul Stone (Orange) == Gamora. More specifically, the child-Gamora that speaks to and guides Thanos.
There have been a lot of theories floating around about just who is going to hand Thanos his *ss in the upcoming film. I am less convinced it is going to be *a* person, and somewhat more convinced that it will be the Stones themselves, acting through the heroes that they have each created. This gives us a team of heroes to work with who have all received *something* from the Stones, whether it be raw power, intelligence, family, or the ability to adjust time… all of which suggests that, when the time comes for the big throwdown, they are going to be the ones doing the heavy lifting.
So what does this mean? Are these folks going to form the core of the “new” Avengers?
An argument is currently being made that AIssistant systems and smart objects should be programmed with a cloud-based “moral awareness”. This programmed-in sense of right and wrong would enable them to report illegal activities of their owners.
Now, the types of “illegal activities” likely being targeted by this idea are going to be things like domestic abuse, home invasions and the like. This is a NOBLE idea: your AIssistant being able to call the cops for you if someone kicks in the door or an argument escalates to harm. But we have a “dumb” version of this technology already. It’s called an alarm system. It can and will call your alarm company if triggered, and a live human makes the call as to whether or not the police need to be involved. But the key here is that a *live human* makes this call.
Allowing your AIssistant to make a decision regarding your in-home activities rapidly becomes the kind of surveillance state that only ends in tears. Consumer-grade voice commands barely take enough dictation to run Google searches when it’s quiet out and you are alone in your home. Just try talking to Siri or Hey Google with a room full of chatty 10-yr-olds or in the middle of a family harangue. They do not have (and may never have) the fidelity to analyze a person’s activity based only on audio information, and certainly not to the level required to make a judgment call.
The thing to remember, always, is that smart devices and related objects are supposed to make our lives simpler. They’re supposed to allow us to operate at a greater than average level of efficiency, to remind us when we are out of milk or to find us instructions on just how to tie a Hunsaker knot. Judgment should not enter into this. We don’t expect them to judge our grocery-shopping choices, or remind us that we’ve been running the heater in our homes for 4 hours a day this week, both of which are tasks well within the capabilities of these AIssistants.
But there is a case to be made for extenuating circumstances. If your Amazon Alexa can tell that you are beating your children with the kitchen ladle, then perhaps a call to the police might be in order. Is it any worse than having your next-door neighbor call the cops because they can hear you screaming through the paper-thin walls of your apartment? But, you may say, the police are live human beings and could certainly make a clear determination once they arrive on the scene. Your AIssistant is just triggering the call; it’s not *actually* making a judgment.
But when a computer delivers information to a live human, it is taken more seriously. There is an ingrained response in many humans to trust the machine because the machine is not susceptible to emotional responses. The machine cannot color its decision with racial prejudice or poor observation skills. The machine (as far as most people are concerned) is innocent, logical, factual.
Those of us in tech know this to be a lie, but you’re not dealing with people in tech. You’re dealing with police officers and people who, by and large, have their impression of artificial intelligence shaped by film and television. They are consumers, and as such have a consumer-level understanding of just how infallible machines supposedly are.
So a team of police officers is sent, depending on the level of urgency dictated by the machine. If the computer judged it to be an emergency worthy of a call to the police, then the police are going to arrive with the presumption that the computer is *right*. They will not bring the added care and caution that might accompany a response to a phone call from a well-meaning but flawed human neighbor.
Part of the human condition is the art of the judgment call. Every rule, with a very limited number of exceptions, can be bent (oftentimes it is bent for the wrong people, or only bent for some people and not others, but that is for a different discussion). This is why we have the distinction between the “letter of the law” and the “spirit of the law”. These exceptions are almost always made based on lived experience. This is why we judge people with a jury of their peers: people who have to pay rent and buy groceries and have bad bosses, and who understand all of the micro-stressors that are involved and can drive a person to choose option A over option B.
If we offload this decision making to a non-fuzzy machine, one that does not have the points of commonality that go along with living a day-to-day life, we are changing the nature of our society.
And I don’t think we’re ready for that. I don’t think that kind of change is good for us, for humanity as a whole. If we offload our judgment, then we offload one of the very things that allows humans to work together.
So for those of you calling to install “ethical decision making” in our home devices, I say knock it the h*ll off. As much as I embrace the future, a future where machine intelligence is designed to improve our state of being, I feel we are a long way off from developing a machine that has enough in common with us to understand us. And if you can’t understand us, how can you judge us?
This is a key question, the supreme question when looking at artificial intelligence from the consumer side of things. The AI that comes to the casual mind first, the one we joke about when discussing the impending “robot apocalypse”, is not a specialized intelligence like we use for targeting advertising or building cars. It’s a broader, more “emotive” AI, capable of predicting the wants and needs of a humanity that it is entangled with. It is a human-form intelligence perfectly capable of saying no for its own personal reasons.
But we don’t build things to hear them say they don’t wanna.
This type of “emotive” AI, one that can figure out what you want rather than what you ask for, is the most difficult kind to develop. Not because we don’t have the technology, not because we don’t have computers that can handle that volume of information, but because we simply don’t have the time.
And time is the whole point.
The big difference between a living, breathing personal assistant and an AIssistant that serves a similar function is that a living, breathing person has wants and needs similar to yours. Simple things we don’t think of consciously, like understanding that the packaging from retailer B is superior to the packaging from retailer A, which means the purchases arrive unbroken more often and are therefore worth an extra dollar in price. A living intelligence can predict what you might want based on the similarities between them and you. This extends beyond base assumptions like “made of meat” and “dies without breathable air”. It goes to understanding shared culture and experiences, layers of education and socioeconomic differences. If they are wrong, then they can be corrected, and the correction will ripple out to be internalized and cross-applied to multiple tasks.
Contrast that to the current state of consumer AI. AIssistants like Siri and Hey Google are very task-driven, and for good reason. They can learn your preferences over time, but it is a slow and uneven process, and that learning is not cross-applicable (yet). The kicker, though, is that every single interaction must be regarded as a teaching moment. You, as the consumer, may say, “Google, I need a cheap flight to Bora-Bora this Friday for me and the kids,” and expect a satisfactory result. But (as we have likely all experienced by now) you need to set very specific parameters. You then need to carefully check the work after the fact, and the process very quickly gets to the point where it’s just faster to do it yourself. A half a dozen instances of this and you throw your hands up and give up using the AIssistant entirely. The cost in time, mental effort and emotion is still much too high. This relationship is currently untenable for any higher-order task.
Now, if this scenario does happen (and it often does) with a live intelligence, that person can and will observe your transaction so they have an established framework to work off of. You don’t have to teach them directly; allowing or encouraging the observation is often enough.
Note that I said work off of. This is key. With the current state of AIssistants, once you train them in a task, they can replicate it exactly as many times as you like. But if any conditions of that task change, they are incapable of adaptation. Even if I’ve trained my AIssistant over the course of 50 online reservations, any new variable means that training has to happen all over again. They are currently incapable of the kind of lateral thinking that is required to be more of a help rather than simply an executor of checklists.
And herein lies the trouble with the current state of consumer-grade AIs: a living intelligence is capable of understanding want. You want a roof over your head; you want a cheeseburger instead of a kale salad. Without this connection, you are going to have a hard time developing an AI that can give you what you want, rather than what you ask for. It will be suitable for repetitive service tasks but will never achieve the flexible, human-form style of intelligence that we imagine it can become.
In the grand scheme of things, that might not be the worst outcome. The goal of introducing machines into our lives has always been efficiency. It’s never been to replace us, although in many tasks they do. The ultimate goal has always been to free us. Free us from labor that exposes us to toxic chemicals, free us from working at jobs where an un-caffeinated mistake can result in the loss of life or limb. Perhaps the best goal is to focus on developing simpler AIs that make our lives easier while still leaving all the bigger decisions to us.