What do you want?
This is a key question, the supreme question, when looking at artificial intelligence from the consumer side of things. The AI that comes to the casual mind first, the one we joke about when discussing the impending “robot apocalypse,” is not a specialized intelligence like the kind we use for targeting advertising or building cars. It’s a broader, more “emotive” AI, capable of predicting the wants and needs of a humanity that it is entangled with. It is a human-form intelligence perfectly capable of saying no for its own personal reasons.
But we don’t build things to hear them say they don’t wanna.
This type of “emotive” AI, one that can figure out what you want rather than what you ask for, is the most difficult kind to develop. Not because we don’t have the technology, not because we don’t have computers that can handle that volume of information, but because we simply don’t have the time.
And time is the whole point.
The big difference between a living, breathing personal assistant and an AIssistant that serves a similar function is that a living, breathing person has wants and needs similar to yours. Simple things we don’t think of consciously, like understanding that the packaging from retailer B is superior to the packaging from retailer A, so purchases arrive unbroken more often and are therefore worth an extra dollar in price. A living intelligence can predict what you might want based on the similarities between them and you. This extends beyond base assumptions like “made of meat” and “dies without breathable air”. It goes to understanding shared culture and experiences, layers of education and socioeconomic differences. If they are wrong, they can be corrected, and the correction will ripple out to be internalized and cross-applied to multiple tasks.
Contrast that to the current state of consumer AI. AIssistants like Siri and Hey Google are very task-driven, and for good reason. They can learn your preferences over time, but it is a slow and uneven process, and that learning is not cross-applicable (yet). The kicker, though, is that every single interaction must be regarded as a teaching moment. You, as the consumer, may say, “Google, I need a cheap flight to Bora-Bora this Friday for me and the kids,” and expect a satisfactory result. But (as we have likely all experienced by now) you need to set very specific parameters. You then need to carefully check the work after the fact, and the process very quickly gets to the point where it’s just faster to do it yourself. Half a dozen instances of this and you throw your hands up and give up using the AIssistant entirely. The cost in time, mental effort, and emotion is still much too high. This relationship is currently untenable for any higher-order task.
Now, when this scenario happens with a live intelligence (and it often does), that person can and will observe your transaction, so they have an established framework to work off of. You don’t have to teach them directly; allowing or encouraging the observation is often enough.
Note that I said work off of. This is key. With the current state of AIssistants, once you train them in a task, they can replicate it exactly as many times as you like. But if any conditions of that task change, they are incapable of adaptation. Even if I’ve trained my AIssistant over the course of 50 online reservations, any new variable means that training has to happen all over again. They are currently incapable of the kind of lateral thinking required to be a genuine help rather than simply an executor of checklists.
And herein lies the trouble with the current state of consumer-grade AIs: a living intelligence is capable of understanding want. You want a roof over your head; you want a cheeseburger instead of a kale salad. Without this connection, you are going to have a hard time developing an AI that can give you what you want rather than what you ask for. It will be suitable for repetitive service tasks but will never achieve the flexible, human-form style of intelligence that we imagine it can become.
In the grand scheme of things, that might not be the worst outcome. The goal of introducing machines into our lives has always been efficiency. It’s never been to replace us, although in many tasks they do. The ultimate goal has always been to free us. Free us from labor that exposes us to toxic chemicals, free us from working at jobs where an un-caffeinated mistake can result in the loss of life or limb. Perhaps the best goal is to focus on developing simpler AIs that make our lives easier while still leaving all the bigger decisions to us.